Model selection is the task of choosing the best model from among various candidates on the basis of a performance criterion. In the context of machine learning and more generally statistical analysis, this may be the selection of a statistical model from a set of candidate models, given data. In the simplest cases, a pre-existing set of data is considered. However, the task can also involve the design of experiments such that the data collected is well-suited to the problem of model selection. Given candidate models of similar predictive or explanatory power, the simplest model is most likely to be the best choice (Occam's razor). Konishi & Kitagawa (2008, p. 75) state, "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling". Relatedly, Cox (2006, p. 197) has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis". Model selection may also refer to the problem of selecting a few representative models from a large set of computational models for the purpose of decision making or optimization under uncertainty. In machine learning, algorithmic approaches to model selection include feature selection, hyperparameter optimization, and statistical learning theory. == Introduction == In its most basic forms, model selection is one of the fundamental tasks of scientific inquiry. Determining the principle that explains a series of observations is often linked directly to a mathematical model predicting those observations. For example, when Galileo performed his inclined plane experiments, he demonstrated that the motion of the balls fitted the parabola predicted by his model. Of the countless possible mechanisms and processes that could have produced the data, how can one even begin to choose the best model?
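To make the problem concrete, the following is a minimal sketch (assuming NumPy, and using the Gaussian-likelihood form of the Akaike information criterion, AIC = n·ln(RSS/n) + 2k) that compares polynomial models of increasing degree on noisy, essentially linear data; the data and candidate degrees are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: six points scattered about a straight line.
x = np.linspace(0, 5, 6)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

def aic(degree):
    """AIC for a least-squares polynomial fit of the given degree.

    Uses the Gaussian-likelihood form AIC = n*ln(RSS/n) + 2k,
    where k = degree + 1 is the number of fitted coefficients.
    """
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    n, k = x.size, degree + 1
    return n * np.log(rss / n) + 2 * k

scores = {d: aic(d) for d in (1, 2, 3)}
best = min(scores, key=scores.get)
print(scores, "-> preferred degree:", best)
```

A higher-degree polynomial always attains a lower residual sum of squares, so the 2k penalty term is what lets a criterion like AIC prefer the simpler model when the extra parameters buy little additional fit.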
The mathematical approach commonly taken decides among a set of candidate models; this set must be chosen by the researcher. Often simple models such as polynomials are used, at least initially. Burnham & Anderson (2002) emphasize throughout their book the importance of choosing models based on sound scientific principles, such as understanding of the phenomenological processes or mechanisms (e.g., chemical reactions) underlying the data. Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by "best" is controversial. A good model selection technique will balance goodness of fit with simplicity. More complex models will be better able to adapt their shape to fit the data (for example, a fifth-order polynomial can exactly fit six points), but the additional parameters may not represent anything useful. (Perhaps those six points are really just randomly distributed about a straight line.) Goodness of fit is generally determined using a likelihood ratio approach, or an approximation of this, leading to a chi-squared test. The complexity is generally measured by counting the number of parameters in the model. Model selection techniques can be considered as estimators of some physical quantity, such as the probability of the model producing the given data. The bias and variance are both important measures of the quality of this estimator; efficiency is also often considered. A standard example of model selection is that of curve fitting, where, given a set of points and other background knowledge (e.g. points are a result of i.i.d. samples), we must select a curve that describes the function that generated the points. == Two directions of model selection == There are two main objectives in inference and learning from data.
One is for scientific discovery, also called statistical inference: understanding of the underlying data-generating mechanism and interpretation of the nature of the data. The other objective of learning from data is predicting future or unseen observations, also called statistical prediction. In the second objective, the data scientist is not necessarily concerned with an accurate probabilistic description of the data. Of course, one may also be interested in both directions. In line with the two different objectives, model selection can also have two directions: model selection for inference and model selection for prediction. The first direction is to identify the best model for the data, which will preferably provide a reliable characterization of the sources of uncertainty for scientific interpretation. For this goal, it is especially important that the selected model is not too sensitive to the sample size. Accordingly, an appropriate notion for evaluating model selection is selection consistency, meaning that the most robust candidate will be consistently selected given sufficiently many data samples. The second direction is to choose a model as machinery to offer excellent predictive performance. For the latter, however, the selected model may simply be the lucky winner among a few close competitors, yet the predictive performance can still be the best possible. If so, the model selection is fine for the second goal (prediction), but the use of the selected model for insight and interpretation may be severely unreliable and misleading. Moreover, for very complex models selected this way, even predictions may be unreasonable for data only slightly different from those on which the selection was made. == Methods to assist in choosing the set of candidate models == Data transformation (statistics), exploratory data analysis, model specification, and the scientific method. == Criteria == Below is a list of criteria for model selection.
The most commonly used information criteria are (i) the Akaike information criterion and (ii) the Bayes factor and/or the Bayesian information criterion (which to some extent approximates the Bayes factor); see Stoica & Selen (2004) for a review.

Akaike information criterion (AIC), a measure of the goodness of fit of an estimated statistical model
Bayes factor
Bayesian information criterion (BIC), also known as the Schwarz information criterion, a statistical criterion for model selection
Bridge criterion (BC), a statistical criterion that can attain the better performance of AIC and BIC regardless of whether the model specification is appropriate
Cross-validation
Deviance information criterion (DIC), another Bayesian-oriented model selection criterion
False discovery rate
Focused information criterion (FIC), a selection criterion sorting statistical models by their effectiveness for a given focus parameter
Hannan–Quinn information criterion, an alternative to the Akaike and Bayesian criteria
Kashyap information criterion (KIC), a powerful alternative to AIC and BIC because it also makes use of the Fisher information matrix
Likelihood-ratio test
Mallows's Cp
Minimum description length
Minimum message length (MML)
PRESS statistic, also known as the PRESS criterion
Structural risk minimization
Stepwise regression
Watanabe–Akaike information criterion (WAIC), also called the widely applicable information criterion
Extended Bayesian information criterion (EBIC), an extension of the ordinary Bayesian information criterion (BIC) for models with high-dimensional parameter spaces
Extended Fisher information criterion (EFIC), a model selection criterion for linear regression models
Constrained minimum criterion (CMC), a frequentist method for regression model selection based on the following geometric observations. In the parameter vector space of the full model, every vector represents a model.
There exists a ball centered on the true parameter vector of the full model in which the true model is the smallest model (in the L0 norm). As the sample size goes to infinity, the MLE for the true parameter vector converges to it and thus pulls the shrinking likelihood-ratio confidence region toward the true parameter vector. The confidence region will be inside the ball with probability tending to one. The CMC selects the smallest model in this region. When the region captures the true parameter vector, the CMC selection is the true model. Hence, the probability that the CMC selection is the true model is greater than or equal to the confidence level. Among these criteria, cross-validation is typically the most accurate, and computationally the most expensive, for supervised learning problems. Burnham & Anderson (2002, §6.3) say the following: "There is a variety of model selection methods. However, from the point of view of statistical performance of a method, and intended context of its use, there are only two distinct classes of methods: These have been labeled efficient and consistent. (...)" Under the frequentist paradigm for model selection one generally has three main approaches: (I) optimization of some selection criteria, (II) tests of hypotheses, and (III) ad hoc methods. == See also == == Notes == == References == Aho, K.; Derryberry, D.; Peterson, T. (2014), "Model selection for ecologists: the worldviews of AIC and BIC", Ecology, 95 (3): 631–636, Bibcode:2014Ecol...95..631A, doi:10.1890/13-1452.1, PMID 24804445 Akaike, H. (1994), "Implications of informational point of view on the development of statistical science", in Bozdogan, H. (ed.), Proceedings of the First US/JAPAN Conference on The Frontiers of Statistical Modeling: An Informational Approach—Volume 3, Kluwer Academic Publishers, pp. 27–38 Anderson, D.R. (2008), Model Based Inference in the Life Sciences, Springer, ISBN 9780387740751 Ando, T.
(2010), Bayesian Model Selection and Statistical Modeling, CRC Press, ISBN 9781439836156 Breiman, L. (2001), "Statistical modeling: the two cultures", Statistical Science, 16: 199–231, doi:10.1214/ss/1009213726 Burnham, K.P.; Anderson, D.R. (2002), Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach (2nd ed.), Springer-Verlag, ISBN 0-387-95364-7 [this has over 38000 citations on Google Scholar] Chamberlin, T.C. (1890), "The method of multiple working hypotheses", Science, 15 (366): 92–6, Bibcode:1890Sci....15R..92., doi:10.1126/science.ns-15.366.92, PMID 17782687 (reprinted 1965, Science 148: 754–759 [1] doi:10.1126/science.148.3671.754) Claeskens, G. (2016), "Statistical model choice" (PDF), Annual Review of Statistics and Its Application, 3 (1): 233–256, Bibcode:2016AnRSA...3..233C, doi:10.1146/annurev-statistics-041715-033413 Claeskens, G.; Hjort, N.L. (2008), Model Selection and Model Averaging, Cambridge University Press, ISBN 9781139471800 Cox, D.R. (2006), Principles of Statistical Inference, Cambridge University Press Ding, J.; Tarokh, V.; Yang, Y. (2018), "Model Selection Techniques - An Overview", IEEE Signal Processing Magazine, 35 (6): 16–34, arXiv:1810.09583, Bibcode:2018ISPM...35f..16D, doi:10.1109/MSP.2018.2867638, S2CID 53035396 Kashyap, R.L. (1982), "Optimal choice of AR and MA parts in autoregressive moving average models", IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-4 (2), IEEE: 99–104, doi:10.1109/TPAMI.1982.4767213, PMID 21869012, S2CID 18484243 Konishi, S.; Kitagawa, G. (2008), Information Criteria and Statistical Modeling, Springer, Bibcode:2007icsm.book.....K, ISBN 9780387718866 Lahiri, P. (2001), Model Selection, Institute of Mathematical Statistics Leeb, H.; Pötscher, B. M. (2009), "Model Selection", in Anderson, T. G. (ed.), Handbook of Financial Time Series, Springer, pp. 889–925, doi:10.1007/978-3-540-71297-8_39, ISBN 978-3-540-71296-1 Lukacs, P. M.; Thompson, W. L.; Kendall, W. 
L.; Gould, W. R.; Doherty, P. F. Jr.; Burnham, K. P.; Anderson, D. R. (2007), "Concerns regarding a call for pluralism of information theory and hypothesis testing", Journal of Applied Ecology, 44 (2): 456–460, Bibcode:2007JApEc..44..456L, doi:10.1111/j.1365-2664.2006.01267.x, S2CID 83816981 McQuarrie, Allan D. R.; Tsai, Chih-Ling (1998), Regression and Time Series Model Selection, Singapore: World Scientific, ISBN 981-02-3242-X Massart, P. (2007), Concentration Inequalities and Model Selection, Springer Massart, P. (2014), "A non-asymptotic walk in probability and statistics", in Lin, Xihong (ed.), Past, Present, and Future of Statistical Science, Chapman & Hall, pp. 309–321, ISBN 9781482204988 Navarro, D. J. (2019), "Between the Devil and the Deep Blue Sea: Tensions between scientific judgement and statistical model selection", Computational Brain & Behavior, 2: 28–34, doi:10.1007/s42113-018-0019-z, hdl:1959.4/unsworks_64247 Resende, Paulo Angelo Alves; Dorea, Chang Chung Yu (2016), "Model identification using the Efficient Determination Criterion", Journal of Multivariate Analysis, 150: 229–244, arXiv:1409.7441, doi:10.1016/j.jmva.2016.06.002, S2CID 5469654 Shmueli, G. (2010), "To explain or to predict?", Statistical Science, 25 (3): 289–310, arXiv:1101.0891, doi:10.1214/10-STS330, MR 2791669, S2CID 15900983 Stoica, P.; Selen, Y. (2004), "Model-order selection: a review of information criterion rules" (PDF), IEEE Signal Processing Magazine, 21 (4): 36–47, doi:10.1109/MSP.2004.1311138, S2CID 17338979 Wit, E.; van den Heuvel, E.; Romeijn, J.-W. (2012), "'All models are wrong...': an introduction to model uncertainty" (PDF), Statistica Neerlandica, 66 (3): 217–236, doi:10.1111/j.1467-9574.2012.00530.x, S2CID 7793470 Wit, E.; McCullagh, P. (2001), Viana, M. A. G.; Richards, D. St. P. (eds.), "The extendibility of statistical models", Algebraic Methods in Statistics and Probability, pp. 
327–340 Wójtowicz, Anna; Bigaj, Tomasz (2016), "Justification, confirmation, and the problem of mutually exclusive hypotheses", in Kuźniar, Adrian; Odrowąż-Sypniewska, Joanna (eds.), Uncovering Facts and Values, Brill Publishers, pp. 122–143, doi:10.1163/9789004312654_009, ISBN 9789004312654 Owrang, Arash; Jansson, Magnus (2018), "A Model Selection Criterion for High-Dimensional Linear Regression", IEEE Transactions on Signal Processing , 66 (13): 3436–3446, Bibcode:2018ITSP...66.3436O, doi:10.1109/TSP.2018.2821628, ISSN 1941-0476, S2CID 46931136 B. Gohain, Prakash; Jansson, Magnus (2022), "Scale-Invariant and consistent Bayesian information criterion for order selection in linear regression models", Signal Processing, 196: 108499, Bibcode:2022SigPr.19608499G, doi:10.1016/j.sigpro.2022.108499, ISSN 0165-1684, S2CID 246759677
Wikipedia/Model_selection
In statistics, model specification is part of the process of building a statistical model: specification consists of selecting an appropriate functional form for the model and choosing which variables to include. For example, given personal income y together with years of schooling s and on-the-job experience x, we might specify a functional relationship y = f(s, x) as follows:

ln y = ln y_0 + ρ s + β_1 x + β_2 x^2 + ε

where ε is the unexplained error term, which is assumed to consist of independent and identically distributed Gaussian variables. The statistician Sir David Cox has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis". == Specification error and bias == Specification error occurs when the functional form or the choice of independent variables poorly represents relevant aspects of the true data-generating process. In particular, bias (the expected value of the difference between an estimated parameter and the true underlying value) occurs if an independent variable is correlated with the errors inherent in the underlying process. There are several different possible causes of specification error; some are listed below. An inappropriate functional form could be employed. A variable omitted from the model may have a relationship with both the dependent variable and one or more of the independent variables (causing omitted-variable bias). An irrelevant variable may be included in the model (although this does not create bias, it involves overfitting and so can lead to poor predictive performance). The dependent variable may be part of a system of simultaneous equations (giving simultaneity bias).
Additionally, measurement errors may affect the independent variables: while this is not a specification error, it can create statistical bias. Note that all models will have some specification error. Indeed, in statistics there is a common aphorism that "all models are wrong". In the words of Burnham & Anderson, "Modeling is an art as well as a science and is directed toward finding a good approximating model ... as the basis for statistical inference". === Detection of misspecification === The Ramsey RESET test can help test for specification error in regression analysis. In the example given above relating personal income to schooling and job experience, if the assumptions of the model are correct, then the least squares estimates of the parameters ρ, β_1, and β_2 will be efficient and unbiased. Hence specification diagnostics usually involve testing the first to fourth moments of the residuals. == Model building == Building a model involves finding a set of relationships to represent the process that is generating the data. This requires avoiding all the sources of misspecification mentioned above. One approach is to start with a model in general form that relies on a theoretical understanding of the data-generating process. Then the model can be fit to the data and checked for the various sources of misspecification, in a task called statistical model validation. Theoretical understanding can then guide the modification of the model in such a way as to retain theoretical validity while removing the sources of misspecification. But if it proves impossible to find a theoretically acceptable specification that fits the data, the theoretical model may have to be rejected and replaced with another one. A quotation from Karl Popper is apposite here: "Whenever a theory appears to you as the only possible one, take this as a sign that you have neither understood the theory nor the problem which it was intended to solve".
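The effect of omitted-variable bias can be illustrated numerically. The following is a minimal simulation (assuming NumPy; the coefficients, sample size, and regressor correlation are illustrative): when a relevant regressor x2 correlated with x1 is dropped, the estimate on x1 absorbs part of x2's effect, drifting from the true value 2.0 toward roughly 2.0 + 3.0·0.8 = 4.4:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# True data-generating process: y depends on x1 and x2,
# and the two regressors are correlated (x2 = 0.8*x1 + noise).
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

def ols(X, y):
    """Ordinary least squares coefficients via a least-squares solve."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
full = ols(np.column_stack([ones, x1, x2]), y)    # correctly specified
misspec = ols(np.column_stack([ones, x1]), y)     # x2 omitted

print("full model coefficient on x1:   %.2f" % full[1])
print("omitting x2, coefficient on x1: %.2f" % misspec[1])
```

The misspecified estimate converges to β_1 + β_2·Cov(x1, x2)/Var(x1), the classical omitted-variable bias formula; the full model recovers β_1.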
Another approach to model building is to specify several different models as candidates, and then compare those candidate models to each other. The purpose of the comparison is to determine which candidate model is most appropriate for statistical inference. Common criteria for comparing models include the following: R2, Bayes factor, and the likelihood-ratio test together with its generalization, the relative likelihood. For more on this topic, see statistical model selection. == See also == == Notes == == Further reading == Akaike, Hirotugu (1994), "Implications of informational point of view on the development of statistical science", in Bozdogan, H. (ed.), Proceedings of the First US/JAPAN Conference on The Frontiers of Statistical Modeling: An Informational Approach—Volume 3, Kluwer Academic Publishers, pp. 27–38. Asteriou, Dimitrios; Hall, Stephen G. (2011). "Misspecification: Wrong regressors, measurement errors and wrong functional forms". Applied Econometrics (Second ed.). Palgrave Macmillan. pp. 172–197. Colegrave, N.; Ruxton, G. D. (2017). "Statistical model specification and power: recommendations on the use of test-qualified pooling in analysis of experimental data". Proceedings of the Royal Society B. 284 (1851): 20161850. doi:10.1098/rspb.2016.1850. PMC 5378071. PMID 28330912. Gujarati, Damodar N.; Porter, Dawn C. (2009). "Econometric modeling: Model specification and diagnostic testing". Basic Econometrics (Fifth ed.). McGraw-Hill/Irwin. pp. 467–522. ISBN 978-0-07-337577-9. Harrell, Frank (2001), Regression Modeling Strategies, Springer. Kmenta, Jan (1986). Elements of Econometrics (Second ed.). New York: Macmillan Publishers. pp. 442–455. ISBN 0-02-365070-2. Lehmann, E. L. (1990). "Model specification: The views of Fisher and Neyman, and later developments". Statistical Science. 5 (2): 160–168. doi:10.1214/ss/1177012164. MacKinnon, James G. (1992). "Model specification tests and artificial regressions". Journal of Economic Literature. 30 (1): 102–146.
JSTOR 2727880. Maddala, G. S.; Lahiri, Kajal (2009). "Diagnostic checking, model selection, and specification testing". Introduction to Econometrics (Fourth ed.). Wiley. pp. 401–449. ISBN 978-0-470-01512-4. Sapra, Sunil (2005). "A regression error specification test (RESET) for generalized linear models" (PDF). Economics Bulletin. 3 (1): 1–6.
Wikipedia/Model_specification
Evolutionary algorithms (EAs) reproduce essential elements of biological evolution in a computer algorithm in order to solve "difficult" problems, at least approximately, for which no exact or satisfactory solution methods are known. They belong to the class of metaheuristics and are a subset of population-based bio-inspired algorithms and evolutionary computation, which are themselves part of the field of computational intelligence. The mechanisms of biological evolution that an EA mainly imitates are reproduction, mutation, recombination and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions (see also loss function). Evolution of the population then takes place through the repeated application of the above operators. Evolutionary algorithms often perform well approximating solutions to all types of problems because they ideally do not make any assumption about the underlying fitness landscape. Techniques from evolutionary algorithms applied to the modeling of biological evolution are generally limited to explorations of microevolutionary processes and planning models based upon cellular processes. In most real applications of EAs, computational complexity is a prohibiting factor; in fact, this computational complexity is due to fitness function evaluation. Fitness approximation is one of the solutions to overcome this difficulty. However, seemingly simple EAs can often solve complex problems; therefore, there may be no direct link between algorithm complexity and problem complexity. == Generic definition == The following is an example of a generic evolutionary algorithm: Randomly generate the initial population of individuals (the first generation). Evaluate the fitness of each individual in the population. Check whether the goal has been reached and the algorithm can be terminated. Select individuals as parents, preferably those of higher fitness.
Produce offspring with optional crossover (mimicking reproduction). Apply mutation operations to the offspring. Select individuals, preferably of lower fitness, for replacement with new individuals (mimicking natural selection). Return to the second step (fitness evaluation). == Types == Similar techniques differ in genetic representation and other implementation details, and in the nature of the particular applied problem. Genetic algorithm – This is the most popular type of EA. One seeks the solution of a problem in the form of strings of numbers (traditionally binary, although the best representations are usually those that reflect something about the problem being solved), by applying operators such as recombination and mutation (sometimes one, sometimes both). This type of EA is often used in optimization problems. Genetic programming – Here the solutions are in the form of computer programs, and their fitness is determined by their ability to solve a computational problem. There are many variants of genetic programming, including Cartesian genetic programming, gene expression programming, grammatical evolution, linear genetic programming, and multi expression programming. Evolutionary programming – Similar to evolution strategy, but with a deterministic selection of all parents. Evolution strategy (ES) – Works with vectors of real numbers as representations of solutions, and typically uses self-adaptive mutation rates. The method is mainly used for numerical optimization, although there are also variants for combinatorial tasks; examples include CMA-ES and natural evolution strategies. Differential evolution – Based on vector differences and is therefore primarily suited for numerical optimization problems. Coevolutionary algorithm – Similar to genetic algorithms and evolution strategies, but the created solutions are compared on the basis of their outcomes from interactions with other solutions. Solutions can either compete or cooperate during the search process.
Coevolutionary algorithms are often used in scenarios where the fitness landscape is dynamic, complex, or involves competitive interactions. Neuroevolution – Similar to genetic programming, but the genomes represent artificial neural networks by describing structure and connection weights. The genome encoding can be direct or indirect. Learning classifier system – Here the solution is a set of classifiers (rules or conditions). A Michigan-LCS evolves at the level of individual classifiers, whereas a Pittsburgh-LCS uses populations of classifier-sets. Initially, classifiers were only binary, but now include real, neural-net, or S-expression types. Fitness is typically determined with either a strength- or accuracy-based reinforcement learning or supervised learning approach. Quality–diversity algorithms – QD algorithms simultaneously aim for high-quality and diverse solutions. Unlike traditional optimization algorithms that solely focus on finding the best solution to a problem, QD algorithms explore a wide variety of solutions across a problem space and keep those that are not just high performing, but also diverse and unique. == Theoretical background == The following theoretical principles apply to all or almost all EAs. === No free lunch theorem === The no free lunch theorem of optimization states that all optimization strategies are equally effective when the set of all optimization problems is considered. In other words, one evolutionary algorithm can be fundamentally better than another only if the set of problems considered is restricted, and this is exactly what is inevitably done in practice. Therefore, to improve an EA, it must exploit problem knowledge in some form (e.g. by choosing a certain mutation strength or a problem-adapted coding). Thus, if two EAs are compared, this constraint is implied.
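As a concrete reference point, the generic loop from the definition above can be sketched as a minimal elitist EA for real-valued maximization. The fitness function, operator choices, and all parameters here (including the mutation strength sigma, one piece of problem knowledge in the sense just discussed) are illustrative assumptions, not a canonical implementation:

```python
import random

random.seed(0)  # for reproducibility of this sketch

def evolve(fitness, dim, pop_size=20, generations=200, sigma=0.1):
    """Minimal elitist evolutionary algorithm (maximization).

    Follows the generic definition: generate an initial population,
    evaluate fitness, select parents biased toward higher fitness,
    recombine and mutate to produce offspring, replace the rest of
    the population while always keeping the best parent (elitism).
    """
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[0]                                    # elitist acceptance
        parents = pop[: pop_size // 2]                    # truncation selection
        offspring = []
        while len(offspring) < pop_size - 1:
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]   # crossover
            child = [g + random.gauss(0, sigma) for g in child]   # mutation
            offspring.append(child)
        pop = [elite] + offspring                         # replacement
    return max(pop, key=fitness)

# Example: maximize a simple concave fitness with its peak at the origin.
best = evolve(lambda v: -sum(g * g for g in v), dim=3)
print(best)
```

Because the elite individual is always carried over, the best fitness per generation is monotonically non-decreasing, which is exactly the property the convergence argument for elitist EAs below relies on.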
In addition, an EA can use problem-specific knowledge by, for example, not randomly generating the entire start population, but creating some individuals through heuristics or other procedures. Another possibility to tailor an EA to a given problem domain is to involve suitable heuristics, local search procedures or other problem-related procedures in the process of generating the offspring. This form of extension of an EA is also known as a memetic algorithm. Both extensions play a major role in practical applications, as they can speed up the search process and make it more robust. === Convergence === For EAs in which, in addition to the offspring, at least the best individual of the parent generation is used to form the subsequent generation (so-called elitist EAs), there is a general proof of convergence under the condition that an optimum exists. Without loss of generality, a maximum search is assumed for the proof: from the property of elitist offspring acceptance and the existence of the optimum it follows that, in each generation k, an improvement of the fitness F of the respective best individual x' will occur with a probability P > 0. Thus:

F(x'_1) ≤ F(x'_2) ≤ F(x'_3) ≤ ⋯ ≤ F(x'_k) ≤ ⋯

That is, the fitness values form a monotonically non-decreasing sequence, which is bounded due to the existence of the optimum. From this follows the convergence of the sequence toward the optimum. Since the proof makes no statement about the speed of convergence, it is of little help in practical applications of EAs. But it does justify the recommendation to use elitist EAs. However, when using the usual panmictic population model, elitist EAs tend to converge prematurely more than non-elitist ones.
In a panmictic population model, mate selection (see step 4 of the generic definition) is such that every individual in the entire population is eligible as a mate. In non-panmictic populations, selection is suitably restricted, so that the dispersal speed of better individuals is reduced compared to panmictic ones. Thus, the general risk of premature convergence of elitist EAs can be significantly reduced by suitable population models that restrict mate selection. === Virtual alphabets === With the theory of virtual alphabets, David E. Goldberg showed in 1990 that by using a representation with real numbers, an EA that uses classical recombination operators (e.g. uniform or n-point crossover) cannot reach certain areas of the search space, in contrast to a coding with binary numbers. This results in the recommendation for EAs with real representation to use arithmetic operators for recombination (e.g. arithmetic mean or intermediate recombination). With suitable operators, real-valued representations are more effective than binary ones, contrary to earlier opinion. == Comparison to other concepts == === Biological processes === A possible limitation of many evolutionary algorithms is their lack of a clear genotype–phenotype distinction. In nature, the fertilized egg cell undergoes a complex process known as embryogenesis to become a mature phenotype. This indirect encoding is believed to make the genetic search more robust (i.e. reduce the probability of fatal mutations), and also may improve the evolvability of the organism. Such indirect (also known as generative or developmental) encodings also enable evolution to exploit the regularity in the environment. Recent work in the field of artificial embryogeny, or artificial developmental systems, seeks to address these concerns. 
Gene expression programming, for example, successfully explores a genotype–phenotype system, where the genotype consists of linear multigenic chromosomes of fixed length and the phenotype consists of multiple expression trees or computer programs of different sizes and shapes. === Monte-Carlo methods === Both method classes have in common that their individual search steps are determined by chance. The main difference, however, is that EAs, like many other metaheuristics, learn from past search steps and incorporate this experience into the execution of the next search steps in a method-specific form. With EAs, this is done firstly through the fitness-based selection operators for partner choice and the formation of the next generation, and secondly in the type of search steps: in an EA, they start from a current solution and change it, or they mix the information of two solutions. In contrast, when new solutions are drawn at random in Monte-Carlo methods, there is usually no connection to existing solutions. If, on the other hand, the search space of a task is such that there is nothing to learn, Monte-Carlo methods are an appropriate tool, as they do not contain any algorithmic overhead that attempts to draw suitable conclusions from the previous search. An example of such tasks is the proverbial search for a needle in a haystack, e.g. in the form of a flat (hyper)plane with a single narrow peak. == Applications == The areas in which evolutionary algorithms are practically used are almost unlimited and range from industry, engineering, complex scheduling, agriculture, robot movement planning and finance to research and art. The application of an evolutionary algorithm requires some rethinking from the inexperienced user, as the approach to a task using an EA is different from conventional exact methods and this is usually not part of the curriculum of engineers or other disciplines.
For example, the fitness calculation must not only formulate the goal but also support the evolutionary search process towards it, e.g. by rewarding improvements that do not yet lead to a better evaluation of the original quality criteria. For example, if peak utilisation of resources such as personnel deployment or energy consumption is to be avoided in a scheduling task, it is not sufficient to assess the maximum utilisation. Rather, the number and duration of exceedances of a still acceptable level should also be recorded, in order to reward reductions below the actual maximum peak value. There are therefore some publications addressed to the beginner that help to avoid typical beginner's mistakes and to lead an application project to success. This includes clarifying the fundamental question of when an EA should be used to solve a problem and when it is better not to. == Related techniques and other global search methods == There are some other proven and widely used nature-inspired global search techniques, such as: Memetic algorithm – A hybrid method, inspired by Richard Dawkins's notion of a meme. It commonly takes the form of a population-based algorithm (frequently an EA) coupled with individual learning procedures capable of performing local refinements. It emphasizes the exploitation of problem-specific knowledge and tries to orchestrate local and global search in a synergistic way. A cellular evolutionary or memetic algorithm uses a topological neighbourhood relation between the individuals of a population to restrict mate selection and thereby reduce the propagation speed of above-average individuals. The idea is to maintain genotypic diversity in the population over a longer period of time in order to reduce the risk of premature convergence. Ant colony optimization – Based on the ideas of ant foraging by pheromone communication to form paths. Primarily suited for combinatorial optimization and graph problems.
Particle swarm optimization – Based on the ideas of animal flocking behaviour. Also primarily suited for numerical optimization problems. Gaussian adaptation – Based on information theory. Used for maximization of manufacturing yield, mean fitness or average information. See for instance Entropy in thermodynamics and information theory. In addition, many new nature-inspired or metaphor-guided algorithms have been proposed since the beginning of this century. For criticism of most publications on these, see the remarks at the end of the introduction to the article on metaheuristics. == Examples == In 2020, Google stated that their AutoML-Zero can successfully rediscover classic algorithms such as the concept of neural networks. The computer simulations Tierra and Avida attempt to model macroevolutionary dynamics. == Gallery == == References == == Bibliography == Ashlock, D. (2006), Evolutionary Computation for Modeling and Optimization, Springer, New York, doi:10.1007/0-387-31909-3, ISBN 0-387-22196-4. Bäck, T. (1996), Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms, Oxford Univ. Press, New York, ISBN 978-0-19-509971-3. Bäck, T., Fogel, D., Michalewicz, Z. (1999), Evolutionary Computation 1: Basic Algorithms and Operators, CRC Press, Boca Raton, USA, ISBN 978-0-7503-0664-5. Bäck, T., Fogel, D., Michalewicz, Z. (2000), Evolutionary Computation 2: Advanced Algorithms and Operators, CRC Press, Boca Raton, USA, doi:10.1201/9781420034349, ISBN 978-0-3678-0637-8. Banzhaf, W., Nordin, P., Keller, R., Francone, F. (1998), Genetic Programming - An Introduction, Morgan Kaufmann, San Francisco, ISBN 978-1-55860-510-7. Eiben, A.E., Smith, J.E. (2003), Introduction to Evolutionary Computing, Springer, Heidelberg, New York, doi:10.1007/978-3-662-44874-8, ISBN 978-3-662-44873-1. Holland, J. H. (1992), Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, MA, ISBN 978-0-262-08213-6.
Michalewicz, Z.; Fogel, D.B. (2004), How To Solve It: Modern Heuristics. Springer, Berlin, Heidelberg, ISBN 978-3-642-06134-9, doi:10.1007/978-3-662-07807-5. Benko, Attila; Dosa, Gyorgy; Tuza, Zsolt (2010). "Bin Packing/Covering with Delivery, solved with the evolution of algorithms". 2010 IEEE Fifth International Conference on Bio-Inspired Computing: Theories and Applications (BIC-TA). pp. 298–302. doi:10.1109/BICTA.2010.5645312. ISBN 978-1-4244-6437-1. S2CID 16875144. Poli, R.; Langdon, W. B.; McPhee, N. F. (2008). A Field Guide to Genetic Programming. Lulu.com, freely available from the internet. ISBN 978-1-4092-0073-4. Archived from the original on 2016-05-27. Retrieved 2011-03-05. Price, K., Storn, R.M., Lampinen, J.A., (2005). Differential Evolution: A Practical Approach to Global Optimization, Springer, Berlin, Heidelberg, ISBN 978-3-642-42416-8, doi:10.1007/3-540-31306-0. Ingo Rechenberg (1971), Evolutionsstrategie - Optimierung technischer Systeme nach Prinzipien der biologischen Evolution (PhD thesis). Reprinted by Fromman-Holzboog (1973). ISBN 3-7728-1642-8 Hans-Paul Schwefel (1974), Numerische Optimierung von Computer-Modellen (PhD thesis). Reprinted by Birkhäuser (1977). Hans-Paul Schwefel (1995), Evolution and Optimum Seeking. Wiley & Sons, New York. ISBN 0-471-57148-2 Simon, D. (2013), Evolutionary Optimization Algorithms Archived 2014-03-10 at the Wayback Machine, Wiley & Sons, ISBN 978-0-470-93741-9 Kruse, Rudolf; Borgelt, Christian; Klawonn, Frank; Moewes, Christian; Steinbrecher, Matthias; Held, Pascal (2013), Computational Intelligence: A Methodological Introduction. Springer, London. ISBN 978-1-4471-5012-1, doi:10.1007/978-1-4471-5013-8. Rahman, Rosshairy Abd.; Kendall, Graham; Ramli, Razamin; Jamari, Zainoddin; Ku-Mahamud, Ku Ruhana (2017). "Shrimp Feed Formulation via Evolutionary Algorithm with Power Heuristics for Handling Constraints". Complexity. 2017: 1–12. doi:10.1155/2017/7053710. 
== External links == An Overview of the History and Flavors of Evolutionary Algorithms
Wikipedia/Evolutionary_algorithm
In mathematical optimization, the ellipsoid method is an iterative method for minimizing convex functions over convex sets. The ellipsoid method generates a sequence of ellipsoids whose volume uniformly decreases at every step, thus enclosing a minimizer of a convex function. When specialized to solving feasible linear optimization problems with rational data, the ellipsoid method is an algorithm which finds an optimal solution in a number of steps that is polynomial in the input size. == History == The ellipsoid method has a long history. As an iterative method, a preliminary version was introduced by Naum Z. Shor. In 1972, an approximation algorithm for real convex minimization was studied by Arkadi Nemirovski and David B. Yudin (Judin). As an algorithm for solving linear programming problems with rational data, the ellipsoid algorithm was studied by Leonid Khachiyan; Khachiyan's achievement was to prove the polynomial-time solvability of linear programs. This was a notable step from a theoretical perspective: The standard algorithm for solving linear problems at the time was the simplex algorithm, which has a run time that typically is linear in the size of the problem, but for which examples exist for which it is exponential in the size of the problem. As such, having an algorithm that is guaranteed to be polynomial for all cases was a theoretical breakthrough. Khachiyan's work showed, for the first time, that there can be algorithms for solving linear programs whose runtime can be proven to be polynomial. In practice, however, the algorithm is fairly slow and of little practical interest, though it provided inspiration for later work that turned out to be of much greater practical use. Specifically, Karmarkar's algorithm, an interior-point method, is much faster than the ellipsoid method in practice. Karmarkar's algorithm is also faster in the worst case. 
The ellipsoidal algorithm allows complexity theorists to achieve (worst-case) bounds that depend on the dimension of the problem and on the size of the data, but not on the number of rows, so it remained important in combinatorial optimization theory for many years. Only in the 21st century have interior-point algorithms with similar complexity properties appeared. == Description == A convex minimization problem consists of the following ingredients. A convex function f 0 ( x ) : R n → R {\displaystyle f_{0}(x):\mathbb {R} ^{n}\to \mathbb {R} } to be minimized over the vector x {\displaystyle x} (containing n variables); Convex inequality constraints of the form f i ( x ) ⩽ 0 {\displaystyle f_{i}(x)\leqslant 0} , where the functions f i {\displaystyle f_{i}} are convex; these constraints define a convex set Q {\displaystyle Q} . Linear equality constraints of the form h i ( x ) = 0 {\displaystyle h_{i}(x)=0} . We are also given an initial ellipsoid E ( 0 ) ⊂ R n {\displaystyle {\mathcal {E}}^{(0)}\subset \mathbb {R} ^{n}} defined as E ( 0 ) = { z ∈ R n : ( z − x 0 ) T P ( 0 ) − 1 ( z − x 0 ) ⩽ 1 } {\displaystyle {\mathcal {E}}^{(0)}=\left\{z\in \mathbb {R} ^{n}\ :\ (z-x_{0})^{T}P_{(0)}^{-1}(z-x_{0})\leqslant 1\right\}} containing a minimizer x ∗ {\displaystyle x^{*}} , where P ( 0 ) ≻ 0 {\displaystyle P_{(0)}\succ 0} and x 0 {\displaystyle x_{0}} is the center of E {\displaystyle {\mathcal {E}}} . Finally, we require the existence of a separation oracle for the convex set Q {\displaystyle Q} . Given a point x ∈ R n {\displaystyle x\in \mathbb {R} ^{n}} , the oracle should return one of two answers: "The point x {\displaystyle x} is in Q {\displaystyle Q} ", or - "The point x {\displaystyle x} is not in Q {\displaystyle Q} , and moreover, here is a hyperplane that separates x {\displaystyle x} from Q {\displaystyle Q} ", that is, a vector c {\displaystyle c} such that c ⋅ x < c ⋅ y {\displaystyle c\cdot x<c\cdot y} for all y ∈ Q {\displaystyle y\in Q} . 
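The separation-oracle contract just described can be sketched for the simplest convex body, a Euclidean ball (a hypothetical example; the function name and the choice of Q are assumptions for illustration). Using the article's convention, for an infeasible x the oracle returns a vector c with c·x < c·y for all y in Q; for the ball of radius r, c = −x works, since c·x = −‖x‖² while c·y ≥ −‖x‖·r > −‖x‖² whenever ‖x‖ > r.

```python
import math

def ball_separation_oracle(x, radius=1.0):
    """Separation oracle for Q = {y : ||y|| <= radius}.

    Returns None when x is in Q. Otherwise returns c = -x, which
    satisfies c.x < c.y for every y in Q (the convention used above).
    """
    norm = math.sqrt(sum(xi * xi for xi in x))
    if norm <= radius:
        return None                  # x is feasible: "x is in Q"
    return [-xi for xi in x]         # separating hyperplane normal
```

A feasible query returns None; an infeasible one returns the separating vector.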
The output of the ellipsoid method is either: Any point in the polytope Q {\displaystyle Q} (i.e., any feasible point), or - A proof that Q {\displaystyle Q} is empty. Inequality-constrained minimization of a function that is zero everywhere corresponds to the problem of simply identifying any feasible point. It turns out that any linear programming problem can be reduced to a linear feasibility problem (i.e. minimize the zero function subject to some linear inequality and equality constraints). One way to do this is by combining the primal and dual linear programs together into one program, and adding the additional (linear) constraint that the value of the primal solution is no worse than the value of the dual solution.: 84  Another way is to treat the objective of the linear program as an additional constraint, and use binary search to find the optimum value.: 7–8  == Unconstrained minimization == At the k-th iteration of the algorithm, we have a point x ( k ) {\displaystyle x^{(k)}} at the center of an ellipsoid E ( k ) = { x ∈ R n : ( x − x ( k ) ) T P ( k ) − 1 ( x − x ( k ) ) ⩽ 1 } . {\displaystyle {\mathcal {E}}^{(k)}=\left\{x\in \mathbb {R} ^{n}\ :\ \left(x-x^{(k)}\right)^{T}P_{(k)}^{-1}\left(x-x^{(k)}\right)\leqslant 1\right\}.} We query the cutting-plane oracle to obtain a vector g ( k + 1 ) ∈ R n {\displaystyle g^{(k+1)}\in \mathbb {R} ^{n}} such that g ( k + 1 ) T ( x ∗ − x ( k ) ) ⩽ 0. {\displaystyle g^{(k+1)T}\left(x^{*}-x^{(k)}\right)\leqslant 0.} We therefore conclude that x ∗ ∈ E ( k ) ∩ { z : g ( k + 1 ) T ( z − x ( k ) ) ⩽ 0 } . {\displaystyle x^{*}\in {\mathcal {E}}^{(k)}\cap \left\{z\ :\ g^{(k+1)T}\left(z-x^{(k)}\right)\leqslant 0\right\}.} We set E ( k + 1 ) {\displaystyle {\mathcal {E}}^{(k+1)}} to be the ellipsoid of minimal volume containing the half-ellipsoid described above and compute x ( k + 1 ) {\displaystyle x^{(k+1)}} . 
The update is given by x ( k + 1 ) = x ( k ) − 1 n + 1 P ( k ) g ~ ( k + 1 ) P ( k + 1 ) = n 2 n 2 − 1 ( P ( k ) − 2 n + 1 P ( k ) g ~ ( k + 1 ) g ~ ( k + 1 ) T P ( k ) ) {\displaystyle {\begin{aligned}x^{(k+1)}&=x^{(k)}-{\frac {1}{n+1}}P_{(k)}{\tilde {g}}^{(k+1)}\\P_{(k+1)}&={\frac {n^{2}}{n^{2}-1}}\left(P_{(k)}-{\frac {2}{n+1}}P_{(k)}{\tilde {g}}^{(k+1)}{\tilde {g}}^{(k+1)T}P_{(k)}\right)\end{aligned}}} where g ~ ( k + 1 ) = ( 1 g ( k + 1 ) T P ( k ) g ( k + 1 ) ) g ( k + 1 ) . {\displaystyle {\tilde {g}}^{(k+1)}=\left({\frac {1}{\sqrt {g^{(k+1)T}P_{(k)}g^{(k+1)}}}}\right)g^{(k+1)}.} The stopping criterion is given by the property that g ( k ) T P ( k ) g ( k ) ⩽ ϵ ⇒ f ( x ( k ) ) − f ( x ∗ ) ⩽ ϵ . {\displaystyle {\sqrt {g^{(k)T}P_{(k)}g^{(k)}}}\leqslant \epsilon \quad \Rightarrow \quad f(x^{(k)})-f\left(x^{*}\right)\leqslant \epsilon .} == Inequality-constrained minimization == At the k-th iteration of the algorithm for constrained minimization, we have a point x ( k ) {\displaystyle x^{(k)}} at the center of an ellipsoid E ( k ) {\displaystyle {\mathcal {E}}^{(k)}} as before. We also must maintain a list of values f b e s t ( k ) {\displaystyle f_{\rm {best}}^{(k)}} recording the smallest objective value of feasible iterates so far. Depending on whether or not the point x ( k ) {\displaystyle x^{(k)}} is feasible, we perform one of two tasks: If x ( k ) {\displaystyle x^{(k)}} is feasible, perform essentially the same update as in the unconstrained case, by choosing a subgradient g 0 {\displaystyle g_{0}} that satisfies g 0 T ( x ∗ − x ( k ) ) + f 0 ( x ( k ) ) − f b e s t ( k ) ⩽ 0 {\displaystyle g_{0}^{T}(x^{*}-x^{(k)})+f_{0}(x^{(k)})-f_{\rm {best}}^{(k)}\leqslant 0} If x ( k ) {\displaystyle x^{(k)}} is infeasible and violates the j-th constraint, update the ellipsoid with a feasibility cut. 
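The unconstrained update and stopping criterion above can be turned into a short sketch (a minimal illustration, not a robust implementation: it assumes n ≥ 2, uses plain Python lists for the matrix P, takes the gradient at the center as the cutting plane, and all names are hypothetical).

```python
import math

def ellipsoid_minimize(f, grad, x0, P0, eps=1e-6, max_iter=10_000):
    """Central-cut ellipsoid sketch for minimizing a convex f: R^n -> R.

    x0: starting center (list of n floats, n >= 2)
    P0: initial ellipsoid matrix (list of lists), so that
        E0 = {z : (z - x0)^T P0^{-1} (z - x0) <= 1} contains a minimizer.
    Stops when sqrt(g^T P g) <= eps, which bounds f(x_k) - f(x*) by eps.
    """
    n = len(x0)
    x = list(x0)
    P = [row[:] for row in P0]
    best_x, best_f = x[:], f(x)
    for _ in range(max_iter):
        g = grad(x)                                   # cutting plane
        Pg = [sum(P[i][j] * g[j] for j in range(n)) for i in range(n)]
        gPg = sum(g[i] * Pg[i] for i in range(n))
        if gPg <= 0 or math.sqrt(gPg) <= eps:         # stopping criterion
            break
        gt = [gi / math.sqrt(gPg) for gi in g]        # normalized g~
        Pgt = [sum(P[i][j] * gt[j] for j in range(n)) for i in range(n)]
        # x_{k+1} = x_k - P g~ / (n + 1)
        x = [x[i] - Pgt[i] / (n + 1) for i in range(n)]
        # P_{k+1} = n^2/(n^2-1) * (P - 2/(n+1) * (P g~)(P g~)^T)
        coef = n * n / (n * n - 1.0)
        P = [[coef * (P[i][j] - 2.0 / (n + 1) * Pgt[i] * Pgt[j])
              for j in range(n)] for i in range(n)]
        fx = f(x)
        if fx < best_f:                               # track best center
            best_x, best_f = x[:], fx
    return best_x, best_f
```

For example, minimizing (x₀ − 1)² + (x₁ + 2)² starting from the ball of radius 10 around the origin (P0 = 100·I) converges to (1, −2).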
Our feasibility cut may be a subgradient g j {\displaystyle g_{j}} of f j {\displaystyle f_{j}} , which must satisfy g j T ( z − x ( k ) ) + f j ( x ( k ) ) ⩽ 0 {\displaystyle g_{j}^{T}(z-x^{(k)})+f_{j}(x^{(k)})\leqslant 0} for all feasible z. == Performance in convex programs == === Theoretical run-time complexity guarantee === The run-time complexity guarantee of the ellipsoid method in the real RAM model is given by the following theorem.: Thm.8.3.1  Consider a family of convex optimization problems of the form: minimize f(x) s.t. x is in G, where f is a convex function and G is a convex set (a subset of a Euclidean space Rn). Each problem p in the family is represented by a data-vector Data(p), e.g., the real-valued coefficients in matrices and vectors representing the function f and the feasible region G. The size of a problem p, Size(p), is defined as the number of elements (real numbers) in Data(p). The following assumptions are needed: G (the feasible region) is bounded and has a non-empty interior (so there is a strictly-feasible point); Given Data(p), one can compute using poly(Size(p)) arithmetic operations: An ellipsoid that contains G; A lower bound MinVol(p) > 0 on the volume of G. Given Data(p) and a point x in Rn, one can compute using poly(Size(p)) arithmetic operations: A separation oracle for G (that is: either assert that x is in G, or return a hyperplane separating x from G). A first-order oracle for f (that is: compute the value of f(x) and a subgradient f'(x)). Under these assumptions, the ellipsoid method is "R-polynomial".
This means that there exists a polynomial Poly such that, for every problem-instance p and every approximation-ratio ε>0, the method finds a solution x satisfying: f ( x ) − min G f ≤ ε ⋅ [ max G f − min G f ] {\displaystyle f(x)-\min _{G}f\leq \varepsilon \cdot [\max _{G}f-\min _{G}f]} , using at most the following number of arithmetic operations on real numbers: P o l y ( S i z e ( p ) ) ⋅ ln ⁡ ( V ( p ) ϵ ) {\displaystyle Poly(Size(p))\cdot \ln \left({\frac {V(p)}{\epsilon }}\right)} where V(p) is a data-dependent quantity. Intuitively, it means that the number of operations required for each additional digit of accuracy is polynomial in Size(p). In the case of the ellipsoid method, we have: V ( p ) = [ V o l ( initial ellipsoid ) V o l ( G ) ] 1 / n ≤ [ V o l ( initial ellipsoid ) M i n V o l ( p ) ] 1 / n {\displaystyle V(p)=\left[{\frac {Vol({\text{initial ellipsoid}})}{Vol(G)}}\right]^{1/n}\leq \left[{\frac {Vol({\text{initial ellipsoid}})}{MinVol(p)}}\right]^{1/n}} . The ellipsoid method requires at most 2 ( n − 1 ) n ⋅ ln ⁡ ( V ( p ) ϵ ) {\displaystyle 2(n-1)n\cdot \ln \left({\frac {V(p)}{\epsilon }}\right)} steps, and each step requires Poly(Size(p)) arithmetic operations. === Practical performance === The ellipsoid method is used on low-dimensional problems, such as planar location problems, where it is numerically stable. Nemirovsky and BenTal: Sec.8.3.3  say that it is efficient if the number of variables is at most 20–30; this is so even if there are thousands of constraints, as the number of iterations does not depend on the number of constraints. However, in problems with many variables, the ellipsoid method is very inefficient, as the number of iterations grows as O(n²). Even on "small"-sized problems, it suffers from numerical instability and poor performance in practice. === Theoretical importance === The ellipsoid method is an important theoretical technique in combinatorial optimization.
In computational complexity theory, the ellipsoid algorithm is attractive because its complexity depends on the number of columns and the digital size of the coefficients, but not on the number of rows. The ellipsoid method can be used to show that many algorithmic problems on convex sets are polynomial-time equivalent. == Performance in linear programs == Leonid Khachiyan applied the ellipsoid method to the special case of linear programming: minimize cTx s.t. Ax ≤ b, where all coefficients in A,b,c are rational numbers. He showed that linear programs can be solved in polynomial time. Here is a sketch of Khachiyan's theorem.: Sec.8.4.2  Step 1: reducing optimization to search. The theorem of linear programming duality says that we can reduce the above minimization problem to the search problem: find x,y s.t. Ax ≤ b ; ATy = c ; y ≤ 0 ; cTx=bTy. The first problem is solvable iff the second problem is solvable; in case the problem is solvable, the x-components of the solution to the second problem are an optimal solution to the first problem. Therefore, from now on, we can assume that we need to solve the following problem: find z ≥ 0 s.t. Rz ≤ r. Multiplying all rational coefficients by the common denominator, we can assume that all coefficients are integers. Step 2: reducing search to feasibility-check. The problem find z ≥ 0 s.t. Rz ≤ r can be reduced to the binary decision problem: "is there a z ≥ 0 such that Rz ≤ r?". This can be done as follows. If the answer to the decision problem is "no", then the answer to the search problem is "None", and we are done. Otherwise, take the first inequality constraint R1z ≤ r1; replace it with an equality R1z = r1; and apply the decision problem again. If the answer is "yes", we keep the equality; if the answer is "no", it means that the inequality is redundant, and we can remove it. Then we proceed to the next inequality constraint. For each constraint, we either convert it to equality or remove it. 
Finally, we have only equality constraints, which can be solved by any method for solving a system of linear equations. Step 3: the decision problem can be reduced to a different optimization problem. Define the residual function f(z) := max[(Rz)1-r1, (Rz)2-r2, (Rz)3-r3,...]. Clearly, f(z)≤0 iff Rz ≤ r. Therefore, to solve the decision problem, it is sufficient to solve the minimization problem: minz f(z). The function f is convex (it is a maximum of linear functions). Denote the minimum value by f*. Then the answer to the decision problem is "yes" iff f*≤0. Step 4: In the optimization problem minz f(z), we can assume that z is in a box of side-length 2^L, where L is the bit length of the problem data. Thus, we have a bounded convex program that can be solved up to any accuracy ε by the ellipsoid method, in time polynomial in L. Step 5: It can be proved that, if f*>0, then f* > 2^(−poly(L)), for some polynomial. Therefore, we can pick the accuracy ε = 2^(−poly(L)). Then, the ε-approximate solution found by the ellipsoid method will be positive iff f*>0, i.e., iff the decision problem is unsolvable. == Variants == The ellipsoid method has several variants, depending on what cuts exactly are used in each step.: Sec. 3  === Different cuts === In the central-cut ellipsoid method,: 82, 87–94  the cuts are always through the center of the current ellipsoid. The input is a rational number ε>0, a convex body K given by a weak separation oracle, and a number R such that S(0,R) (the ball of radius R around the origin) contains K. The output is one of the following: (a) A vector at a distance of at most ε from K, or (b) A positive definite matrix A and a point a such that the ellipsoid E(A,a) contains K, and the volume of E(A,a) is at most ε.
The number of steps is N := ⌈ 5 n log ⁡ ( 1 / ϵ ) + 5 n 2 log ⁡ ( 2 R ) ⌉ {\displaystyle N:=\lceil 5n\log(1/\epsilon )+5n^{2}\log(2R)\rceil } , the number of required accuracy digits is p := 8N, and the required accuracy of the separation oracle is d := 2^(−p). In the deep-cut ellipsoid method,: 83  the cuts remove more than half of the ellipsoid in each step. This makes it faster to discover that K is empty. However, when K is nonempty, there are examples in which the central-cut method finds a feasible point faster. The use of deep cuts does not change the order of magnitude of the run-time. In the shallow-cut ellipsoid method,: 83, 94–101  the cuts remove less than half of the ellipsoid in each step. This variant is not very useful in practice, but it has theoretical importance: it allows one to prove results that cannot be derived from other variants. The input is a rational number ε>0, a convex body K given by a shallow separation oracle, and a number R such that S(0,R) contains K. The output is a positive definite matrix A and a point a such that one of the following holds: (a) The ellipsoid E(A,a) has been declared "tough" by the oracle, or (b) K is contained in E(A,a) and the volume of E(A,a) is at most ε. The number of steps is N := ⌈ 5 n ( n + 1 ) 2 log ⁡ ( 1 / ϵ ) + 5 n 2 ( n + 1 ) 2 log ⁡ ( 2 R ) + log ⁡ ( n + 1 ) ⌉ {\displaystyle N:=\lceil 5n(n+1)^{2}\log(1/\epsilon )+5n^{2}(n+1)^{2}\log(2R)+\log(n+1)\rceil } , and the number of required accuracy digits is p := 8N. === Different ellipsoids === There is also a distinction between the circumscribed ellipsoid and the inscribed ellipsoid methods: In the circumscribed ellipsoid method, each iteration finds an ellipsoid of smallest volume that contains the remaining part of the previous ellipsoid. This method was developed by Yudin and Nemirovskii. In the inscribed ellipsoid method, each iteration finds an ellipsoid of largest volume that is contained in the remaining part of the previous ellipsoid.
This method was developed by Tarasov, Khachian and Erlikh. The methods differ in their runtime complexity (below, n is the number of variables and ε is the accuracy): The circumscribed method requires O ( n 2 ) ln ⁡ 1 ϵ {\displaystyle O(n^{2})\ln {\frac {1}{\epsilon }}} iterations, where each iteration consists of finding a separating hyperplane and finding a new circumscribed ellipsoid. Finding a circumscribed ellipsoid requires O ( n 2 ) {\displaystyle O(n^{2})} time. The inscribed method requires O ( n ) ln ⁡ 1 ϵ {\displaystyle O(n)\ln {\frac {1}{\epsilon }}} iterations, where each iteration consists of finding a separating hyperplane and finding a new inscribed ellipsoid. Finding an inscribed ellipsoid requires O ( n 3.5 + δ ) {\displaystyle O(n^{3.5+\delta })} time for some small δ > 0 {\displaystyle \delta >0} . The relative efficiency of the methods depends on the time required for finding a separating hyperplane, which depends on the application: if the runtime is O ( n t ) {\displaystyle O(n^{t})} for t ≤ 2.5 {\displaystyle t\leq 2.5} then the circumscribed method is more efficient, but if t > 2.5 {\displaystyle t>2.5} then the inscribed method is more efficient. == Related methods == The center-of-gravity method is a conceptually simpler method that requires fewer steps. However, each step is computationally expensive, as it requires computing the center of gravity of the current feasible polytope. Interior point methods, too, allow solving convex optimization problems in polynomial time, but their practical performance is much better than that of the ellipsoid method. == Notes == == Further reading == Dmitris Alevras and Manfred W. Padberg, Linear Optimization and Extensions: Problems and Extensions, Universitext, Springer-Verlag, 2001. (Problems from Padberg with solutions.) V. Chandru and M.R.Rao, Linear Programming, Chapter 31 in Algorithms and Theory of Computation Handbook, edited by M.J.Atallah, CRC Press 1999, 31-1 to 31-37. V.
Chandru and M.R.Rao, Integer Programming, Chapter 32 in Algorithms and Theory of Computation Handbook, edited by M.J.Atallah, CRC Press 1999, 32-1 to 32-45. George B. Dantzig and Mukund N. Thapa. 1997. Linear programming 1: Introduction. Springer-Verlag. George B. Dantzig and Mukund N. Thapa. 2003. Linear Programming 2: Theory and Extensions. Springer-Verlag. L. Lovász: An Algorithmic Theory of Numbers, Graphs, and Convexity, CBMS-NSF Regional Conference Series in Applied Mathematics 50, SIAM, Philadelphia, Pennsylvania, 1986. Katta G. Murty, Linear Programming, Wiley, 1983. M. Padberg, Linear Optimization and Extensions, Second Edition, Springer-Verlag, 1999. Christos H. Papadimitriou and Kenneth Steiglitz, Combinatorial Optimization: Algorithms and Complexity, Corrected republication with a new preface, Dover. Alexander Schrijver, Theory of Linear and Integer Programming. John Wiley & Sons, 1998, ISBN 0-471-98232-6 == External links == EE364b, a Stanford course homepage
Wikipedia/Ellipsoid_method
In numerical optimization, the nonlinear conjugate gradient method generalizes the conjugate gradient method to nonlinear optimization. For a quadratic function f ( x ) {\displaystyle \displaystyle f(x)} f ( x ) = ‖ A x − b ‖ 2 , {\displaystyle \displaystyle f(x)=\|Ax-b\|^{2},} the minimum of f {\displaystyle f} is obtained when the gradient is 0: ∇ x f = 2 A T ( A x − b ) = 0 {\displaystyle \nabla _{x}f=2A^{T}(Ax-b)=0} . Whereas linear conjugate gradient seeks a solution to the linear equation A T A x = A T b {\displaystyle \displaystyle A^{T}Ax=A^{T}b} , the nonlinear conjugate gradient method is generally used to find the local minimum of a nonlinear function using its gradient ∇ x f {\displaystyle \nabla _{x}f} alone. It works when the function is approximately quadratic near the minimum, which is the case when the function is twice differentiable at the minimum and the second derivative is non-singular there. Given a function f ( x ) {\displaystyle \displaystyle f(x)} of N {\displaystyle N} variables to minimize, its gradient ∇ x f {\displaystyle \nabla _{x}f} indicates the direction of maximum increase. 
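The gradient formula above, ∇f = 2Aᵀ(Ax − b), can be checked numerically against a finite difference (a small illustrative sketch; the concrete A, b used in the check are assumptions).

```python
def f_quad(A, b, x):
    """f(x) = ||Ax - b||^2 for concrete A (list of rows) and b."""
    r = [sum(A[i][j] * x[j] for j in range(len(x))) - b[i]
         for i in range(len(A))]
    return sum(ri * ri for ri in r)

def grad_quad(A, b, x):
    """The gradient 2 A^T (Ax - b) stated above."""
    r = [sum(A[i][j] * x[j] for j in range(len(x))) - b[i]
         for i in range(len(A))]
    return [2 * sum(A[i][j] * r[i] for i in range(len(A)))
            for j in range(len(x))]
```

A central difference of f_quad along each coordinate agrees with grad_quad, which is a quick sanity check before handing the gradient to a descent method.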
One simply starts in the opposite (steepest descent) direction: Δ x 0 = − ∇ x f ( x 0 ) {\displaystyle \Delta x_{0}=-\nabla _{x}f(x_{0})} with an adjustable step length α {\displaystyle \displaystyle \alpha } and performs a line search in this direction until it reaches the minimum of f {\displaystyle \displaystyle f} : α 0 := arg ⁡ min α f ( x 0 + α Δ x 0 ) {\displaystyle \displaystyle \alpha _{0}:=\arg \min _{\alpha }f(x_{0}+\alpha \Delta x_{0})} , x 1 = x 0 + α 0 Δ x 0 {\displaystyle \displaystyle x_{1}=x_{0}+\alpha _{0}\Delta x_{0}} After this first iteration in the steepest direction Δ x 0 {\displaystyle \displaystyle \Delta x_{0}} , the following steps constitute one iteration of moving along a subsequent conjugate direction s n {\displaystyle \displaystyle s_{n}} , where s 0 = Δ x 0 {\displaystyle \displaystyle s_{0}=\Delta x_{0}} : Calculate the steepest direction: Δ x n = − ∇ x f ( x n ) {\displaystyle \Delta x_{n}=-\nabla _{x}f(x_{n})} , Compute β n {\displaystyle \displaystyle \beta _{n}} according to one of the formulas below, Update the conjugate direction: s n = Δ x n + β n s n − 1 {\displaystyle \displaystyle s_{n}=\Delta x_{n}+\beta _{n}s_{n-1}} Perform a line search: optimize α n = arg ⁡ min α f ( x n + α s n ) {\displaystyle \displaystyle \alpha _{n}=\arg \min _{\alpha }f(x_{n}+\alpha s_{n})} , Update the position: x n + 1 = x n + α n s n {\displaystyle \displaystyle x_{n+1}=x_{n}+\alpha _{n}s_{n}} , With a pure quadratic function the minimum is reached within N iterations (excepting roundoff error), but a non-quadratic function will make slower progress. Subsequent search directions lose conjugacy requiring the search direction to be reset to the steepest descent direction at least every N iterations, or sooner if progress stops. However, resetting every iteration turns the method into steepest descent. The algorithm stops when it finds the minimum, determined when no progress is made after a direction reset (i.e. 
in the steepest descent direction), or when some tolerance criterion is reached. Within a linear approximation, the parameters α {\displaystyle \displaystyle \alpha } and β {\displaystyle \displaystyle \beta } are the same as in the linear conjugate gradient method but have been obtained with line searches. The conjugate gradient method can follow narrow (ill-conditioned) valleys, where the steepest descent method slows down and follows a criss-cross pattern. Four of the best known formulas for β n {\displaystyle \displaystyle \beta _{n}} are named after their developers: Fletcher–Reeves: β n F R = Δ x n T Δ x n Δ x n − 1 T Δ x n − 1 . {\displaystyle \beta _{n}^{FR}={\frac {\Delta x_{n}^{T}\Delta x_{n}}{\Delta x_{n-1}^{T}\Delta x_{n-1}}}.} Polak–Ribière: β n P R = Δ x n T ( Δ x n − Δ x n − 1 ) Δ x n − 1 T Δ x n − 1 . {\displaystyle \beta _{n}^{PR}={\frac {\Delta x_{n}^{T}(\Delta x_{n}-\Delta x_{n-1})}{\Delta x_{n-1}^{T}\Delta x_{n-1}}}.} Hestenes–Stiefel: β n H S = Δ x n T ( Δ x n − Δ x n − 1 ) − s n − 1 T ( Δ x n − Δ x n − 1 ) . {\displaystyle \beta _{n}^{HS}={\frac {\Delta x_{n}^{T}(\Delta x_{n}-\Delta x_{n-1})}{-s_{n-1}^{T}(\Delta x_{n}-\Delta x_{n-1})}}.} Dai–Yuan: β n D Y = Δ x n T Δ x n − s n − 1 T ( Δ x n − Δ x n − 1 ) . {\displaystyle \beta _{n}^{DY}={\frac {\Delta x_{n}^{T}\Delta x_{n}}{-s_{n-1}^{T}(\Delta x_{n}-\Delta x_{n-1})}}.} . These formulas are equivalent for a quadratic function, but for nonlinear optimization the preferred formula is a matter of heuristics or taste. A popular choice is β = max { 0 , β P R } {\displaystyle \displaystyle \beta =\max\{0,\beta ^{PR}\}} , which provides a direction reset automatically. Algorithms based on Newton's method potentially converge much faster. 
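The iteration described above, with the popular β = max{0, βᴾᴿ} choice, can be sketched as follows (an illustrative implementation under stated assumptions: a simple backtracking line search with an Armijo condition stands in for the exact minimization along each direction, and the restart interval defaults to N as the text suggests).

```python
def nonlinear_cg(f, grad, x0, tol=1e-8, max_iter=1000, restart=None):
    """Nonlinear conjugate gradient with beta = max(0, beta_PR)."""
    n = len(x0)
    restart = restart or n           # reset to steepest descent every n steps
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]            # initial step: steepest descent
    for k in range(max_iter):
        if sum(gi * gi for gi in g) <= tol * tol:
            break                    # gradient (almost) zero: done
        # Backtracking line search along d (stand-in for exact search).
        a, fx = 1.0, f(x)
        slope = sum(g[i] * d[i] for i in range(n))
        while f([x[i] + a * d[i] for i in range(n)]) > fx + 1e-4 * a * slope:
            a *= 0.5
            if a < 1e-16:
                break
        x = [x[i] + a * d[i] for i in range(n)]
        g_new = grad(x)
        # Polak-Ribiere beta, clipped at zero (automatic direction reset).
        num = sum(g_new[i] * (g_new[i] - g[i]) for i in range(n))
        den = sum(gi * gi for gi in g)
        beta = max(0.0, num / den) if den > 0 else 0.0
        if (k + 1) % restart == 0:   # periodic reset to steepest descent
            beta = 0.0
        d = [-g_new[i] + beta * d[i] for i in range(n)]
        g = g_new
    return x
```

On an ill-conditioned quadratic such as (x₀ − 3)² + 10(x₁ + 1)² this recovers the minimizer (3, −1) in far fewer function evaluations than plain steepest descent typically needs.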
There, both step direction and length are computed from the gradient as the solution of a linear system of equations, with the coefficient matrix being the exact Hessian matrix (for Newton's method proper) or an estimate thereof (in the quasi-Newton methods, where the observed change in the gradient during the iterations is used to update the Hessian estimate). For high-dimensional problems, the exact computation of the Hessian is usually prohibitively expensive, and even its storage can be problematic, requiring O ( N 2 ) {\displaystyle O(N^{2})} memory (but see the limited-memory L-BFGS quasi-Newton method). The conjugate gradient method can also be derived using optimal control theory. In this accelerated optimization theory, the conjugate gradient method falls out as a nonlinear optimal feedback controller, u = k ( x , x ˙ ) := − γ a ∇ x f ( x ) − γ b x ˙ {\displaystyle u=k(x,{\dot {x}}):=-\gamma _{a}\nabla _{x}f(x)-\gamma _{b}{\dot {x}}} for the double integrator system, x ¨ = u {\displaystyle {\ddot {x}}=u} The quantities γ a > 0 {\displaystyle \gamma _{a}>0} and γ b > 0 {\displaystyle \gamma _{b}>0} are variable feedback gains. == See also == Gradient descent Broyden–Fletcher–Goldfarb–Shanno algorithm Conjugate gradient method L-BFGS (limited memory BFGS) Nelder–Mead method Wolfe conditions == References ==
Wikipedia/Nonlinear_conjugate_gradient_method
A greedy algorithm is any algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage. In many problems, a greedy strategy does not produce an optimal solution, but a greedy heuristic can yield locally optimal solutions that approximate a globally optimal solution in a reasonable amount of time. For example, a greedy strategy for the travelling salesman problem (which is of high computational complexity) is the following heuristic: "At each step of the journey, visit the nearest unvisited city." This heuristic does not intend to find the best solution, but it terminates in a reasonable number of steps; finding an optimal solution to such a complex problem typically requires unreasonably many steps. In mathematical optimization, greedy algorithms optimally solve combinatorial problems having the properties of matroids and give constant-factor approximations to optimization problems with submodular structure. == Specifics == Greedy algorithms produce good solutions on some mathematical problems, but not on others. Most problems for which they work will have two properties: Greedy choice property: we can make whichever choice seems best at the moment and then solve the sub-problems that remain. The choice made by a greedy algorithm may depend on choices made so far, but not on future choices or on all the solutions to the sub-problems. It iteratively makes one greedy choice after another, reducing each given problem into a smaller one. In other words, a greedy algorithm never reconsiders its choices. This is the main difference from dynamic programming, which is exhaustive and is guaranteed to find the optimal solution. After every stage, dynamic programming makes decisions based on all the decisions made in the previous stage and may reconsider the previous stage's algorithmic path to the solution. 
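The nearest-unvisited-city heuristic from the introduction is short enough to state directly in code. An illustrative sketch, assuming cities are given as 2-D coordinates:

```python
from math import dist

def nearest_neighbour_tour(cities, start=0):
    """Greedy TSP heuristic: from the current city, always travel to the
    nearest city not yet visited.  Terminates after n - 1 greedy choices,
    but the resulting tour is not guaranteed to be optimal."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        here = cities[tour[-1]]
        nearest = min(unvisited, key=lambda j: dist(here, cities[j]))
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour
```

Each step makes one locally optimal choice and never reconsiders it, which is exactly the "non-recoverable" behaviour described below.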
Optimal substructure: "A problem exhibits optimal substructure if an optimal solution to the problem contains optimal solutions to the sub-problems." === Correctness Proofs === A common technique for proving the correctness of greedy algorithms uses an inductive exchange argument. The exchange argument demonstrates that any solution different from the greedy solution can be transformed into the greedy solution without degrading its quality. This proof pattern typically follows these steps (by contradiction): (1) assume there exists an optimal solution different from the greedy solution; (2) identify the first point where the optimal and greedy solutions differ; (3) prove that exchanging the optimal choice for the greedy choice at this point cannot worsen the solution; (4) conclude by induction that there must exist an optimal solution identical to the greedy solution. In some cases, an additional step may be needed to prove that no optimal solution can strictly improve upon the greedy solution. === Cases of failure === Greedy algorithms fail to produce the optimal solution for many other problems and may even produce the unique worst possible solution. One example is the travelling salesman problem mentioned above: for each number of cities, there is an assignment of distances between the cities for which the nearest-neighbour heuristic produces the unique worst possible tour. For other possible examples, see horizon effect. == Types == Greedy algorithms can be characterized as being 'short sighted' and 'non-recoverable'. They are ideal only for problems that have an 'optimal substructure'. Despite this, for many simple problems, the best-suited algorithms are greedy. It is important, however, to note that the greedy algorithm can be used as a selection algorithm to prioritize options within a search or branch-and-bound algorithm. 
There are a few variations of the greedy algorithm: pure greedy algorithms, orthogonal greedy algorithms, and relaxed greedy algorithms. == Theory == Greedy algorithms have a long history of study in combinatorial optimization and theoretical computer science. Greedy heuristics are known to produce suboptimal results on many problems, and so natural questions are: For which problems do greedy algorithms perform optimally? For which problems do greedy algorithms guarantee an approximately optimal solution? For which problems are greedy algorithms guaranteed not to produce an optimal solution? A large body of literature exists answering these questions for general classes of problems, such as matroids, as well as for specific problems, such as set cover. === Matroids === A matroid is a mathematical structure that generalizes the notion of linear independence from vector spaces to arbitrary sets. If an optimization problem has the structure of a matroid, then the appropriate greedy algorithm will solve it optimally. === Submodular functions === A function f {\displaystyle f} defined on subsets of a set Ω {\displaystyle \Omega } is called submodular if for every S , T ⊆ Ω {\displaystyle S,T\subseteq \Omega } we have that f ( S ) + f ( T ) ≥ f ( S ∪ T ) + f ( S ∩ T ) {\displaystyle f(S)+f(T)\geq f(S\cup T)+f(S\cap T)} . Suppose one wants to find a set S {\displaystyle S} which maximizes f {\displaystyle f} . The greedy algorithm, which builds up a set S {\displaystyle S} by incrementally adding the element which increases f {\displaystyle f} the most at each step, produces as output a set that is at least ( 1 − 1 / e ) max X ⊆ Ω f ( X ) {\displaystyle (1-1/e)\max _{X\subseteq \Omega }f(X)} . That is, the greedy algorithm performs within a constant factor of ( 1 − 1 / e ) ≈ 0.63 {\displaystyle (1-1/e)\approx 0.63} of the optimal solution. 
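For a concrete submodular example, the coverage function f(S) = |union of the chosen sets| satisfies the inequality above, and the incremental greedy rule is easy to state in code. A sketch (the (1 − 1/e) guarantee is usually stated for monotone submodular objectives under a cardinality constraint, as here):

```python
def greedy_max_coverage(sets, k):
    """Greedily maximise the coverage function f(S) = |union of chosen sets|,
    which is submodular: at each of k steps, add the set with the largest
    marginal gain len(sets[i] - covered)."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            break                       # no remaining set adds new elements
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered
```

The "element which increases f the most" rule appears here as the `max` over marginal gains.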
Similar guarantees are provable when additional constraints, such as cardinality constraints, are imposed on the output, though often slight variations on the greedy algorithm are required. See for an overview. === Other problems with guarantees === Other problems for which the greedy algorithm gives a strong guarantee, but not an optimal solution, include Set cover The Steiner tree problem Load balancing Independent set Many of these problems have matching lower bounds; i.e., the greedy algorithm does not perform better than the guarantee in the worst case. == Applications == Greedy algorithms typically (but not always) fail to find the globally optimal solution because they usually do not operate exhaustively on all the data. They can make commitments to certain choices too early, preventing them from finding the best overall solution later. For example, all known greedy coloring algorithms for the graph coloring problem and all other NP-complete problems do not consistently find optimum solutions. Nevertheless, they are useful because they are quick to think up and often give good approximations to the optimum. If a greedy algorithm can be proven to yield the global optimum for a given problem class, it typically becomes the method of choice because it is faster than other optimization methods like dynamic programming. Examples of such greedy algorithms are Kruskal's algorithm and Prim's algorithm for finding minimum spanning trees and the algorithm for finding optimum Huffman trees. Greedy algorithms appear in network routing as well. Using greedy routing, a message is forwarded to the neighbouring node which is "closest" to the destination. The notion of a node's location (and hence "closeness") may be determined by its physical location, as in geographic routing used by ad hoc networks. Location may also be an entirely artificial construct as in small world routing and distributed hash table. 
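Load balancing, listed above, is a standard example of such a guarantee: assigning each job to the currently least-loaded machine (Graham's list scheduling) has makespan at most (2 − 1/m) times optimal, and the bound is tight in the worst case. A minimal sketch using a heap:

```python
import heapq

def greedy_load_balance(jobs, m):
    """Greedy list scheduling: assign each job, in the given order, to the
    currently least-loaded of m machines.  This is a (2 - 1/m)-approximation
    for minimising the makespan (maximum machine load)."""
    loads = [(0, i) for i in range(m)]      # (load, machine index) min-heap
    heapq.heapify(loads)
    assignment = [[] for _ in range(m)]
    for job in jobs:
        load, i = heapq.heappop(loads)      # least-loaded machine
        assignment[i].append(job)
        heapq.heappush(loads, (load + job, i))
    makespan = max(load for load, _ in loads)
    return assignment, makespan
```

With jobs [3, 3, 2, 2, 2] on two machines the greedy makespan is 7 while the optimum is 6, illustrating a suboptimal but bounded outcome.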
== Examples == The activity selection problem is characteristic of this class of problems, where the goal is to pick the maximum number of activities that do not clash with each other. In the Macintosh computer game Crystal Quest the objective is to collect crystals, in a fashion similar to the travelling salesman problem. The game has a demo mode, where the game uses a greedy algorithm to go to every crystal. The artificial intelligence does not account for obstacles, so the demo mode often ends quickly. Matching pursuit is an example of a greedy algorithm applied to signal approximation. A greedy algorithm finds the optimal solution to Malfatti's problem of finding three disjoint circles within a given triangle that maximize the total area of the circles; it is conjectured that the same greedy algorithm is optimal for any number of circles. A greedy algorithm is used to construct a Huffman tree during Huffman coding, where it finds an optimal solution. In decision tree learning, greedy algorithms are commonly used; however, they are not guaranteed to find the optimal solution. One popular such algorithm is the ID3 algorithm for decision tree construction. Dijkstra's algorithm and the related A* search algorithm are verifiably optimal greedy algorithms for graph search and shortest path finding. A* search is conditionally optimal, requiring an "admissible heuristic" that will not overestimate path costs. Kruskal's algorithm and Prim's algorithm are greedy algorithms for constructing minimum spanning trees of a given connected graph. They always find an optimal solution, which may not be unique in general. The Sequitur and Lempel–Ziv–Welch algorithms are greedy algorithms for grammar induction. == See also == == References == === Sources === == External links == "Greedy algorithm", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Gift, Noah. "Python greedy coin example".
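The activity selection problem mentioned above has a particularly clean greedy solution: sort by finish time and repeatedly take the first activity compatible with the last one chosen. A sketch (this earliest-finish-time rule is provably optimal for this problem):

```python
def select_activities(intervals):
    """Activity selection: sort by finish time, then greedily keep every
    activity whose start is no earlier than the last chosen finish time.
    Returns a maximum-size set of pairwise non-overlapping activities."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```

The correctness proof is a textbook exchange argument: swapping any optimal solution's first activity for the earliest-finishing one never reduces the number of activities that still fit.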
Wikipedia/Exchange_algorithm
A double exponential function is a constant raised to the power of an exponential function. The general formula is f ( x ) = a b x = a ( b x ) {\displaystyle f(x)=a^{b^{x}}=a^{(b^{x})}} (where a>1 and b>1), which grows much more quickly than an exponential function. For example, if a = b = 10: f(x) = 10^(10^x), so f(0) = 10, f(1) = 10^10, f(2) = 10^100 (a googol), f(3) = 10^1000, and f(100) = 10^(10^100) (a googolplex). Factorials grow faster than exponential functions, but much more slowly than double exponential functions. However, tetration and the Ackermann function grow faster. See Big O notation for a comparison of the rate of growth of various functions. The inverse of the double exponential function is the double logarithm log(log(x)). The complex double exponential function is entire, because it is the composition of two entire functions f ( x ) = a x = e x ln ⁡ a {\displaystyle f(x)=a^{x}=e^{x\ln a}} and g ( x ) = b x = e x ln ⁡ b {\displaystyle g(x)=b^{x}=e^{x\ln b}} . == Double exponential sequences == A sequence of positive integers (or real numbers) is said to have double exponential rate of growth if the function giving the nth term of the sequence is bounded above and below by double exponential functions of n. Examples include The Fermat numbers F ( m ) = 2 2 m + 1 {\displaystyle F(m)=2^{2^{m}}+1} The harmonic primes: The primes p, in which the sequence 1/2 + 1/3 + 1/5 + 1/7 + ⋯ + 1/p exceeds 0, 1, 2, 3, … The first few numbers, starting with 0, are 2, 5, 277, 5195977, ... (sequence A016088 in the OEIS) The double Mersenne numbers M M ( p ) = 2 2 p − 1 − 1 {\displaystyle MM(p)=2^{2^{p}-1}-1} The elements of Sylvester's sequence (sequence A000058 in the OEIS) s n = ⌊ E 2 n + 1 + 1 2 ⌋ {\displaystyle s_{n}=\left\lfloor E^{2^{n+1}}+{\frac {1}{2}}\right\rfloor } where E ≈ 1.264084735305302 is Vardi's constant (sequence A076393 in the OEIS). The number of k-ary Boolean functions: 2 2 k {\displaystyle 2^{2^{k}}} The prime numbers 2, 11, 1361, ... 
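Because Python integers have arbitrary precision, the small values in the a = b = 10 example can be checked exactly. A quick sketch:

```python
def double_exp(a, b, x):
    """Evaluate f(x) = a**(b**x), a constant raised to an exponential.
    Exact for nonnegative integer a, b, x thanks to big-integer arithmetic."""
    return a ** (b ** x)
```

For instance, `double_exp(10, 10, 2)` is a googol: the digit 1 followed by 100 zeros.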
(sequence A051254 in the OEIS) a ( n ) = ⌊ A 3 n ⌋ {\displaystyle a(n)=\left\lfloor A^{3^{n}}\right\rfloor } where A ≈ 1.306377883863 is Mills' constant. Aho and Sloane observed that in several important integer sequences, each term is a constant plus the square of the previous term. They show that such sequences can be formed by rounding to the nearest integer the values of a double exponential function with middle exponent 2. Ionaşcu and Stănică describe some more general sufficient conditions for a sequence to be the floor of a double exponential sequence plus a constant. == Applications == === Algorithmic complexity === In computational complexity theory, 2-EXPTIME is the class of decision problems solvable in double exponential time. It is equivalent to AEXPSPACE, the set of decision problems solvable by an alternating Turing machine in exponential space, and is a superset of EXPSPACE. An example of a problem in 2-EXPTIME that is not in EXPTIME is the problem of proving or disproving statements in Presburger arithmetic. In some other problems in the design and analysis of algorithms, double exponential sequences are used within the design of an algorithm rather than in its analysis. An example is Chan's algorithm for computing convex hulls, which performs a sequence of computations using test values h_i = 2^(2^i) (estimates for the eventual output size), taking time O(n log h_i) for each test value in the sequence. Because of the double exponential growth of these test values, the time for each computation in the sequence grows singly exponentially as a function of i, and the total time is dominated by the time for the final step of the sequence. Thus, the overall time for the algorithm is O(n log h) where h is the actual output size. === Number theory === Some number theoretical bounds are double exponential. Odd perfect numbers with n distinct prime factors are known to be at most 2 4 n {\displaystyle 2^{4^{n}}} , a result of Nielsen (2003). 
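The squaring-of-the-guess schedule in Chan's algorithm can be sketched on its own, independent of the hull computation. The helper below (a hypothetical name, for illustration only) generates the sequence of output-size guesses h_i = 2^(2^i), capped at n since the output can never exceed the input size; because the cost O(n log h_i) of each failed attempt grows geometrically, the final successful attempt dominates the total.

```python
def guess_schedule(n):
    """Chan-style sequence of output-size guesses h_i = 2**(2**i),
    capped at n and stopped once a guess reaches n."""
    guesses, i = [], 0
    while True:
        h = 2 ** (2 ** i)
        guesses.append(min(h, n))
        if h >= n:
            return guesses
        i += 1
```

Only O(log log n) guesses are ever generated, which is why the doubly exponential schedule is useful.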
The maximal volume of a polytope in a d-dimensional integer lattice with k ≥ 1 interior lattice points is at most k ⋅ ( 8 d ) d ⋅ 15 d ⋅ 2 2 d + 1 , {\displaystyle k\cdot (8d)^{d}\cdot 15^{d\cdot 2^{2d+1}},} a result of Pikhurko (2001). The largest known prime number in the electronic era has grown roughly as a double exponential function of the year since Miller and Wheeler found a 79-digit prime on EDSAC1 in 1951. === Theoretical biology === In population dynamics the growth of human population is sometimes supposed to be double exponential. Varfolomeyev and Gurevich experimentally fit N ( y ) = 375.6 ⋅ 1.00185 1.00737 y − 1000 {\displaystyle N(y)=375.6\cdot 1.00185^{1.00737^{y-1000}}\,} where N(y) is the population in millions in year y. === Physics === In the Toda oscillator model of self-pulsation, the logarithm of amplitude varies exponentially with time (for large amplitudes), thus the amplitude varies as double exponential function of time. Dendritic macromolecules have been observed to grow in a doubly-exponential fashion. == References ==
Wikipedia/Double_exponential_function
In mathematics, particularly p-adic analysis, the p-adic exponential function is a p-adic analogue of the usual exponential function on the complex numbers. As in the complex case, it has an inverse function, named the p-adic logarithm. == Definition == The usual exponential function on C is defined by the infinite series exp ⁡ ( z ) = ∑ n = 0 ∞ z n n ! . {\displaystyle \exp(z)=\sum _{n=0}^{\infty }{\frac {z^{n}}{n!}}.} Entirely analogously, one defines the exponential function on Cp, the completion of the algebraic closure of Qp, by exp p ⁡ ( z ) = ∑ n = 0 ∞ z n n ! . {\displaystyle \exp _{p}(z)=\sum _{n=0}^{\infty }{\frac {z^{n}}{n!}}.} However, unlike exp which converges on all of C, expp only converges on the disc | z | p < p − 1 / ( p − 1 ) . {\displaystyle |z|_{p}<p^{-1/(p-1)}.} This is because p-adic series converge if and only if the summands tend to zero, and since the n! in the denominator of each summand tends to make them large p-adically, a small value of z is needed in the numerator. It follows from Legendre's formula that if | z | p < p − 1 / ( p − 1 ) {\displaystyle |z|_{p}<p^{-1/(p-1)}} then z n n ! {\displaystyle {\frac {z^{n}}{n!}}} tends to 0 {\displaystyle 0} , p-adically. Although the p-adic exponential is sometimes denoted ex, the number e itself has no p-adic analogue. This is because the power series expp(x) does not converge at x = 1. It is possible to choose a number e to be a p-th root of expp(p) for p ≠ 2, but there are multiple such roots and there is no canonical choice among them. == p-adic logarithm function == The power series log p ⁡ ( 1 + x ) = ∑ n = 1 ∞ ( − 1 ) n + 1 x n n , {\displaystyle \log _{p}(1+x)=\sum _{n=1}^{\infty }{\frac {(-1)^{n+1}x^{n}}{n}},} converges for x in Cp satisfying |x|p < 1 and so defines the p-adic logarithm function logp(z) for |z − 1|p < 1 satisfying the usual property logp(zw) = logpz + logpw. 
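The convergence condition can be made precise with Legendre's formula. Writing s_p(n) for the base-p digit sum of n,

```latex
v_p(n!) = \frac{n - s_p(n)}{p - 1}
\qquad\Longrightarrow\qquad
v_p\!\left(\frac{z^n}{n!}\right)
  = n\,v_p(z) - \frac{n - s_p(n)}{p - 1}
  = n\left(v_p(z) - \frac{1}{p - 1}\right) + \frac{s_p(n)}{p - 1},
```

so if v_p(z) > 1/(p − 1), equivalently |z|_p < p^(−1/(p−1)), the valuation of the summands tends to infinity and the terms tend to zero p-adically, which is exactly the convergence criterion for p-adic series.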
The function logp can be extended to all of Cp× (the set of nonzero elements of Cp) by imposing that it continues to satisfy this last property and setting logp(p) = 0. Specifically, every element w of Cp× can be written as w = p^r·ζ·z with r a rational number, ζ a root of unity, and |z − 1|p < 1, in which case logp(w) = logp(z). This function on Cp× is sometimes called the Iwasawa logarithm to emphasize the choice of logp(p) = 0. In fact, there is an extension of the logarithm from |z − 1|p < 1 to all of Cp× for each choice of logp(p) in Cp. == Properties == If z and w are both within the disc of convergence for expp, then their sum is too and we have the usual addition formula: expp(z + w) = expp(z)expp(w). Similarly, if z and w are nonzero elements of Cp then logp(zw) = logp(z) + logp(w). For z in the domain of expp, we have expp(logp(1+z)) = 1+z and logp(expp(z)) = z. The roots of the Iwasawa logarithm logp(z) are exactly the elements of Cp of the form p^r·ζ where r is a rational number and ζ is a root of unity. Note that there is no analogue in Cp of Euler's identity, e^(2πi) = 1. This is a corollary of Strassmann's theorem. Another major difference from the situation in C is that the domain of convergence of expp is much smaller than that of logp. A modified exponential function — the Artin–Hasse exponential — can be used instead, which converges on |z|p < 1. == Notes == == References == === Citations === === List of references === Chapter 12 of Cassels, J. W. S. (1986). Local fields. London Mathematical Society Student Texts. Cambridge University Press. ISBN 0-521-31525-5. Cohen, Henri (2007), Number theory, Volume I: Tools and Diophantine equations, Graduate Texts in Mathematics, vol. 239, New York: Springer, doi:10.1007/978-0-387-49923-9, ISBN 978-0-387-49922-2, MR 2312337 Robert, Alain M. (2000), A Course in p-adic Analysis, Springer, ISBN 0-387-98669-3 == External links == p-adic exponential and p-adic logarithm
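The inverse relationship logp(expp(z)) = z can be sanity-checked over the rationals with truncated series, measuring agreement by p-adic valuation. This is a numerical sketch, not a proof; it only works for z inside the disc of convergence, such as z = p itself.

```python
from fractions import Fraction

def vp(q, p):
    """p-adic valuation of a nonzero rational number q."""
    num, den, v = q.numerator, q.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def exp_p(z, terms):
    """Truncated p-adic exponential: sum of z**n / n! for n < terms."""
    total, term = Fraction(0), Fraction(1)
    for n in range(terms):
        total += term
        term = term * z / (n + 1)   # next term z^(n+1)/(n+1)!
    return total

def log_p(w, terms):
    """Truncated p-adic logarithm of w = 1 + x: sum of (-1)**(n+1) x**n / n."""
    x = w - 1
    return sum(Fraction((-1) ** (n + 1)) * x ** n / n for n in range(1, terms))
```

For p = 5 and z = 5 (well inside |z|_5 < 5^(−1/4)), the truncated composition log_p(exp_p(z)) agrees with z to high 5-adic precision: the difference, a nonzero rational, has large 5-adic valuation.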
Wikipedia/P-adic_exponential_function
In mathematics, exponentiation, denoted bⁿ, is an operation involving two numbers: the base, b, and the exponent or power, n. When n is a positive integer, exponentiation corresponds to repeated multiplication of the base: that is, bⁿ is the product of multiplying n bases: b n = b × b × ⋯ × b × b ⏟ n times . {\displaystyle b^{n}=\underbrace {b\times b\times \dots \times b\times b} _{n{\text{ times}}}.} In particular, b 1 = b {\displaystyle b^{1}=b} . The exponent is usually shown as a superscript to the right of the base as bⁿ or in computer code as b^n. This binary operation is often read as "b to the power n"; it may also be referred to as "b raised to the nth power", "the nth power of b", or, most briefly, "b to the n". The above definition of b n {\displaystyle b^{n}} immediately implies several properties, in particular the multiplication rule: b n × b m = b × ⋯ × b ⏟ n times × b × ⋯ × b ⏟ m times = b × ⋯ × b ⏟ n + m times = b n + m . {\displaystyle {\begin{aligned}b^{n}\times b^{m}&=\underbrace {b\times \dots \times b} _{n{\text{ times}}}\times \underbrace {b\times \dots \times b} _{m{\text{ times}}}\\[1ex]&=\underbrace {b\times \dots \times b} _{n+m{\text{ times}}}\ =\ b^{n+m}.\end{aligned}}} That is, when multiplying a base raised to one power times the same base raised to another power, the powers add. Extending this rule to the power zero gives b 0 × b n = b 0 + n = b n {\displaystyle b^{0}\times b^{n}=b^{0+n}=b^{n}} , and, where b is non-zero, dividing both sides by b n {\displaystyle b^{n}} gives b 0 = b n / b n = 1 {\displaystyle b^{0}=b^{n}/b^{n}=1} . That is, the multiplication rule implies the definition b 0 = 1. {\displaystyle b^{0}=1.} A similar argument implies the definition for negative integer powers: b − n = 1 / b n . {\displaystyle b^{-n}=1/b^{n}.} That is, extending the multiplication rule gives b − n × b n = b − n + n = b 0 = 1 {\displaystyle b^{-n}\times b^{n}=b^{-n+n}=b^{0}=1} . 
Dividing both sides by b n {\displaystyle b^{n}} gives b − n = 1 / b n {\displaystyle b^{-n}=1/b^{n}} . This also implies the definition for fractional powers: b n / m = b n m . {\displaystyle b^{n/m}={\sqrt[{m}]{b^{n}}}.} For example, b 1 / 2 × b 1 / 2 = b 1 / 2 + 1 / 2 = b 1 = b {\displaystyle b^{1/2}\times b^{1/2}=b^{1/2\,+\,1/2}=b^{1}=b} , meaning ( b 1 / 2 ) 2 = b {\displaystyle (b^{1/2})^{2}=b} , which is the definition of square root: b 1 / 2 = b {\displaystyle b^{1/2}={\sqrt {b}}} . The definition of exponentiation can be extended in a natural way (preserving the multiplication rule) to define b x {\displaystyle b^{x}} for any positive real base b {\displaystyle b} and any real number exponent x {\displaystyle x} . More involved definitions allow complex base and exponent, as well as certain types of matrices as base or exponent. Exponentiation is used extensively in many fields, including economics, biology, chemistry, physics, and computer science, with applications such as compound interest, population growth, chemical reaction kinetics, wave behavior, and public-key cryptography. == Etymology == The term exponent originates from the Latin exponentem, the present participle of exponere, meaning "to put forth". The term power (Latin: potentia, potestas, dignitas) is a mistranslation of the ancient Greek δύναμις (dúnamis, here: "amplification") used by the Greek mathematician Euclid for the square of a line, following Hippocrates of Chios. The word exponent was coined in 1544 by Michael Stifel. In the 16th century, Robert Recorde used the terms "square", "cube", "zenzizenzic" (fourth power), "sursolid" (fifth), "zenzicube" (sixth), "second sursolid" (seventh), and "zenzizenzizenzic" (eighth). "Biquadrate" has been used to refer to the fourth power as well. == History == In The Sand Reckoner, Archimedes proved the law of exponents, 10^a · 10^b = 10^(a+b), necessary to manipulate powers of 10. 
He then used powers of 10 to estimate the number of grains of sand that can be contained in the universe. In the 9th century, the Persian mathematician Al-Khwarizmi used the terms مَال (māl, "possessions", "property") for a square—the Muslims, "like most mathematicians of those and earlier times, thought of a squared number as a depiction of an area, especially of land, hence property"—and كَعْبَة (Kaʿbah, "cube") for a cube, which later Islamic mathematicians represented in mathematical notation as the letters mīm (m) and kāf (k), respectively, by the 15th century, as seen in the work of Abu'l-Hasan ibn Ali al-Qalasadi. Nicolas Chuquet used a form of exponential notation in the 15th century, for example 12² to represent 12x². This was later used by Henricus Grammateus and Michael Stifel in the 16th century. In the late 16th century, Jost Bürgi would use Roman numerals for exponents in a way similar to that of Chuquet, for example iii4 for 4x³. In 1636, James Hume used in essence modern notation, when in L'algèbre de Viète he wrote Aiii for A³. Early in the 17th century, the first form of our modern exponential notation was introduced by René Descartes in his text titled La Géométrie; there, the notation is introduced in Book I. I designate ... aa, or a² in multiplying a by itself; and a³ in multiplying it once more again by a, and thus to infinity. Some mathematicians (such as Descartes) used exponents only for powers greater than two, preferring to represent squares as repeated multiplication. Thus they would write polynomials, for example, as ax + bxx + cx³ + d. Samuel Jeake introduced the term indices in 1696. The term involution was used synonymously with the term indices, but had declined in usage and should not be confused with its more common meaning. In 1748, Leonhard Euler introduced variable exponents, and, implicitly, non-integer exponents by writing: Consider exponentials or powers in which the exponent itself is a variable. 
It is clear that quantities of this kind are not algebraic functions, since in those the exponents must be constant. === 20th century === As calculation was mechanized, notation was adapted to numerical capacity by conventions in exponential notation. For example, Konrad Zuse introduced floating-point arithmetic in his 1938 computer Z1. One register contained representation of leading digits, and a second contained representation of the exponent of 10. Earlier, Leonardo Torres Quevedo contributed Essays on Automation (1914) which had suggested the floating-point representation of numbers. The more flexible decimal floating-point representation was introduced in 1946 with a Bell Laboratories computer. Eventually educators and engineers adopted scientific notation of numbers, consistent with common reference to order of magnitude in a ratio scale. For instance, in 1961 the School Mathematics Study Group developed the notation in connection with units used in the metric system. Exponents also came to be used to describe units of measurement and quantity dimensions. For instance, since force is mass times acceleration, it is measured in kg⋅m/s². Using M for mass, L for length, and T for time, the expression M L T⁻² is used in dimensional analysis to describe force. == Terminology == The expression b² = b · b is called "the square of b" or "b squared", because the area of a square with side-length b is b². (It is true that it could also be called "b to the second power", but "the square of b" and "b squared" are more traditional.) Similarly, the expression b³ = b · b · b is called "the cube of b" or "b cubed", because the volume of a cube with side-length b is b³. When an exponent is a positive integer, that exponent indicates how many copies of the base are multiplied together. For example, 3⁵ = 3 · 3 · 3 · 3 · 3 = 243. The base 3 appears 5 times in the multiplication, because the exponent is 5. Here, 243 is the 5th power of 3, or 3 raised to the 5th power. 
The word "raised" is usually omitted, and sometimes "power" as well, so 3⁵ can be simply read "3 to the 5th", or "3 to the 5". == Integer exponents == The exponentiation operation with integer exponents may be defined directly from elementary arithmetic operations. === Positive exponents === The definition of the exponentiation as an iterated multiplication can be formalized by using induction, and this definition can be used as soon as one has an associative multiplication: The base case is b 1 = b {\displaystyle b^{1}=b} and the recurrence is b n + 1 = b n ⋅ b . {\displaystyle b^{n+1}=b^{n}\cdot b.} The associativity of multiplication implies that for any positive integers m and n, b m + n = b m ⋅ b n , {\displaystyle b^{m+n}=b^{m}\cdot b^{n},} and ( b m ) n = b m n . {\displaystyle (b^{m})^{n}=b^{mn}.} === Zero exponent === As mentioned earlier, a (nonzero) number raised to the 0 power is 1: b 0 = 1. {\displaystyle b^{0}=1.} This value is also obtained by the empty product convention, which may be used in every algebraic structure with a multiplication that has an identity. This way the formula b m + n = b m ⋅ b n {\displaystyle b^{m+n}=b^{m}\cdot b^{n}} also holds for n = 0 {\displaystyle n=0} . The case of 0⁰ is controversial. In contexts where only integer powers are considered, the value 1 is generally assigned to 0⁰ but, otherwise, the choice of whether to assign it a value and what value to assign may depend on context. For more details, see Zero to the power of zero. === Negative exponents === Exponentiation with negative exponents is defined by the following identity, which holds for any integer n and nonzero b: b − n = 1 b n {\displaystyle b^{-n}={\frac {1}{b^{n}}}} . Raising 0 to a negative exponent is undefined but, in some circumstances, it may be interpreted as infinity ( ∞ {\displaystyle \infty } ). 
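These definitions translate directly into code: exponentiation by squaring computes an integer power with O(log n) multiplications, the empty product gives the zero exponent, and the identity b^(−n) = 1/b^n handles negative exponents. A sketch using exact rational arithmetic:

```python
from fractions import Fraction

def power(b, n):
    """b**n for any integer n via binary exponentiation (repeated squaring).
    Uses b**0 = 1 (empty product) and b**-n = 1 / b**n for nonzero b."""
    if n < 0:
        return Fraction(1) / power(b, -n)
    result, base = Fraction(1), Fraction(b)
    while n:
        if n & 1:            # this bit of the exponent contributes base^(2^k)
            result *= base
        base *= base         # square: b, b^2, b^4, ...
        n >>= 1
    return result
```

The multiplication rule b^(m+n) = b^m · b^n holds by construction, since the bits of m + n partition the squarings.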
This definition of exponentiation with negative exponents is the only one that allows extending the identity b m + n = b m ⋅ b n {\displaystyle b^{m+n}=b^{m}\cdot b^{n}} to negative exponents (consider the case m = − n {\displaystyle m=-n} ). The same definition applies to invertible elements in a multiplicative monoid, that is, an algebraic structure with an associative multiplication and a multiplicative identity denoted 1 (for example, the square matrices of a given dimension). In particular, in such a structure, the inverse of an invertible element x is standardly denoted x − 1 . {\displaystyle x^{-1}.} === Identities and properties === The following identities, often called exponent rules, hold for all integer exponents, provided that the base is non-zero: b m ⋅ b n = b m + n ( b m ) n = b m ⋅ n b n ⋅ c n = ( b ⋅ c ) n {\displaystyle {\begin{aligned}b^{m}\cdot b^{n}&=b^{m+n}\\\left(b^{m}\right)^{n}&=b^{m\cdot n}\\b^{n}\cdot c^{n}&=(b\cdot c)^{n}\end{aligned}}} Unlike addition and multiplication, exponentiation is not commutative: for example, 2 3 = 8 {\displaystyle 2^{3}=8} , but reversing the operands gives the different value 3 2 = 9 {\displaystyle 3^{2}=9} . Also unlike addition and multiplication, exponentiation is not associative: for example, (2³)² = 8² = 64, whereas 2^(3²) = 2⁹ = 512. Without parentheses, the conventional order of operations for serial exponentiation in superscript notation is top-down (or right-associative), not bottom-up (or left-associative). That is, b p q = b ( p q ) , {\displaystyle b^{p^{q}}=b^{\left(p^{q}\right)},} which, in general, is different from ( b p ) q = b p q . {\displaystyle \left(b^{p}\right)^{q}=b^{pq}.} === Powers of a sum === The powers of a sum can normally be computed from the powers of the summands by the binomial formula ( a + b ) n = ∑ i = 0 n ( n i ) a i b n − i = ∑ i = 0 n n ! i ! ( n − i ) ! a i b n − i . 
{\displaystyle (a+b)^{n}=\sum _{i=0}^{n}{\binom {n}{i}}a^{i}b^{n-i}=\sum _{i=0}^{n}{\frac {n!}{i!(n-i)!}}a^{i}b^{n-i}.} However, this formula is true only if the summands commute (i.e. that ab = ba), which is implied if they belong to a structure that is commutative. Otherwise, if a and b are, say, square matrices of the same size, this formula cannot be used. It follows that in computer algebra, many algorithms involving integer exponents must be changed when the exponentiation bases do not commute. Some general purpose computer algebra systems use a different notation (sometimes ^^ instead of ^) for exponentiation with non-commuting bases, which is then called non-commutative exponentiation. === Combinatorial interpretation === For nonnegative integers n and m, the value of nᵐ is the number of functions from a set of m elements to a set of n elements (see cardinal exponentiation). Such functions can be represented as m-tuples from an n-element set (or as m-letter words from an n-letter alphabet). Some examples for particular values of m and n are given in the following table: === Particular bases === ==== Powers of ten ==== In the base ten (decimal) number system, integer powers of 10 are written as the digit 1 followed or preceded by a number of zeroes determined by the sign and magnitude of the exponent. For example, 10³ = 1000 and 10⁻⁴ = 0.0001. Exponentiation with base 10 is used in scientific notation to denote large or small numbers. For instance, 299792458 m/s (the speed of light in vacuum, in metres per second) can be written as 2.99792458 × 10⁸ m/s and then approximated as 2.998 × 10⁸ m/s. SI prefixes based on powers of 10 are also used to describe small or large quantities. For example, the prefix kilo means 10³ = 1000, so a kilometre is 1000 m. ==== Powers of two ==== The first negative powers of 2 have special names: 2 − 1 {\displaystyle 2^{-1}} is a half; 2 − 2 {\displaystyle 2^{-2}} is a quarter. 
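The combinatorial interpretation above is easy to verify by brute force: enumerating all m-tuples over an n-letter alphabet counts exactly the functions from an m-element set to an n-element set.

```python
from itertools import product

def count_functions(m, n):
    """Count the functions from an m-element set to an n-element set by
    enumerating them as m-tuples over an n-element alphabet."""
    return sum(1 for _ in product(range(n), repeat=m))
```

The count matches n**m, including the combinatorial convention 0⁰ = 1: there is exactly one function (the empty one) from the empty set to the empty set.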
Powers of 2 appear in set theory, since a set with n members has a power set, the set of all of its subsets, which has 2ⁿ members. Integer powers of 2 are important in computer science. The positive integer powers 2ⁿ give the number of possible values for an n-bit integer binary number; for example, a byte may take 2⁸ = 256 different values. The binary number system expresses any number as a sum of powers of 2, and denotes it as a sequence of 0 and 1, separated by a binary point, where 1 indicates a power of 2 that appears in the sum; the exponent is determined by the place of this 1: the nonnegative exponents are the rank of the 1 on the left of the point (starting from 0), and the negative exponents are determined by the rank on the right of the point. ==== Powers of one ==== Every power of one equals one: 1ⁿ = 1. ==== Powers of zero ==== For a positive exponent n > 0, the nth power of zero is zero: 0ⁿ = 0. For a negative exponent, 0 − n = 1 / 0 n = 1 / 0 {\displaystyle 0^{-n}=1/0^{n}=1/0} is undefined. In some contexts (e.g., combinatorics), the expression 0⁰ is defined to be equal to 1 {\displaystyle 1} ; in others (e.g., analysis), it is often undefined. ==== Powers of negative one ==== Since a negative number times another negative number is positive, we have: ( − 1 ) n = { 1 for even n , − 1 for odd n . {\displaystyle (-1)^{n}=\left\{{\begin{array}{rl}1&{\text{for even }}n,\\-1&{\text{for odd }}n.\\\end{array}}\right.} Because of this, powers of −1 are useful for expressing alternating sequences. For a similar discussion of powers of the complex number i, see § nth roots of a complex number. === Large exponents === The limit of a sequence of powers of a number greater than one diverges; in other words, the sequence grows without bound: bⁿ → ∞ as n → ∞ when b > 1. This can be read as "b to the power of n tends to +∞ as n tends to infinity when b is greater than one". 
Powers of a number with absolute value less than one tend to zero: bn → 0 as n → ∞ when |b| < 1 Any power of one is always one: bn = 1 for all n for b = 1 Powers of a negative number b ≤ − 1 {\displaystyle b\leq -1} alternate between positive and negative as n alternates between even and odd, and thus do not tend to any limit as n grows. If the exponentiated number varies while tending to 1 as the exponent tends to infinity, then the limit is not necessarily one of those above. A particularly important case is (1 + 1/n)n → e as n → ∞ See § Exponential function below. Other limits, in particular those of expressions that take on an indeterminate form, are described in § Limits of powers below. === Power functions === Real functions of the form f ( x ) = c x n {\displaystyle f(x)=cx^{n}} , where c ≠ 0 {\displaystyle c\neq 0} , are sometimes called power functions. When n {\displaystyle n} is an integer and n ≥ 1 {\displaystyle n\geq 1} , two primary families exist: for n {\displaystyle n} even, and for n {\displaystyle n} odd. In general for c > 0 {\displaystyle c>0} , when n {\displaystyle n} is even f ( x ) = c x n {\displaystyle f(x)=cx^{n}} will tend towards positive infinity with increasing x {\displaystyle x} , and also towards positive infinity with decreasing x {\displaystyle x} . All graphs from the family of even power functions have the general shape of y = c x 2 {\displaystyle y=cx^{2}} , flattening more in the middle as n {\displaystyle n} increases. Functions with this kind of symmetry ( f ( − x ) = f ( x ) {\displaystyle f(-x)=f(x)} ) are called even functions. When n {\displaystyle n} is odd, f ( x ) {\displaystyle f(x)} 's asymptotic behavior reverses from positive x {\displaystyle x} to negative x {\displaystyle x} . For c > 0 {\displaystyle c>0} , f ( x ) = c x n {\displaystyle f(x)=cx^{n}} will also tend towards positive infinity with increasing x {\displaystyle x} , but towards negative infinity with decreasing x {\displaystyle x} . 
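The limiting behaviours listed above can be observed numerically; a Python sketch in which large finite exponents stand in for the limits:

```python
# Finite but large exponents illustrate the limiting regimes of b**n.
assert 1.5 ** 200 > 1e30            # b > 1: grows without bound
assert abs((-0.5) ** 200) < 1e-30   # |b| < 1: tends to zero
assert 1 ** 1000000 == 1            # b = 1: constant sequence
# b <= -1: the sign alternates with the parity of n, so no limit exists.
assert (-2) ** 4 > 0 and (-2) ** 5 < 0
# (1 + 1/n)**n approaches e = 2.71828... as n grows.
assert abs((1 + 1 / 10**6) ** 10**6 - 2.718281828) < 1e-5
```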
All graphs from the family of odd power functions have the general shape of y = c x 3 {\displaystyle y=cx^{3}} , flattening more in the middle as n {\displaystyle n} increases and losing all flatness there in the straight line for n = 1 {\displaystyle n=1} . Functions with this kind of symmetry ( f ( − x ) = − f ( x ) {\displaystyle f(-x)=-f(x)} ) are called odd functions. For c < 0 {\displaystyle c<0} , the opposite asymptotic behavior is true in each case. === Table of powers of decimal digits === == Rational exponents == If x is a nonnegative real number, and n is a positive integer, x 1 / n {\displaystyle x^{1/n}} or x n {\displaystyle {\sqrt[{n}]{x}}} denotes the unique nonnegative real nth root of x, that is, the unique nonnegative real number y such that y n = x . {\displaystyle y^{n}=x.} If x is a positive real number, and p q {\displaystyle {\frac {p}{q}}} is a rational number, with p and q > 0 integers, then x p / q {\textstyle x^{p/q}} is defined as x p q = ( x p ) 1 q = ( x 1 q ) p . {\displaystyle x^{\frac {p}{q}}=\left(x^{p}\right)^{\frac {1}{q}}=(x^{\frac {1}{q}})^{p}.} The equality on the right may be derived by setting y = x 1 q , {\displaystyle y=x^{\frac {1}{q}},} and writing ( x 1 q ) p = y p = ( ( y p ) q ) 1 q = ( ( y q ) p ) 1 q = ( x p ) 1 q . {\displaystyle (x^{\frac {1}{q}})^{p}=y^{p}=\left((y^{p})^{q}\right)^{\frac {1}{q}}=\left((y^{q})^{p}\right)^{\frac {1}{q}}=(x^{p})^{\frac {1}{q}}.} If r is a positive rational number, 0r = 0, by definition. All these definitions are required for extending the identity ( x r ) s = x r s {\displaystyle (x^{r})^{s}=x^{rs}} to rational exponents. On the other hand, there are problems with the extension of these definitions to bases that are not positive real numbers. For example, a negative real number has a real nth root, which is negative, if n is odd, and no real root if n is even. 
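For positive bases, the two orders of operations in the definition of x^(p/q) agree, as the derivation above shows; a quick numerical check in Python:

```python
import math

# For positive x, x**(p/q) can be computed in either order; both agree.
x, p, q = 2.0, 3, 5
a = (x ** p) ** (1 / q)      # (x^p)^(1/q)
b = (x ** (1 / q)) ** p      # (x^(1/q))^p
assert math.isclose(a, b)
assert math.isclose(a, x ** (p / q))
```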
In the latter case, whichever complex nth root one chooses for x 1 n , {\displaystyle x^{\frac {1}{n}},} the identity ( x a ) b = x a b {\displaystyle (x^{a})^{b}=x^{ab}} cannot be satisfied. For example, ( ( − 1 ) 2 ) 1 2 = 1 1 2 = 1 ≠ ( − 1 ) 2 ⋅ 1 2 = ( − 1 ) 1 = − 1. {\displaystyle \left((-1)^{2}\right)^{\frac {1}{2}}=1^{\frac {1}{2}}=1\neq (-1)^{2\cdot {\frac {1}{2}}}=(-1)^{1}=-1.} See § Real exponents and § Non-integer powers of complex numbers for details on the way these problems may be handled. == Real exponents == For positive real numbers, exponentiation to real powers can be defined in two equivalent ways, either by extending the rational powers to reals by continuity (§ Limits of rational exponents, below), or in terms of the logarithm of the base and the exponential function (§ Powers via logarithms, below). The result is always a positive real number, and the identities and properties shown above for integer exponents remain true with these definitions for real exponents. The second definition is more commonly used, since it generalizes straightforwardly to complex exponents. On the other hand, exponentiation to a real power of a negative real number is much more difficult to define consistently, as it may be non-real and have several values. One may choose one of these values, called the principal value, but there is no choice of the principal value for which the identity ( b r ) s = b r s {\displaystyle \left(b^{r}\right)^{s}=b^{rs}} is true; see § Failure of power and logarithm identities. Therefore, exponentiation with a base that is not a positive real number is generally viewed as a multivalued function. 
=== Limits of rational exponents === Since any irrational number can be expressed as the limit of a sequence of rational numbers, exponentiation of a positive real number b with an arbitrary real exponent x can be defined by continuity with the rule b x = lim r ( ∈ Q ) → x b r ( b ∈ R + , x ∈ R ) , {\displaystyle b^{x}=\lim _{r(\in \mathbb {Q} )\to x}b^{r}\quad (b\in \mathbb {R} ^{+},\,x\in \mathbb {R} ),} where the limit is taken over rational values of r only. This limit exists for every positive b and every real x. For example, if x = π, the non-terminating decimal representation π = 3.14159... and the monotonicity of the rational powers can be used to obtain intervals bounded by rational powers that are as small as desired, and must contain b π : {\displaystyle b^{\pi }:} [ b 3 , b 4 ] , [ b 3.1 , b 3.2 ] , [ b 3.14 , b 3.15 ] , [ b 3.141 , b 3.142 ] , [ b 3.1415 , b 3.1416 ] , [ b 3.14159 , b 3.14160 ] , … {\displaystyle \left[b^{3},b^{4}\right],\left[b^{3.1},b^{3.2}\right],\left[b^{3.14},b^{3.15}\right],\left[b^{3.141},b^{3.142}\right],\left[b^{3.1415},b^{3.1416}\right],\left[b^{3.14159},b^{3.14160}\right],\ldots } So, the upper bounds and the lower bounds of the intervals form two sequences that have the same limit, denoted b π . {\displaystyle b^{\pi }.} This defines b x {\displaystyle b^{x}} for every positive b and real x as a continuous function of b and x. See also Well-defined expression. === Exponential function === The exponential function may be defined as x ↦ e x , {\displaystyle x\mapsto e^{x},} where e ≈ 2.718 {\displaystyle e\approx 2.718} is Euler's number, but to avoid circular reasoning, this definition cannot be used here. Rather, we give an independent definition of the exponential function exp ⁡ ( x ) , {\displaystyle \exp(x),} and of e = exp ⁡ ( 1 ) {\displaystyle e=\exp(1)} , relying only on positive integer powers (repeated multiplication). Then we sketch the proof that this agrees with the previous definition: exp ⁡ ( x ) = e x . 
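The nested intervals above can be reproduced numerically; a Python sketch bounding b^π for b = 2 with the rational exponents listed:

```python
import math

# Rational approximations of pi give bounds that close in on b**pi (here b = 2).
b = 2.0
lowers = [3, 3.1, 3.14, 3.141, 3.1415, 3.14159]
uppers = [4, 3.2, 3.15, 3.142, 3.1416, 3.14160]
target = b ** math.pi
for lo, hi in zip(lowers, uppers):
    assert b ** lo <= target <= b ** hi      # each interval contains b**pi

# The interval widths shrink monotonically toward zero.
widths = [b ** hi - b ** lo for lo, hi in zip(lowers, uppers)]
assert all(w2 < w1 for w1, w2 in zip(widths, widths[1:]))
```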
{\displaystyle \exp(x)=e^{x}.} There are many equivalent ways to define the exponential function, one of them being exp ⁡ ( x ) = lim n → ∞ ( 1 + x n ) n . {\displaystyle \exp(x)=\lim _{n\rightarrow \infty }\left(1+{\frac {x}{n}}\right)^{n}.} One has exp ⁡ ( 0 ) = 1 , {\displaystyle \exp(0)=1,} and the exponential identity (or multiplication rule) exp ⁡ ( x ) exp ⁡ ( y ) = exp ⁡ ( x + y ) {\displaystyle \exp(x)\exp(y)=\exp(x+y)} holds as well, since exp ⁡ ( x ) exp ⁡ ( y ) = lim n → ∞ ( 1 + x n ) n ( 1 + y n ) n = lim n → ∞ ( 1 + x + y n + x y n 2 ) n , {\displaystyle \exp(x)\exp(y)=\lim _{n\rightarrow \infty }\left(1+{\frac {x}{n}}\right)^{n}\left(1+{\frac {y}{n}}\right)^{n}=\lim _{n\rightarrow \infty }\left(1+{\frac {x+y}{n}}+{\frac {xy}{n^{2}}}\right)^{n},} and the second-order term x y n 2 {\displaystyle {\frac {xy}{n^{2}}}} does not affect the limit, yielding exp ⁡ ( x ) exp ⁡ ( y ) = exp ⁡ ( x + y ) {\displaystyle \exp(x)\exp(y)=\exp(x+y)} . Euler's number can be defined as e = exp ⁡ ( 1 ) {\displaystyle e=\exp(1)} . It follows from the preceding equations that exp ⁡ ( x ) = e x {\displaystyle \exp(x)=e^{x}} when x is an integer (this results from the repeated-multiplication definition of the exponentiation). If x is real, exp ⁡ ( x ) = e x {\displaystyle \exp(x)=e^{x}} results from the definitions given in preceding sections, by using the exponential identity if x is rational, and the continuity of the exponential function otherwise. The limit that defines the exponential function converges for every complex value of x, and therefore it can be used to extend the definition of exp ⁡ ( z ) {\displaystyle \exp(z)} , and thus e z , {\displaystyle e^{z},} from the real numbers to any complex argument z. This extended exponential function still satisfies the exponential identity, and is commonly used for defining exponentiation for complex base and exponent. 
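The limit definition of the exponential function can be approximated directly; a Python sketch (a large finite n stands in for the limit, so only several digits of accuracy are expected):

```python
import math

# The limit definition exp(x) = lim (1 + x/n)**n, approximated at large n.
def exp_approx(x, n=10**7):
    return (1 + x / n) ** n

assert math.isclose(exp_approx(1.0), math.e, rel_tol=1e-6)   # e = exp(1)
# The multiplication rule exp(x)exp(y) = exp(x+y) holds to similar accuracy.
assert math.isclose(exp_approx(0.5) * exp_approx(0.25), exp_approx(0.75),
                    rel_tol=1e-6)
```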
=== Powers via logarithms === The definition of ex as the exponential function allows defining bx for every positive real number b, in terms of the exponential and logarithm functions. Specifically, the fact that the natural logarithm ln(x) is the inverse of the exponential function ex means that one has b = exp ⁡ ( ln ⁡ b ) = e ln ⁡ b {\displaystyle b=\exp(\ln b)=e^{\ln b}} for every b > 0. For preserving the identity ( e x ) y = e x y , {\displaystyle (e^{x})^{y}=e^{xy},} one must have b x = ( e ln ⁡ b ) x = e x ln ⁡ b {\displaystyle b^{x}=\left(e^{\ln b}\right)^{x}=e^{x\ln b}} So, e x ln ⁡ b {\displaystyle e^{x\ln b}} can be used as an alternative definition of bx for any positive real b. This agrees with the definition given above using rational exponents and continuity, with the advantage of extending straightforwardly to any complex exponent. == Complex exponents with a positive real base == If b is a positive real number, exponentiation with base b and complex exponent z is defined by means of the exponential function with complex argument (see the end of § Exponential function, above) as b z = e ( z ln ⁡ b ) , {\displaystyle b^{z}=e^{(z\ln b)},} where ln ⁡ b {\displaystyle \ln b} denotes the natural logarithm of b. This satisfies the identity b z + t = b z b t . {\displaystyle b^{z+t}=b^{z}b^{t}.} In general, ( b z ) t {\textstyle \left(b^{z}\right)^{t}} is not defined, since bz is not a real number. If a meaning is given to the exponentiation of a complex number (see § Non-integer powers of complex numbers, below), one has, in general, ( b z ) t ≠ b z t , {\displaystyle \left(b^{z}\right)^{t}\neq b^{zt},} unless z is real or t is an integer. 
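The identity b^x = e^(x ln b) for positive b can be verified against the built-in power operator; a short Python check:

```python
import math

# b**x agrees with exp(x * ln b) for every positive real base b.
for b in (0.5, 2.0, 10.0):
    for x in (-1.5, 0.0, 3.25):
        assert math.isclose(b ** x, math.exp(x * math.log(b)))
```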
Euler's formula, e i y = cos ⁡ y + i sin ⁡ y , {\displaystyle e^{iy}=\cos y+i\sin y,} allows expressing the polar form of b z {\displaystyle b^{z}} in terms of the real and imaginary parts of z, namely b x + i y = b x ( cos ⁡ ( y ln ⁡ b ) + i sin ⁡ ( y ln ⁡ b ) ) , {\displaystyle b^{x+iy}=b^{x}(\cos(y\ln b)+i\sin(y\ln b)),} where the absolute value of the trigonometric factor is one. This results from b x + i y = b x b i y = b x e i y ln ⁡ b = b x ( cos ⁡ ( y ln ⁡ b ) + i sin ⁡ ( y ln ⁡ b ) ) . {\displaystyle b^{x+iy}=b^{x}b^{iy}=b^{x}e^{iy\ln b}=b^{x}(\cos(y\ln b)+i\sin(y\ln b)).} == Non-integer exponents with a complex base == In the preceding sections, exponentiation with non-integer exponents has been defined for positive real bases only. For other bases, difficulties appear already with the apparently simple case of nth roots, that is, of exponents 1 / n , {\displaystyle 1/n,} where n is a positive integer. Although the general theory of exponentiation with non-integer exponents applies to nth roots, this case deserves to be considered first, since it does not need to use complex logarithms, and is therefore easier to understand. === nth roots of a complex number === Every nonzero complex number z may be written in polar form as z = ρ e i θ = ρ ( cos ⁡ θ + i sin ⁡ θ ) , {\displaystyle z=\rho e^{i\theta }=\rho (\cos \theta +i\sin \theta ),} where ρ {\displaystyle \rho } is the absolute value of z, and θ {\displaystyle \theta } is its argument. The argument is defined up to an integer multiple of 2π; this means that, if θ {\displaystyle \theta } is the argument of a complex number, then θ + 2 k π {\displaystyle \theta +2k\pi } is also an argument of the same complex number for every integer k {\displaystyle k} . The polar form of the product of two complex numbers is obtained by multiplying the absolute values and adding the arguments. 
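The expansion of b^(x+iy) via Euler's formula can be compared against Python's built-in complex power; a minimal sketch:

```python
import cmath
import math

# b**(x+iy) expanded via Euler's formula, against Python's complex power.
b, x, y = 3.0, 0.5, 2.0
lhs = b ** complex(x, y)
rhs = b ** x * complex(math.cos(y * math.log(b)), math.sin(y * math.log(b)))
assert cmath.isclose(lhs, rhs)
# The modulus of the trigonometric factor is one, so |b**(x+iy)| = b**x.
assert math.isclose(abs(lhs), b ** x)
```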
It follows that the polar form of an nth root of a complex number can be obtained by taking the nth root of the absolute value and dividing its argument by n: ( ρ e i θ ) 1 n = ρ n e i θ n . {\displaystyle \left(\rho e^{i\theta }\right)^{\frac {1}{n}}={\sqrt[{n}]{\rho }}\,e^{\frac {i\theta }{n}}.} If 2 π {\displaystyle 2\pi } is added to θ {\displaystyle \theta } , the complex number is not changed, but this adds 2 i π / n {\displaystyle 2i\pi /n} to the argument of the nth root, and provides a new nth root. This can be done n times ( k = 0 , 1 , . . . , n − 1 {\displaystyle k=0,1,...,n-1} ), and provides the n nth roots of the complex number: ( ρ e i ( θ + 2 k π ) ) 1 n = ρ n e i ( θ + 2 k π ) n . {\displaystyle \left(\rho e^{i(\theta +2k\pi )}\right)^{\frac {1}{n}}={\sqrt[{n}]{\rho }}\,e^{\frac {i(\theta +2k\pi )}{n}}.} It is usual to choose one of the n nth roots as the principal root. The common choice is the nth root for which − π < θ ≤ π , {\displaystyle -\pi <\theta \leq \pi ,} that is, the nth root that has the largest real part, and, if there are two, the one with positive imaginary part. This makes the principal nth root a continuous function in the whole complex plane, except for negative real values of the radicand. This function equals the usual nth root for positive real radicands. For negative real radicands, and odd exponents, the principal nth root is not real, although the usual nth root is real. Analytic continuation shows that the principal nth root is the unique complex differentiable function that extends the usual nth root to the complex plane without the nonpositive real numbers. If the complex number is moved around zero by increasing its argument, after an increment of 2 π , {\displaystyle 2\pi ,} the complex number comes back to its initial position, and its nth roots are permuted circularly (they are multiplied by e 2 i π / n {\displaystyle e^{2i\pi /n}} ). 
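The recipe above (nth root of the absolute value, argument split into n branches) translates directly into code; a Python sketch using the standard `cmath` module (the helper name `nth_roots` is ours):

```python
import cmath

# The n nth roots of z: take the nth root of |z| and divide the argument by n,
# shifting it by 2*pi*k for k = 0, 1, ..., n-1.
def nth_roots(z, n):
    rho, theta = abs(z), cmath.phase(z)
    return [rho ** (1 / n) * cmath.exp(1j * (theta + 2 * k * cmath.pi) / n)
            for k in range(n)]

roots = nth_roots(-8, 3)                      # the three cube roots of -8
assert all(cmath.isclose(r ** 3, -8) for r in roots)
assert any(cmath.isclose(r, -2) for r in roots)  # the usual real cube root
```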
This shows that it is not possible to define an nth root function that is continuous in the whole complex plane. ==== Roots of unity ==== The nth roots of unity are the n complex numbers w such that wn = 1, where n is a positive integer. They arise in various areas of mathematics, such as the discrete Fourier transform or algebraic solutions of algebraic equations (Lagrange resolvent). The n nth roots of unity are the n first powers of ω = e 2 π i n {\displaystyle \omega =e^{\frac {2\pi i}{n}}} , that is 1 = ω 0 = ω n , ω = ω 1 , ω 2 , . . . , ω n − 1 . {\displaystyle 1=\omega ^{0}=\omega ^{n},\omega =\omega ^{1},\omega ^{2},...,\omega ^{n-1}.} The nth roots of unity that have this generating property are called primitive nth roots of unity; they have the form ω k = e 2 k π i n , {\displaystyle \omega ^{k}=e^{\frac {2k\pi i}{n}},} with k coprime with n. The unique primitive square root of unity is − 1 ; {\displaystyle -1;} the primitive fourth roots of unity are i {\displaystyle i} and − i . {\displaystyle -i.} The nth roots of unity allow expressing all nth roots of a complex number z as the n products of a given nth root of z with an nth root of unity. Geometrically, the nth roots of unity lie on the unit circle of the complex plane at the vertices of a regular n-gon with one vertex on the real number 1. As the number e 2 π i n {\displaystyle e^{\frac {2\pi i}{n}}} is the primitive nth root of unity with the smallest positive argument, it is called the principal primitive nth root of unity, sometimes shortened as principal nth root of unity, although this terminology can be confused with the principal value of 1 1 / n {\displaystyle 1^{1/n}} , which is 1. === Complex exponentiation === Defining exponentiation with complex bases leads to difficulties that are similar to those described in the preceding section, except that there are, in general, infinitely many possible values for z w {\displaystyle z^{w}} . 
So, either a principal value is defined, which is not continuous for the values of z that are real and nonpositive, or z w {\displaystyle z^{w}} is defined as a multivalued function. In all cases, the complex logarithm is used to define complex exponentiation as z w = e w log ⁡ z , {\displaystyle z^{w}=e^{w\log z},} where log ⁡ z {\displaystyle \log z} is the variant of the complex logarithm that is used, which is a function or a multivalued function such that e log ⁡ z = z {\displaystyle e^{\log z}=z} for every z in its domain of definition. ==== Principal value ==== The principal value of the complex logarithm is the unique continuous function, commonly denoted log , {\displaystyle \log ,} such that, for every nonzero complex number z, e log ⁡ z = z , {\displaystyle e^{\log z}=z,} and the argument of z satisfies − π < Arg ⁡ z ≤ π . {\displaystyle -\pi <\operatorname {Arg} z\leq \pi .} The principal value of the complex logarithm is not defined for z = 0 , {\displaystyle z=0,} it is discontinuous at negative real values of z, and it is holomorphic (that is, complex differentiable) elsewhere. If z is real and positive, the principal value of the complex logarithm is the natural logarithm: log ⁡ z = ln ⁡ z . {\displaystyle \log z=\ln z.} The principal value of z w {\displaystyle z^{w}} is defined as z w = e w log ⁡ z , {\displaystyle z^{w}=e^{w\log z},} where log ⁡ z {\displaystyle \log z} is the principal value of the logarithm. The function ( z , w ) → z w {\displaystyle (z,w)\to z^{w}} is holomorphic except in the neighbourhood of the points where z is real and nonpositive. If z is real and positive, the principal value of z w {\displaystyle z^{w}} equals its usual value defined above. If w = 1 / n , {\displaystyle w=1/n,} where n is an integer, this principal value is the same as the one defined above. 
==== Multivalued function ==== In some contexts, there is a problem with the discontinuity of the principal values of log ⁡ z {\displaystyle \log z} and z w {\displaystyle z^{w}} at the negative real values of z. In this case, it is useful to consider these functions as multivalued functions. If log ⁡ z {\displaystyle \log z} denotes one of the values of the multivalued logarithm (typically its principal value), the other values are 2 i k π + log ⁡ z , {\displaystyle 2ik\pi +\log z,} where k is any integer. Similarly, if z w {\displaystyle z^{w}} is one value of the exponentiation, then the other values are given by e w ( 2 i k π + log ⁡ z ) = z w e 2 i k π w , {\displaystyle e^{w(2ik\pi +\log z)}=z^{w}e^{2ik\pi w},} where k is any integer. Different values of k give different values of z w {\displaystyle z^{w}} unless w is a rational number, that is, there is an integer d such that dw is an integer. This results from the periodicity of the exponential function, more specifically, that e a = e b {\displaystyle e^{a}=e^{b}} if and only if a − b {\displaystyle a-b} is an integer multiple of 2 π i . {\displaystyle 2\pi i.} If w = m n {\displaystyle w={\frac {m}{n}}} is a rational number with m and n coprime integers with n > 0 , {\displaystyle n>0,} then z w {\displaystyle z^{w}} has exactly n values. In the case m = 1 , {\displaystyle m=1,} these values are the same as those described in § nth roots of a complex number. If w is an integer, there is only one value that agrees with that of § Integer exponents. The multivalued exponentiation is holomorphic for z ≠ 0 , {\displaystyle z\neq 0,} in the sense that its graph consists of several sheets that each define a holomorphic function in the neighborhood of every point. If z varies continuously along a circle around 0, then, after a turn, the value of z w {\displaystyle z^{w}} has moved to another sheet. 
==== Computation ==== The canonical form x + i y {\displaystyle x+iy} of z w {\displaystyle z^{w}} can be computed from the canonical form of z and w. Although this can be described by a single formula, it is clearer to split the computation in several steps. Polar form of z. If z = a + i b {\displaystyle z=a+ib} is the canonical form of z (a and b being real), then its polar form is z = ρ e i θ = ρ ( cos ⁡ θ + i sin ⁡ θ ) , {\displaystyle z=\rho e^{i\theta }=\rho (\cos \theta +i\sin \theta ),} with ρ = a 2 + b 2 {\textstyle \rho ={\sqrt {a^{2}+b^{2}}}} and θ = atan2 ⁡ ( b , a ) {\displaystyle \theta =\operatorname {atan2} (b,a)} , where ⁠ atan2 {\displaystyle \operatorname {atan2} } ⁠ is the two-argument arctangent function. Logarithm of z. The principal value of this logarithm is log ⁡ z = ln ⁡ ρ + i θ , {\displaystyle \log z=\ln \rho +i\theta ,} where ln {\displaystyle \ln } denotes the natural logarithm. The other values of the logarithm are obtained by adding 2 i k π {\displaystyle 2ik\pi } for any integer k. Canonical form of w log ⁡ z . {\displaystyle w\log z.} If w = c + d i {\displaystyle w=c+di} with c and d real, the values of w log ⁡ z {\displaystyle w\log z} are w log ⁡ z = ( c ln ⁡ ρ − d θ − 2 d k π ) + i ( d ln ⁡ ρ + c θ + 2 c k π ) , {\displaystyle w\log z=(c\ln \rho -d\theta -2dk\pi )+i(d\ln \rho +c\theta +2ck\pi ),} the principal value corresponding to k = 0. {\displaystyle k=0.} Final result. Using the identities e x + y = e x e y {\displaystyle e^{x+y}=e^{x}e^{y}} and e y ln ⁡ x = x y , {\displaystyle e^{y\ln x}=x^{y},} one gets z w = ρ c e − d ( θ + 2 k π ) ( cos ⁡ ( d ln ⁡ ρ + c θ + 2 c k π ) + i sin ⁡ ( d ln ⁡ ρ + c θ + 2 c k π ) ) , {\displaystyle z^{w}=\rho ^{c}e^{-d(\theta +2k\pi )}\left(\cos(d\ln \rho +c\theta +2ck\pi )+i\sin(d\ln \rho +c\theta +2ck\pi )\right),} with k = 0 {\displaystyle k=0} for the principal value. 
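The four steps above (polar form, logarithm, canonical form of w log z, exponentiation) can be sketched as a single Python function; the name `complex_pow` is ours, and `k` selects the branch, with `k = 0` giving the principal value:

```python
import math

# z**w computed in steps: polar form of z, branch of log z, w * log z, exp.
def complex_pow(z, w, k=0):
    a, b = z.real, z.imag
    rho, theta = math.hypot(a, b), math.atan2(b, a)   # polar form of z
    c, d = w.real, w.imag
    arg = theta + 2 * k * math.pi                     # chosen branch of the argument
    re = c * math.log(rho) - d * arg                  # Re(w log z)
    im = d * math.log(rho) + c * arg                  # Im(w log z)
    return math.exp(re) * complex(math.cos(im), math.sin(im))

# Principal value of i**i is e**(-pi/2), approximately 0.2079, a real number.
v = complex_pow(1j, 1j)
assert math.isclose(v.real, math.exp(-math.pi / 2)) and abs(v.imag) < 1e-12
```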
===== Examples ===== i i {\displaystyle i^{i}} The polar form of i is i = e i π / 2 , {\displaystyle i=e^{i\pi /2},} and the values of log ⁡ i {\displaystyle \log i} are thus log ⁡ i = i ( π 2 + 2 k π ) . {\displaystyle \log i=i\left({\frac {\pi }{2}}+2k\pi \right).} It follows that i i = e i log ⁡ i = e − π 2 e − 2 k π . {\displaystyle i^{i}=e^{i\log i}=e^{-{\frac {\pi }{2}}}e^{-2k\pi }.} So, all values of i i {\displaystyle i^{i}} are real, the principal one being e − π 2 ≈ 0.2079. {\displaystyle e^{-{\frac {\pi }{2}}}\approx 0.2079.} ( − 2 ) 3 + 4 i {\displaystyle (-2)^{3+4i}} Similarly, the polar form of −2 is − 2 = 2 e i π . {\displaystyle -2=2e^{i\pi }.} So, the above described method gives the values ( − 2 ) 3 + 4 i = 2 3 e − 4 ( π + 2 k π ) ( cos ⁡ ( 4 ln ⁡ 2 + 3 ( π + 2 k π ) ) + i sin ⁡ ( 4 ln ⁡ 2 + 3 ( π + 2 k π ) ) ) = − 2 3 e − 4 ( π + 2 k π ) ( cos ⁡ ( 4 ln ⁡ 2 ) + i sin ⁡ ( 4 ln ⁡ 2 ) ) . {\displaystyle {\begin{aligned}(-2)^{3+4i}&=2^{3}e^{-4(\pi +2k\pi )}(\cos(4\ln 2+3(\pi +2k\pi ))+i\sin(4\ln 2+3(\pi +2k\pi )))\\&=-2^{3}e^{-4(\pi +2k\pi )}(\cos(4\ln 2)+i\sin(4\ln 2)).\end{aligned}}} In this case, all the values have the same argument 4 ln ⁡ 2 , {\displaystyle 4\ln 2,} and different absolute values. In both examples, all values of z w {\displaystyle z^{w}} have the same argument. More generally, this is true if and only if the real part of w is an integer. ==== Failure of power and logarithm identities ==== Some identities for powers and logarithms for positive real numbers will fail for complex numbers, no matter how complex powers and complex logarithms are defined as single-valued functions. For example: == Irrationality and transcendence == If b is a positive real algebraic number, and x is a rational number, then bx is an algebraic number. This results from the theory of algebraic extensions. This remains true if b is any algebraic number, in which case, all values of bx (as a multivalued function) are algebraic. 
If x is irrational (that is, not rational), and both b and x are algebraic, the Gelfond–Schneider theorem asserts that all values of bx are transcendental (that is, not algebraic), except if b equals 0 or 1. In other words, if x is irrational and b ∉ { 0 , 1 } , {\displaystyle b\not \in \{0,1\},} then at least one of b, x and bx is transcendental. == Integer powers in algebra == The definition of exponentiation with positive integer exponents as repeated multiplication may apply to any associative operation denoted as a multiplication. The definition of x0 further requires the existence of a multiplicative identity. An algebraic structure consisting of a set together with an associative operation denoted multiplicatively, and a multiplicative identity denoted by 1 is a monoid. In such a monoid, exponentiation of an element x is defined inductively by x 0 = 1 , {\displaystyle x^{0}=1,} x n + 1 = x x n {\displaystyle x^{n+1}=xx^{n}} for every nonnegative integer n. If n is a negative integer, x n {\displaystyle x^{n}} is defined only if x has a multiplicative inverse. In this case, the inverse of x is denoted x−1, and xn is defined as ( x − 1 ) − n . {\displaystyle \left(x^{-1}\right)^{-n}.} Exponentiation with integer exponents obeys the following laws, for x and y in the algebraic structure, and m and n integers: x 0 = 1 x m + n = x m x n ( x m ) n = x m n ( x y ) n = x n y n if x y = y x , and, in particular, if the multiplication is commutative. {\displaystyle {\begin{aligned}x^{0}&=1\\x^{m+n}&=x^{m}x^{n}\\(x^{m})^{n}&=x^{mn}\\(xy)^{n}&=x^{n}y^{n}\quad {\text{if }}xy=yx,{\text{and, in particular, if the multiplication is commutative.}}\end{aligned}}} These definitions are widely used in many areas of mathematics, notably for groups, rings, fields, and square matrices (which form a ring). They apply also to functions from a set to itself, which form a monoid under function composition. 
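The inductive definition x^0 = 1, x^(n+1) = x·x^n needs only an associative operation and an identity; a Python sketch in the monoid of 2×2 integer matrices (the helper names are ours):

```python
# Exponentiation in a monoid: only an associative operation and an identity
# element are required. Here the monoid is 2x2 integer matrices.
def monoid_pow(x, n, op, identity):
    result = identity
    for _ in range(n):            # x**(n+1) = x * x**n, unrolled as a loop
        result = op(result, x)
    return result

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
F = [[1, 1], [1, 0]]              # powers of this matrix contain Fibonacci numbers
assert monoid_pow(F, 10, matmul, I) == [[89, 55], [55, 34]]
```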
This includes, as specific instances, geometric transformations, and endomorphisms of any mathematical structure. When there are several operations that may be repeated, it is common to indicate the repeated operation by placing its symbol in the superscript, before the exponent. For example, if f is a real function whose values can be multiplied, f n {\displaystyle f^{n}} denotes the exponentiation with respect to multiplication, and f ∘ n {\displaystyle f^{\circ n}} may denote exponentiation with respect to function composition. That is, ( f n ) ( x ) = ( f ( x ) ) n = f ( x ) f ( x ) ⋯ f ( x ) , {\displaystyle (f^{n})(x)=(f(x))^{n}=f(x)\,f(x)\cdots f(x),} and ( f ∘ n ) ( x ) = f ( f ( ⋯ f ( f ( x ) ) ⋯ ) ) . {\displaystyle (f^{\circ n})(x)=f(f(\cdots f(f(x))\cdots )).} Commonly, ( f n ) ( x ) {\displaystyle (f^{n})(x)} is denoted f ( x ) n , {\displaystyle f(x)^{n},} while ( f ∘ n ) ( x ) {\displaystyle (f^{\circ n})(x)} is denoted f n ( x ) . {\displaystyle f^{n}(x).} === In a group === A multiplicative group is a set with an associative operation denoted as multiplication, that has an identity element, and such that every element has an inverse. So, if G is a group, x n {\displaystyle x^{n}} is defined for every x ∈ G {\displaystyle x\in G} and every integer n. The set of all powers of an element of a group forms a subgroup. A group (or subgroup) that consists of all powers of a specific element x is the cyclic group generated by x. If all the powers of x are distinct, the group is isomorphic to the additive group Z {\displaystyle \mathbb {Z} } of the integers. Otherwise, the cyclic group is finite (it has a finite number of elements), and its number of elements is the order of x. If the order of x is n, then x n = x 0 = 1 , {\displaystyle x^{n}=x^{0}=1,} and the cyclic group generated by x consists of the n first powers of x (starting indifferently from the exponent 0 or 1). Orders of elements play a fundamental role in group theory. 
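The distinction drawn above between the pointwise power f^n and the iterated composition f^∘n can be illustrated with a short Python sketch (both helper names are ours):

```python
# f**n (pointwise power of values) versus f composed with itself n times.
def f(x):
    return x + 3

def pointwise_pow(f, n):
    return lambda x: f(x) ** n        # (f^n)(x) = (f(x))^n

def compose_pow(f, n):
    def g(x):                         # (f o ... o f)(x), n applications
        for _ in range(n):
            x = f(x)
        return x
    return g

assert pointwise_pow(f, 2)(4) == 49   # (4 + 3)**2
assert compose_pow(f, 2)(4) == 10     # (4 + 3) + 3
```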
For example, the order of an element in a finite group is always a divisor of the number of elements of the group (the order of the group). The possible orders of group elements are important in the study of the structure of a group (see Sylow theorems), and in the classification of finite simple groups. Superscript notation is also used for conjugation; that is, gh = h−1gh, where g and h are elements of a group. This notation cannot be confused with exponentiation, since the superscript is not an integer. The motivation of this notation is that conjugation obeys some of the laws of exponentiation, namely ( g h ) k = g h k {\displaystyle (g^{h})^{k}=g^{hk}} and ( g h ) k = g k h k . {\displaystyle (gh)^{k}=g^{k}h^{k}.} === In a ring === In a ring, it may occur that some nonzero elements satisfy x n = 0 {\displaystyle x^{n}=0} for some integer n. Such an element is said to be nilpotent. In a commutative ring, the nilpotent elements form an ideal, called the nilradical of the ring. If the nilradical is reduced to the zero ideal (that is, if x ≠ 0 {\displaystyle x\neq 0} implies x n ≠ 0 {\displaystyle x^{n}\neq 0} for every positive integer n), the commutative ring is said to be reduced. Reduced rings are important in algebraic geometry, since the coordinate ring of an affine algebraic set is always a reduced ring. More generally, given an ideal I in a commutative ring R, the set of the elements of R that have a power in I is an ideal, called the radical of I. The nilradical is the radical of the zero ideal. A radical ideal is an ideal that equals its own radical. In a polynomial ring k [ x 1 , … , x n ] {\displaystyle k[x_{1},\ldots ,x_{n}]} over a field k, an ideal is radical if and only if it is the set of all polynomials that are zero on an affine algebraic set (this is a consequence of Hilbert's Nullstellensatz). === Matrices and linear operators === If A is a square matrix, then the product of A with itself n times is called the matrix power. 
Also A 0 {\displaystyle A^{0}} is defined to be the identity matrix, and if A is invertible, then A − n = ( A − 1 ) n {\displaystyle A^{-n}=\left(A^{-1}\right)^{n}} . Matrix powers appear often in the context of discrete dynamical systems, where the matrix A expresses a transition from a state vector x of some system to the next state Ax of the system. This is the standard interpretation of a Markov chain, for example. Then A 2 x {\displaystyle A^{2}x} is the state of the system after two time steps, and so forth: A n x {\displaystyle A^{n}x} is the state of the system after n time steps. The matrix power A n {\displaystyle A^{n}} is the transition matrix between the state now and the state at a time n steps in the future. So computing matrix powers is equivalent to solving the evolution of the dynamical system. In many cases, matrix powers can be expediently computed by using eigenvalues and eigenvectors. Apart from matrices, more general linear operators can also be exponentiated. An example is the derivative operator of calculus, d / d x {\displaystyle d/dx} , which is a linear operator acting on functions f ( x ) {\displaystyle f(x)} to give a new function ( d / d x ) f ( x ) = f ′ ( x ) {\displaystyle (d/dx)f(x)=f'(x)} . The nth power of the differentiation operator is the nth derivative: ( d d x ) n f ( x ) = d n d x n f ( x ) = f ( n ) ( x ) . {\displaystyle \left({\frac {d}{dx}}\right)^{n}f(x)={\frac {d^{n}}{dx^{n}}}f(x)=f^{(n)}(x).} These examples are for discrete exponents of linear operators, but in many circumstances it is also desirable to define powers of such operators with continuous exponents. This is the starting point of the mathematical theory of semigroups. Just as computing matrix powers with discrete exponents solves discrete dynamical systems, so does computing matrix powers with continuous exponents solve systems with continuous dynamics. 
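The equivalence stated above, that n single-step transitions equal one application of A^n, follows from associativity; a self-contained Python sketch (helper names ours):

```python
# State evolution: applying A to x n times equals one application of A**n.
def matvec(a, v):
    return [sum(a[i][j] * v[j] for j in range(len(v))) for i in range(len(a))]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(a, n):
    result = [[1 if i == j else 0 for j in range(len(a))] for i in range(len(a))]
    for _ in range(n):
        result = matmul(result, a)
    return result

A = [[0, 1], [1, 1]]
x = [1, 0]
step5 = x
for _ in range(5):                   # five single time steps
    step5 = matvec(A, step5)
assert step5 == matvec(matpow(A, 5), x)   # one jump of five steps
```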
Examples include approaches to solving the heat equation, Schrödinger equation, wave equation, and other partial differential equations including a time evolution. The special case of exponentiating the derivative operator to a non-integer power is called the fractional derivative which, together with the fractional integral, is one of the basic operations of the fractional calculus. === Finite fields === A field is an algebraic structure in which multiplication, addition, subtraction, and division are defined and satisfy the properties that multiplication is associative and every nonzero element has a multiplicative inverse. This implies that exponentiation with integer exponents is well-defined, except for nonpositive powers of 0. Common examples are the field of complex numbers, the real numbers and the rational numbers, considered earlier in this article, which are all infinite. A finite field is a field with a finite number of elements. This number of elements is either a prime number or a prime power; that is, it has the form q = p k , {\displaystyle q=p^{k},} where p is a prime number, and k is a positive integer. For every such q, there are fields with q elements. The fields with q elements are all isomorphic, which allows, in general, working as if there were only one field with q elements, denoted F q . {\displaystyle \mathbb {F} _{q}.} One has x q = x {\displaystyle x^{q}=x} for every x ∈ F q . {\displaystyle x\in \mathbb {F} _{q}.} A primitive element in F q {\displaystyle \mathbb {F} _{q}} is an element g such that the set of the q − 1 first powers of g (that is, { g 1 = g , g 2 , … , g q − 1 = g 0 = 1 } {\displaystyle \{g^{1}=g,g^{2},\ldots ,g^{q-1}=g^{0}=1\}} ) equals the set of the nonzero elements of F q . {\displaystyle \mathbb {F} _{q}.} There are φ ( q − 1 ) {\displaystyle \varphi (q-1)} primitive elements in F q , {\displaystyle \mathbb {F} _{q},} where φ {\displaystyle \varphi } is Euler's totient function. 
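These facts are easy to check numerically in the prime field of integers modulo 7 (so q = p = 7 and q − 1 = 6); the brute-force order computation below is an illustrative sketch, not an efficient method:

```python
from math import gcd

p = 7  # a prime, so the integers mod p form the field F_p (here q = p)

def order(g):
    """Multiplicative order of g modulo p."""
    k, x = 1, g % p
    while x != 1:
        x = x * g % p
        k += 1
    return k

assert all(pow(x, p, p) == x % p for x in range(p))      # x^q = x for every x in F_q
primitive = [g for g in range(1, p) if order(g) == p - 1]
phi = sum(1 for k in range(1, p) if gcd(k, p - 1) == 1)  # Euler's totient of q - 1
assert primitive == [3, 5] and len(primitive) == phi == 2
```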
In F q , {\displaystyle \mathbb {F} _{q},} the freshman's dream identity ( x + y ) p = x p + y p {\displaystyle (x+y)^{p}=x^{p}+y^{p}} is true for the exponent p. As x p = x {\displaystyle x^{p}=x} for every x in the prime field F p , {\displaystyle \mathbb {F} _{p},} it follows that the map F : F q → F q x ↦ x p {\displaystyle {\begin{aligned}F\colon {}&\mathbb {F} _{q}\to \mathbb {F} _{q}\\&x\mapsto x^{p}\end{aligned}}} is linear over F p , {\displaystyle \mathbb {F} _{p},} and is a field automorphism, called the Frobenius automorphism. If q = p k , {\displaystyle q=p^{k},} the field F q {\displaystyle \mathbb {F} _{q}} has k automorphisms, which are the k first powers (under composition) of F. In other words, the Galois group of F q {\displaystyle \mathbb {F} _{q}} is cyclic of order k, generated by the Frobenius automorphism. The Diffie–Hellman key exchange is an application of exponentiation in finite fields that is widely used for secure communications. It uses the fact that exponentiation is computationally inexpensive, whereas the inverse operation, the discrete logarithm, is computationally expensive. More precisely, if g is a primitive element in F q , {\displaystyle \mathbb {F} _{q},} then g e {\displaystyle g^{e}} can be efficiently computed with exponentiation by squaring for any e, even if q is large, while there is no known computationally practical algorithm that allows retrieving e from g e {\displaystyle g^{e}} if q is sufficiently large. == Powers of sets == The Cartesian product of two sets S and T is the set of the ordered pairs ( x , y ) {\displaystyle (x,y)} such that x ∈ S {\displaystyle x\in S} and y ∈ T . {\displaystyle y\in T.} This operation is neither properly commutative nor associative, but it has these properties up to canonical isomorphisms, which allow identifying, for example, ( x , ( y , z ) ) , {\displaystyle (x,(y,z)),} ( ( x , y ) , z ) , {\displaystyle ((x,y),z),} and ( x , y , z ) . 
{\displaystyle (x,y,z).} This allows defining the nth power S n {\displaystyle S^{n}} of a set S as the set of all n-tuples ( x 1 , … , x n ) {\displaystyle (x_{1},\ldots ,x_{n})} of elements of S. When S is endowed with some structure, it is frequent that S n {\displaystyle S^{n}} is naturally endowed with a similar structure. In this case, the term "direct product" is generally used instead of "Cartesian product", and exponentiation denotes the product structure. For example R n {\displaystyle \mathbb {R} ^{n}} (where R {\displaystyle \mathbb {R} } denotes the real numbers) denotes the Cartesian product of n copies of R , {\displaystyle \mathbb {R} ,} as well as their direct product as vector spaces, topological spaces, rings, etc. === Sets as exponents === An n-tuple ( x 1 , … , x n ) {\displaystyle (x_{1},\ldots ,x_{n})} of elements of S can be considered as a function from { 1 , … , n } {\displaystyle \{1,\ldots ,n\}} to S. This generalizes to the following notation. Given two sets S and T, the set of all functions from T to S is denoted S T {\displaystyle S^{T}} . This exponential notation is justified by the following canonical isomorphisms (for the first one, see Currying): ( S T ) U ≅ S T × U , {\displaystyle (S^{T})^{U}\cong S^{T\times U},} S T ⊔ U ≅ S T × S U , {\displaystyle S^{T\sqcup U}\cong S^{T}\times S^{U},} where × {\displaystyle \times } denotes the Cartesian product, and ⊔ {\displaystyle \sqcup } the disjoint union. One can use sets as exponents for other operations on sets, typically for direct sums of abelian groups, vector spaces, or modules. For distinguishing direct sums from direct products, the exponent of a direct sum is placed between parentheses. For example, R N {\displaystyle \mathbb {R} ^{\mathbb {N} }} denotes the vector space of the infinite sequences of real numbers, and R ( N ) {\displaystyle \mathbb {R} ^{(\mathbb {N} )}} the vector space of those sequences that have a finite number of nonzero elements. 
The latter has a basis consisting of the sequences with exactly one nonzero element that equals 1, while the Hamel bases of the former cannot be explicitly described (because their existence involves Zorn's lemma). In this context, 2 can represent the set { 0 , 1 } . {\displaystyle \{0,1\}.} So, 2 S {\displaystyle 2^{S}} denotes the power set of S, that is, the set of the functions from S to { 0 , 1 } , {\displaystyle \{0,1\},} which can be identified with the set of the subsets of S, by mapping each function to the inverse image of 1. This fits in with the exponentiation of cardinal numbers, in the sense that |S^T| = |S|^|T|, where |X| is the cardinality of X. === In category theory === In the category of sets, the morphisms between sets X and Y are the functions from X to Y. It follows that the set of the functions from X to Y that is denoted Y X {\displaystyle Y^{X}} in the preceding section can also be denoted hom ⁡ ( X , Y ) . {\displaystyle \hom(X,Y).} The isomorphism ( S T ) U ≅ S T × U {\displaystyle (S^{T})^{U}\cong S^{T\times U}} can be rewritten hom ⁡ ( U , S T ) ≅ hom ⁡ ( T × U , S ) . {\displaystyle \hom(U,S^{T})\cong \hom(T\times U,S).} This means the functor "exponentiation to the power T " is a right adjoint to the functor "direct product with T ". This generalizes to the definition of exponentiation in a category in which finite direct products exist: in such a category, the functor X → X T {\displaystyle X\to X^{T}} is, if it exists, a right adjoint to the functor Y → T × Y . {\displaystyle Y\to T\times Y.} A category is called a Cartesian closed category if direct products exist and the functor Y → T × Y {\displaystyle Y\to T\times Y} has a right adjoint for every T. == Repeated exponentiation == Just as exponentiation of natural numbers is motivated by repeated multiplication, it is possible to define an operation based on repeated exponentiation; this operation is sometimes called hyper-4 or tetration. 
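Tetration can be sketched directly as repeated exponentiation, evaluating the power tower from the top down (the helper name `tetration` is a choice made here):

```python
def tetration(a, n):
    """Hyper-4: a power tower of n copies of a, evaluated top-down."""
    result = 1
    for _ in range(n):
        result = a ** result
    return result

assert tetration(3, 2) == 27             # 3^3
assert tetration(3, 3) == 7625597484987  # 3^(3^3) = 3^27
```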
Iterating tetration leads to another operation, and so on, a concept named hyperoperation. This sequence of operations is expressed by the Ackermann function and Knuth's up-arrow notation. Just as exponentiation grows faster than multiplication, which grows faster than addition, tetration grows faster than exponentiation. Evaluated at (3, 3), the functions addition, multiplication, exponentiation, and tetration yield 6, 9, 27, and 7625597484987 (that is, 3 27 = 3 3 3 {\displaystyle 3^{27}=3^{3^{3}}} ), respectively. == Limits of powers == Zero to the power of zero gives a number of examples of limits that are of the indeterminate form 00. The limits in these examples exist, but have different values, showing that the two-variable function xy has no limit at the point (0, 0). One may consider at what points this function does have a limit. More precisely, consider the function f ( x , y ) = x y {\displaystyle f(x,y)=x^{y}} defined on D = { ( x , y ) ∈ R 2 : x > 0 } {\displaystyle D=\{(x,y)\in \mathbf {R} ^{2}:x>0\}} . Then D can be viewed as a subset of R̄2 (that is, the set of all pairs (x, y) with x, y belonging to the extended real number line R̄ = [−∞, +∞], endowed with the product topology), which will contain the points at which the function f has a limit. In fact, f has a limit at all accumulation points of D, except for (0, 0), (+∞, 0), (1, +∞) and (1, −∞). Accordingly, this allows one to define the powers xy by continuity whenever 0 ≤ x ≤ +∞, −∞ ≤ y ≤ +∞, except for 00, (+∞)0, 1+∞ and 1−∞, which remain indeterminate forms. Under this definition by continuity, we obtain: x+∞ = +∞ and x−∞ = 0, when 1 < x ≤ +∞. x+∞ = 0 and x−∞ = +∞, when 0 < x < 1. 0y = 0 and (+∞)y = +∞, when 0 < y ≤ +∞. 0y = +∞ and (+∞)y = 0, when −∞ ≤ y < 0. These powers are obtained by taking limits of xy for positive values of x. This method does not permit a definition of xy when x < 0, since pairs (x, y) with x < 0 are not accumulation points of D. 
On the other hand, when n is an integer, the power x^n is already meaningful for all values of x, including negative ones. This may make the definition 0^n = +∞ obtained above for negative n problematic when n is odd, since in this case x^n → +∞ as x tends to 0 through positive values, but not negative ones. == Efficient computation with integer exponents == Computing b^n using iterated multiplication requires n − 1 multiplication operations, but it can be computed more efficiently than that, as illustrated by the following example. To compute 2^100, apply Horner's rule to the exponent 100 written in binary: 100 = 2 2 + 2 5 + 2 6 = 2 2 ( 1 + 2 3 ( 1 + 2 ) ) {\displaystyle 100=2^{2}+2^{5}+2^{6}=2^{2}(1+2^{3}(1+2))} . Then compute the following terms in order, reading Horner's rule from right to left. This series of steps only requires 8 multiplications instead of 99. In general, the number of multiplication operations required to compute b^n can be reduced to ♯ n + ⌊ log 2 ⁡ n ⌋ − 1 , {\displaystyle \sharp n+\lfloor \log _{2}n\rfloor -1,} by using exponentiation by squaring, where ♯ n {\displaystyle \sharp n} denotes the number of 1s in the binary representation of n. For some exponents (100 is not among them), the number of multiplications can be further reduced by computing and using the minimal addition-chain exponentiation. Finding the minimal sequence of multiplications (the minimal-length addition chain for the exponent) for b^n is a difficult problem, for which no efficient algorithms are currently known (see Subset sum problem), but many reasonably efficient heuristic algorithms are available. However, in practical computations, exponentiation by squaring is efficient enough, and much easier to implement. == Iterated functions == Function composition is a binary operation that is defined on functions such that the codomain of the function written on the right is included in the domain of the function written on the left. 
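The multiplication count ♯n + ⌊log2 n⌋ − 1 for exponentiation by squaring can be checked concretely; the sketch below (helper name and counter are illustrative) scans the exponent's binary digits from least significant to most significant:

```python
def power(b, n):
    """Exponentiation by squaring, counting the multiplications performed."""
    result, square, mults = None, b, 0
    while n:
        if n & 1:                 # this binary digit of n is 1
            if result is None:
                result = square   # first factor: no multiplication needed
            else:
                result *= square
                mults += 1
        n >>= 1
        if n:
            square *= square      # one squaring per remaining binary digit
            mults += 1
    return result, mults

value, mults = power(2, 100)
assert value == 2 ** 100
assert mults == 8  # matches ♯100 + floor(log2(100)) - 1 = 3 + 6 - 1, and 8 < 99
```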
It is denoted g ∘ f , {\displaystyle g\circ f,} and defined as ( g ∘ f ) ( x ) = g ( f ( x ) ) {\displaystyle (g\circ f)(x)=g(f(x))} for every x in the domain of f. If the domain of a function f equals its codomain, one may compose the function with itself an arbitrary number of times, and this defines the nth power of the function under composition, commonly called the nth iterate of the function. Thus f n {\displaystyle f^{n}} generally denotes the nth iterate of f; for example, f 3 ( x ) {\displaystyle f^{3}(x)} means f ( f ( f ( x ) ) ) . {\displaystyle f(f(f(x))).} When a multiplication is defined on the codomain of the function, this defines a multiplication on functions, the pointwise multiplication, which induces another exponentiation. When using functional notation, the two kinds of exponentiation are generally distinguished by placing the exponent of the functional iteration before the parentheses enclosing the arguments of the function, and placing the exponent of pointwise multiplication after the parentheses. Thus f 2 ( x ) = f ( f ( x ) ) , {\displaystyle f^{2}(x)=f(f(x)),} and f ( x ) 2 = f ( x ) ⋅ f ( x ) . {\displaystyle f(x)^{2}=f(x)\cdot f(x).} When functional notation is not used, disambiguation is often done by placing the composition symbol before the exponent; for example f ∘ 3 = f ∘ f ∘ f , {\displaystyle f^{\circ 3}=f\circ f\circ f,} and f 3 = f ⋅ f ⋅ f . {\displaystyle f^{3}=f\cdot f\cdot f.} For historical reasons, the exponent of a repeated multiplication is placed before the argument for some specific functions, typically the trigonometric functions. So, sin 2 ⁡ x {\displaystyle \sin ^{2}x} and sin 2 ⁡ ( x ) {\displaystyle \sin ^{2}(x)} both mean sin ⁡ ( x ) ⋅ sin ⁡ ( x ) {\displaystyle \sin(x)\cdot \sin(x)} and not sin ⁡ ( sin ⁡ ( x ) ) , {\displaystyle \sin(\sin(x)),} which, in any case, is rarely considered. Historically, several variants of these notations were used by different authors. 
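The two exponentiations on functions can be contrasted in a short sketch (the helper `iterate` is illustrative):

```python
def iterate(f, n):
    """n-th iterate of f under composition: f^n(x) = f(f(...f(x)...))."""
    def fn(x):
        for _ in range(n):
            x = f(x)
        return x
    return fn

f = lambda x: x + 3
assert iterate(f, 3)(1) == 10  # composition power: f(f(f(1)))
assert f(1) ** 3 == 64         # pointwise power: f(1) * f(1) * f(1) = 4^3
```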
In this context, the exponent − 1 {\displaystyle -1} denotes always the inverse function, if it exists. So sin − 1 ⁡ x = sin − 1 ⁡ ( x ) = arcsin ⁡ x . {\displaystyle \sin ^{-1}x=\sin ^{-1}(x)=\arcsin x.} For the multiplicative inverse fractions are generally used as in 1 / sin ⁡ ( x ) = 1 sin ⁡ x . {\displaystyle 1/\sin(x)={\frac {1}{\sin x}}.} == In programming languages == Programming languages generally express exponentiation either as an infix operator or as a function application, as they do not support superscripts. The most common operator symbol for exponentiation is the caret (^). The original version of ASCII included an uparrow symbol (↑), intended for exponentiation, but this was replaced by the caret in 1967, so the caret became usual in programming languages. The notations include: x ^ y: AWK, BASIC, J, MATLAB, Wolfram Language (Mathematica), R, Microsoft Excel, Analytica, TeX (and its derivatives), TI-BASIC, bc (for integer exponents), Haskell (for nonnegative integer exponents), Lua, and most computer algebra systems. x ** y. The Fortran character set did not include lowercase characters or punctuation symbols other than +-*/()&=.,' and so used ** for exponentiation (the initial version used a xx b instead.). Many other languages followed suit: Ada, Z shell, KornShell, Bash, COBOL, CoffeeScript, Fortran, FoxPro, Gnuplot, Groovy, JavaScript, OCaml, ooRexx, F#, Perl, PHP, PL/I, Python, Rexx, Ruby, SAS, Seed7, Tcl, ABAP, Mercury, Haskell (for floating-point exponents), Turing, and VHDL. x ↑ y: Algol Reference language, Commodore BASIC, TRS-80 Level II/III BASIC. x ^^ y: Haskell (for fractional base, integer exponents), D. x⋆y: APL. In most programming languages with an infix exponentiation operator, it is right-associative, that is, a^b^c is interpreted as a^(b^c). This is because (a^b)^c is equal to a^(b*c) and thus not as useful. In some languages, it is left-associative, notably in Algol, MATLAB, and the Microsoft Excel formula language. 
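Python's ** operator illustrates the right-associative convention, and why the left-associative grouping is less useful:

```python
# Right-associativity: a ** b ** c parses as a ** (b ** c).
assert 2 ** 3 ** 2 == 2 ** (3 ** 2) == 512
# The left-associative grouping collapses to a single power, since (a^b)^c = a^(b*c):
assert (2 ** 3) ** 2 == 2 ** (3 * 2) == 64
```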
Other programming languages use functional notation: (expt x y): Common Lisp. pown x y: F# (for integer base, integer exponent). Still others only provide exponentiation as part of standard libraries: pow(x, y): C, C++ (in math library). Math.Pow(x, y): C#. math:pow(X, Y): Erlang. Math.pow(x, y): Java. [Math]::Pow(x, y): PowerShell. In some statically typed languages that prioritize type safety such as Rust, exponentiation is performed via a multitude of methods: x.pow(y) for x and y as integers x.powf(y) for x and y as floating-point numbers x.powi(y) for x as a float and y as an integer == See also == == Notes == == References ==
Wikipedia/Power_function
In mathematics, the Lambert W function, also called the omega function or product logarithm, is a multivalued function, namely the branches of the converse relation of the function f(w) = we^w, where w is any complex number and e^w is the exponential function. The function is named after Johann Heinrich Lambert, who considered a related problem in 1758. Building on Lambert's work, Leonhard Euler described the W function per se in 1783. For each integer k there is one branch, denoted by Wk(z), which is a complex-valued function of one complex argument. W0 is known as the principal branch. These functions have the following property: if z and w are any complex numbers, then w e w = z {\displaystyle we^{w}=z} holds if and only if w = W k ( z ) for some integer k . {\displaystyle w=W_{k}(z)\ \ {\text{ for some integer }}k.} When dealing with real numbers only, the two branches W0 and W−1 suffice: for real numbers x and y the equation y e y = x {\displaystyle ye^{y}=x} can be solved for y only if x ≥ −⁠1/e⁠; it yields y = W0(x) if x ≥ 0 and the two values y = W0(x) and y = W−1(x) if −⁠1/e⁠ ≤ x < 0. The Lambert W function's branches cannot be expressed in terms of elementary functions. It is useful in combinatorics, for instance, in the enumeration of trees. It can be used to solve various equations involving exponentials (e.g. the maxima of the Planck, Bose–Einstein, and Fermi–Dirac distributions) and also occurs in the solution of delay differential equations, such as y′(t) = a y(t − 1). In biochemistry, and in particular enzyme kinetics, a closed-form solution for the time-course kinetics analysis of Michaelis–Menten kinetics is described in terms of the Lambert W function. == Terminology == The notation convention chosen here (with W0 and W−1) follows the canonical reference on the Lambert W function by Corless, Gonnet, Hare, Jeffrey and Knuth. 
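Although the branches cannot be written in terms of elementary functions, the principal branch is easy to evaluate numerically. The Newton iteration below on we^w − x = 0 is an illustrative sketch (the function name, starting guess, and tolerances are choices made here, not part of the standard definition):

```python
import math

def lambert_w0(x, tol=1e-14):
    """Principal branch W0(x) for real x > -1/e, via Newton's method on w*e^w - x = 0."""
    w = math.log1p(x) if x > -0.25 else -0.5  # crude but adequate starting guess
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

w = lambert_w0(1.0)
assert abs(w * math.exp(w) - 1.0) < 1e-12   # the defining property W e^W = x
assert abs(w - 0.5671432904097838) < 1e-12  # the omega constant W0(1)
```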
The name "product logarithm" can be understood as follows: since the inverse function of f(w) = e^w is termed the logarithm, it makes sense to call the inverse "function" of the product we^w the "product logarithm". (Technical note: like the complex logarithm, it is multivalued and thus W is described as a converse relation rather than inverse function.) It is related to the omega constant, which is equal to W0(1). == History == Lambert first considered the related Lambert's Transcendental Equation in 1758, which led to an article by Leonhard Euler in 1783 that discussed the special case of we^w. The equation Lambert considered was x = x m + q . {\displaystyle x=x^{m}+q.} Euler transformed this equation into the form x a − x b = ( a − b ) c x a + b . {\displaystyle x^{a}-x^{b}=(a-b)cx^{a+b}.} Both authors derived a series solution for their equations. Once Euler had solved this equation, he considered the case ⁠ a = b {\displaystyle a=b} ⁠. Taking limits, he derived the equation ln ⁡ x = c x a . {\displaystyle \ln x=cx^{a}.} He then put ⁠ a = 1 {\displaystyle a=1} ⁠ and obtained a convergent series solution for the resulting equation, expressing ⁠ x {\displaystyle x} ⁠ in terms of ⁠ c {\displaystyle c} ⁠. After taking derivatives with respect to ⁠ x {\displaystyle x} ⁠ and some manipulation, the standard form of the Lambert function is obtained. In 1993, it was reported that the Lambert ⁠ W {\displaystyle W} ⁠ function provides an exact solution to the quantum-mechanical double-well Dirac delta function model for equal charges—a fundamental problem in physics. Prompted by this, Rob Corless and developers of the Maple computer algebra system realized that "the Lambert W function has been widely used in many fields, but because of differing notation and the absence of a standard name, awareness of the function was not as high as it should have been." Another example where this function is found is in Michaelis–Menten kinetics. 
Although it was widely believed that the Lambert ⁠ W {\displaystyle W} ⁠ function cannot be expressed in terms of elementary (Liouvillian) functions, the first published proof did not appear until 2008. == Elementary properties, branches and range == There are countably many branches of the W function, denoted by Wk(z), for integer k; W0(z) being the main (or principal) branch. W0(z) is defined for all complex numbers z, while Wk(z) with k ≠ 0 is defined for all non-zero z, with W0(0) = 0 and limz→0 Wk(z) = −∞ for all k ≠ 0. The branch point for the principal branch is at z = −⁠1/e⁠, with a branch cut that extends to −∞ along the negative real axis. This branch cut separates the principal branch from the two branches W−1 and W1. In all branches Wk with k ≠ 0, there is a branch point at z = 0 and a branch cut along the entire negative real axis. The functions Wk(z), k ∈ Z, are all injective and their ranges are disjoint. The range of the entire multivalued function W is the complex plane. The image of the real axis is the union of the real axis and the quadratrix of Hippias, the parametric curve w = −t cot t + it. === Inverse === The range plot above also delineates the regions in the complex plane where the simple inverse relationship ⁠ W ( n , z e z ) = z {\displaystyle W(n,ze^{z})=z} ⁠ is true. ⁠ f = z e z {\displaystyle f=ze^{z}} ⁠ implies that there exists an ⁠ n {\displaystyle n} ⁠ such that ⁠ z = W ( n , f ) = W ( n , z e z ) {\displaystyle z=W(n,f)=W(n,ze^{z})} ⁠, where ⁠ n {\displaystyle n} ⁠ depends upon the value of ⁠ z {\displaystyle z} ⁠. The value of the integer ⁠ n {\displaystyle n} ⁠ changes abruptly when ⁠ z e z {\displaystyle ze^{z}} ⁠ is at the branch cut of ⁠ W ( n , z e z ) {\displaystyle W(n,ze^{z})} ⁠, which means that ⁠ z e z {\displaystyle ze^{z}} ⁠ ≤ 0, except for ⁠ n = 0 {\displaystyle n=0} ⁠ where it is ⁠ z e z {\displaystyle ze^{z}} ⁠ ≤ −1/⁠ e {\displaystyle e} ⁠. 
Defining ⁠ z = x + i y {\displaystyle z=x+iy} ⁠, where ⁠ x {\displaystyle x} ⁠ and ⁠ y {\displaystyle y} ⁠ are real, and expressing ⁠ e z {\displaystyle e^{z}} ⁠ in polar coordinates, it is seen that z e z = ( x + i y ) e x ( cos ⁡ y + i sin ⁡ y ) = e x ( x cos ⁡ y − y sin ⁡ y ) + i e x ( x sin ⁡ y + y cos ⁡ y ) {\displaystyle {\begin{aligned}ze^{z}&=(x+iy)e^{x}(\cos y+i\sin y)\\&=e^{x}(x\cos y-y\sin y)+ie^{x}(x\sin y+y\cos y)\\\end{aligned}}} For n ≠ 0 {\displaystyle n\neq 0} , the branch cut for ⁠ W ( n , z e z ) {\displaystyle W(n,ze^{z})} ⁠ is the non-positive real axis, so that x sin ⁡ y + y cos ⁡ y = 0 ⇒ x = − y / tan ⁡ ( y ) , {\displaystyle x\sin y+y\cos y=0\Rightarrow x=-y/\tan(y),} and ( x cos ⁡ y − y sin ⁡ y ) e x ≤ 0. {\displaystyle (x\cos y-y\sin y)e^{x}\leq 0.} For n = 0 {\displaystyle n=0} , the branch cut for ⁠ W ( n , z e z ) {\displaystyle W(n,ze^{z})} ⁠ is the real axis with − ∞ < z ≤ − 1 / e {\displaystyle -\infty <z\leq -1/e} , so that the inequality becomes ( x cos ⁡ y − y sin ⁡ y ) e x ≤ − 1 / e . {\displaystyle (x\cos y-y\sin y)e^{x}\leq -1/e.} Inside the regions bounded by the above, there are no discontinuous changes in ⁠ W ( n , z e z ) {\displaystyle W(n,ze^{z})} ⁠, and those regions specify where the ⁠ W {\displaystyle W} ⁠ function is simply invertible, i.e. ⁠ W ( n , z e z ) = z {\displaystyle W(n,ze^{z})=z} ⁠. == Calculus == === Derivative === By implicit differentiation, one can show that all branches of W satisfy the differential equation z ( 1 + W ) d W d z = W for z ≠ − 1 e . {\displaystyle z(1+W){\frac {dW}{dz}}=W\quad {\text{for }}z\neq -{\frac {1}{e}}.} (W is not differentiable for z = −⁠1/e⁠.) As a consequence, one gets the following formula for the derivative of W: d W d z = W ( z ) z ( 1 + W ( z ) ) for z ∉ { 0 , − 1 e } . 
{\displaystyle {\frac {dW}{dz}}={\frac {W(z)}{z(1+W(z))}}\quad {\text{for }}z\not \in \left\{0,-{\frac {1}{e}}\right\}.} Using the identity e^W(z) = ⁠z/W(z)⁠ gives the following equivalent formula: d W d z = 1 z + e W ( z ) for z ≠ − 1 e . {\displaystyle {\frac {dW}{dz}}={\frac {1}{z+e^{W(z)}}}\quad {\text{for }}z\neq -{\frac {1}{e}}.} At the origin we have W 0 ′ ( 0 ) = 1. {\displaystyle W'_{0}(0)=1.} The n-th derivative of W is of the form: d n W d z n = P n ( W ( z ) ) ( z + e W ( z ) ) n ( W ( z ) + 1 ) n − 1 for n > 0 , z ≠ − 1 e , {\displaystyle {\frac {d^{n}W}{dz^{n}}}={\frac {P_{n}(W(z))}{(z+e^{W(z)})^{n}(W(z)+1)^{n-1}}}\quad {\text{for }}n>0,\,z\neq -{\frac {1}{e}},} where Pn is a polynomial with coefficients defined in A042977. If and only if z is a root of Pn then ze^z is a root of the n-th derivative of W. Taking the derivative of the n-th derivative of W yields: d n + 1 W d z n + 1 = ( W ( z ) + 1 ) P n ′ ( W ( z ) ) + ( 1 − 3 n − n W ( z ) ) P n ( W ( z ) ) ( z + e W ( z ) ) n + 1 ( W ( z ) + 1 ) n for n > 0 , z ≠ − 1 e . {\displaystyle {\frac {d^{n+1}W}{dz^{n+1}}}={\frac {(W(z)+1)P_{n}'(W(z))+(1-3n-nW(z))P_{n}(W(z))}{(z+e^{W(z)})^{n+1}(W(z)+1)^{n}}}\quad {\text{for }}n>0,\,z\neq -{\frac {1}{e}}.} This establishes the formula for the n-th derivative by induction. === Integral === The function W(x), and many other expressions involving W(x), can be integrated using the substitution w = W(x), i.e. x = we^w: ∫ W ( x ) d x = x W ( x ) − x + e W ( x ) + C = x ( W ( x ) − 1 + 1 W ( x ) ) + C . {\displaystyle {\begin{aligned}\int W(x)\,dx&=xW(x)-x+e^{W(x)}+C\\&=x\left(W(x)-1+{\frac {1}{W(x)}}\right)+C.\end{aligned}}} (The last equation is more common in the literature but is undefined at x = 0). One consequence of this (using the fact that W0(e) = 1) is the identity ∫ 0 e W 0 ( x ) d x = e − 1. 
{\displaystyle \int _{0}^{e}W_{0}(x)\,dx=e-1.} == Asymptotic expansions == The Taylor series of W0 around 0 can be found using the Lagrange inversion theorem and is given by W 0 ( x ) = ∑ n = 1 ∞ ( − n ) n − 1 n ! x n = x − x 2 + 3 2 x 3 − 16 6 x 4 + 125 24 x 5 − ⋯ . {\displaystyle W_{0}(x)=\sum _{n=1}^{\infty }{\frac {(-n)^{n-1}}{n!}}x^{n}=x-x^{2}+{\tfrac {3}{2}}x^{3}-{\tfrac {16}{6}}x^{4}+{\tfrac {125}{24}}x^{5}-\cdots .} The radius of convergence is ⁠1/e⁠, as may be seen by the ratio test. The function defined by this series can be extended to a holomorphic function defined on all complex numbers with a branch cut along the interval (−∞, −⁠1/e⁠]; this holomorphic function defines the principal branch of the Lambert W function. For large values of x, W0 is asymptotic to W 0 ( x ) = L 1 − L 2 + L 2 L 1 + L 2 ( − 2 + L 2 ) 2 L 1 2 + L 2 ( 6 − 9 L 2 + 2 L 2 2 ) 6 L 1 3 + L 2 ( − 12 + 36 L 2 − 22 L 2 2 + 3 L 2 3 ) 12 L 1 4 + ⋯ = L 1 − L 2 + ∑ l = 0 ∞ ∑ m = 1 ∞ ( − 1 ) l [ l + m l + 1 ] m ! L 1 − l − m L 2 m , {\displaystyle {\begin{aligned}W_{0}(x)&=L_{1}-L_{2}+{\frac {L_{2}}{L_{1}}}+{\frac {L_{2}\left(-2+L_{2}\right)}{2L_{1}^{2}}}+{\frac {L_{2}\left(6-9L_{2}+2L_{2}^{2}\right)}{6L_{1}^{3}}}+{\frac {L_{2}\left(-12+36L_{2}-22L_{2}^{2}+3L_{2}^{3}\right)}{12L_{1}^{4}}}+\cdots \\[5pt]&=L_{1}-L_{2}+\sum _{l=0}^{\infty }\sum _{m=1}^{\infty }{\frac {(-1)^{l}\left[{\begin{smallmatrix}l+m\\l+1\end{smallmatrix}}\right]}{m!}}L_{1}^{-l-m}L_{2}^{m},\end{aligned}}} where L1 = ln x, L2 = ln ln x, and [l + ml + 1] is a non-negative Stirling number of the first kind. Keeping only the first two terms of the expansion, W 0 ( x ) = ln ⁡ x − ln ⁡ ln ⁡ x + o ( 1 ) . {\displaystyle W_{0}(x)=\ln x-\ln \ln x+{\mathcal {o}}(1).} The other real branch, W−1, defined in the interval [−⁠1/e⁠, 0), has an approximation of the same form as x approaches zero, with in this case L1 = ln(−x) and L2 = ln(−ln(−x)). 
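The Taylor series above can be checked against the defining equation we^w = x directly, with no separate evaluator of W needed; a sketch, taking x = 0.1 well inside the radius of convergence 1/e:

```python
import math

x = 0.1
# Partial sum of W0(x) = sum over n >= 1 of (-n)^(n-1) / n! * x^n.
w = sum((-n) ** (n - 1) / math.factorial(n) * x ** n for n in range(1, 30))
assert abs(w * math.exp(w) - x) < 1e-12             # the partial sum really inverts w e^w
assert abs(w - (x - x ** 2 + 1.5 * x ** 3)) < 1e-3  # the first few terms dominate for small x
```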
=== Integer and complex powers === Integer powers of W0 also admit simple Taylor (or Laurent) series expansions at zero: W 0 ( x ) 2 = ∑ n = 2 ∞ − 2 ( − n ) n − 3 ( n − 2 ) ! x n = x 2 − 2 x 3 + 4 x 4 − 25 3 x 5 + 18 x 6 − ⋯ . {\displaystyle W_{0}(x)^{2}=\sum _{n=2}^{\infty }{\frac {-2\left(-n\right)^{n-3}}{(n-2)!}}x^{n}=x^{2}-2x^{3}+4x^{4}-{\tfrac {25}{3}}x^{5}+18x^{6}-\cdots .} More generally, for r ∈ Z, the Lagrange inversion formula gives W 0 ( x ) r = ∑ n = r ∞ − r ( − n ) n − r − 1 ( n − r ) ! x n , {\displaystyle W_{0}(x)^{r}=\sum _{n=r}^{\infty }{\frac {-r\left(-n\right)^{n-r-1}}{(n-r)!}}x^{n},} which is, in general, a Laurent series of order r. Equivalently, the latter can be written in the form of a Taylor expansion of powers of W0(x) / x: ( W 0 ( x ) x ) r = e − r W 0 ( x ) = ∑ n = 0 ∞ r ( n + r ) n − 1 n ! ( − x ) n , {\displaystyle \left({\frac {W_{0}(x)}{x}}\right)^{r}=e^{-rW_{0}(x)}=\sum _{n=0}^{\infty }{\frac {r\left(n+r\right)^{n-1}}{n!}}\left(-x\right)^{n},} which holds for any r ∈ C and |x| < ⁠1/e⁠. == Bounds and inequalities == A number of non-asymptotic bounds are known for the Lambert function. Hoorfar and Hassani showed that the following bound holds for x ≥ e: ln ⁡ x − ln ⁡ ln ⁡ x + ln ⁡ ln ⁡ x 2 ln ⁡ x ≤ W 0 ( x ) ≤ ln ⁡ x − ln ⁡ ln ⁡ x + e e − 1 ln ⁡ ln ⁡ x ln ⁡ x . {\displaystyle \ln x-\ln \ln x+{\frac {\ln \ln x}{2\ln x}}\leq W_{0}(x)\leq \ln x-\ln \ln x+{\frac {e}{e-1}}{\frac {\ln \ln x}{\ln x}}.} They also showed the general bound W 0 ( x ) ≤ ln ⁡ ( x + y 1 + ln ⁡ ( y ) ) , {\displaystyle W_{0}(x)\leq \ln \left({\frac {x+y}{1+\ln(y)}}\right),} for every y > 1 / e {\displaystyle y>1/e} and x ≥ − 1 / e {\displaystyle x\geq -1/e} , with equality only for x = y ln ⁡ ( y ) {\displaystyle x=y\ln(y)} . The bound allows many other bounds to be made, such as taking y = x + 1 {\displaystyle y=x+1} which gives the bound W 0 ( x ) ≤ ln ⁡ ( 2 x + 1 1 + ln ⁡ ( x + 1 ) ) . 
{\displaystyle W_{0}(x)\leq \ln \left({\frac {2x+1}{1+\ln(x+1)}}\right).} In 2013 it was proven that the branch W−1 can be bounded as follows: − 1 − 2 u − u < W − 1 ( − e − u − 1 ) < − 1 − 2 u − 2 3 u for u > 0. {\displaystyle -1-{\sqrt {2u}}-u<W_{-1}\left(-e^{-u-1}\right)<-1-{\sqrt {2u}}-{\tfrac {2}{3}}u\quad {\text{for }}u>0.} Roberto Iacono and John P. Boyd enhanced the bounds as follows: ln ⁡ ( x ln ⁡ x ) − ln ⁡ ( x ln ⁡ x ) 1 + ln ⁡ ( x ln ⁡ x ) ln ⁡ ( 1 − ln ⁡ ln ⁡ x ln ⁡ x ) ≤ W 0 ( x ) ≤ ln ⁡ ( x ln ⁡ x ) − ln ⁡ ( ( 1 − ln ⁡ ln ⁡ x ln ⁡ x ) ( 1 − ln ⁡ ( 1 − ln ⁡ ln ⁡ x ln ⁡ x ) 1 + ln ⁡ ( x ln ⁡ x ) ) ) . {\displaystyle \ln \left({\frac {x}{\ln x}}\right)-{\frac {\ln \left({\frac {x}{\ln x}}\right)}{1+\ln \left({\frac {x}{\ln x}}\right)}}\ln \left(1-{\frac {\ln \ln x}{\ln x}}\right)\leq W_{0}(x)\leq \ln \left({\frac {x}{\ln x}}\right)-\ln \left(\left(1-{\frac {\ln \ln x}{\ln x}}\right)\left(1-{\frac {\ln \left(1-{\frac {\ln \ln x}{\ln x}}\right)}{1+\ln \left({\frac {x}{\ln x}}\right)}}\right)\right).} == Identities == A few identities follow from the definition: W 0 ( x e x ) = x for x ≥ − 1 , W − 1 ( x e x ) = x for x ≤ − 1. {\displaystyle {\begin{aligned}W_{0}(xe^{x})&=x&{\text{for }}x&\geq -1,\\W_{-1}(xe^{x})&=x&{\text{for }}x&\leq -1.\end{aligned}}} Note that, since f(x) = xex is not injective, it does not always hold that W(f(x)) = x, much like with the inverse trigonometric functions. For fixed x < 0 and x ≠ −1, the equation xex = yey has two real solutions in y, one of which is of course y = x. Then, for i = 0 and x < −1, as well as for i = −1 and x ∈ (−1, 0), y = Wi(xex) is the other solution. Some other identities: W ( x ) e W ( x ) = x , therefore: e W ( x ) = x W ( x ) , e − W ( x ) = W ( x ) x , e n W ( x ) = ( x W ( x ) ) n . 
{\displaystyle {\begin{aligned}&W(x)e^{W(x)}=x,\quad {\text{therefore:}}\\[5pt]&e^{W(x)}={\frac {x}{W(x)}},\qquad e^{-W(x)}={\frac {W(x)}{x}},\qquad e^{nW(x)}=\left({\frac {x}{W(x)}}\right)^{n}.\end{aligned}}} ln ⁡ W 0 ( x ) = ln ⁡ x − W 0 ( x ) for x > 0. {\displaystyle \ln W_{0}(x)=\ln x-W_{0}(x)\quad {\text{for }}x>0.} W 0 ( x ln ⁡ x ) = ln ⁡ x and e W 0 ( x ln ⁡ x ) = x for 1 e ≤ x . {\displaystyle W_{0}\left(x\ln x\right)=\ln x\quad {\text{and}}\quad e^{W_{0}\left(x\ln x\right)}=x\quad {\text{for }}{\frac {1}{e}}\leq x.} W − 1 ( x ln ⁡ x ) = ln ⁡ x and e W − 1 ( x ln ⁡ x ) = x for 0 < x ≤ 1 e . {\displaystyle W_{-1}\left(x\ln x\right)=\ln x\quad {\text{and}}\quad e^{W_{-1}\left(x\ln x\right)}=x\quad {\text{for }}0<x\leq {\frac {1}{e}}.} W ( x ) = ln ⁡ x W ( x ) for x ≥ − 1 e , W ( n x n W ( x ) n − 1 ) = n W ( x ) for n , x > 0 {\displaystyle {\begin{aligned}&W(x)=\ln {\frac {x}{W(x)}}&&{\text{for }}x\geq -{\frac {1}{e}},\\[5pt]&W\left({\frac {nx^{n}}{W\left(x\right)^{n-1}}}\right)=nW(x)&&{\text{for }}n,x>0\end{aligned}}} (which can be extended to other n and x if the correct branch is chosen). W ( x ) + W ( y ) = W ( x y ( 1 W ( x ) + 1 W ( y ) ) ) for x , y > 0. {\displaystyle W(x)+W(y)=W\left(xy\left({\frac {1}{W(x)}}+{\frac {1}{W(y)}}\right)\right)\quad {\text{for }}x,y>0.} Substituting −ln x in the definition: W 0 ( − ln ⁡ x x ) = − ln ⁡ x for 0 < x ≤ e , W − 1 ( − ln ⁡ x x ) = − ln ⁡ x for x > e . {\displaystyle {\begin{aligned}W_{0}\left(-{\frac {\ln x}{x}}\right)&=-\ln x&{\text{for }}0&<x\leq e,\\[5pt]W_{-1}\left(-{\frac {\ln x}{x}}\right)&=-\ln x&{\text{for }}x&>e.\end{aligned}}} With Euler's iterated exponential h(x): h ( x ) = e − W ( − ln ⁡ x ) = W ( − ln ⁡ x ) − ln ⁡ x for x ≠ 1. 
{\displaystyle {\begin{aligned}h(x)&=e^{-W(-\ln x)}\\&={\frac {W(-\ln x)}{-\ln x}}\quad {\text{for }}x\neq 1.\end{aligned}}} == Special values == The following are special values of the principal branch: W 0 ( − π 2 ) = i π 2 {\displaystyle W_{0}\left(-{\frac {\pi }{2}}\right)={\frac {i\pi }{2}}} W 0 ( − 1 e ) = − 1 {\displaystyle W_{0}\left(-{\frac {1}{e}}\right)=-1} W 0 ( 2 ln ⁡ 2 ) = ln ⁡ 2 {\displaystyle W_{0}\left(2\ln 2\right)=\ln 2} W 0 ( x ln ⁡ x ) = ln ⁡ x ( x ⩾ 1 e ≈ 0.36788 ) {\displaystyle W_{0}\left(x\ln x\right)=\ln x\quad \left(x\geqslant {\tfrac {1}{e}}\approx 0.36788\right)} W 0 ( x x + 1 ln ⁡ x ) = x ln ⁡ x ( x > 0 ) {\displaystyle W_{0}\left(x^{x+1}\ln x\right)=x\ln x\quad \left(x>0\right)} W 0 ( 0 ) = 0 {\displaystyle W_{0}(0)=0} W 0 ( 1 ) = Ω = ( ∫ − ∞ ∞ d t ( e t − t ) 2 + π 2 ) − 1 − 1 ≈ 0.56714329 {\displaystyle W_{0}(1)=\Omega =\left(\int _{-\infty }^{\infty }{\frac {dt}{\left(e^{t}-t\right)^{2}+\pi ^{2}}}\right)^{\!-1}\!\!\!\!-\,1\approx 0.56714329\quad } (the omega constant) W 0 ( 1 ) = e − W 0 ( 1 ) = ln ⁡ 1 W 0 ( 1 ) = − ln ⁡ W 0 ( 1 ) {\displaystyle W_{0}(1)=e^{-W_{0}(1)}=\ln {\frac {1}{W_{0}(1)}}=-\ln W_{0}(1)} W 0 ( e ) = 1 {\displaystyle W_{0}(e)=1} W 0 ( e 1 + e ) = e {\displaystyle W_{0}\left(e^{1+e}\right)=e} W 0 ( e 2 ) = 1 2 {\displaystyle W_{0}\left({\frac {\sqrt {e}}{2}}\right)={\frac {1}{2}}} W 0 ( e n n ) = 1 n {\displaystyle W_{0}\left({\frac {\sqrt[{n}]{e}}{n}}\right)={\frac {1}{n}}} W 0 ( − 1 ) ≈ − 0.31813 + 1.33723 i {\displaystyle W_{0}(-1)\approx -0.31813+1.33723i} Special values of the branch W−1: W − 1 ( − ln ⁡ 2 2 ) = − ln ⁡ 4 {\displaystyle W_{-1}\left(-{\frac {\ln 2}{2}}\right)=-\ln 4} == Representations == The principal branch of the Lambert function can be represented by a proper integral, due to Poisson: − π 2 W 0 ( − x ) = ∫ 0 π sin ⁡ ( 3 2 t ) − x e cos ⁡ t sin ⁡ ( 5 2 t − sin ⁡ t ) 1 − 2 x e cos ⁡ t cos ⁡ ( t − sin ⁡ t ) + x 2 e 2 cos ⁡ t sin ⁡ ( 1 2 t ) d t for | x | < 1 e . 
{\displaystyle -{\frac {\pi }{2}}W_{0}(-x)=\int _{0}^{\pi }{\frac {\sin \left({\tfrac {3}{2}}t\right)-xe^{\cos t}\sin \left({\tfrac {5}{2}}t-\sin t\right)}{1-2xe^{\cos t}\cos(t-\sin t)+x^{2}e^{2\cos t}}}\sin \left({\tfrac {1}{2}}t\right)\,dt\quad {\text{for }}|x|<{\frac {1}{e}}.} Another representation of the principal branch was found by Kalugin–Jeffrey–Corless: W 0 ( x ) = 1 π ∫ 0 π ln ⁡ ( 1 + x sin ⁡ t t e t cot ⁡ t ) d t . {\displaystyle W_{0}(x)={\frac {1}{\pi }}\int _{0}^{\pi }\ln \left(1+x{\frac {\sin t}{t}}e^{t\cot t}\right)dt.} The following continued fraction representation also holds for the principal branch: W 0 ( x ) = x 1 + x 1 + x 2 + 5 x 3 + 17 x 10 + 133 x 17 + 1927 x 190 + 13582711 x 94423 + ⋱ . {\displaystyle W_{0}(x)={\cfrac {x}{1+{\cfrac {x}{1+{\cfrac {x}{2+{\cfrac {5x}{3+{\cfrac {17x}{10+{\cfrac {133x}{17+{\cfrac {1927x}{190+{\cfrac {13582711x}{94423+\ddots }}}}}}}}}}}}}}}}.} Also, if |W0(x)| < 1: W 0 ( x ) = x exp ⁡ x exp ⁡ x ⋱ . {\displaystyle W_{0}(x)={\cfrac {x}{\exp {\cfrac {x}{\exp {\cfrac {x}{\ddots }}}}}}.} In turn, if |W0(x)| > 1, then W 0 ( x ) = ln ⁡ x ln ⁡ x ln ⁡ x ⋱ . 
{\displaystyle W_{0}(x)=\ln {\cfrac {x}{\ln {\cfrac {x}{\ln {\cfrac {x}{\ddots }}}}}}.} == Other formulas == === Definite integrals === There are several useful definite integral formulas involving the principal branch of the W function, including the following: ∫ 0 π W 0 ( 2 cot 2 ⁡ x ) sec 2 ⁡ x d x = 4 π , ∫ 0 ∞ W 0 ( x ) x x d x = 2 2 π , ∫ 0 ∞ W 0 ( 1 x 2 ) d x = 2 π , and more generally ∫ 0 ∞ W 0 ( 1 x N ) d x = N 1 − 1 N Γ ( 1 − 1 N ) for N > 0 {\displaystyle {\begin{aligned}&\int _{0}^{\pi }W_{0}\left(2\cot ^{2}x\right)\sec ^{2}x\,dx=4{\sqrt {\pi }},\\[5pt]&\int _{0}^{\infty }{\frac {W_{0}(x)}{x{\sqrt {x}}}}\,dx=2{\sqrt {2\pi }},\\[5pt]&\int _{0}^{\infty }W_{0}\left({\frac {1}{x^{2}}}\right)\,dx={\sqrt {2\pi }},{\text{ and more generally}}\\[5pt]&\int _{0}^{\infty }W_{0}\left({\frac {1}{x^{N}}}\right)\,dx=N^{1-{\frac {1}{N}}}\Gamma \left(1-{\frac {1}{N}}\right)\qquad {\text{for }}N>0\end{aligned}}} where Γ {\displaystyle \Gamma } denotes the gamma function. The first identity can be found by writing the Gaussian integral in polar coordinates. The second identity can be derived by making the substitution u = W0(x), which gives x = u e u , d x d u = ( u + 1 ) e u . {\displaystyle {\begin{aligned}x&=ue^{u},\\[5pt]{\frac {dx}{du}}&=(u+1)e^{u}.\end{aligned}}} Thus ∫ 0 ∞ W 0 ( x ) x x d x = ∫ 0 ∞ u u e u u e u ( u + 1 ) e u d u = ∫ 0 ∞ u + 1 u e u d u = ∫ 0 ∞ u + 1 u 1 e u d u = ∫ 0 ∞ u 1 2 e − u 2 d u + ∫ 0 ∞ u − 1 2 e − u 2 d u = 2 ∫ 0 ∞ ( 2 w ) 1 2 e − w d w + 2 ∫ 0 ∞ ( 2 w ) − 1 2 e − w d w ( u = 2 w ) = 2 2 ∫ 0 ∞ w 1 2 e − w d w + 2 ∫ 0 ∞ w − 1 2 e − w d w = 2 2 ⋅ Γ ( 3 2 ) + 2 ⋅ Γ ( 1 2 ) = 2 2 ( 1 2 π ) + 2 ( π ) = 2 2 π . 
{\displaystyle {\begin{aligned}\int _{0}^{\infty }{\frac {W_{0}(x)}{x{\sqrt {x}}}}\,dx&=\int _{0}^{\infty }{\frac {u}{ue^{u}{\sqrt {ue^{u}}}}}(u+1)e^{u}\,du\\[5pt]&=\int _{0}^{\infty }{\frac {u+1}{\sqrt {ue^{u}}}}du\\[5pt]&=\int _{0}^{\infty }{\frac {u+1}{\sqrt {u}}}{\frac {1}{\sqrt {e^{u}}}}du\\[5pt]&=\int _{0}^{\infty }u^{\tfrac {1}{2}}e^{-{\frac {u}{2}}}du+\int _{0}^{\infty }u^{-{\tfrac {1}{2}}}e^{-{\frac {u}{2}}}du\\[5pt]&=2\int _{0}^{\infty }(2w)^{\tfrac {1}{2}}e^{-w}\,dw+2\int _{0}^{\infty }(2w)^{-{\tfrac {1}{2}}}e^{-w}\,dw&&\quad (u=2w)\\[5pt]&=2{\sqrt {2}}\int _{0}^{\infty }w^{\tfrac {1}{2}}e^{-w}\,dw+{\sqrt {2}}\int _{0}^{\infty }w^{-{\tfrac {1}{2}}}e^{-w}\,dw\\[5pt]&=2{\sqrt {2}}\cdot \Gamma \left({\tfrac {3}{2}}\right)+{\sqrt {2}}\cdot \Gamma \left({\tfrac {1}{2}}\right)\\[5pt]&=2{\sqrt {2}}\left({\tfrac {1}{2}}{\sqrt {\pi }}\right)+{\sqrt {2}}\left({\sqrt {\pi }}\right)\\[5pt]&=2{\sqrt {2\pi }}.\end{aligned}}} The third identity may be derived from the second by making the substitution u = x−2 and the first can also be derived from the third by the substitution z = ⁠1/√2⁠ tan x. Deriving its generalization, the fourth identity, is only slightly more involved and can be done by substituting, in turn, u = x 1 N {\displaystyle u=x^{\frac {1}{N}}} , t = W 0 ( u ) {\displaystyle t=W_{0}(u)} , and z = t N {\displaystyle z={\frac {t}{N}}} , observing that one obtains two integrals matching the definition of the gamma function, and finally using the properties of the gamma function to collect terms and simplify. 
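For illustration, the third integral above can be checked numerically. The sketch below substitutes x = e^s (so the integral becomes the integral of W0(e^(-2s)) e^s ds over the real line) and applies composite Simpson quadrature; W0 is evaluated with a minimal Halley iteration (the method quoted in the Numerical evaluation section below), not a production routine:

```python
import math

def lambert_w0(z):
    # Principal branch of Lambert W via Halley's method (a minimal sketch).
    w = math.log(z) - math.log(math.log(z)) if z > math.e else z / math.e if z > 0 else z
    for _ in range(80):
        ew = math.exp(w)
        f = w * ew - z
        if abs(f) < 1e-15 * (1.0 + abs(z)):
            break
        w -= f / (ew * (w + 1) - (w + 2) * f / (2 * w + 2))
    return w

# Substitute x = e^s, dx = e^s ds; the integrand decays rapidly outside [-20, 20].
def integrand(s):
    return lambert_w0(math.exp(-2.0 * s)) * math.exp(s)

a, b, n = -20.0, 20.0, 4000
h = (b - a) / n
total = integrand(a) + integrand(b)
total += sum((4 if i % 2 else 2) * integrand(a + i * h) for i in range(1, n))
integral = h / 3.0 * total    # composite Simpson estimate of sqrt(2*pi) ≈ 2.5066
```

The same template verifies the other integrals in the list after a suitable change of variables.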
Except for z along the branch cut (−∞, −⁠1/e⁠] (where the integral does not converge), the principal branch of the Lambert W function can be computed by the following integral: W 0 ( z ) = z 2 π ∫ − π π ( 1 − ν cot ⁡ ν ) 2 + ν 2 z + ν csc ⁡ ( ν ) e − ν cot ⁡ ν d ν = z π ∫ 0 π ( 1 − ν cot ⁡ ν ) 2 + ν 2 z + ν csc ⁡ ( ν ) e − ν cot ⁡ ν d ν , {\displaystyle {\begin{aligned}W_{0}(z)&={\frac {z}{2\pi }}\int _{-\pi }^{\pi }{\frac {\left(1-\nu \cot \nu \right)^{2}+\nu ^{2}}{z+\nu \csc \left(\nu \right)e^{-\nu \cot \nu }}}\,d\nu \\[5pt]&={\frac {z}{\pi }}\int _{0}^{\pi }{\frac {\left(1-\nu \cot \nu \right)^{2}+\nu ^{2}}{z+\nu \csc \left(\nu \right)e^{-\nu \cot \nu }}}\,d\nu ,\end{aligned}}} where the two integral expressions are equivalent due to the symmetry of the integrand. === Indefinite integrals === ∫ W ( x ) x d x = W ( x ) 2 2 + W ( x ) + C {\displaystyle \int {\frac {W(x)}{x}}\,dx\;=\;{\frac {W(x)^{2}}{2}}+W(x)+C} ∫ W ( A e B x ) d x = W ( A e B x ) 2 2 B + W ( A e B x ) B + C {\displaystyle \int W\left(Ae^{Bx}\right)\,dx\;=\;{\frac {W\left(Ae^{Bx}\right)^{2}}{2B}}+{\frac {W\left(Ae^{Bx}\right)}{B}}+C} ∫ W ( x ) x 2 d x = Ei ⁡ ( − W ( x ) ) − e − W ( x ) + C {\displaystyle \int {\frac {W(x)}{x^{2}}}\,dx\;=\;\operatorname {Ei} \left(-W(x)\right)-e^{-W(x)}+C} == Applications == === Solving equations === The Lambert W function is used to solve equations in which the unknown quantity occurs both in the base and in the exponent, or both inside and outside of a logarithm. The strategy is to convert such an equation into one of the form zez = w and then to solve for z using the W function. 
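The first antiderivative above is easy to verify by numerical differentiation. In the sketch below, lambert_w0 is an illustrative Halley-iteration helper (not a library routine), and the evaluation point is arbitrary:

```python
import math

def lambert_w0(z):
    # Principal branch of Lambert W via Halley's method (a minimal sketch).
    w = math.log(z) - math.log(math.log(z)) if z > math.e else z / math.e if z > 0 else z
    for _ in range(80):
        ew = math.exp(w)
        f = w * ew - z
        if abs(f) < 1e-15 * (1.0 + abs(z)):
            break
        w -= f / (ew * (w + 1) - (w + 2) * f / (2 * w + 2))
    return w

def F(x):
    # Candidate antiderivative of W(x)/x from the list above (with C = 0).
    w = lambert_w0(x)
    return w * w / 2.0 + w

x, h = 3.0, 1e-6
numeric = (F(x + h) - F(x - h)) / (2.0 * h)   # central difference of F
exact = lambert_w0(x) / x                     # the integrand W(x)/x
```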
For example, the equation 3 x = 2 x + 2 {\displaystyle 3^{x}=2x+2} (where x is an unknown real number) can be solved by rewriting it as ( x + 1 ) 3 − x = 1 2 ( multiply by 3 − x / 2 ) ⇔ ( − x − 1 ) 3 − x − 1 = − 1 6 ( multiply by − 1 / 3 ) ⇔ ( ln ⁡ 3 ) ( − x − 1 ) e ( ln ⁡ 3 ) ( − x − 1 ) = − ln ⁡ 3 6 ( multiply by ln ⁡ 3 ) {\displaystyle {\begin{aligned}&(x+1)\ 3^{-x}={\frac {1}{2}}&({\mbox{multiply by }}3^{-x}/2)\\\Leftrightarrow \ &(-x-1)\ 3^{-x-1}=-{\frac {1}{6}}&({\mbox{multiply by }}{-}1/3)\\\Leftrightarrow \ &(\ln 3)(-x-1)\ e^{(\ln 3)(-x-1)}=-{\frac {\ln 3}{6}}&({\mbox{multiply by }}\ln 3)\end{aligned}}} This last equation has the desired form and the solutions for real x are: ( ln ⁡ 3 ) ( − x − 1 ) = W 0 ( − ln ⁡ 3 6 ) or ( ln ⁡ 3 ) ( − x − 1 ) = W − 1 ( − ln ⁡ 3 6 ) {\displaystyle (\ln 3)(-x-1)=W_{0}\left({\frac {-\ln 3}{6}}\right)\ \ \ {\textrm {or}}\ \ \ (\ln 3)(-x-1)=W_{-1}\left({\frac {-\ln 3}{6}}\right)} and thus: x = − 1 − W 0 ( − ln ⁡ 3 6 ) ln ⁡ 3 = − 0.79011 … or x = − 1 − W − 1 ( − ln ⁡ 3 6 ) ln ⁡ 3 = 1.44456 … {\displaystyle x=-1-{\frac {W_{0}\left(-{\frac {\ln 3}{6}}\right)}{\ln 3}}=-0.79011\ldots \ \ {\textrm {or}}\ \ x=-1-{\frac {W_{-1}\left(-{\frac {\ln 3}{6}}\right)}{\ln 3}}=1.44456\ldots } Generally, the solution to x = a + b e c x {\displaystyle x=a+b\,e^{cx}} is: x = a − 1 c W ( − b c e a c ) {\displaystyle x=a-{\frac {1}{c}}W(-bc\,e^{ac})} where a, b, and c are complex constants, with b and c not equal to zero, and the W function is of any integer order. === Inviscid flows === Applying the unusual accelerating traveling-wave Ansatz in the form of ρ ( η ) = ρ ( x − a t 2 2 ) {\displaystyle \rho (\eta )=\rho {\big (}x-{\frac {at^{2}}{2}}{\big )}} (where ρ {\displaystyle \rho } , η {\displaystyle \eta } , a, x and t are the density, the reduced variable, the acceleration, the spatial and the temporal variables) the fluid density of the corresponding Euler equation can be given with the help of the W function. 
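Both the worked example 3^x = 2x + 2 and the general formula above can be checked numerically. The helper below evaluates the two real branches with Halley's method (quoted in the Numerical evaluation section), with starting guesses following Lóczi's recipe; the constants a, b, c are arbitrary illustrations:

```python
import math

def lambert_w(z, k=0):
    # Illustrative evaluator for the two real branches of W (not a library routine).
    if k == 0:
        w = math.log(z) - math.log(math.log(z)) if z > math.e else z / math.e if z > 0 else z
    else:  # k == -1, defined for -1/e < z < 0
        w = math.log(-z) - math.log(-math.log(-z))
    for _ in range(80):
        ew = math.exp(w)
        f = w * ew - z
        if abs(f) < 1e-15 * (1.0 + abs(z)):
            break
        w -= f / (ew * (w + 1) - (w + 2) * f / (2 * w + 2))
    return w

# The two real roots of 3**x == 2*x + 2, as derived above.
arg = -math.log(3) / 6
x0 = -1 - lambert_w(arg, 0) / math.log(3)     # ≈ -0.79011
x1 = -1 - lambert_w(arg, -1) / math.log(3)    # ≈ 1.44456

# The general solution of x == a + b*exp(c*x); a, b, c chosen arbitrarily.
a, b, c = 1.0, -2.0, 0.5
x2 = a - lambert_w(-b * c * math.exp(a * c)) / c
```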
=== Viscous flows === Granular and debris flow fronts and deposits, and the fronts of viscous fluids in natural events and in laboratory experiments can be described by using the Lambert–Euler omega function as follows: H ( x ) = 1 + W ( ( H ( 0 ) − 1 ) e ( H ( 0 ) − 1 ) − x L ) , {\displaystyle H(x)=1+W\left((H(0)-1)e^{(H(0)-1)-{\frac {x}{L}}}\right),} where H(x) is the debris flow height, x is the channel downstream position, L is the unified model parameter consisting of several physical and geometrical parameters of the flow, flow height and the hydraulic pressure gradient. In pipe flow, the Lambert W function is part of the explicit formulation of the Colebrook equation for finding the Darcy friction factor. This factor is used to determine the pressure drop through a straight run of pipe when the flow is turbulent. === Time-dependent flow in simple branch hydraulic systems === The principal branch of the Lambert W function is employed in the field of mechanical engineering, in the study of time dependent transfer of Newtonian fluids between two reservoirs with varying free surface levels, using centrifugal pumps. The Lambert W function provided an exact solution to the flow rate of fluid in both the laminar and turbulent regimes: Q turb = Q i ζ i W 0 [ ζ i e ( ζ i + β t / b ) ] Q lam = Q i ξ i W 0 [ ξ i e ( ξ i + β t / ( b − Γ 1 ) ) ] {\displaystyle {\begin{aligned}Q_{\text{turb}}&={\frac {Q_{i}}{\zeta _{i}}}W_{0}\left[\zeta _{i}\,e^{(\zeta _{i}+\beta t/b)}\right]\\Q_{\text{lam}}&={\frac {Q_{i}}{\xi _{i}}}W_{0}\left[\xi _{i}\,e^{\left(\xi _{i}+\beta t/(b-\Gamma _{1})\right)}\right]\end{aligned}}} where Q i {\displaystyle Q_{i}} is the initial flow rate and t {\displaystyle t} is time. === Neuroimaging === The Lambert W function is employed in the field of neuroimaging for linking cerebral blood flow and oxygen consumption changes within a brain voxel, to the corresponding blood oxygenation level dependent (BOLD) signal. 
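Differentiating the front profile with the chain rule (using W'(g) = W(g) g' / (g (1 + W(g)))) shows that H satisfies the first-order equation H'(x) = -(H - 1)/(L H). A finite-difference check, with illustrative values of H(0) and L and a minimal Halley evaluator for W0:

```python
import math

def lambert_w0(z):
    # Principal branch of Lambert W via Halley's method (a minimal sketch).
    w = math.log(z) - math.log(math.log(z)) if z > math.e else z / math.e if z > 0 else z
    for _ in range(80):
        ew = math.exp(w)
        f = w * ew - z
        if abs(f) < 1e-15 * (1.0 + abs(z)):
            break
        w -= f / (ew * (w + 1) - (w + 2) * f / (2 * w + 2))
    return w

H0, L = 2.5, 3.0   # illustrative front height H(0) and length scale L

def H(x):
    # Debris-flow height profile from the formula above.
    return 1.0 + lambert_w0((H0 - 1.0) * math.exp((H0 - 1.0) - x / L))

x, h = 1.2, 1e-6
dH = (H(x + h) - H(x - h)) / (2.0 * h)   # numerical H'(x)
rhs = -(H(x) - 1.0) / (L * H(x))         # -(H - 1)/(L H)
```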
=== Chemical engineering === The Lambert W function is employed in the field of chemical engineering for modeling the porous electrode film thickness in a glassy carbon based supercapacitor for electrochemical energy storage. The Lambert W function provides an exact solution for a gas phase thermal activation process where growth of carbon film and combustion of the same film compete with each other. === Crystal growth === In crystal growth, the negative principal branch of the Lambert W function can be used to calculate the distribution coefficient, k {\textstyle k} , and solute concentration in the melt, C L {\textstyle C_{L}} , from the Scheil equation: k = W 0 ( Z ) ln ⁡ ( 1 − f s ) C L = C 0 ( 1 − f s ) e W 0 ( Z ) Z = C S C 0 ( 1 − f s ) ln ⁡ ( 1 − f s ) {\displaystyle {\begin{aligned}&k={\frac {W_{0}(Z)}{\ln(1-fs)}}\\&C_{L}={\frac {C_{0}}{(1-fs)}}e^{W_{0}(Z)}\\&Z={\frac {C_{S}}{C_{0}}}(1-fs)\ln(1-fs)\end{aligned}}} === Materials science === The Lambert W function is employed in the field of epitaxial film growth for the determination of the critical dislocation onset film thickness. This is the calculated thickness of an epitaxial film where, due to thermodynamic principles, the film will develop crystallographic dislocations in order to minimise the elastic energy stored in the films. Prior to the application of the Lambert W function to this problem, the critical thickness had to be determined by solving an implicit equation; the Lambert W function turns it into an explicit equation that can be handled analytically with ease. === Semiconductor === It has been shown that a W function describes the relation between voltage, current and resistance in a diode.
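As a round-trip check of the crystal-growth formulas above: starting from the standard Scheil relation C_S = k C_0 (1 - f_s)^(k-1), the quoted expressions recover k and C_L exactly. The numbers below are illustrative, and lambert_w0 is a minimal Halley-iteration helper rather than a library routine:

```python
import math

def lambert_w0(z):
    # Principal branch of Lambert W via Halley's method (a minimal sketch).
    w = math.log(z) - math.log(math.log(z)) if z > math.e else z / math.e if z > 0 else z
    for _ in range(80):
        ew = math.exp(w)
        f = w * ew - z
        if abs(f) < 1e-15 * (1.0 + abs(z)):
            break
        w -= f / (ew * (w + 1) - (w + 2) * f / (2 * w + 2))
    return w

k_true, fs, C0 = 0.3, 0.5, 1.0                    # illustrative coefficient, solid fraction, melt composition
CS = k_true * C0 * (1.0 - fs) ** (k_true - 1.0)   # Scheil relation for the solid concentration

Z = (CS / C0) * (1.0 - fs) * math.log(1.0 - fs)   # Z < 0: W0 is evaluated at a negative argument
k_rec = lambert_w0(Z) / math.log(1.0 - fs)        # recovered distribution coefficient
CL = C0 / (1.0 - fs) * math.exp(lambert_w0(Z))    # solute concentration in the melt
```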
=== Porous media === The Lambert W function has been employed in the field of fluid flow in porous media to model the tilt of an interface separating two gravitationally segregated fluids in a homogeneous tilted porous bed of constant dip and thickness where the heavier fluid, injected at the bottom end, displaces the lighter fluid that is produced at the same rate from the top end. The principal branch of the solution corresponds to stable displacements while the −1 branch applies if the displacement is unstable with the heavier fluid running underneath the lighter fluid. === Bernoulli numbers and Todd genus === The equation (linked with the generating functions of Bernoulli numbers and Todd genus): Y = X 1 − e X {\displaystyle Y={\frac {X}{1-e^{X}}}} can be solved by means of the two real branches W0 and W−1: X ( Y ) = { W − 1 ( Y e Y ) − W 0 ( Y e Y ) = Y − W 0 ( Y e Y ) for Y < − 1 , W 0 ( Y e Y ) − W − 1 ( Y e Y ) = Y − W − 1 ( Y e Y ) for − 1 < Y < 0. {\displaystyle X(Y)={\begin{cases}W_{-1}\left(Ye^{Y}\right)-W_{0}\left(Ye^{Y}\right)=Y-W_{0}\left(Ye^{Y}\right)&{\text{for }}Y<-1,\\W_{0}\left(Ye^{Y}\right)-W_{-1}\left(Ye^{Y}\right)=Y-W_{-1}\left(Ye^{Y}\right)&{\text{for }}-1<Y<0.\end{cases}}} This application shows that the branch difference of the W function can be employed in order to solve other transcendental equations. === Statistics === The centroid of a set of histograms defined with respect to the symmetrized Kullback–Leibler divergence (also called the Jeffreys divergence ) has a closed form using the Lambert W function. === Pooling of tests for infectious diseases === Solving for the optimal group size to pool tests so that at least one individual is infected involves the Lambert W function. 
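The branch-difference inversion above can be verified numerically. The sketch below uses a minimal Halley-iteration evaluator for both real branches, with arbitrary test points:

```python
import math

def lambert_w(z, k=0):
    # Illustrative evaluator for the two real branches of W (not a library routine).
    if k == 0:
        w = math.log(z) - math.log(math.log(z)) if z > math.e else z / math.e if z > 0 else z
    else:  # k == -1, defined for -1/e < z < 0
        w = math.log(-z) - math.log(-math.log(-z))
    for _ in range(80):
        ew = math.exp(w)
        f = w * ew - z
        if abs(f) < 1e-15 * (1.0 + abs(z)):
            break
        w -= f / (ew * (w + 1) - (w + 2) * f / (2 * w + 2))
    return w

def Y_of_X(X):
    return X / (1.0 - math.exp(X))

def X_of_Y(Y):
    # Invert Y = X/(1 - e^X) with the two real branches, as in the case split above.
    branch = 0 if Y < -1 else -1
    return Y - lambert_w(Y * math.exp(Y), branch)

X1 = X_of_Y(Y_of_X(-2.0))   # should recover -2 (here Y < -1, principal branch)
X2 = X_of_Y(Y_of_X(2.0))    # should recover +2 (here -1 < Y < 0, branch -1)
```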
=== Exact solutions of the Schrödinger equation === The Lambert W function appears in a quantum-mechanical potential, which affords the fifth – next to those of the harmonic oscillator plus centrifugal, the Coulomb plus inverse square, the Morse, and the inverse square root potential – exact solution to the stationary one-dimensional Schrödinger equation in terms of the confluent hypergeometric functions. The potential is given as V = V 0 1 + W ( e − x σ ) . {\displaystyle V={\frac {V_{0}}{1+W\left(e^{-{\frac {x}{\sigma }}}\right)}}.} A peculiarity of the solution is that each of the two fundamental solutions that compose the general solution of the Schrödinger equation is given by a combination of two confluent hypergeometric functions of an argument proportional to z = W ( e − x σ ) . {\displaystyle z=W\left(e^{-{\frac {x}{\sigma }}}\right).} The Lambert W function also appears in the exact solution for the bound-state energy of the one-dimensional Schrödinger equation with a double-delta potential. === Exact solution of QCD coupling constant === In quantum chromodynamics, the quantum field theory of the strong interaction, the coupling constant α s {\displaystyle \alpha _{\text{s}}} is computed perturbatively, the order n corresponding to Feynman diagrams including n quantum loops. The first-order, n = 1, solution is exact (at that order) and analytical. At higher orders, n > 1, there is no exact and analytical solution and one typically uses an iterative method to furnish an approximate solution. However, for second order, n = 2, the Lambert W function provides an exact (if non-analytical) solution. === Exact solutions of the Einstein vacuum equations === In the Schwarzschild metric solution of the Einstein vacuum equations, the W function is needed to go from the Eddington–Finkelstein coordinates to the Schwarzschild coordinates. For this reason, it also appears in the construction of the Kruskal–Szekeres coordinates.
=== Resonances of the delta-shell potential === The s-wave resonances of the delta-shell potential can be written exactly in terms of the Lambert W function. === Thermodynamic equilibrium === If a reaction involves reactants and products having heat capacities that are constant with temperature then the equilibrium constant K obeys ln ⁡ K = a T + b + c ln ⁡ T {\displaystyle \ln K={\frac {a}{T}}+b+c\ln T} for some constants a, b, and c. When c (equal to ⁠ΔCp/R⁠) is not zero the value or values of T can be found where K equals a given value as follows, where L can be used for ln T. − a = ( b − ln ⁡ K ) T + c T ln ⁡ T = ( b − ln ⁡ K ) e L + c L e L − a c = ( b − ln ⁡ K c + L ) e L − a c e b − ln ⁡ K c = ( L + b − ln ⁡ K c ) e L + b − ln ⁡ K c L = W ( − a c e b − ln ⁡ K c ) + ln ⁡ K − b c T = exp ⁡ ( W ( − a c e b − ln ⁡ K c ) + ln ⁡ K − b c ) . {\displaystyle {\begin{aligned}-a&=(b-\ln K)T+cT\ln T\\&=(b-\ln K)e^{L}+cLe^{L}\\[5pt]-{\frac {a}{c}}&=\left({\frac {b-\ln K}{c}}+L\right)e^{L}\\[5pt]-{\frac {a}{c}}e^{\frac {b-\ln K}{c}}&=\left(L+{\frac {b-\ln K}{c}}\right)e^{L+{\frac {b-\ln K}{c}}}\\[5pt]L&=W\left(-{\frac {a}{c}}e^{\frac {b-\ln K}{c}}\right)+{\frac {\ln K-b}{c}}\\[5pt]T&=\exp \left(W\left(-{\frac {a}{c}}e^{\frac {b-\ln K}{c}}\right)+{\frac {\ln K-b}{c}}\right).\end{aligned}}} If a and c have the same sign there will be either two solutions or none (or one if the argument of W is exactly −⁠1/e⁠). (The upper solution may not be relevant.) If they have opposite signs, there will be one solution. === Phase separation of polymer mixtures === In the calculation of the phase diagram of thermodynamically incompatible polymer mixtures according to the Edmond-Ogston model, the solutions for binodal and tie-lines are formulated in terms of Lambert W functions. === Wien's displacement law in a D-dimensional universe === Wien's displacement law is expressed as ν max / T = α = c o n s t {\displaystyle \nu _{\max }/T=\alpha =\mathrm {const} } . 
With x = h ν max / k B T {\displaystyle x=h\nu _{\max }/k_{\mathrm {B} }T} and d ρ T ( x ) / d x = 0 {\displaystyle d\rho _{T}\left(x\right)/dx=0} , where ρ T {\displaystyle \rho _{T}} is the spectral energy density, one finds e − x = 1 − x D {\displaystyle e^{-x}=1-{\frac {x}{D}}} , where D {\displaystyle D} is the number of degrees of freedom for spatial translation. The solution x = D + W ( − D e − D ) {\displaystyle x=D+W\left(-De^{-D}\right)} shows that the spectral energy density is dependent on the dimensionality of the universe. === AdS/CFT correspondence === The classical finite-size corrections to the dispersion relations of giant magnons, single spikes and GKP strings can be expressed in terms of the Lambert W function. === Epidemiology === In the t → ∞ limit of the SIR model, the proportion of susceptible and recovered individuals has a solution in terms of the Lambert W function. === Determination of the time of flight of a projectile === The total time of the journey of a projectile which experiences air resistance proportional to its velocity can be determined in exact form by using the Lambert W function. === Electromagnetic surface wave propagation === The transcendental equation that appears in the determination of the propagation wave number of an electromagnetic axially symmetric surface wave (a low-attenuation single TM01 mode) propagating in a cylindrical metallic wire gives rise to an equation like u ln u = v (where u and v clump together the geometrical and physical factors of the problem), which is solved by the Lambert W function. The first solution to this problem, due to Sommerfeld circa 1898, already contained an iterative method to determine the value of the Lambert W function.
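The surface-wave equation u ln u = v above is solved explicitly by the W function: writing u ln u = e^(ln u) ln u = v gives ln u = W(v), so u = e^(W(v)) = v/W(v) by the identities listed earlier. A numerical sketch with an arbitrary positive v, using a minimal Halley evaluator for W0:

```python
import math

def lambert_w0(z):
    # Principal branch of Lambert W via Halley's method (a minimal sketch).
    w = math.log(z) - math.log(math.log(z)) if z > math.e else z / math.e if z > 0 else z
    for _ in range(80):
        ew = math.exp(w)
        f = w * ew - z
        if abs(f) < 1e-15 * (1.0 + abs(z)):
            break
        w -= f / (ew * (w + 1) - (w + 2) * f / (2 * w + 2))
    return w

v = 7.0                    # arbitrary positive right-hand side
u = v / lambert_w0(v)      # since ln(u) = W0(v) and e^(W0(v)) = v / W0(v)
```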
=== Orthogonal trajectories of real ellipses === The family of ellipses x 2 + ( 1 − ε 2 ) y 2 = ε 2 {\displaystyle x^{2}+(1-\varepsilon ^{2})y^{2}=\varepsilon ^{2}} centered at ( 0 , 0 ) {\displaystyle (0,0)} is parameterized by eccentricity ε {\displaystyle \varepsilon } . The orthogonal trajectories of this family are given by the differential equation ( 1 y + y ) d y = ( 1 x − x ) d x {\displaystyle \left({\frac {1}{y}}+y\right)dy=\left({\frac {1}{x}}-x\right)dx} whose general solution is the family y 2 = {\displaystyle y^{2}=} W 0 ( x 2 exp ⁡ ( − 2 C − x 2 ) ) {\displaystyle W_{0}(x^{2}\exp(-2C-x^{2}))} . == Generalizations == The standard Lambert W function expresses exact solutions to transcendental algebraic equations (in x) of the form: where a0, c and r are real constants. The solution is x = r + 1 c W ( c e − c r a 0 ) . {\displaystyle x=r+{\frac {1}{c}}W\left({\frac {c\,e^{-cr}}{a_{0}}}\right).} Generalizations of the Lambert W function include: An application to general relativity and quantum mechanics (quantum gravity) in lower dimensions, in fact a link (unknown prior to 2007) between these two areas, where the right-hand side of (1) is replaced by a quadratic polynomial in x: where r1 and r2 are real distinct constants, the roots of the quadratic polynomial. Here, the solution is a function which has a single argument x but the terms like ri and a0 are parameters of that function. In this respect, the generalization resembles the hypergeometric function and the Meijer G function but it belongs to a different class of functions. When r1 = r2, both sides of (2) can be factored and reduced to (1) and thus the solution reduces to that of the standard W function. 
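Substituting x = r + w/c shows that the quoted solution of the standard case corresponds to the equation e^(-cx) = a0 (x - r), since then w e^w = c e^(-cr)/a0. A numerical check with arbitrary real constants, using a minimal Halley evaluator for W0:

```python
import math

def lambert_w0(z):
    # Principal branch of Lambert W via Halley's method (a minimal sketch).
    w = math.log(z) - math.log(math.log(z)) if z > math.e else z / math.e if z > 0 else z
    for _ in range(80):
        ew = math.exp(w)
        f = w * ew - z
        if abs(f) < 1e-15 * (1.0 + abs(z)):
            break
        w -= f / (ew * (w + 1) - (w + 2) * f / (2 * w + 2))
    return w

a0, c, r = 2.0, 1.5, 0.25    # arbitrary real constants
x = r + lambert_w0(c * math.exp(-c * r) / a0) / c
residual = math.exp(-c * x) - a0 * (x - r)   # should vanish
```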
Equation (2) expresses the equation governing the dilaton field, from which is derived the metric of the R = T or lineal two-body gravity problem in 1 + 1 dimensions (one spatial dimension and one time dimension) for the case of unequal rest masses, as well as the eigenenergies of the quantum-mechanical double-well Dirac delta function model for unequal charges in one dimension. Another generalization yields analytical solutions for the eigenenergies of a special case of the quantum-mechanical three-body problem, namely the (three-dimensional) hydrogen molecule-ion. Here the right-hand side of (1) is replaced by a ratio of infinite order polynomials in x: where ri and si are distinct real constants and x is a function of the eigenenergy and the internuclear distance R. Equation (3) with its specialized cases expressed in (1) and (2) is related to a large class of delay differential equations. G. H. Hardy's notion of a "false derivative" provides exact multiple roots to special cases of (3). Applications of the Lambert W function in fundamental physical problems are not exhausted even for the standard case expressed in (1), as seen recently in the area of atomic, molecular, and optical physics. == Plots == Plots of the Lambert W function on the complex plane == Numerical evaluation == The W function may be approximated using Newton's method, with successive approximations to w = W(z) (so z = wew) being w j + 1 = w j − w j e w j − z e w j + w j e w j . {\displaystyle w_{j+1}=w_{j}-{\frac {w_{j}e^{w_{j}}-z}{e^{w_{j}}+w_{j}e^{w_{j}}}}.} The W function may also be approximated using Halley's method, w j + 1 = w j − w j e w j − z e w j ( w j + 1 ) − ( w j + 2 ) ( w j e w j − z ) 2 w j + 2 {\displaystyle w_{j+1}=w_{j}-{\frac {w_{j}e^{w_{j}}-z}{e^{w_{j}}\left(w_{j}+1\right)-{\dfrac {\left(w_{j}+2\right)\left(w_{j}e^{w_{j}}-z\right)}{2w_{j}+2}}}}} given in Corless et al. to compute W. For real x ≥ − 1 / e {\displaystyle x\geq -1/e} , it may be approximated by the quadratic-rate recursive formula of R.
Iacono and J.P. Boyd: w n + 1 ( x ) = w n ( x ) 1 + w n ( x ) ( 1 + log ⁡ ( x w n ( x ) ) ) . {\displaystyle w_{n+1}(x)={\frac {w_{n}(x)}{1+w_{n}(x)}}\left(1+\log \left({\frac {x}{w_{n}(x)}}\right)\right).} Lajos Lóczi proves that by using this iteration with an appropriate starting value w 0 ( x ) {\displaystyle w_{0}(x)} , For the principal branch W 0 : {\displaystyle W_{0}:} if x ∈ ( e , ∞ ) {\displaystyle x\in (e,\infty )} : w 0 ( x ) = log ⁡ ( x ) − log ⁡ ( log ⁡ ( x ) ) , {\displaystyle w_{0}(x)=\log(x)-\log(\log(x)),} if x ∈ ( 0 , e ) : {\displaystyle x\in (0,e):} w 0 ( x ) = x / e , {\displaystyle w_{0}(x)=x/e,} if x ∈ ( − 1 / e , 0 ) : {\displaystyle x\in (-1/e,0):} w 0 ( x ) = e x log ⁡ ( 1 + 1 + e x ) 1 + e x + 1 + e x , {\displaystyle w_{0}(x)={\frac {ex\log(1+{\sqrt {1+ex}})}{1+ex+{\sqrt {1+ex}}}},} For the branch W − 1 : {\displaystyle W_{-1}:} if x ∈ ( − 1 / 4 , 0 ) : {\displaystyle x\in (-1/4,0):} w 0 ( x ) = log ⁡ ( − x ) − log ⁡ ( − log ⁡ ( − x ) ) , {\displaystyle w_{0}(x)=\log(-x)-\log(-\log(-x)),} if x ∈ ( − 1 / e , − 1 / 4 ] : {\displaystyle x\in (-1/e,-1/4]:} w 0 ( x ) = − 1 − 2 1 + e x , {\displaystyle w_{0}(x)=-1-{\sqrt {2}}{\sqrt {1+ex}},} one can determine the maximum number of iteration steps in advance for any precision: if x ∈ ( e , ∞ ) {\displaystyle x\in (e,\infty )} (Theorem 2.4): 0 < W 0 ( x ) − w n ( x ) < ( log ⁡ ( 1 + 1 / e ) ) 2 n , {\displaystyle 0<W_{0}(x)-w_{n}(x)<\left(\log(1+1/e)\right)^{2^{n}},} if x ∈ ( 0 , e ) {\displaystyle x\in (0,e)} (Theorem 2.9): 0 < W 0 ( x ) − w n ( x ) < ( 1 − 1 / e ) 2 n − 1 5 , {\displaystyle 0<W_{0}(x)-w_{n}(x)<{\frac {\left(1-1/e\right)^{2^{n}-1}}{5}},} if x ∈ ( − 1 / e , 0 ) : {\displaystyle x\in (-1/e,0):} for the principal branch W 0 {\displaystyle W_{0}} (Theorem 2.17): 0 < w n ( x ) − W 0 ( x ) < ( 1 / 10 ) 2 n , {\displaystyle 0<w_{n}(x)-W_{0}(x)<\left(1/10\right)^{2^{n}},} for the branch W − 1 {\displaystyle W_{-1}} (Theorem 2.23): 0 < W − 1 ( x ) − w n ( x ) < ( 1 / 2 ) 2 n . 
{\displaystyle 0<W_{-1}(x)-w_{n}(x)<\left(1/2\right)^{2^{n}}.} Toshio Fukushima has presented a fast method for approximating the real valued parts of the principal and secondary branches of the W function without using any iteration. In this method the W function is evaluated as a conditional switch of rational functions on transformed variables: W 0 ( z ) = { X k ( x ) , ( z k − 1 <= z < z k , k = 1 , 2 , … , 17 ) , U k ( u ) , ( z k − 1 <= z < z k , k = 18 , 19 ) , {\displaystyle W_{0}(z)={\begin{cases}X_{k}(x),&(z_{k-1}<=z<z_{k},\quad k=1,2,\ldots ,17),\\U_{k}(u),&(z_{k-1}<=z<z_{k},\quad k=18,19),\end{cases}}} W − 1 ( z ) = { Y k ( y ) , ( z k − 1 <= z < z k , k = − 1 , − 2 , … , − 7 ) , V k ( u ) , ( z k − 1 <= z < z k , k = − 8 , − 9 , − 10 ) , {\displaystyle W_{-1}(z)={\begin{cases}Y_{k}(y),&(z_{k-1}<=z<z_{k},\quad k=-1,-2,\ldots ,-7),\\V_{k}(u),&(z_{k-1}<=z<z_{k},\quad k=-8,-9,-10),\end{cases}}} where x, u, y and v are transformations of z: x = z + 1 / e , u = ln ⁡ z , y = − z / ( x + 1 / e ) , v = ln ⁡ ( − z ) {\displaystyle x={\sqrt {z+1/e}},\quad u=\ln {z},\quad y=-z/(x+1/{\sqrt {e}}),\quad v=\ln(-z)} . Here X k ( x ) {\displaystyle X_{k}(x)} , U k ( u ) {\displaystyle U_{k}(u)} , Y k ( y ) {\displaystyle Y_{k}(y)} , and V k ( v ) {\displaystyle V_{k}(v)} are rational functions whose coefficients for different k-values are listed in the referenced paper together with the z k {\displaystyle z_{k}} values that determine the subdomains. With higher degree polynomials in these rational functions the method can approximate the W function more accurately. 
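The iterations quoted above are short to implement. The sketch below applies the Halley step of Corless et al. and the Iacono-Boyd recursion with Lóczi's starting value for x > e; the two agree to machine precision. This is illustrative code, not the Fukushima rational approximation:

```python
import math

def halley_w(z, w):
    # Halley iteration for W, as quoted above, from a supplied starting guess.
    for _ in range(40):
        ew = math.exp(w)
        f = w * ew - z
        if abs(f) < 1e-15 * (1.0 + abs(z)):
            break
        w -= f / (ew * (w + 1) - (w + 2) * f / (2 * w + 2))
    return w

def iacono_boyd_w0(x, n=8):
    # Quadratic-rate recursion with Loczi's starting value for x in (e, inf).
    w = math.log(x) - math.log(math.log(x))
    for _ in range(n):
        w = w / (1.0 + w) * (1.0 + math.log(x / w))
    return w

w_halley = halley_w(10.0, math.log(10.0))   # W0(10) ≈ 1.74553
w_ib = iacono_boyd_w0(10.0)
```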
For example, when − 1 / e ≤ z ≤ 2.0082178115844727 {\displaystyle -1/e\leq z\leq 2.0082178115844727} , W 0 ( z ) {\displaystyle W_{0}(z)} can be approximated to 24 bits of accuracy on 64-bit floating point values as W 0 ( z ) ≈ X 1 ( x ) = ∑ i 4 P i x i ∑ i 3 Q i x i {\displaystyle W_{0}(z)\approx X_{1}(x)={\frac {\sum _{i}^{4}P_{i}x^{i}}{\sum _{i}^{3}Q_{i}x^{i}}}} where x is defined with the transformation above and the coefficients P i {\displaystyle P_{i}} and Q i {\displaystyle Q_{i}} are given in the referenced paper, together with the z k {\displaystyle z_{k}} subdomain boundaries. Fukushima also offers an approximation with 50 bits of accuracy on 64-bit floats that uses 8th- and 7th-degree polynomials. == Software == The Lambert W function is implemented in many programming languages. == See also == Wright omega function Lambert's trinomial equation Lagrange inversion theorem Experimental mathematics Holstein–Herring method R = T model Ross' π lemma == Notes == == References == Corless, R.; Gonnet, G.; Hare, D.; Jeffrey, D.; Knuth, Donald (1996). "On the Lambert W function" (PDF). Advances in Computational Mathematics. 5: 329–359. doi:10.1007/BF02124750. ISSN 1019-7168. S2CID 29028411. Archived from the original (PDF) on 2010-12-14. Retrieved 2007-03-10. Chapeau-Blondeau, F.; Monir, A. (2002). "Evaluation of the Lambert W Function and Application to Generation of Generalized Gaussian Noise With Exponent 1/2" (PDF). IEEE Trans. Signal Process. 50 (9). doi:10.1109/TSP.2002.801912. Archived from the original (PDF) on 2012-03-28. Retrieved 2004-03-10. Francis; et al. (2000). "Quantitative General Theory for Periodic Breathing". Circulation. 102 (18): 2214–21. CiteSeerX 10.1.1.505.7194. doi:10.1161/01.cir.102.18.2214. PMID 11056095. S2CID 14410926. (Lambert function is used to solve delay-differential dynamics in human disease.) Hayes, B. (2005). "Why W?" (PDF). American Scientist. 93 (2): 104–108. doi:10.1511/2005.2.104. Archived (PDF) from the original on 2022-10-10. Roy, R.; Olver, F. W. J.
(2010), "Lambert W function", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248. Stewart, Seán M. (2005). "A New Elementary Function for Our Curricula?" (PDF). Australian Senior Mathematics Journal. 19 (2): 8–26. ISSN 0819-4564. Archived (PDF) from the original on 2022-10-10. Veberic, D., "Having Fun with Lambert W(x) Function" arXiv:1003.1628 (2010); Veberic, D. (2012). "Lambert W function for applications in physics". Computer Physics Communications. 183 (12): 2622–2628. arXiv:1209.0735. Bibcode:2012CoPhC.183.2622V. doi:10.1016/j.cpc.2012.07.008. S2CID 315088. Chatzigeorgiou, I. (2013). "Bounds on the Lambert function and their Application to the Outage Analysis of User Cooperation". IEEE Communications Letters. 17 (8): 1505–1508. arXiv:1601.04895. doi:10.1109/LCOMM.2013.070113.130972. S2CID 10062685. == External links == National Institute of Standards and Technology Digital Library – Lambert W MathWorld – Lambert W-Function Computing the Lambert W function Corless et al. Notes about Lambert W research GPL C++ implementation with Halley's and Fritsch's iteration. Special Functions of the GNU Scientific Library – GSL
Wikipedia/Lambert_W_function
In mathematics, a half-exponential function is a functional square root of an exponential function. That is, a function f {\displaystyle f} such that f {\displaystyle f} composed with itself results in an exponential function: f ( f ( x ) ) = a b x , {\displaystyle f{\bigl (}f(x){\bigr )}=ab^{x},} for some constants a {\displaystyle a} and b {\displaystyle b} . Hellmuth Kneser first proposed a holomorphic construction of the solution of f ( f ( x ) ) = e x {\displaystyle f{\bigl (}f(x){\bigr )}=e^{x}} in 1950. It is closely related to the problem of extending tetration to non-integer values; the value of 1 2 a {\displaystyle {}^{\frac {1}{2}}a} can be understood as the value of f ( 1 ) {\displaystyle f{\bigl (}1)} , where f ( x ) {\displaystyle f{\bigl (}x)} satisfies f ( f ( x ) ) = a x {\displaystyle f{\bigl (}f(x){\bigr )}=a^{x}} . Example values from Kneser's solution of f ( f ( x ) ) = e x {\displaystyle f{\bigl (}f(x){\bigr )}=e^{x}} include f ( 0 ) ≈ 0.49856 {\displaystyle f{\bigl (}0)\approx 0.49856} and f ( 1 ) ≈ 1.64635 {\displaystyle f{\bigl (}1)\approx 1.64635} . == Impossibility of a closed-form formula == If a function f {\displaystyle f} is defined using the standard arithmetic operations, exponentials, logarithms, and real-valued constants, then f ( f ( x ) ) {\displaystyle f{\bigl (}f(x){\bigr )}} is either subexponential or superexponential. Thus, a Hardy L-function cannot be half-exponential. == Construction == Any exponential function can be written as the self-composition f ( f ( x ) ) {\displaystyle f(f(x))} for infinitely many possible choices of f {\displaystyle f} . 
In particular, for every $A$ in the open interval $(0,1)$ and for every continuous strictly increasing function $g$ from $[0,A]$ onto $[A,1]$, there is an extension of this function to a continuous strictly increasing function $f$ on the real numbers such that $f{\bigl (}f(x){\bigr )}=\exp x$. The function $f$ is the unique solution to the functional equation
$$f(x)={\begin{cases}g(x)&{\text{if }}x\in [0,A],\\\exp g^{-1}(x)&{\text{if }}x\in (A,1],\\\exp f(\ln x)&{\text{if }}x\in (1,\infty ),\\\ln f(\exp x)&{\text{if }}x\in (-\infty ,0).\end{cases}}$$
A simple example, which leads to $f$ having a continuous first derivative $f'$ everywhere, and also causes $f''\geq 0$ everywhere (i.e. $f(x)$ is concave-up, and $f'(x)$ increasing, for all real $x$), is to take $A={\tfrac {1}{2}}$ and $g(x)=x+{\tfrac {1}{2}}$, giving
$$f(x)={\begin{cases}\log _{e}\left(e^{x}+{\tfrac {1}{2}}\right)&{\text{if }}x\leq -\log _{e}2,\\e^{x}-{\tfrac {1}{2}}&{\text{if }}{-\log _{e}2}\leq x\leq 0,\\x+{\tfrac {1}{2}}&{\text{if }}0\leq x\leq {\tfrac {1}{2}},\\e^{x-1/2}&{\text{if }}{\tfrac {1}{2}}\leq x\leq 1,\\x{\sqrt {e}}&{\text{if }}1\leq x\leq {\sqrt {e}},\\e^{x/{\sqrt {e}}}&{\text{if }}{\sqrt {e}}\leq x\leq e,\\x^{\sqrt {e}}&{\text{if }}e\leq x\leq e^{\sqrt {e}},\\e^{x^{1/{\sqrt {e}}}}&{\text{if }}e^{\sqrt {e}}\leq x\leq e^{e},\quad \ldots \end{cases}}$$
Crone and Neuendorffer claim that there is no semi-exponential function $f(x)$ that is both (a) analytic and (b) always maps reals to reals. The piecewise solution above achieves goal (b) but not (a). Achieving goal (a) is possible by writing $e^{x}$ as a Taylor series based at a fixed point $Q$ (there are infinitely many such fixed points, but they are all nonreal complex, for example $Q=0.3181315+1.3372357i$), making $Q$ also a fixed point of $f$, that is $f(Q)=e^{Q}=Q$, and then computing the Maclaurin series coefficients of $f(x-Q)$ one by one. This results in Kneser's construction mentioned above.

== Application ==
Half-exponential functions are used in computational complexity theory for growth rates "intermediate" between polynomial and exponential.
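As a concrete illustration, the four-case functional equation can be implemented directly by recursion. The following is a minimal Python sketch (the function name `half_exp` is our own), using $A={\tfrac {1}{2}}$ and $g(x)=x+{\tfrac {1}{2}}$ as in the example above:

```python
import math

# Sketch of the piecewise half-exponential function described above,
# with A = 1/2 and g(x) = x + 1/2 (so g^{-1}(x) = x - 1/2).
# The function follows the four-case functional equation in the text.

def half_exp(x):
    """A continuous, strictly increasing f with f(f(x)) = exp(x)."""
    if 0.0 <= x <= 0.5:
        return x + 0.5                              # g(x) on [0, A]
    if 0.5 < x <= 1.0:
        return math.exp(x - 0.5)                    # exp(g^{-1}(x)) on (A, 1]
    if x > 1.0:
        return math.exp(half_exp(math.log(x)))      # exp(f(ln x)) on (1, inf)
    return math.log(half_exp(math.exp(x)))          # ln(f(exp x)) for x < 0
```

The recursion terminates because repeated logarithms bring any $x>1$ into $[0,1]$, and any $x<0$ is handled through $\exp x\in (0,1)$; composing `half_exp` with itself then reproduces $e^{x}$ to floating-point accuracy.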
A function $f$ grows at least as quickly as some half-exponential function (its composition with itself grows exponentially) if it is non-decreasing and $f^{-1}(x^{C})=o(\log x)$ for every $C>0$.

== See also ==
Iterated function – Result of repeatedly applying a mathematical function
Schröder's equation – Equation for fixed point of functional composition
Abel equation – Equation for function that computes iterated values

== References ==

== External links ==
Does the exponential function have a (compositional) square root?
“Closed-form” functions with half-exponential growth
Wikipedia/Half-exponential_function
In complex analysis, a Padé table is an array, possibly of infinite extent, of the rational Padé approximants $R_{m,n}$ to a given complex formal power series. Certain sequences of approximants lying within a Padé table can often be shown to correspond with successive convergents of a continued fraction representation of a holomorphic or meromorphic function.

== History ==
Although earlier mathematicians had obtained sporadic results involving sequences of rational approximations to transcendental functions, Frobenius (in 1881) was apparently the first to organize the approximants in the form of a table. Henri Padé further expanded this notion in his 1892 doctoral thesis Sur la représentation approchée d'une fonction par des fractions rationnelles. Over the ensuing 16 years Padé published 28 additional papers exploring the properties of his table, and relating the table to analytic continued fractions. Modern interest in Padé tables was revived by H. S. Wall and Oskar Perron, who were primarily interested in the connections between the tables and certain classes of continued fractions. Daniel Shanks and Peter Wynn published influential papers around 1955, and W. B. Gragg obtained far-reaching convergence results during the 1970s. More recently, the widespread use of electronic computers has stimulated a great deal of additional interest in the subject.

== Notation ==
A function f(z) is represented by a formal power series:
$$f(z)=c_{0}+c_{1}z+c_{2}z^{2}+\cdots =\sum _{l=0}^{\infty }c_{l}z^{l},$$
where $c_{0}\neq 0$, by convention.
The (m, n)th entry $R_{m,n}$ in the Padé table for f(z) is then given by
$$R_{m,n}(z)={\frac {P_{m}(z)}{Q_{n}(z)}}={\frac {a_{0}+a_{1}z+a_{2}z^{2}+\cdots +a_{m}z^{m}}{b_{0}+b_{1}z+b_{2}z^{2}+\cdots +b_{n}z^{n}}},$$
where $P_{m}(z)$ and $Q_{n}(z)$ are polynomials of degrees not more than m and n, respectively. The coefficients $\{a_{i}\}$ and $\{b_{i}\}$ can always be found by considering the expression
$$f(z)\approx \sum _{l=0}^{m+n}c_{l}z^{l}=:f_{\mathrm {approx} }(z),$$
$$Q_{n}(z)\,f_{\mathrm {approx} }(z)=P_{m}(z),$$
$$Q_{n}(z)\left(c_{0}+c_{1}z+c_{2}z^{2}+\cdots +c_{m+n}z^{m+n}\right)=P_{m}(z),$$
and equating coefficients of like powers of z up through m + n. For the coefficients of powers m + 1 to m + n, the right-hand side is 0 and the resulting system of linear equations contains a homogeneous system of n equations in the n + 1 unknowns $b_{i}$, and so admits infinitely many solutions, each of which determines a possible $Q_{n}$. $P_{m}$ is then easily found by equating the first m coefficients of the equation above. However, it can be shown that, due to cancellation, the generated rational functions $R_{m,n}$ are all the same, so that the (m, n)th entry in the Padé table is unique. Alternatively, we may require that $b_{0}=1$, thus putting the table in a standard form. Although the entries in the Padé table can always be generated by solving this system of equations, that approach is computationally expensive. Usage of the Padé table has been extended to meromorphic functions by newer, timesaving methods such as the epsilon algorithm.
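To make the construction concrete, here is a small Python sketch (the function name `pade_coeffs` is our own) that imposes $b_{0}=1$, solves the n equations from powers m + 1, ..., m + n for $b_{1},\dots ,b_{n}$ in exact rational arithmetic, and then reads off $a_{0},\dots ,a_{m}$:

```python
from fractions import Fraction

# Sketch of the linear-system construction described above.  With b_0 = 1,
# equating coefficients of z^{m+1}, ..., z^{m+n} gives n equations for
# b_1, ..., b_n; the a_j then follow directly from the lower powers.

def pade_coeffs(c, m, n):
    """Return (a, b) for the [m/n] entry, from series coefficients c[0..m+n]."""
    def coef(k):
        return c[k] if k >= 0 else Fraction(0)
    # System: sum_{i=1..n} b_i * c_{l-i} = -c_l  for l = m+1, ..., m+n.
    A = [[coef(l - i) for i in range(1, n + 1)] for l in range(m + 1, m + n + 1)]
    rhs = [-coef(l) for l in range(m + 1, m + n + 1)]
    # Gaussian elimination in exact rational arithmetic.
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            t = A[r][col] / A[col][col]
            for k in range(col, n):
                A[r][k] -= t * A[col][k]
            rhs[r] -= t * rhs[col]
    b = [Fraction(1)] + [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        s = rhs[r] - sum(A[r][k] * b[k + 1] for k in range(r + 1, n))
        b[r + 1] = s / A[r][r]
    a = [sum(b[i] * coef(j - i) for i in range(0, min(j, n) + 1))
         for j in range(m + 1)]
    return a, b
```

For $e^{z}$ with m = n = 2 this reproduces the well-known entry $(1+z/2+z^{2}/12)/(1-z/2+z^{2}/12)$.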
== The block theorem and normal approximants ==
Because of the way the (m, n)th approximant is constructed, the difference $Q_{n}(z)f(z)-P_{m}(z)$ is a power series whose first term is of degree no less than m + n + 1. If the first term of that difference is of degree m + n + r + 1, r > 0, then the rational function $R_{m,n}$ occupies $(r+1)^{2}$ cells in the Padé table, from position (m, n) through position (m + r, n + r), inclusive. In other words, if the same rational function appears more than once in the table, that rational function occupies a square block of cells within the table. This result is known as the block theorem.

If a particular rational function occurs exactly once in the Padé table, it is called a normal approximant to f(z). If every entry in the complete Padé table is normal, the table itself is said to be normal. Normal Padé approximants can be characterized using determinants of the coefficients $c_{n}$ in the Taylor series expansion of f(z), as follows. Define the (m, n)th determinant by
$$D_{m,n}=\left|{\begin{matrix}c_{m}&c_{m-1}&\ldots &c_{m-n+2}&c_{m-n+1}\\c_{m+1}&c_{m}&\ldots &c_{m-n+3}&c_{m-n+2}\\\vdots &\vdots &&\vdots &\vdots \\c_{m+n-2}&c_{m+n-3}&\ldots &c_{m}&c_{m-1}\\c_{m+n-1}&c_{m+n-2}&\ldots &c_{m+1}&c_{m}\end{matrix}}\right|$$
with $D_{m,0}=1$, $D_{m,1}=c_{m}$, and $c_{k}=0$ for k < 0. Then the (m, n)th approximant to f(z) is normal if and only if none of the four determinants $D_{m,n-1}$, $D_{m,n}$, $D_{m+1,n}$, and $D_{m+1,n+1}$ vanish; and the Padé table is normal if and only if none of the determinants $D_{m,n}$ are equal to zero (note in particular that this means none of the coefficients $c_{k}$ in the series representation of f(z) can be zero).
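The determinant test above is easy to run numerically. The following Python sketch (helper names `det`, `D`, and `is_normal` are our own) builds the $n\times n$ matrix for $D_{m,n}$, whose (i, j) entry is $c_{m+i-j}$ with $c_{k}=0$ for k < 0, and checks the four bordering determinants:

```python
from fractions import Fraction

# Sketch of the normality test above.  Assumes the list c supplies every
# coefficient the determinants need; exact Fraction arithmetic throughout.

def det(M):
    """Determinant by Laplace expansion (fine for the small n used here)."""
    if len(M) == 1:
        return M[0][0]
    total = Fraction(0)
    for j, entry in enumerate(M[0]):
        if entry:
            minor = [row[:j] + row[j + 1:] for row in M[1:]]
            total += (-1) ** j * entry * det(minor)
    return total

def D(c, m, n):
    """The (m, n)th determinant D_{m,n}, with D_{m,0} = 1 by convention."""
    if n == 0:
        return Fraction(1)
    coef = lambda k: c[k] if 0 <= k < len(c) else Fraction(0)
    return det([[coef(m + i - j) for j in range(n)] for i in range(n)])

def is_normal(c, m, n):
    """Normality of the (m, n) entry: all four bordering determinants nonzero."""
    return all(D(c, i, j) != 0
               for i, j in [(m, n - 1), (m, n), (m + 1, n), (m + 1, n + 1)])
```

For $e^{z}$ ($c_{k}=1/k!$) every entry is normal, while for the geometric series (all $c_{k}=1$) repeated entries appear, so for example the (1, 1) entry is not normal.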
== Connection with continued fractions ==
One of the most important forms in which an analytic continued fraction can appear is as a regular continued fraction, which is a continued fraction of the form
$$f(z)=b_{0}+{\cfrac {a_{1}z}{1-{\cfrac {a_{2}z}{1-{\cfrac {a_{3}z}{1-{\cfrac {a_{4}z}{1-\ddots }}}}}}}},$$
where the $a_{i}\neq 0$ are complex constants, and z is a complex variable.

There is an intimate connection between regular continued fractions and Padé tables with normal approximants along the main diagonal: the "stairstep" sequence of Padé approximants $R_{0,0},R_{1,0},R_{1,1},R_{2,1},R_{2,2},\ldots$ is normal if and only if that sequence coincides with the successive convergents of a regular continued fraction. In other words, if the Padé table is normal along the main diagonal, it can be used to construct a regular continued fraction, and if a regular continued fraction representation for the function f(z) exists, then the main diagonal of the Padé table representing f(z) is normal.

== An example – the exponential function ==
Here is an example of a Padé table, for the exponential function. Several features are immediately apparent. The first column of the table consists of the successive truncations of the Taylor series for $e^{z}$; similarly, the first row contains the reciprocals of successive truncations of the series expansion of $e^{-z}$. The approximants $R_{m,n}$ and $R_{n,m}$ are quite symmetrical – the numerators and denominators are interchanged, and the patterns of plus and minus signs are different, but the same coefficients appear in both of these approximants. They can be expressed in terms of special functions as
$$R_{m,n}={\frac {{}_{1}F_{1}(-m;-m-n;z)}{{}_{1}F_{1}(-n;-m-n;-z)}}={\frac {n!\,2^{m}\,\theta _{m}\left({\tfrac {z}{2}};n-m+2,2\right)}{m!\,2^{n}\,\theta _{n}\left(-{\tfrac {z}{2}};m-n+2,2\right)}},$$
where ${}_{1}F_{1}(a;b;z)$ is a generalized hypergeometric series and $\theta _{n}(x;\alpha ,\beta )$ is a generalized reverse Bessel polynomial. The expressions on the main diagonal reduce to $R_{n,n}=\theta _{n}(z/2)/\theta _{n}(-z/2)$, where $\theta _{n}(x)$ is a reverse Bessel polynomial.

Computations involving the $R_{n,n}$ (on the main diagonal) can be done quite efficiently. For example, $R_{3,3}$ reproduces the power series for the exponential function perfectly up through the term ${\tfrac {1}{720}}z^{6}$, but because of the symmetry of the two cubic polynomials, a very fast evaluation algorithm can be devised.

The procedure used to derive Gauss's continued fraction can be applied to a certain confluent hypergeometric series to derive the following C-fraction expansion for the exponential function, valid throughout the entire complex plane:
$$e^{z}=1+{\cfrac {z}{1-{\cfrac {{\frac {1}{2}}z}{1+{\cfrac {{\frac {1}{6}}z}{1-{\cfrac {{\frac {1}{6}}z}{1+{\cfrac {{\frac {1}{10}}z}{1-{\cfrac {{\frac {1}{10}}z}{1+-\ddots }}}}}}}}}}}}.$$
By applying the fundamental recurrence formulas one may easily verify that the successive convergents of this C-fraction are the stairstep sequence of Padé approximants $R_{0,0},R_{1,0},R_{1,1},\ldots$ In this particular case a closely related continued fraction can be obtained from the identity $e^{z}=1/e^{-z}$; that continued fraction looks like this:
$$e^{z}={\cfrac {1}{1-{\cfrac {z}{1+{\cfrac {{\frac {1}{2}}z}{1-{\cfrac {{\frac {1}{6}}z}{1+{\cfrac {{\frac {1}{6}}z}{1-{\cfrac {{\frac {1}{10}}z}{1+{\cfrac {{\frac {1}{10}}z}{1-+\ddots }}}}}}}}}}}}}}.$$
This fraction's successive convergents also appear in the Padé table, and form the sequence $R_{0,0},R_{0,1},R_{1,1},R_{1,2},R_{2,2},\ldots$

== Generalizations ==
A formal Newton series L is of the form
$$L(z)=c_{0}+\sum _{n=1}^{\infty }c_{n}\prod _{k=1}^{n}(z-\beta _{k}),$$
where the sequence $\{\beta _{k}\}$ of points in the complex plane is known as the set of interpolation points. A sequence of rational approximants $R_{m,n}$ can be formed for such a series L in a manner entirely analogous to the procedure described above, and the approximants can be arranged in a Newton–Padé table. It has been shown that some "staircase" sequences in the Newton–Padé table correspond with the successive convergents of a Thiele-type continued fraction, which is of the form
$$a_{0}+{\cfrac {a_{1}(z-\beta _{1})}{1-{\cfrac {a_{2}(z-\beta _{2})}{1-{\cfrac {a_{3}(z-\beta _{3})}{1-\ddots }}}}}}.$$
Mathematicians have also constructed two-point Padé tables by considering two series, one in powers of z, the other in powers of 1/z, which alternately represent the function f(z) in a neighborhood of zero and in a neighborhood of infinity.

== See also ==
Shanks transformation

== Notes ==

== References ==
Jones, William B.; Thron, W. J. (1980). Continued Fractions: Theory and Applications. Reading, Massachusetts: Addison-Wesley Publishing Company. pp. 185–197. ISBN 0-201-13510-8.
Wall, H. S. (1973). Analytic Theory of Continued Fractions. Chelsea Publishing Company. pp. 377–415. ISBN 0-8284-0207-8. (This is a reprint of the volume originally published by D. Van Nostrand Company, Inc., in 1948.)
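The C-fraction expansion for $e^{z}$ quoted in the example section above is easy to check numerically: fold the alternating "1 −" / "1 +" signs into the partial numerators, giving the coefficient sequence $1,-{\tfrac {1}{2}},{\tfrac {1}{6}},-{\tfrac {1}{6}},{\tfrac {1}{10}},-{\tfrac {1}{10}},\ldots$, and evaluate from the bottom up. The helper name `exp_cfrac` below is our own:

```python
# Numerical check of the C-fraction for e^z.  Each truncation is one of the
# stairstep convergents R_{0,0}, R_{1,0}, R_{1,1}, R_{2,1}, ...; the signed
# partial numerators come in pairs -1/(4j-2), +1/(4j+2).

def exp_cfrac(z, pairs=6):
    coeffs = [1.0]
    for j in range(1, pairs + 1):
        coeffs.append(-1.0 / (4 * j - 2))
        coeffs.append(1.0 / (4 * j + 2))
    t = 1.0
    for a in reversed(coeffs):        # evaluate from the innermost level out
        t = 1.0 + a * z / t
    return t
```

With `pairs=0` this returns $1+z=R_{1,0}$; each further coefficient advances one step along the stairstep sequence, and a handful of pairs already matches $e^{z}$ to near machine precision for moderate z.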
Wikipedia/Padé_table_for_exponential_function
In mathematics, the Mittag-Leffler functions are a family of special functions. They are complex-valued functions of a complex argument z, and moreover depend on one or two complex parameters.

The one-parameter Mittag-Leffler function, introduced by Gösta Mittag-Leffler in 1903, can be defined by the Maclaurin series
$$E_{\alpha }(z)=\sum _{k=0}^{\infty }{\frac {z^{k}}{\Gamma (\alpha k+1)}},$$
where $\Gamma (x)$ is the gamma function, and $\alpha$ is a complex parameter with $\operatorname {Re} (\alpha )>0$.

The two-parameter Mittag-Leffler function, introduced by Wiman in 1905, is occasionally called the generalized Mittag-Leffler function. It has an additional complex parameter $\beta$, and may be defined by the series
$$E_{\alpha ,\beta }(z)=\sum _{k=0}^{\infty }{\frac {z^{k}}{\Gamma (\alpha k+\beta )}}.$$
When $\beta =1$, the one-parameter function $E_{\alpha }=E_{\alpha ,1}$ is recovered. In the case that $\alpha$ and $\beta$ are real and positive, the series converges for all values of the argument $z$, so the Mittag-Leffler function is an entire function. This class of functions is important in the theory of the fractional calculus. See below for three-parameter generalizations.

== Some basic properties ==
For $\alpha >0$, the Mittag-Leffler function $E_{\alpha ,\beta }(z)$ is an entire function of order $1/\alpha$ and type $1$ for any value of $\beta$. In some sense, the Mittag-Leffler function is the simplest entire function of its order.
The indicator function of $E_{\alpha }(z)$ is (p. 50):
$$h_{E_{\alpha }}(\theta )={\begin{cases}\cos \left({\frac {\theta }{\alpha }}\right),&{\text{for }}|\theta |\leq {\frac {1}{2}}\alpha \pi ;\\0,&{\text{otherwise}}.\end{cases}}$$
This result actually holds for $\beta \neq 1$ as well, with some restrictions on $\beta$ when $\alpha =1$ (p. 67).

The Mittag-Leffler function satisfies the recurrence property (Theorem 5.1)
$$E_{\alpha ,\beta }(z)={\frac {1}{z}}E_{\alpha ,\beta -\alpha }(z)-{\frac {1}{z\,\Gamma (\beta -\alpha )}},$$
from which the following asymptotic expansions hold (Section 6): for $0<\alpha <2$ and $\mu$ real such that ${\frac {\pi \alpha }{2}}<\mu <\min(\pi ,\pi \alpha )$, and for all $N\in \mathbb {N} ^{*},N\neq 1$,

as $|z|\to +\infty$ with $|\arg(z)|\leq \mu$:
$$E_{\alpha }(z)={\frac {1}{\alpha }}\exp \left(z^{\frac {1}{\alpha }}\right)-\sum _{k=1}^{N}{\frac {1}{z^{k}\,\Gamma (1-\alpha k)}}+O\left({\frac {1}{z^{N+1}}}\right),$$
and as $|z|\to +\infty$ with $\mu \leq |\arg(z)|\leq \pi$:
$$E_{\alpha }(z)=-\sum _{k=1}^{N}{\frac {1}{z^{k}\,\Gamma (1-\alpha k)}}+O\left({\frac {1}{z^{N+1}}}\right).$$
A simpler estimate that is often useful follows from the fact that the order and type of $E_{\alpha ,\beta }(z)$ are $1/\alpha$ and $1$, respectively (p. 62):
$$|E_{\alpha ,\beta }(z)|\leq C\exp \left(\sigma |z|^{1/\alpha }\right)$$
for any $\sigma >1$ and some positive constant $C=C(\sigma )$.

== Special cases ==
For $\alpha =0$, the series above equals the Taylor expansion of the geometric series, and consequently
$$E_{0,\beta }(z)={\frac {1}{\Gamma (\beta )}}\,{\frac {1}{1-z}}.$$
For $\alpha =1/2,\,1,\,2$ we find (Section 2):

Error function: $E_{\frac {1}{2}}(z)=\exp(z^{2})\operatorname {erfc} (-z).$
Exponential function: $E_{1}(z)=\sum _{k=0}^{\infty }{\frac {z^{k}}{\Gamma (k+1)}}=\sum _{k=0}^{\infty }{\frac {z^{k}}{k!}}=\exp(z).$
Hyperbolic cosine: $E_{2}(z)=\cosh({\sqrt {z}})$, and $E_{2}(-z^{2})=\cos(z).$

For $\beta =2$, we have
$$E_{1,2}(z)={\frac {e^{z}-1}{z}},\qquad E_{2,2}(z)={\frac {\sinh({\sqrt {z}})}{\sqrt {z}}}.$$
For $\alpha =0,1,2$, the integral $\int _{0}^{z}E_{\alpha }(-s^{2})\,\mathrm {d} s$ gives, respectively: $\arctan(z)$, ${\tfrac {\sqrt {\pi }}{2}}\operatorname {erf} (z)$, $\sin(z)$.
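The special cases above give convenient checks for a direct implementation of the defining series; since the series is entire, a fixed truncation suffices for moderate |z|. The function name `mittag_leffler` below is our own:

```python
import math

# Direct truncation of the defining series E_{alpha,beta}(z).
# For real parameters and moderate |z| the series converges rapidly;
# beware math.gamma overflow when alpha * terms gets large (> ~170).

def mittag_leffler(z, alpha, beta=1.0, terms=60):
    return sum(z ** k / math.gamma(alpha * k + beta) for k in range(terms))
```

This reproduces the identities listed above, e.g. $E_{1}(z)=e^{z}$, $E_{2}(-z^{2})=\cos z$, and $E_{1,2}(z)=(e^{z}-1)/z$, to floating-point accuracy.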
== Mittag-Leffler's integral representation ==
The integral representation of the Mittag-Leffler function is (Section 6)
$$E_{\alpha ,\beta }(z)={\frac {1}{2\pi i}}\oint _{C}{\frac {t^{\alpha -\beta }e^{t}}{t^{\alpha }-z}}\,dt,\qquad \Re (\alpha )>0,\ \Re (\beta )>0,$$
where the contour $C$ starts and ends at $-\infty$ and circles around the singularities and branch points of the integrand.

Related to the Laplace transform and Mittag-Leffler summation is the expression (Eq. (7.5), with $m=0$)
$$\int _{0}^{\infty }e^{-tz}\,t^{\beta -1}E_{\alpha ,\beta }(\pm r\,t^{\alpha })\,dt={\frac {z^{\alpha -\beta }}{z^{\alpha }\mp r}},\qquad \Re (z)>0,\ \Re (\alpha )>0,\ \Re (\beta )>0.$$

== Three-parameter generalizations ==
One generalization, characterized by three parameters, is
$$E_{\alpha ,\beta }^{\gamma }(z)={\frac {1}{\Gamma (\gamma )}}\sum _{k=1}^{\infty }{\frac {\Gamma (\gamma +k)\,z^{k}}{k!\,\Gamma (\alpha k+\beta )}},$$
where $\alpha ,\beta$ and $\gamma$ are complex parameters and $\Re (\alpha )>0$.

Another generalization is the Prabhakar function
$$E_{\alpha ,\beta }^{\gamma }(z)=\sum _{k=0}^{\infty }{\frac {(\gamma )_{k}\,z^{k}}{k!\,\Gamma (\alpha k+\beta )}},$$
where $(\gamma )_{k}$ is the Pochhammer symbol.

== Applications of Mittag-Leffler function ==
One of the applications of the Mittag-Leffler function is in modeling fractional order viscoelastic materials.
Experimental investigations into the time-dependent relaxation behavior of viscoelastic materials are characterized by a very fast decrease of the stress at the beginning of the relaxation process and an extremely slow decay for large times, i.e. it takes a long time to approach a constant asymptotic value. Therefore, many Maxwell elements are required to describe relaxation behavior to sufficient accuracy. This results in a difficult optimization problem in order to identify the large number of material parameters required. On the other hand, over the years, the concept of fractional derivatives has been introduced into the theory of viscoelasticity. Among these models, the fractional Zener model was found to be very effective for predicting the dynamic nature of rubber-like materials using only a small number of material parameters. The solution of the corresponding constitutive equation leads to a relaxation function of the Mittag-Leffler type. It is defined by the power series with negative arguments. This function represents all essential properties of the relaxation process under the influence of an arbitrary and continuous signal with a jump at the origin. == See also == Mittag-Leffler summation Mittag-Leffler distribution == Notes == R Package 'MittagLeffleR' by Gurtek Gill, Peter Straka. Implements the Mittag-Leffler function, distribution, random variate generation, and estimation. == References == Gorenflo R., Kilbas A.A., Mainardi F., Rogosin S.V., Mittag-Leffler Functions, Related Topics and Applications (Springer, New York, 2014) 443 pages ISBN 978-3-662-43929-6 Igor Podlubny (1998). "chapter 1". Fractional Differential Equations. An Introduction to Fractional Derivatives, Fractional Differential Equations, Some Methods of Their Solution and Some of Their Applications. Mathematics in Science and Engineering. Academic Press. ISBN 0-12-558840-2. Kai Diethelm (2010). "chapter 4". 
The analysis of fractional differential equations: an application-oriented exposition using differential operators of Caputo type. Lecture Notes in Mathematics. Heidelberg and New York: Springer-Verlag. ISBN 978-3-642-14573-5. == External links == Mittag-Leffler function: MATLAB code Mittag-Leffler and stable random numbers: Continuous-time random walks and stochastic solution of space-time fractional diffusion equations
Wikipedia/Mittag-Leffler_function
In mathematics, a Padé approximant is the "best" approximation of a function near a specific point by a rational function of given order. Under this technique, the approximant's power series agrees with the power series of the function it is approximating. The technique was developed around 1890 by Henri Padé, but goes back to Georg Frobenius, who introduced the idea and investigated the features of rational approximations of power series.

The Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. For these reasons Padé approximants are used extensively in computer calculations. They have also been used as auxiliary functions in Diophantine approximation and transcendental number theory, though for sharp results ad hoc methods, in some sense inspired by the Padé theory, typically replace them. Since a Padé approximant is a rational function, an artificial singular point may occur as an approximation, but this can be avoided by Borel–Padé analysis.

The reason the Padé approximant tends to be a better approximation than a truncated Taylor series is clear from the viewpoint of the multi-point summation method. Since there are many cases in which the asymptotic expansion at infinity becomes 0 or a constant, the Padé approximant can be interpreted as an "incomplete two-point Padé approximation", which explains why the ordinary Padé approximation improves on truncation of a Taylor series.
== Definition ==
Given a function f and two integers m ≥ 0 and n ≥ 1, the Padé approximant of order [m/n] is the rational function
$$R(x)={\frac {\sum _{j=0}^{m}a_{j}x^{j}}{1+\sum _{k=1}^{n}b_{k}x^{k}}}={\frac {a_{0}+a_{1}x+a_{2}x^{2}+\dots +a_{m}x^{m}}{1+b_{1}x+b_{2}x^{2}+\dots +b_{n}x^{n}}},$$
which agrees with f(x) to the highest possible order, which amounts to
$$f(0)=R(0),\quad f'(0)=R'(0),\quad f''(0)=R''(0),\quad \ldots ,\quad f^{(m+n)}(0)=R^{(m+n)}(0).$$
Equivalently, if $R(x)$ is expanded in a Maclaurin series (Taylor series at 0), its terms through degree $m+n$ equal those of $f(x)$, and thus
$$f(x)-R(x)=c_{m+n+1}x^{m+n+1}+c_{m+n+2}x^{m+n+2}+\cdots $$
When it exists, the Padé approximant is unique as a formal power series for the given m and n. The Padé approximant defined above is also denoted as $[m/n]_{f}(x)$.

== Computation ==
For given x, Padé approximants can be computed by Wynn's epsilon algorithm and also other sequence transformations from the partial sums
$$T_{N}(x)=c_{0}+c_{1}x+c_{2}x^{2}+\cdots +c_{N}x^{N}$$
of the Taylor series of f, i.e., we have
$$c_{k}={\frac {f^{(k)}(0)}{k!}}.$$
f can also be a formal power series, and, hence, Padé approximants can also be applied to the summation of divergent series.

One way to compute a Padé approximant is via the extended Euclidean algorithm for the polynomial greatest common divisor.
The relation
$$R(x)=P(x)/Q(x)=T_{m+n}(x){\bmod {x^{m+n+1}}}$$
is equivalent to the existence of some factor $K(x)$ such that
$$P(x)=Q(x)T_{m+n}(x)+K(x)x^{m+n+1},$$
which can be interpreted as the Bézout identity of one step in the computation of the extended greatest common divisor of the polynomials $T_{m+n}(x)$ and $x^{m+n+1}$.

Recall that, to compute the greatest common divisor of two polynomials p and q, one computes via long division the remainder sequence
$$r_{0}=p,\quad r_{1}=q,\quad r_{k-1}=q_{k}r_{k}+r_{k+1},\quad k=1,2,3,\ldots$$
with $\deg r_{k+1}<\deg r_{k}$, until $r_{k+1}=0$. For the Bézout identities of the extended greatest common divisor one computes simultaneously the two polynomial sequences
$$u_{0}=1,\;v_{0}=0,\quad u_{1}=0,\;v_{1}=1,\quad u_{k+1}=u_{k-1}-q_{k}u_{k},\;v_{k+1}=v_{k-1}-q_{k}v_{k}$$
to obtain in each step the Bézout identity
$$r_{k}(x)=u_{k}(x)p(x)+v_{k}(x)q(x).$$
For the [m/n] approximant, one thus carries out the extended Euclidean algorithm for
$$r_{0}=x^{m+n+1},\quad r_{1}=T_{m+n}(x),$$
and stops it at the last instant that $v_{k}$ has degree n or smaller. Then the polynomials $P=r_{k},\;Q=v_{k}$ give the [m/n] Padé approximant. If one were to compute all steps of the extended greatest common divisor computation, one would obtain an anti-diagonal of the Padé table.
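The extended-Euclid computation just described can be sketched in Python with polynomials as coefficient lists (index = degree) over exact rationals. The helper names below are our own, and the code assumes the [m/n] entry is normal, so that $Q(0)\neq 0$:

```python
from fractions import Fraction

# Remainder sequence r_0 = x^{m+n+1}, r_1 = T_{m+n}, tracking the cofactor
# v_k, stopped at the last step where deg v_k <= n (as in the text).

def deg(p):
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d

def sub(p, q):
    r = [Fraction(0)] * max(len(p), len(q))
    for i, x in enumerate(p):
        r[i] += x
    for i, x in enumerate(q):
        r[i] -= x
    return r

def mul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            r[i + j] += x * y
    return r

def polydiv(p, q):
    """Quotient and remainder of p divided by q (long division)."""
    p, dq = p[:], deg(q)
    quot = [Fraction(0)] * max(deg(p) - dq + 1, 1)
    while deg(p) >= dq:
        shift = deg(p) - dq
        coeff = p[deg(p)] / q[dq]
        quot[shift] = coeff
        p = sub(p, mul([Fraction(0)] * shift + [coeff], q))
    return quot, p

def pade(c, m, n):
    """[m/n] Padé (P, Q) from Taylor coefficients c[0..m+n], with Q(0) = 1."""
    r0 = [Fraction(0)] * (m + n + 1) + [Fraction(1)]    # x^{m+n+1}
    r1 = [Fraction(x) for x in c[: m + n + 1]]          # T_{m+n}(x)
    v0, v1 = [Fraction(0)], [Fraction(1)]
    while True:
        q, r2 = polydiv(r0, r1)
        v2 = sub(v0, mul(q, v1))
        if deg(v2) > n:           # last v_k with degree <= n has been passed
            break
        r0, r1, v0, v1 = r1, r2, v1, v2
    k = v1[0]                     # normalize so that Q(0) = 1
    return ([x / k for x in r1[: deg(r1) + 1]],
            [x / k for x in v1[: deg(v1) + 1]])
```

For $e^{x}$ this yields, e.g., the [1/1] entry $(1+x/2)/(1-x/2)$ and the [2/2] entry $(1+x/2+x^{2}/12)/(1-x/2+x^{2}/12)$.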
== Riemann–Padé zeta function ==
To study the resummation of a divergent series, say
$$\sum _{z=1}^{\infty }f(z),$$
it can be useful to introduce the Padé or simply rational zeta function as
$$\zeta _{R}(s)=\sum _{z=1}^{\infty }{\frac {R(z)}{z^{s}}},$$
where $R(x)=[m/n]_{f}(x)$ is the Padé approximant of order (m, n) of the function f(x). The zeta regularization value at s = 0 is taken to be the sum of the divergent series. The functional equation for this Padé zeta function is
$$\sum _{j=0}^{n}a_{j}\zeta _{R}(s-j)=\sum _{j=0}^{m}b_{j}\zeta _{0}(s-j),$$
where $a_{j}$ and $b_{j}$ are the coefficients in the Padé approximation. The subscript '0' means that the Padé is of order [0/0], and hence we have the Riemann zeta function.

== DLog Padé method ==
Padé approximants can be used to extract critical points and exponents of functions. In thermodynamics, if a function f(x) behaves in a non-analytic way near a point x = r, like $f(x)\sim |x-r|^{p}$, one calls x = r a critical point and p the associated critical exponent of f. If sufficient terms of the series expansion of f are known, one can approximately extract the critical points and the critical exponents from, respectively, the poles and residues of the Padé approximants $[n/n+1]_{g}(x)$, where $g=f'/f$.

== Generalizations ==
A Padé approximant approximates a function in one variable. An approximant in two variables is called a Chisholm approximant (after J. S. R. Chisholm), in multiple variables a Canterbury approximant (after Graves-Morris at the University of Kent).

== Two-point Padé approximant ==
The conventional Padé approximation is determined to reproduce the Maclaurin expansion up to a given order.
Therefore, the approximation may be poor away from the expansion point. This is avoided by the 2-point Padé approximation, which is a type of multipoint summation method. Consider a function $f(x)$ with asymptotic behavior $f_{0}(x)$ at $x=0$:
$$f\sim f_{0}(x)+o{\big (}f_{0}(x){\big )},\quad x\to 0,$$
and additional asymptotic behavior $f_{\infty }(x)$ as $x\to \infty$:
$$f(x)\sim f_{\infty }(x)+o{\big (}f_{\infty }(x){\big )},\quad x\to \infty .$$
By selecting the major behavior of $f_{0}(x)$ and $f_{\infty }(x)$, approximating functions $F(x)$ that simultaneously reproduce both asymptotic behaviors can be found in various cases by developing the Padé approximation. As a result, at the point $x\to \infty$, where the accuracy of the ordinary Padé approximation may be worst, good accuracy of the 2-point Padé approximant is guaranteed. Therefore, the 2-point Padé approximant can give a good approximation globally for $x=0\sim \infty$. In cases where $f_{0}(x)$ and $f_{\infty }(x)$ are expressed by polynomials or series of negative powers, exponential functions, logarithmic functions or $x\ln x$, the 2-point Padé approximant can be applied to $f(x)$. There is a method of using this to give an approximate solution of a differential equation with high accuracy. Also, for the nontrivial zeros of the Riemann zeta function, the first nontrivial zero can be estimated with some accuracy from the asymptotic behavior on the real axis.
== Multi-point Padé approximant == A further extension of the 2-point Padé approximant is the multi-point Padé approximant. This method treats the singular points x = x j ( j = 1 , 2 , 3 , … , N ) {\displaystyle x=x_{j}(j=1,2,3,\dots ,N)} of the function f ( x ) {\displaystyle f(x)} to be approximated. Consider the case where the singularities of the function are expressed with exponents n j {\displaystyle n_{j}} by f ( x ) ∼ A j ( x − x j ) n j , x → x j . {\displaystyle f(x)\sim {\frac {A_{j}}{(x-x_{j})^{n_{j}}}},\quad x\to x_{j}.} In addition to the information at x = 0 , x → ∞ {\displaystyle x=0,x\to \infty } used by the 2-point Padé approximant, this method also reproduces the divergent behavior at each x ∼ x j {\displaystyle x\sim x_{j}}. Because this information about the singularities of the function is captured, the approximation of f ( x ) {\displaystyle f(x)} can be performed with higher accuracy.

== Examples ==

sin(x): {\displaystyle \sin(x)\approx {\frac {{\frac {12671}{4363920}}x^{5}-{\frac {2363}{18183}}x^{3}+x}{1+{\frac {445}{12122}}x^{2}+{\frac {601}{872784}}x^{4}+{\frac {121}{16662240}}x^{6}}}}

exp(x): {\displaystyle \exp(x)\approx {\frac {1+{\frac {1}{2}}x+{\frac {1}{9}}x^{2}+{\frac {1}{72}}x^{3}+{\frac {1}{1008}}x^{4}+{\frac {1}{30240}}x^{5}}{1-{\frac {1}{2}}x+{\frac {1}{9}}x^{2}-{\frac {1}{72}}x^{3}+{\frac {1}{1008}}x^{4}-{\frac {1}{30240}}x^{5}}}}

ln(1+x): {\displaystyle \ln(1+x)\approx {\frac {x+{\frac {1}{2}}x^{2}}{1+x+{\frac {1}{6}}x^{2}}}}

Jacobi sn(z|3): {\displaystyle \mathrm {sn} (z|3)\approx {\frac {-{\frac {9851629}{283609260}}z^{5}-{\frac {572744}{4726821}}z^{3}+z}{1+{\frac {859490}{1575607}}z^{2}-{\frac {5922035}{56721852}}z^{4}+{\frac {62531591}{2977897230}}z^{6}}}}

Bessel J5(x): {\displaystyle J_{5}(x)\approx {\frac {-{\frac {107}{28416000}}x^{7}+{\frac {1}{3840}}x^{5}}{1+{\frac {151}{5550}}x^{2}+{\frac {1453}{3729600}}x^{4}+{\frac {1339}{358041600}}x^{6}+{\frac {2767}{120301977600}}x^{8}}}}

erf(x): {\displaystyle \operatorname {erf} (x)\approx {\frac {2}{15{\sqrt {\pi }}}}\cdot {\frac {49140x+3570x^{3}+739x^{5}}{165x^{4}+1330x^{2}+3276}}}

Fresnel C(x): {\displaystyle C(x)\approx {\frac {1}{135}}\cdot {\frac {990791\pi ^{4}x^{9}-147189744\pi ^{2}x^{5}+8714684160x}{1749\pi ^{4}x^{8}+523536\pi ^{2}x^{4}+64553216}}}

== See also ==

Padé table
Bhaskara I's sine approximation formula – Formula to estimate the sine function
Approximation theory – Theory of getting acceptably close inexact mathematical calculations
Function approximation – Approximating an arbitrary function with a well-behaved one

== References ==

== Literature ==

Baker, G. A., Jr.; Graves-Morris, P. Padé Approximants. Cambridge U.P., 1996.
Baker, G. A., Jr. Padé approximant, Scholarpedia, 7(6):9756.
Brezinski, C.; Redivo Zaglia, M. Extrapolation Methods. Theory and Practice. North-Holland, 1991. ISBN 978-0444888143.
Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007), "Section 5.12 Padé Approximants", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8, archived from the original on 2016-03-03, retrieved 2011-08-09.
Frobenius, G.; Ueber Relationen zwischen den Näherungsbrüchen von Potenzreihen, Journal für die reine und angewandte Mathematik (Crelle's Journal), Volume 90, 1881, pp. 1–17.
Gragg, W. B.; The Padé Table and Its Relation to Certain Algorithms of Numerical Analysis, SIAM Review, Vol. 14, No. 1, 1972, pp. 1–62.
Padé, H.; Sur la représentation approchée d'une fonction par des fractions rationnelles, Thesis, Ann. Sci. École Norm. Sup. (3), 9, 1892, pp. 1–93, supplement.
Wynn, P. (1966), "Upon systems of recursions which obtain among the quotients of the Padé table", Numerische Mathematik, 8 (3): 264–269, doi:10.1007/BF02162562, S2CID 123789548.

== External links ==

Weisstein, Eric W. "Padé Approximant". MathWorld.
Padé Approximants, Oleksandr Pavlyk, The Wolfram Demonstrations Project.
Data Analysis BriefBook: Pade Approximation, Rudolf K. Bock, European Laboratory for Particle Physics, CERN.
Sinewave, Scott Dattalo, last accessed 2010-11-11.
MATLAB function for Padé approximation of models with time delays.
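Returning to the DLog Padé method described above, the recipe can be sketched numerically. The model function below is an assumption chosen so that the answer is known in closed form, and `scipy.interpolate.pade` supplies the rational approximant; the sketch recovers the critical point from a pole and the critical exponent from the corresponding residue of a Padé approximant of g = f′/f:

```python
from scipy.interpolate import pade

# Assumed model function: f(x) = (1 - x)**p * exp(x) with critical exponent
# p = -1.5, so f ~ |x - 1|**p near the critical point x = 1.  Its logarithmic
# derivative g(x) = f'(x)/f(x) = -p/(1 - x) + 1 has a simple pole at x = 1
# whose residue equals the critical exponent p.
p = -1.5

# Taylor coefficients of g about x = 0: g = (1 - p) + (-p) x + (-p) x**2 + ...
g_taylor = [1.0 - p, -p, -p]

# [1/1] Pade approximant of g: second argument is the denominator order,
# numerator order defaults to 1; scipy returns a pair of numpy poly1d objects.
num, den = pade(g_taylor, 1)

pole = den.roots[0].real                 # estimated critical point (expect ~1.0)
residue = num(pole) / den.deriv()(pole)  # estimated critical exponent (expect ~-1.5)
print(pole, residue)
```

Here the [1/1] approximant reproduces g exactly because g happens to be rational; for a genuinely non-rational g one would supply more Taylor coefficients and use the higher-order [n/n+1] approximants the text prescribes.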
Wikipedia/Padé_approximation
In philosophy, empiricism is an epistemological view which holds that true knowledge or justification comes only or primarily from sensory experience and empirical evidence. It is one of several competing views within epistemology, along with rationalism and skepticism. Empiricists argue that empiricism is a more reliable method of finding the truth than purely using logical reasoning, because humans have cognitive biases and limitations which lead to errors of judgement. Empiricism emphasizes the central role of empirical evidence in the formation of ideas, rather than innate ideas or traditions. Empiricists may argue that traditions (or customs) arise due to relations of previous sensory experiences. Historically, empiricism was associated with the "blank slate" concept (tabula rasa), according to which the human mind is "blank" at birth and develops its thoughts only through later experience. Empiricism in the philosophy of science emphasizes evidence, especially as discovered in experiments. It is a fundamental part of the scientific method that all hypotheses and theories must be tested against observations of the natural world rather than resting solely on a priori reasoning, intuition, or revelation. Empiricism, often used by natural scientists, believes that "knowledge is based on experience" and that "knowledge is tentative and probabilistic, subject to continued revision and falsification". Empirical research, including experiments and validated measurement tools, guides the scientific method. == Etymology == The English term empirical derives from the Ancient Greek word ἐμπειρία, empeiria, which is cognate with and translates to the Latin experientia, from which the words experience and experiment are derived. == Background == A central concept in science and the scientific method is that conclusions must be empirically based on the evidence of the senses. 
Both natural and social sciences use working hypotheses that are testable by observation and experiment. The term semi-empirical is sometimes used to describe theoretical methods that make use of basic axioms, established scientific laws, and previous experimental results to engage in reasoned model building and theoretical inquiry. Philosophical empiricists hold no knowledge to be properly inferred or deduced unless it is derived from one's sense-based experience. In epistemology (theory of knowledge) empiricism is typically contrasted with rationalism, which holds that knowledge may be derived from reason independently of the senses, and in the philosophy of mind it is often contrasted with innatism, which holds that some knowledge and ideas are already present in the mind at birth. However, many Enlightenment rationalists and empiricists still made concessions to each other. For example, the empiricist John Locke admitted that some knowledge (e.g. knowledge of God's existence) could be arrived at through intuition and reasoning alone. Similarly, Robert Boyle, a prominent advocate of the experimental method, held that we also have innate ideas. At the same time, the main continental rationalists (Descartes, Spinoza, and Leibniz) were also advocates of the empirical "scientific method". == History == === Early empiricism === Between 600 and 200 BCE, the Vaisheshika school of Hindu philosophy, founded by the ancient Indian philosopher Kanada, accepted perception and inference as the only two reliable sources of knowledge. This is enumerated in his work Vaiśeṣika Sūtra. The Charvaka school held similar beliefs, asserting that perception is the only reliable source of knowledge while inference obtains knowledge with uncertainty. The earliest Western proto-empiricists were the empiric school of ancient Greek medical practitioners, founded in 330 BCE. 
Its members rejected the doctrines of the dogmatic school, preferring to rely on the observation of phantasiai (i.e., phenomena, the appearances). The Empiric school was closely allied with the Pyrrhonist school of philosophy, which made the philosophical case for their proto-empiricism. The notion of tabula rasa ("clean slate" or "blank tablet") connotes a view of the mind as an originally blank or empty recorder (Locke used the words "white paper") on which experience leaves marks. This denies that humans have innate ideas. The notion dates back to Aristotle, c. 350 BC: What the mind (nous) thinks must be in it in the same sense as letters are on a tablet (grammateion) which bears no actual writing (grammenon); this is just what happens in the case of the mind. (Aristotle, On the Soul, 3.4.430a1). Aristotle's explanation of how this was possible was not strictly empiricist in a modern sense, but rather based on his theory of potentiality and actuality, and experience of sense perceptions still requires the help of the active nous. These notions contrasted with Platonic notions of the human mind as an entity that pre-existed somewhere in the heavens, before being sent down to join a body on Earth (see Plato's Phaedo and Apology, as well as others). Aristotle was considered to give a more important position to sense perception than Plato, and commentators in the Middle Ages summarized one of his positions as "nihil in intellectu nisi prius fuerit in sensu" (Latin for "nothing in the intellect without first being in the senses"). This idea was later developed in ancient philosophy by the Stoic school, from about 330 BCE. Stoic epistemology generally emphasizes that the mind starts blank, but acquires knowledge as the outside world is impressed upon it. The doxographer Aetius summarizes this view as "When a man is born, the Stoics say, he has the commanding part of his soul like a sheet of paper ready for writing upon." 
=== Islamic Golden Age and Pre-Renaissance (5th to 15th centuries CE) === During the Middle Ages (from the 5th to the 15th century CE) Aristotle's theory of tabula rasa was developed by Islamic philosophers starting with Al Farabi (c. 872 – c. 951 CE), developing into an elaborate theory by Avicenna (c.  980 – 1037 CE) and demonstrated as a thought experiment by Ibn Tufail. For Avicenna (Ibn Sina), for example, the tabula rasa is a pure potentiality that is actualized through education, and knowledge is attained through "empirical familiarity with objects in this world from which one abstracts universal concepts" developed through a "syllogistic method of reasoning in which observations lead to propositional statements which when compounded lead to further abstract concepts". The intellect itself develops from a material intellect (al-'aql al-hayulani), which is a potentiality "that can acquire knowledge to the active intellect (al-'aql al-fa'il), the state of the human intellect in conjunction with the perfect source of knowledge". So the immaterial "active intellect", separate from any individual person, is still essential for understanding to occur. In the 12th century CE, the Andalusian Muslim philosopher and novelist Abu Bakr Ibn Tufail (known as "Abubacer" or "Ebu Tophail" in the West) included the theory of tabula rasa as a thought experiment in his Arabic philosophical novel, Hayy ibn Yaqdhan in which he depicted the development of the mind of a feral child "from a tabula rasa to that of an adult, in complete isolation from society" on a desert island, through experience alone. The Latin translation of his philosophical novel, entitled Philosophus Autodidactus, published by Edward Pococke the Younger in 1671, had an influence on John Locke's formulation of tabula rasa in An Essay Concerning Human Understanding. A similar Islamic theological novel, Theologus Autodidactus, was written by the Arab theologian and physician Ibn al-Nafis in the 13th century. 
It also dealt with the theme of empiricism through the story of a feral child on a desert island, but departed from its predecessor by depicting the development of the protagonist's mind through contact with society rather than in isolation from society. During the 13th century Thomas Aquinas adopted into scholasticism the Aristotelian position that the senses are essential to the mind. Bonaventure (1221–1274), one of Aquinas' strongest intellectual opponents, offered some of the strongest arguments in favour of the Platonic idea of the mind. === Renaissance Italy === In the late Renaissance various writers began to question the medieval and classical understanding of knowledge acquisition in a more fundamental way. In political and historical writing Niccolò Machiavelli and his friend Francesco Guicciardini initiated a new realistic style of writing. Machiavelli in particular was scornful of writers on politics who judged everything in comparison to mental ideals and demanded that people should study the "effectual truth" instead. Their contemporary, Leonardo da Vinci (1452–1519) said, "If you find from your own experience that something is a fact and it contradicts what some authority has written down, then you must abandon the authority and base your reasoning on your own findings." Significantly, an empirical metaphysical system was developed by the Italian philosopher Bernardino Telesio which had an enormous impact on the development of later Italian thinkers, including Telesio's students Antonio Persio and Sertorio Quattromani, his contemporaries Tommaso Campanella and Giordano Bruno, and later British philosophers such as Francis Bacon, who regarded Telesio as "the first of the moderns". Telesio's influence can also be seen on the French philosophers René Descartes and Pierre Gassendi. The decidedly anti-Aristotelian and anti-clerical music theorist Vincenzo Galilei (c.
1520 – 1591), father of Galileo and the inventor of monody, made use of the method in successfully solving musical problems, firstly, of tuning such as the relationship of pitch to string tension and mass in stringed instruments, and to volume of air in wind instruments; and secondly to composition, by his various suggestions to composers in his Dialogo della musica antica e moderna (Florence, 1581). The Italian word he used for "experiment" was esperimento. It is known that he was the essential pedagogical influence upon the young Galileo, his eldest son (cf. Coelho, ed. Music and Science in the Age of Galileo Galilei), arguably one of the most influential empiricists in history. Vincenzo, through his tuning research, found the underlying truth at the heart of the misunderstood myth of 'Pythagoras' hammers' (the square of the numbers concerned yielded those musical intervals, not the actual numbers, as believed), and through this and other discoveries that demonstrated the fallibility of traditional authorities, a radically empirical attitude developed, passed on to Galileo, which regarded "experience and demonstration" as the sine qua non of valid rational enquiry. === British empiricism === British empiricism, a retrospective characterization, emerged during the 17th century as an approach to early modern philosophy and modern science. Although both integral to this overarching transition, Francis Bacon, in England, first advocated for empiricism in 1620, whereas René Descartes, in France, laid the main groundwork upholding rationalism around 1640. (Bacon's natural philosophy was influenced by Italian philosopher Bernardino Telesio and by Swiss physician Paracelsus.) Contributing later in the 17th century, Thomas Hobbes and Baruch Spinoza are retrospectively identified likewise as an empiricist and a rationalist, respectively. 
In the Enlightenment of the late 17th century, John Locke in England, and in the 18th century, both George Berkeley in Ireland and David Hume in Scotland, all became leading exponents of empiricism, hence the dominance of empiricism in British philosophy. The distinction between rationalism and empiricism was not formally made until Immanuel Kant, in Germany, around 1780, who sought to merge the two views. In response to the early-to-mid-17th-century "continental rationalism", John Locke (1632–1704) proposed in An Essay Concerning Human Understanding (1689) a very influential view wherein the only knowledge humans can have is a posteriori, i.e., based upon experience. Locke is famously attributed with holding the proposition that the human mind is a tabula rasa, a "blank tablet", in Locke's words "white paper", on which the experiences derived from sense impressions as a person's life proceeds are written. There are two sources of our ideas: sensation and reflection. In both cases, a distinction is made between simple and complex ideas. The former are unanalysable, and are broken down into primary and secondary qualities. Primary qualities are essential for the object in question to be what it is. Without specific primary qualities, an object would not be what it is. For example, an apple is an apple because of the arrangement of its atomic structure. If an apple were structured differently, it would cease to be an apple. Secondary qualities are the sensory information we can perceive from its primary qualities. For example, an apple can be perceived in various colours, sizes, and textures but it is still identified as an apple. Therefore, its primary qualities dictate what the object essentially is, while its secondary qualities define its attributes. Complex ideas combine simple ones, and divide into substances, modes, and relations. 
According to Locke, our knowledge of things is a perception of ideas that are in accordance or discordance with each other, which is very different from the quest for certainty of Descartes. A generation later, the Irish Anglican bishop George Berkeley (1685–1753) determined that Locke's view immediately opened a door that would lead to eventual atheism. In response to Locke, he put forth in his Treatise Concerning the Principles of Human Knowledge (1710) an important challenge to empiricism in which things only exist either as a result of their being perceived, or by virtue of the fact that they are an entity doing the perceiving. (For Berkeley, God fills in for humans by doing the perceiving whenever humans are not around to do it.) In his text Alciphron, Berkeley maintained that any order humans may see in nature is the language or handwriting of God. Berkeley's approach to empiricism would later come to be called subjective idealism. Scottish philosopher David Hume (1711–1776) responded to Berkeley's criticisms of Locke, as well as other differences between early modern philosophers, and moved empiricism to a new level of skepticism. Hume argued in keeping with the empiricist view that all knowledge derives from sense experience, but he accepted that this has implications not normally acceptable to philosophers. He wrote for example, "Locke divides all arguments into demonstrative and probable. On this view, we must say that it is only probable that all men must die or that the sun will rise to-morrow, because neither of these can be demonstrated. But to conform our language more to common use, we ought to divide arguments into demonstrations, proofs, and probabilities—by ‘proofs’ meaning arguments from experience that leave no room for doubt or opposition." And, I believe the most general and most popular explication of this matter, is to say [See Mr. 
Locke, chapter of power.], that finding from experience, that there are several new productions in matter, such as the motions and variations of body, and concluding that there must somewhere be a power capable of producing them, we arrive at last by this reasoning at the idea of power and efficacy. But to be convinced that this explication is more popular than philosophical, we need but reflect on two very obvious principles. First, That reason alone can never give rise to any original idea, and secondly, that reason, as distinguished from experience, can never make us conclude, that a cause or productive quality is absolutely requisite to every beginning of existence. Both these considerations have been sufficiently explained: and therefore shall not at present be any farther insisted on. Hume divided all of human knowledge into two categories: relations of ideas and matters of fact (see also Kant's analytic-synthetic distinction). Mathematical and logical propositions (e.g. "that the square of the hypotenuse is equal to the sum of the squares of the two sides") are examples of the first, while propositions involving some contingent observation of the world (e.g. "the sun rises in the East") are examples of the second. All of people's "ideas", in turn, are derived from their "impressions". For Hume, an "impression" corresponds roughly with what we call a sensation. To remember or to imagine such impressions is to have an "idea". Ideas are therefore the faint copies of sensations. Hume maintained that no knowledge, even the most basic beliefs about the natural world, can be conclusively established by reason. Rather, he maintained, our beliefs are more a result of accumulated habits, developed in response to accumulated sense experiences. Among his many arguments Hume also added another important slant to the debate about scientific method—that of the problem of induction. 
Hume argued that it requires inductive reasoning to arrive at the premises for the principle of inductive reasoning, and therefore the justification for inductive reasoning is a circular argument. Among Hume's conclusions regarding the problem of induction is that there is no certainty that the future will resemble the past. Thus, as a simple instance posed by Hume, we cannot know with certainty by inductive reasoning that the sun will continue to rise in the East, but instead come to expect it to do so because it has repeatedly done so in the past. Hume concluded that such things as belief in an external world and belief in the existence of the self were not rationally justifiable. According to Hume these beliefs were to be accepted nonetheless because of their profound basis in instinct and custom. Hume's lasting legacy, however, was the doubt that his skeptical arguments cast on the legitimacy of inductive reasoning, allowing many skeptics who followed to cast similar doubt. === Phenomenalism === Most of Hume's followers have disagreed with his conclusion that belief in an external world is rationally unjustifiable, contending that Hume's own principles implicitly contained the rational justification for such a belief, that is, beyond being content to let the issue rest on human instinct, custom and habit. According to an extreme empiricist theory known as phenomenalism, anticipated by the arguments of both Hume and George Berkeley, a physical object is a kind of construction out of our experiences. Phenomenalism is the view that physical objects, properties, events (whatever is physical) are reducible to mental objects, properties, events. Ultimately, only mental objects, properties, events, exist—hence the closely related term subjective idealism. By the phenomenalistic line of thinking, to have a visual experience of a real physical thing is to have an experience of a certain kind of group of experiences. 
This type of set of experiences possesses a constancy and coherence that is lacking in the set of experiences of which hallucinations, for example, are a part. As John Stuart Mill put it in the mid-19th century, matter is the "permanent possibility of sensation". Mill's empiricism went a significant step beyond Hume in still another respect: in maintaining that induction is necessary for all meaningful knowledge including mathematics. As summarized by D. W. Hamlyn: [Mill] claimed that mathematical truths were merely very highly confirmed generalizations from experience; mathematical inference, generally conceived as deductive [and a priori] in nature, Mill set down as founded on induction. Thus, in Mill's philosophy there was no real place for knowledge based on relations of ideas. In his view logical and mathematical necessity is psychological; we are merely unable to conceive any other possibilities than those that logical and mathematical propositions assert. This is perhaps the most extreme version of empiricism known, but it has not found many defenders. Mill's empiricism thus held that knowledge of any kind is not from direct experience but an inductive inference from direct experience. The problems other philosophers have had with Mill's position center around the following issues: Firstly, Mill's formulation encounters difficulty when it describes what direct experience is by differentiating only between actual and possible sensations. This misses some key discussion concerning conditions under which such "groups of permanent possibilities of sensation" might exist in the first place. Berkeley put God in that gap; the phenomenalists, including Mill, essentially left the question unanswered. In the end, lacking an acknowledgement of an aspect of "reality" that goes beyond mere "possibilities of sensation", such a position leads to a version of subjective idealism.
Questions of how floor beams continue to support a floor while unobserved, how trees continue to grow while unobserved and untouched by human hands, etc., remain unanswered, and perhaps unanswerable in these terms. Secondly, Mill's formulation leaves open the unsettling possibility that the "gap-filling entities are purely possibilities and not actualities at all". Thirdly, Mill's position, by calling mathematics merely another species of inductive inference, misapprehends mathematics. It fails to fully consider the structure and method of mathematical science, the products of which are arrived at through an internally consistent deductive set of procedures which do not, either today or at the time Mill wrote, fall under the agreed meaning of induction. The phenomenalist phase of post-Humean empiricism ended by the 1940s, for by that time it had become obvious that statements about physical things could not be translated into statements about actual and possible sense data. If a physical object statement is to be translatable into a sense-data statement, the former must be at least deducible from the latter. But it came to be realized that there is no finite set of statements about actual and possible sense-data from which we can deduce even a single physical-object statement. The translating or paraphrasing statement must be couched in terms of normal observers in normal conditions of observation. There is, however, no finite set of statements that are couched in purely sensory terms and can express the satisfaction of the condition of the presence of a normal observer. According to phenomenalism, to say that a normal observer is present is to make the hypothetical statement that were a doctor to inspect the observer, the observer would appear to the doctor to be normal. But, of course, the doctor himself must be a normal observer. 
If we are to specify this doctor's normality in sensory terms, we must make reference to a second doctor who, when inspecting the sense organs of the first doctor, would himself have to have the sense data a normal observer has when inspecting the sense organs of a subject who is a normal observer. And if we are to specify in sensory terms that the second doctor is a normal observer, we must refer to a third doctor, and so on (also see the third man). === Logical empiricism === Logical empiricism (also logical positivism or neopositivism) was an early 20th-century attempt to synthesize the essential ideas of British empiricism (e.g. a strong emphasis on sensory experience as the basis for knowledge) with certain insights from mathematical logic that had been developed by Gottlob Frege and Ludwig Wittgenstein. Some of the key figures in this movement were Otto Neurath, Moritz Schlick and the rest of the Vienna Circle, along with A. J. Ayer, Rudolf Carnap and Hans Reichenbach. The neopositivists subscribed to a notion of philosophy as the conceptual clarification of the methods, insights and discoveries of the sciences. They saw in the logical symbolism elaborated by Frege (1848–1925) and Bertrand Russell (1872–1970) a powerful instrument that could rationally reconstruct all scientific discourse into an ideal, logically perfect, language that would be free of the ambiguities and deformations of natural language, which, they thought, gave rise to metaphysical pseudoproblems and other conceptual confusions. By combining Frege's thesis that all mathematical truths are logical with the early Wittgenstein's idea that all logical truths are mere linguistic tautologies, they arrived at a twofold classification of all propositions: the "analytic" (a priori) and the "synthetic" (a posteriori). On this basis, they formulated a strong principle of demarcation between sentences that have sense and those that do not: the so-called "verification principle".
Any sentence that is not purely logical, or is unverifiable, is devoid of meaning. As a result, most metaphysical, ethical, aesthetic and other traditional philosophical problems came to be considered pseudoproblems. In the extreme empiricism of the neopositivists—at least before the 1930s—any genuinely synthetic assertion must be reducible to an ultimate assertion (or set of ultimate assertions) that expresses direct observations or perceptions. In later years, Carnap and Neurath abandoned this sort of phenomenalism in favor of a rational reconstruction of knowledge into the language of an objective spatio-temporal physics. That is, instead of translating sentences about physical objects into sense-data, such sentences were to be translated into so-called protocol sentences, for example, "X at location Y and at time T observes such and such". The central theses of logical positivism (verificationism, the analytic–synthetic distinction, reductionism, etc.) came under sharp attack after World War II by thinkers such as Nelson Goodman, W. V. Quine, Hilary Putnam, Karl Popper, and Richard Rorty. By the late 1960s, it had become evident to most philosophers that the movement had pretty much run its course, though its influence is still significant among contemporary analytic philosophers such as Michael Dummett and other anti-realists. === Pragmatism === In the late 19th and early 20th century, several forms of pragmatic philosophy arose. The ideas of pragmatism, in its various forms, developed mainly from discussions between Charles Sanders Peirce and William James when both men were at Harvard in the 1870s. James popularized the term "pragmatism", giving Peirce full credit for its patrimony, but Peirce later demurred from the tangents that the movement was taking, and redubbed what he regarded as the original idea with the name of "pragmaticism". 
Along with its pragmatic theory of truth, this perspective integrates the basic insights of empirical (experience-based) and rational (concept-based) thinking. Charles Peirce (1839–1914) was highly influential in laying the groundwork for today's empirical scientific method. Although Peirce severely criticized many elements of Descartes' peculiar brand of rationalism, he did not reject rationalism outright. Indeed, he concurred with the main ideas of rationalism, most importantly the idea that rational concepts can be meaningful and the idea that rational concepts necessarily go beyond the data given by empirical observation. In later years he even emphasized the concept-driven side of the then ongoing debate between strict empiricism and strict rationalism, in part to counterbalance the excesses to which some of his cohorts had taken pragmatism under the "data-driven" strict-empiricist view. Among Peirce's major contributions was to place inductive reasoning and deductive reasoning in a complementary rather than competitive mode, the latter of which had been the primary trend among the educated since David Hume wrote a century before. To this, Peirce added the concept of abductive reasoning. The combined three forms of reasoning serve as a primary conceptual foundation for the empirically based scientific method today. Peirce's approach "presupposes that (1) the objects of knowledge are real things, (2) the characters (properties) of real things do not depend on our perceptions of them, and (3) everyone who has sufficient experience of real things will agree on the truth about them. According to Peirce's doctrine of fallibilism, the conclusions of science are always tentative. The rationality of the scientific method does not depend on the certainty of its conclusions, but on its self-corrective character: by continued application of the method science can detect and correct its own mistakes, and thus eventually lead to the discovery of truth". 
In his Harvard "Lectures on Pragmatism" (1903), Peirce enumerated what he called the "three cotary propositions of pragmatism" (L: cos, cotis whetstone), saying that they "put the edge on the maxim of pragmatism". First among these, he listed the peripatetic-thomist observation mentioned above, but he further observed that this link between sensory perception and intellectual conception is a two-way street. That is, it can be taken to say that whatever we find in the intellect is also incipiently in the senses. Hence, if theories are theory-laden then so are the senses, and perception itself can be seen as a species of abductive inference, its difference being that it is beyond control and hence beyond critique—in a word, incorrigible. This in no way conflicts with the fallibility and revisability of scientific concepts, since it is only the immediate percept in its unique individuality or "thisness"—what the Scholastics called its haecceity—that stands beyond control and correction. Scientific concepts, on the other hand, are general in nature, and transient sensations do in another sense find correction within them. This notion of perception as abduction has received periodic revivals in artificial intelligence and cognitive science research, most recently for instance with the work of Irvin Rock on indirect perception. Around the beginning of the 20th century, William James (1842–1910) coined the term "radical empiricism" to describe an offshoot of his form of pragmatism, which he argued could be dealt with separately from his pragmatism—though in fact the two concepts are intertwined in James's published lectures. James maintained that the empirically observed "directly apprehended universe needs ... no extraneous trans-empirical connective support", by which he meant to rule out the perception that there can be any value added by seeking supernatural explanations for natural phenomena. 
James' "radical empiricism" is thus not radical in the context of the term "empiricism", but is instead fairly consistent with the modern use of the term "empirical". His method of argument in arriving at this view, however, still readily encounters debate within philosophy even today. John Dewey (1859–1952) modified James' pragmatism to form a theory known as instrumentalism. The role of sense experience in Dewey's theory is crucial, in that he saw experience as a unified totality of things through which everything else is interrelated. Dewey's basic thought, in accordance with empiricism, was that reality is determined by past experience. Therefore, humans adapt their past experiences of things to perform experiments upon and test the pragmatic values of such experience. The value of such experience is measured experientially and scientifically, and the results of such tests generate ideas that serve as instruments for future experimentation, in physical sciences as in ethics. Thus, ideas in Dewey's system retain their empiricist flavour in that they are only known a posteriori.
Wikipedia/Empirical_science
In mathematics, the exponential function can be characterized in many ways. This article presents some common characterizations, discusses why each makes sense, and proves that they are all equivalent. The exponential function occurs naturally in many branches of mathematics. Walter Rudin called it "the most important function in mathematics". It is therefore useful to have multiple ways to define (or characterize) it. Each of the characterizations below may be more or less useful depending on context. The "product limit" characterization of the exponential function was discovered by Leonhard Euler. == Characterizations == The six most common definitions of the exponential function exp ⁡ ( x ) = e x {\displaystyle \exp(x)=e^{x}} for real values x ∈ R {\displaystyle x\in \mathbb {R} } are as follows. Product limit. Define e x {\displaystyle e^{x}} by the limit: e x = lim n → ∞ ( 1 + x n ) n . {\displaystyle e^{x}=\lim _{n\to \infty }\left(1+{\frac {x}{n}}\right)^{n}.} Power series. Define ex as the value of the infinite series e x = ∑ n = 0 ∞ x n n ! = 1 + x + x 2 2 ! + x 3 3 ! + x 4 4 ! + ⋯ {\displaystyle e^{x}=\sum _{n=0}^{\infty }{x^{n} \over n!}=1+x+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+{\frac {x^{4}}{4!}}+\cdots } (Here n! denotes the factorial of n. One proof that e is irrational uses a special case of this formula.) Inverse of logarithm integral. Define e x {\displaystyle e^{x}} to be the unique number y > 0 such that ∫ 1 y d t t = x . {\displaystyle \int _{1}^{y}{\frac {dt}{t}}=x.} That is, e x {\displaystyle e^{x}} is the inverse of the natural logarithm function x = ln ⁡ ( y ) {\displaystyle x=\ln(y)} , which is defined by this integral. Differential equation. Define y ( x ) = e x {\displaystyle y(x)=e^{x}} to be the unique solution to the differential equation with initial value: y ′ = y , y ( 0 ) = 1 , {\displaystyle y'=y,\quad y(0)=1,} where y ′ = d y d x {\displaystyle y'={\tfrac {dy}{dx}}} denotes the derivative of y. Functional equation. 
The exponential function e x {\displaystyle e^{x}} is the unique function f with the multiplicative property f ( x + y ) = f ( x ) f ( y ) {\displaystyle f(x+y)=f(x)f(y)} for all x , y {\displaystyle x,y} and f ′ ( 0 ) = 1 {\displaystyle f'(0)=1} . The condition f ′ ( 0 ) = 1 {\displaystyle f'(0)=1} can be replaced with f ( 1 ) = e {\displaystyle f(1)=e} together with any of the following regularity conditions: f {\displaystyle f} is Lebesgue-measurable; f {\displaystyle f} is continuous at a single point; or f {\displaystyle f} is monotonic on an interval. For the uniqueness, one must impose some regularity condition, since other functions satisfying f ( x + y ) = f ( x ) f ( y ) {\displaystyle f(x+y)=f(x)f(y)} can be constructed using a basis for the real numbers over the rationals, as described by Hewitt and Stromberg. Elementary definition by powers. Define the exponential function with base a > 0 {\displaystyle a>0} to be the continuous function a x {\displaystyle a^{x}} whose value on integers x = n {\displaystyle x=n} is given by repeated multiplication or division of a {\displaystyle a} , and whose value on rational numbers x = n / m {\displaystyle x=n/m} is given by a n / m = a n m {\displaystyle a^{n/m}={\sqrt[{m}]{a^{n}}}} . Then define e x {\displaystyle e^{x}} to be the exponential function whose base a = e {\displaystyle a=e} is the unique positive real number satisfying: lim h → 0 e h − 1 h = 1. {\displaystyle \lim _{h\to 0}{\frac {e^{h}-1}{h}}=1.} == Larger domains == One way of defining the exponential function over the complex numbers is to first define it for the domain of real numbers using one of the above characterizations, and then extend it as an analytic function, which is determined by its values on any set having a limit point, such as the real line. Also, characterizations (1), (2), and (4) for e x {\displaystyle e^{x}} apply directly for x {\displaystyle x} a complex number. Definition (3) presents a problem because there are non-equivalent paths along which one could integrate; but the equation of (3) should hold for any such path modulo 2 π i {\displaystyle 2\pi i} .
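The claim that characterizations (1) and (2) apply directly to complex arguments can be checked numerically. Below is a minimal sketch (the function names are illustrative, not part of the article) evaluating both at x = iπ, where e^{iπ} = −1:

```python
# Sketch: characterizations (1) and (2) evaluated at a complex argument.
# Python's complex type lets both formulas be applied verbatim.
import cmath

def exp_product_limit(x, n=10**6):
    """Characterization 1: (1 + x/n)^n at a large but finite n."""
    return (1 + x / n) ** n

def exp_power_series(x, terms=40):
    """Characterization 2: the series sum of x^k / k!, truncated."""
    total, term = 0 + 0j, 1 + 0j
    for k in range(terms):
        total += term
        term *= x / (k + 1)  # next term: x^(k+1)/(k+1)! from x^k/k!
    return total

z = cmath.pi * 1j
print(exp_product_limit(z))  # close to -1; the error shrinks like |x|^2 / 2n
print(exp_power_series(z))   # very close to -1; factorial denominators converge fast
print(cmath.exp(z))          # -1 up to floating-point rounding
```

The product limit converges slowly (error on the order of |x|²/2n, matching the error expansion given for characterization 1 below), while the power series converges rapidly.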
As for definition (5), the additive property together with the complex derivative f ′ ( 0 ) = 1 {\displaystyle f'(0)=1} are sufficient to guarantee f ( x ) = e x {\displaystyle f(x)=e^{x}} . However, the initial value condition f ( 1 ) = e {\displaystyle f(1)=e} together with the other regularity conditions are not sufficient. For example, for real x and y, the function f ( x + i y ) = e x ( cos ⁡ ( 2 y ) + i sin ⁡ ( 2 y ) ) = e x + 2 i y {\displaystyle f(x+iy)=e^{x}(\cos(2y)+i\sin(2y))=e^{x+2iy}} satisfies the three listed regularity conditions in (5) but is not equal to exp ⁡ ( x + i y ) {\displaystyle \exp(x+iy)} . A sufficient condition is that f ( 1 ) = e {\displaystyle f(1)=e} and that f {\displaystyle f} is a conformal map at some point; or else the two initial values f ( 1 ) = e {\displaystyle f(1)=e} and f ( i ) = cos ⁡ ( 1 ) + i sin ⁡ ( 1 ) {\textstyle f(i)=\cos(1)+i\sin(1)} together with the other regularity conditions. One may also define the exponential on other domains, such as matrices and other algebras. Definitions (1), (2), and (4) all make sense for arbitrary Banach algebras. == Proof that each characterization makes sense == Some of these definitions require justification to demonstrate that they are well-defined. For example, when the value of the function is defined as the result of a limiting process (i.e. an infinite sequence or series), it must be demonstrated that such a limit always exists. === Characterization 1 === The error of the product limit expression is described by: ( 1 + x n ) n = e x ( 1 − x 2 2 n + x 3 ( 8 + 3 x ) 24 n 2 + ⋯ ) , {\displaystyle \left(1+{\frac {x}{n}}\right)^{n}=e^{x}\left(1-{\frac {x^{2}}{2n}}+{\frac {x^{3}(8+3x)}{24n^{2}}}+\cdots \right),} where the polynomial's degree (in x) in the term with denominator nk is 2k. === Characterization 2 === Since lim n → ∞ | x n + 1 / ( n + 1 ) ! x n / n ! | = lim n → ∞ | x n + 1 | = 0 < 1. 
{\displaystyle \lim _{n\to \infty }\left|{\frac {x^{n+1}/(n+1)!}{x^{n}/n!}}\right|=\lim _{n\to \infty }\left|{\frac {x}{n+1}}\right|=0<1.} it follows from the ratio test that ∑ n = 0 ∞ x n n ! {\textstyle \sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}} converges for all x. === Characterization 3 === Since the integrand is an integrable function of t, the integral expression is well-defined. It must be shown that the function from R + {\displaystyle \mathbb {R} ^{+}} to R {\displaystyle \mathbb {R} } defined by x ↦ ∫ 1 x d t t {\displaystyle x\mapsto \int _{1}^{x}{\frac {dt}{t}}} is a bijection. Since 1/t is positive for positive t, this function is strictly increasing, hence injective. If the two integrals ∫ 1 ∞ d t t = ∞ ∫ 1 0 d t t = − ∞ {\displaystyle {\begin{aligned}\int _{1}^{\infty }{\frac {dt}{t}}&=\infty \\[8pt]\int _{1}^{0}{\frac {dt}{t}}&=-\infty \end{aligned}}} hold, then it is surjective as well. Indeed, these integrals do hold; they follow from the integral test and the divergence of the harmonic series. === Characterization 6 === The definition depends on the unique positive real number a = e {\displaystyle a=e} satisfying: lim h → 0 a h − 1 h = 1. {\displaystyle \lim _{h\to 0}{\frac {a^{h}-1}{h}}=1.} This limit can be shown to exist for any a {\displaystyle a} , and it defines a continuous increasing function f ( a ) = ln ⁡ ( a ) {\displaystyle f(a)=\ln(a)} with f ( 1 ) = 0 {\displaystyle f(1)=0} and lim a → ∞ f ( a ) = ∞ {\displaystyle \lim _{a\to \infty }f(a)=\infty } , so the Intermediate value theorem guarantees the existence of such a value a = e {\displaystyle a=e} . == Equivalence of the characterizations == The following arguments demonstrate the equivalence of the above characterizations for the exponential function. === Characterization 1 ⇔ characterization 2 === The following argument is adapted from Rudin, theorem 3.31, p. 63–65. Let x ≥ 0 {\displaystyle x\geq 0} be a fixed non-negative real number. 
Define t n = ( 1 + x n ) n , s n = ∑ k = 0 n x k k ! , e x = lim n → ∞ s n . {\displaystyle t_{n}=\left(1+{\frac {x}{n}}\right)^{n},\qquad s_{n}=\sum _{k=0}^{n}{\frac {x^{k}}{k!}},\qquad e^{x}=\lim _{n\to \infty }s_{n}.} By the binomial theorem, t n = ∑ k = 0 n ( n k ) x k n k = 1 + x + ∑ k = 2 n n ( n − 1 ) ( n − 2 ) ⋯ ( n − ( k − 1 ) ) x k k ! n k = 1 + x + x 2 2 ! ( 1 − 1 n ) + x 3 3 ! ( 1 − 1 n ) ( 1 − 2 n ) + ⋯ ⋯ + x n n ! ( 1 − 1 n ) ⋯ ( 1 − n − 1 n ) ≤ s n {\displaystyle {\begin{aligned}t_{n}&=\sum _{k=0}^{n}{n \choose k}{\frac {x^{k}}{n^{k}}}=1+x+\sum _{k=2}^{n}{\frac {n(n-1)(n-2)\cdots (n-(k-1))x^{k}}{k!\,n^{k}}}\\[8pt]&=1+x+{\frac {x^{2}}{2!}}\left(1-{\frac {1}{n}}\right)+{\frac {x^{3}}{3!}}\left(1-{\frac {1}{n}}\right)\left(1-{\frac {2}{n}}\right)+\cdots \\[8pt]&{}\qquad \cdots +{\frac {x^{n}}{n!}}\left(1-{\frac {1}{n}}\right)\cdots \left(1-{\frac {n-1}{n}}\right)\leq s_{n}\end{aligned}}} (using x ≥ 0 to obtain the final inequality) so that: lim sup n → ∞ t n ≤ lim sup n → ∞ s n = e x {\displaystyle \limsup _{n\to \infty }t_{n}\leq \limsup _{n\to \infty }s_{n}=e^{x}} One must use lim sup because it is not known if tn converges. For the other inequality, by the above expression for tn, if 2 ≤ m ≤ n, we have: 1 + x + x 2 2 ! ( 1 − 1 n ) + ⋯ + x m m ! ( 1 − 1 n ) ( 1 − 2 n ) ⋯ ( 1 − m − 1 n ) ≤ t n . {\displaystyle 1+x+{\frac {x^{2}}{2!}}\left(1-{\frac {1}{n}}\right)+\cdots +{\frac {x^{m}}{m!}}\left(1-{\frac {1}{n}}\right)\left(1-{\frac {2}{n}}\right)\cdots \left(1-{\frac {m-1}{n}}\right)\leq t_{n}.} Fix m, and let n approach infinity. Then s m = 1 + x + x 2 2 ! + ⋯ + x m m ! ≤ lim inf n → ∞ t n {\displaystyle s_{m}=1+x+{\frac {x^{2}}{2!}}+\cdots +{\frac {x^{m}}{m!}}\leq \liminf _{n\to \infty }\ t_{n}} (again, one must use lim inf because it is not known if tn converges). 
Now, take the above inequality, let m approach infinity, and put it together with the other inequality to obtain: lim sup n → ∞ t n ≤ e x ≤ lim inf n → ∞ t n {\displaystyle \limsup _{n\to \infty }t_{n}\leq e^{x}\leq \liminf _{n\to \infty }t_{n}} so that lim n → ∞ t n = e x . {\displaystyle \lim _{n\to \infty }t_{n}=e^{x}.} This equivalence can be extended to the negative real numbers by noting ( 1 − r n ) n ( 1 + r n ) n = ( 1 − r 2 n 2 ) n {\textstyle \left(1-{\frac {r}{n}}\right)^{n}\left(1+{\frac {r}{n}}\right)^{n}=\left(1-{\frac {r^{2}}{n^{2}}}\right)^{n}} and taking the limit as n goes to infinity. === Characterization 1 ⇔ characterization 3 === Here, the natural logarithm function is defined in terms of a definite integral as above. By the first part of the fundamental theorem of calculus, d d x ln ⁡ x = d d x ∫ 1 x 1 t d t = 1 x . {\displaystyle {\frac {d}{dx}}\ln x={\frac {d}{dx}}\int _{1}^{x}{\frac {1}{t}}\,dt={\frac {1}{x}}.} Besides, ln ⁡ 1 = ∫ 1 1 d t t = 0 {\textstyle \ln 1=\int _{1}^{1}{\frac {dt}{t}}=0} . Now, let x be any fixed real number, and let y = lim n → ∞ ( 1 + x n ) n . {\displaystyle y=\lim _{n\to \infty }\left(1+{\frac {x}{n}}\right)^{n}.} We will show that ln ⁡ y = x {\displaystyle \ln y=x} , which implies that y = e x {\displaystyle y=e^{x}} , where e x {\displaystyle e^{x}} is in the sense of definition 3. We have ln ⁡ y = ln ⁡ lim n → ∞ ( 1 + x n ) n = lim n → ∞ ln ⁡ ( 1 + x n ) n . {\displaystyle \ln y=\ln \lim _{n\to \infty }\left(1+{\frac {x}{n}}\right)^{n}=\lim _{n\to \infty }\ln \left(1+{\frac {x}{n}}\right)^{n}.} Here, the continuity of ln(y) is used, which follows from the continuity of 1/t: ln ⁡ y = lim n → ∞ n ln ⁡ ( 1 + x n ) = lim n → ∞ x ln ⁡ ( 1 + ( x / n ) ) ( x / n ) . {\displaystyle \ln y=\lim _{n\to \infty }n\ln \left(1+{\frac {x}{n}}\right)=\lim _{n\to \infty }{\frac {x\ln \left(1+(x/n)\right)}{(x/n)}}.} Here, the result ln ⁡ a n = n ln ⁡ a {\displaystyle \ln a^{n}=n\ln a} has been used. This result can be established for n a natural number by induction, or using integration by substitution.
(The extension to real powers must wait until ln and exp have been established as inverses of each other, so that a b {\displaystyle a^{b}} can be defined for real b as e b ln ⁡ a {\displaystyle e^{b\ln a}} .) = x ⋅ lim h → 0 ln ⁡ ( 1 + h ) h where h = x n {\displaystyle =x\cdot \lim _{h\to 0}{\frac {\ln \left(1+h\right)}{h}}\quad {\text{ where }}h={\frac {x}{n}}} = x ⋅ lim h → 0 ln ⁡ ( 1 + h ) − ln ⁡ 1 h {\displaystyle =x\cdot \lim _{h\to 0}{\frac {\ln \left(1+h\right)-\ln 1}{h}}} = x ⋅ d d t ln ⁡ t | t = 1 {\displaystyle =x\cdot {\frac {d}{dt}}\ln t{\Bigg |}_{t=1}} = x . {\displaystyle \!\,=x.} === Characterization 1 ⇔ characterization 4 === Let y ( t ) {\displaystyle y(t)} denote the solution to the initial value problem y ′ = y , y ( 0 ) = 1 {\displaystyle y'=y,\ y(0)=1} . Applying the simplest form of Euler's method with increment Δ t = x n {\displaystyle \Delta t={\frac {x}{n}}} and sample points t = 0 , Δ t , 2 Δ t , … , n Δ t {\displaystyle t\ =\ 0,\ \Delta t,\ 2\Delta t,\ldots ,\ n\Delta t} gives the recursive formula: y ( t + Δ t ) ≈ y ( t ) + y ′ ( t ) Δ t = y ( t ) + y ( t ) Δ t = y ( t ) ( 1 + Δ t ) . {\displaystyle y(t+\Delta t)\ \approx \ y(t)+y'(t)\Delta t\ =\ y(t)+y(t)\Delta t\ =\ y(t)\,(1+\Delta t).} This recursion is immediately solved to give the approximate value y ( x ) = y ( n Δ t ) ≈ ( 1 + Δ t ) n {\displaystyle y(x)=y(n\Delta t)\approx (1+\Delta t)^{n}} , and since Euler's method is known to converge to the exact solution, we have: y ( x ) = lim n → ∞ ( 1 + x n ) n . {\displaystyle y(x)=\lim _{n\to \infty }\left(1+{\frac {x}{n}}\right)^{n}.} === Characterization 2 ⇔ characterization 4 === Let n be a non-negative integer. In the sense of definition 4 and by induction, d n y d x n = y {\displaystyle {\frac {d^{n}y}{dx^{n}}}=y} . Therefore d n y d x n | x = 0 = y ( 0 ) = 1. {\displaystyle {\frac {d^{n}y}{dx^{n}}}{\Bigg |}_{x=0}=y(0)=1.} Using Taylor series, y = ∑ n = 0 ∞ f ( n ) ( 0 ) n ! x n = ∑ n = 0 ∞ 1 n ! x n = ∑ n = 0 ∞ x n n ! .
{\displaystyle y=\sum _{n=0}^{\infty }{\frac {f^{(n)}(0)}{n!}}\,x^{n}=\sum _{n=0}^{\infty }{\frac {1}{n!}}\,x^{n}=\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}.} This shows that definition 4 implies definition 2. In the sense of definition 2, d d x e x = d d x ( 1 + ∑ n = 1 ∞ x n n ! ) = ∑ n = 1 ∞ n x n − 1 n ! = ∑ n = 1 ∞ x n − 1 ( n − 1 ) ! = ∑ k = 0 ∞ x k k ! , where k = n − 1 = e x {\displaystyle {\begin{aligned}{\frac {d}{dx}}e^{x}&={\frac {d}{dx}}\left(1+\sum _{n=1}^{\infty }{\frac {x^{n}}{n!}}\right)=\sum _{n=1}^{\infty }{\frac {nx^{n-1}}{n!}}=\sum _{n=1}^{\infty }{\frac {x^{n-1}}{(n-1)!}}\\[6pt]&=\sum _{k=0}^{\infty }{\frac {x^{k}}{k!}},{\text{ where }}k=n-1\\[6pt]&=e^{x}\end{aligned}}} Besides, e 0 = 1 + 0 + 0 2 2 ! + 0 3 3 ! + ⋯ = 1. {\textstyle e^{0}=1+0+{\frac {0^{2}}{2!}}+{\frac {0^{3}}{3!}}+\cdots =1.} This shows that definition 2 implies definition 4. === Characterization 2 ⇒ characterization 5 === In the sense of definition 2, the equation exp ⁡ ( x + y ) = exp ⁡ ( x ) exp ⁡ ( y ) {\displaystyle \exp(x+y)=\exp(x)\exp(y)} follows from the term-by-term manipulation of power series justified by uniform convergence, and the resulting equality of coefficients is just the Binomial theorem. Furthermore: exp ′ ⁡ ( 0 ) = lim h → 0 e h − 1 h = lim h → 0 1 h ( ( 1 + h + h 2 2 ! + h 3 3 ! + h 4 4 ! + ⋯ ) − 1 ) = lim h → 0 ( 1 + h 2 ! + h 2 3 ! + h 3 4 ! + ⋯ ) = 1. 
{\displaystyle {\begin{aligned}\exp '(0)&=\lim _{h\to 0}{\frac {e^{h}-1}{h}}\\&=\lim _{h\to 0}{\frac {1}{h}}\left(\left(1+h+{\frac {h^{2}}{2!}}+{\frac {h^{3}}{3!}}+{\frac {h^{4}}{4!}}+\cdots \right)-1\right)\\&=\lim _{h\to 0}\left(1+{\frac {h}{2!}}+{\frac {h^{2}}{3!}}+{\frac {h^{3}}{4!}}+\cdots \right)\ =\ 1.\\\end{aligned}}} === Characterization 3 ⇔ characterization 4 === Characterisation 3 first defines the natural logarithm: log ⁡ x = def ∫ 1 x d t t , {\displaystyle \log x\ \ {\stackrel {\text{def}}{=}}\ \int _{1}^{x}\!{\frac {dt}{t}},} then exp {\displaystyle \exp } as the inverse function with x = log ⁡ ( exp ⁡ x ) {\textstyle x=\log(\exp x)} . Then by the Chain rule: 1 = d d x [ log ⁡ ( exp ⁡ ( x ) ) ] = log ′ ⁡ ( exp ⁡ ( x ) ) ⋅ exp ′ ⁡ ( x ) = exp ′ ⁡ ( x ) exp ⁡ ( x ) , {\displaystyle 1={\frac {d}{dx}}[\log(\exp(x))]=\log '(\exp(x))\cdot \exp '(x)={\frac {\exp '(x)}{\exp(x)}},} i.e. exp ′ ⁡ ( x ) = exp ⁡ ( x ) {\displaystyle \exp '(x)=\exp(x)} . Finally, log ⁡ ( 1 ) = 0 {\displaystyle \log(1)=0} , so exp ′ ⁡ ( 0 ) = exp ⁡ ( 0 ) = 1 {\displaystyle \exp '(0)=\exp(0)=1} . That is, y = exp ⁡ ( x ) {\displaystyle y=\exp(x)} is the unique solution of the initial value problem d y d x = y {\displaystyle {\frac {dy}{dx}}=y} , y ( 0 ) = 1 {\displaystyle y(0)=1} of characterization 4. Conversely, assume y = exp ⁡ ( x ) {\displaystyle y=\exp(x)} has exp ′ ⁡ ( x ) = exp ⁡ ( x ) {\displaystyle \exp '(x)=\exp(x)} and exp ⁡ ( 0 ) = 1 {\displaystyle \exp(0)=1} , and define log ⁡ ( x ) {\displaystyle \log(x)} as its inverse function with x = exp ⁡ ( log ⁡ x ) {\displaystyle x=\exp(\log x)} and log ⁡ ( 1 ) = 0 {\displaystyle \log(1)=0} . Then: 1 = d d x [ exp ⁡ ( log ⁡ ( x ) ) ] = exp ′ ⁡ ( log ⁡ ( x ) ) ⋅ log ′ ⁡ ( x ) = exp ⁡ ( log ⁡ ( x ) ) ⋅ log ′ ⁡ ( x ) = x ⋅ log ′ ⁡ ( x ) , {\displaystyle 1={\frac {d}{dx}}[\exp(\log(x))]=\exp '(\log(x))\cdot \log '(x)=\exp(\log(x))\cdot \log '(x)=x\cdot \log '(x),} i.e. log ′ ⁡ ( x ) = 1 x {\displaystyle \log '(x)={\frac {1}{x}}} . 
By the Fundamental theorem of calculus, ∫ 1 x 1 t d t = log ⁡ ( x ) − log ⁡ ( 1 ) = log ⁡ ( x ) . {\displaystyle \int _{1}^{x}{\frac {1}{t}}\,dt=\log(x)-\log(1)=\log(x).} === Characterization 5 ⇒ characterization 4 === The conditions f'(0) = 1 and f(x + y) = f(x) f(y) imply both conditions in characterization 4. Indeed, one gets the initial condition f(0) = 1 by dividing both sides of the equation f ( 0 ) = f ( 0 + 0 ) = f ( 0 ) f ( 0 ) {\displaystyle f(0)=f(0+0)=f(0)f(0)} by f(0), and the condition that f′(x) = f(x) follows from the condition that f′(0) = 1 and the definition of the derivative as follows: f ′ ( x ) = lim h → 0 f ( x + h ) − f ( x ) h = lim h → 0 f ( x ) f ( h ) − f ( x ) h = lim h → 0 f ( x ) f ( h ) − 1 h = f ( x ) lim h → 0 f ( h ) − 1 h = f ( x ) lim h → 0 f ( 0 + h ) − f ( 0 ) h = f ( x ) f ′ ( 0 ) = f ( x ) . {\displaystyle {\begin{array}{rcccccc}f'(x)&=&\lim \limits _{h\to 0}{\frac {f(x+h)-f(x)}{h}}&=&\lim \limits _{h\to 0}{\frac {f(x)f(h)-f(x)}{h}}&=&\lim \limits _{h\to 0}f(x){\frac {f(h)-1}{h}}\\[1em]&=&f(x)\lim \limits _{h\to 0}{\frac {f(h)-1}{h}}&=&f(x)\lim \limits _{h\to 0}{\frac {f(0+h)-f(0)}{h}}&=&f(x)f'(0)=f(x).\end{array}}} The same computation can be written in exp ⁡ {\displaystyle \exp } notation: assuming characterization 5, the multiplicative property together with the initial condition exp ′ ⁡ ( 0 ) = 1 {\displaystyle \exp '(0)=1} imply that: d d x exp ⁡ ( x ) = lim h → 0 exp ⁡ ( x + h ) − exp ⁡ ( x ) h = exp ⁡ ( x ) ⋅ lim h → 0 exp ⁡ ( h ) − 1 h = exp ⁡ ( x ) exp ′ ⁡ ( 0 ) = exp ⁡ ( x ) .
{\displaystyle {\begin{array}{rcl}{\frac {d}{dx}}\exp(x)&=&\lim _{h\to 0}{\frac {\exp(x{+}h)-\exp(x)}{h}}\\&=&\exp(x)\cdot \lim _{h\to 0}{\frac {\exp(h)-1}{h}}\\&=&\exp(x)\exp '(0)=\exp(x).\end{array}}} === Characterization 5 ⇔ characterization 6 === By inductively applying the multiplication rule, we get: f ( n m ) m = f ( n m + ⋯ + n m ) = f ( n ) = f ( 1 ) n , {\displaystyle f\left({\frac {n}{m}}\right)^{m}=f\left({\frac {n}{m}}+\cdots +{\frac {n}{m}}\right)=f(n)=f(1)^{n},} and thus f ( n m ) = f ( 1 ) n m = def a n / m {\displaystyle f\left({\frac {n}{m}}\right)={\sqrt[{m}]{f(1)^{n}}}\ {\stackrel {\text{def}}{=}}\ a^{n/m}} for a = f ( 1 ) {\displaystyle a=f(1)} . Then the condition f ′ ( 0 ) = 1 {\displaystyle f'(0)=1} means that lim h → 0 a h − 1 h = 1 {\displaystyle \lim _{h\to 0}{\tfrac {a^{h}-1}{h}}=1} , so a = e {\displaystyle a=e} by definition. Also, any of the regularity conditions of definition 5 imply that f ( x ) {\displaystyle f(x)} is continuous at all real x {\displaystyle x} (see below). The converse is similar. === Characterization 5 ⇒ characterization 6 === Let f ( x ) {\displaystyle f(x)} be a Lebesgue-integrable non-zero function satisfying the multiplicative property f ( x + y ) = f ( x ) f ( y ) {\displaystyle f(x+y)=f(x)f(y)} with f ( 1 ) = e {\displaystyle f(1)=e} . Following Hewitt and Stromberg, exercise 18.46, we will prove that Lebesgue-integrability implies continuity. This is sufficient to imply f ( x ) = e x {\displaystyle f(x)=e^{x}} according to characterization 6, arguing as above. First, a few elementary properties: If f ( x ) {\displaystyle f(x)} is nonzero anywhere (say at x = y {\displaystyle x=y} ), then it is non-zero everywhere. Proof: f ( y ) = f ( x ) f ( y − x ) ≠ 0 {\displaystyle f(y)=f(x)f(y-x)\neq 0} implies f ( x ) ≠ 0 {\displaystyle f(x)\neq 0} . f ( 0 ) = 1 {\displaystyle f(0)=1} . Proof: f ( x ) = f ( x + 0 ) = f ( x ) f ( 0 ) {\displaystyle f(x)=f(x+0)=f(x)f(0)} and f ( x ) {\displaystyle f(x)} is non-zero.
f ( − x ) = 1 / f ( x ) {\displaystyle f(-x)=1/f(x)} . Proof: 1 = f ( 0 ) = f ( x − x ) = f ( x ) f ( − x ) {\displaystyle 1=f(0)=f(x-x)=f(x)f(-x)} . If f ( x ) {\displaystyle f(x)} is continuous anywhere (say at x = y {\displaystyle x=y} ), then it is continuous everywhere. Proof: f ( x + δ ) − f ( x ) = f ( x − y ) [ f ( y + δ ) − f ( y ) ] → 0 {\displaystyle f(x+\delta )-f(x)=f(x-y)[f(y+\delta )-f(y)]\to 0} as δ → 0 {\displaystyle \delta \to 0} by continuity at y {\displaystyle y} . The second and third properties mean that it is sufficient to prove f ( x ) = e x {\displaystyle f(x)=e^{x}} for positive x. Since f ( x ) {\displaystyle f(x)} is a Lebesgue-integrable function, then we may define g ( x ) = ∫ 0 x f ( t ) d t {\textstyle g(x)=\int _{0}^{x}f(t)\,dt} . It then follows that g ( x + y ) − g ( x ) = ∫ x x + y f ( t ) d t = ∫ 0 y f ( x + t ) d t = f ( x ) g ( y ) . {\displaystyle g(x+y)-g(x)=\int _{x}^{x+y}f(t)\,dt=\int _{0}^{y}f(x+t)\,dt=f(x)g(y).} Since f ( x ) {\displaystyle f(x)} is nonzero, some y can be chosen such that g ( y ) ≠ 0 {\displaystyle g(y)\neq 0} and solve for f ( x ) {\displaystyle f(x)} in the above expression. Therefore: f ( x + δ ) − f ( x ) = [ g ( x + δ + y ) − g ( x + δ ) ] − [ g ( x + y ) − g ( x ) ] g ( y ) = [ g ( x + y + δ ) − g ( x + y ) ] − [ g ( x + δ ) − g ( x ) ] g ( y ) = f ( x + y ) g ( δ ) − f ( x ) g ( δ ) g ( y ) = g ( δ ) f ( x + y ) − f ( x ) g ( y ) . {\displaystyle {\begin{aligned}f(x+\delta )-f(x)&={\frac {[g(x+\delta +y)-g(x+\delta )]-[g(x+y)-g(x)]}{g(y)}}\\&={\frac {[g(x+y+\delta )-g(x+y)]-[g(x+\delta )-g(x)]}{g(y)}}\\&={\frac {f(x+y)g(\delta )-f(x)g(\delta )}{g(y)}}=g(\delta ){\frac {f(x+y)-f(x)}{g(y)}}.\end{aligned}}} The final expression must go to zero as δ → 0 {\displaystyle \delta \to 0} since g ( 0 ) = 0 {\displaystyle g(0)=0} and g ( x ) {\displaystyle g(x)} is continuous. It follows that f ( x ) {\displaystyle f(x)} is continuous. 
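As noted in the section on larger domains, the power-series characterization (2) makes sense in any Banach algebra, for example the algebra of 2×2 real matrices. The following pure-Python sketch (all helper names are illustrative, not part of the article) truncates exp(A) = Σ Aⁿ/n! and checks it against the standard identity that exp(tJ), for the rotation generator J = [[0, −1], [1, 0]], is the rotation matrix [[cos t, −sin t], [sin t, cos t]]:

```python
# Sketch: the power series for exp applied to 2x2 matrices (a Banach algebra).
import math

def mat_mul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(A, terms=25):
    """exp(A) = I + A + A^2/2! + ..., truncated after `terms` terms."""
    result = [[1.0, 0.0], [0.0, 1.0]]  # running sum, starts at I = A^0/0!
    term = [[1.0, 0.0], [0.0, 1.0]]    # current term A^n/n!
    for n in range(1, terms):
        # A^n/n! is obtained from A^(n-1)/(n-1)! by one product and one division
        term = [[entry / n for entry in row] for row in mat_mul(term, A)]
        result = [[result[i][j] + term[i][j] for j in range(2)]
                  for i in range(2)]
    return result

t = 0.7
E = mat_exp([[0.0, -t], [t, 0.0]])  # exp(t J) for the rotation generator J
print(E[0][0], math.cos(t))         # the two values agree closely
print(E[1][0], math.sin(t))
```

Because the factorials dominate, a few dozen terms suffice here; production code would use scaling-and-squaring instead of a raw series.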
== References == Walter Rudin, Principles of Mathematical Analysis, 3rd edition (McGraw–Hill, 1976), chapter 8. Edwin Hewitt and Karl Stromberg, Real and Abstract Analysis (Springer, 1965).
Wikipedia/Characterizations_of_the_exponential_function
In mathematics, a Riemann sum is a certain kind of approximation of an integral by a finite sum. It is named after the nineteenth-century German mathematician Bernhard Riemann. One very common application is in numerical integration, i.e., approximating the area of functions or lines on a graph, where it is also known as the rectangle rule. It can also be applied to approximate the length of curves and for other approximations. The sum is calculated by partitioning the region into shapes (rectangles, trapezoids, parabolas, or cubics, sometimes infinitesimally small) that together form a region that is similar to the region being measured, then calculating the area for each of these shapes, and finally adding all of these small areas together. This approach can be used to find a numerical approximation for a definite integral even if the fundamental theorem of calculus does not make it easy to find a closed-form solution. Because the region filled by the small shapes is usually not exactly the same shape as the region being measured, the Riemann sum will differ from the area being measured. This error can be reduced by dividing up the region more finely, using smaller and smaller shapes. As the shapes get smaller and smaller, the sum approaches the Riemann integral. == Definition == Let f : [ a , b ] → R {\displaystyle f:[a,b]\to \mathbb {R} } be a function defined on a closed interval [ a , b ] {\displaystyle [a,b]} of the real numbers, R {\displaystyle \mathbb {R} } , and let P = ( x 0 , x 1 , … , x n ) {\displaystyle P=(x_{0},x_{1},\ldots ,x_{n})} be a partition of [ a , b ] {\displaystyle [a,b]} , that is,
a = x 0 < x 1 < x 2 < ⋯ < x n = b .
{\displaystyle a=x_{0}<x_{1}<x_{2}<\dots <x_{n}=b.} A Riemann sum S {\displaystyle S} of f {\displaystyle f} over [ a , b ] {\displaystyle [a,b]} with partition P {\displaystyle P} is defined as S = ∑ i = 1 n f ( x i ∗ ) Δ x i , {\displaystyle S=\sum _{i=1}^{n}f(x_{i}^{*})\,\Delta x_{i},} where Δ x i = x i − x i − 1 {\displaystyle \Delta x_{i}=x_{i}-x_{i-1}} and x i ∗ ∈ [ x i − 1 , x i ] {\displaystyle x_{i}^{*}\in [x_{i-1},x_{i}]} . Different choices of x i ∗ {\displaystyle x_{i}^{*}} produce different Riemann sums; if the function is Riemann integrable, however, all such sums converge to the same value as the widths Δ x i {\displaystyle \Delta x_{i}} of the subintervals approach zero. == Types of Riemann sums == Specific choices of x i ∗ {\displaystyle x_{i}^{*}} give different types of Riemann sums: If x i ∗ = x i − 1 {\displaystyle x_{i}^{*}=x_{i-1}} for all i, the method is the left rule and gives a left Riemann sum. If x i ∗ = x i {\displaystyle x_{i}^{*}=x_{i}} for all i, the method is the right rule and gives a right Riemann sum. If x i ∗ = ( x i + x i − 1 ) / 2 {\displaystyle x_{i}^{*}=(x_{i}+x_{i-1})/2} for all i, the method is the midpoint rule and gives a middle Riemann sum. If f ( x i ∗ ) = sup f ( [ x i − 1 , x i ] ) {\displaystyle f(x_{i}^{*})=\sup f([x_{i-1},x_{i}])} (that is, the supremum of f {\textstyle f} over [ x i − 1 , x i ] {\displaystyle [x_{i-1},x_{i}]} ), the method is the upper rule and gives an upper Riemann sum or upper Darboux sum. If f ( x i ∗ ) = inf f ( [ x i − 1 , x i ] ) {\displaystyle f(x_{i}^{*})=\inf f([x_{i-1},x_{i}])} (that is, the infimum of f over [ x i − 1 , x i ] {\displaystyle [x_{i-1},x_{i}]} ), the method is the lower rule and gives a lower Riemann sum or lower Darboux sum. All these Riemann summation methods are among the most basic ways to accomplish numerical integration. Loosely speaking, a function is Riemann integrable if all Riemann sums converge as the partition "gets finer and finer".
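The left, right, and midpoint rules above translate directly into code. A minimal sketch (the function name and rule tags are illustrative), applied to f(x) = x² on [0, 1], whose exact integral is 1/3:

```python
# Sketch: Riemann sums of f on [a, b] with n equal subintervals.
def riemann_sum(f, a, b, n, rule="left"):
    dx = (b - a) / n
    if rule == "left":
        xs = [a + i * dx for i in range(n)]          # x_i* = x_{i-1}
    elif rule == "right":
        xs = [a + (i + 1) * dx for i in range(n)]    # x_i* = x_i
    elif rule == "midpoint":
        xs = [a + (i + 0.5) * dx for i in range(n)]  # x_i* = (x_{i-1} + x_i)/2
    else:
        raise ValueError(rule)
    return dx * sum(f(x) for x in xs)

f = lambda x: x * x
print(riemann_sum(f, 0.0, 1.0, 1000, "left"))      # slightly below 1/3 (f increasing)
print(riemann_sum(f, 0.0, 1.0, 1000, "right"))     # slightly above 1/3
print(riemann_sum(f, 0.0, 1.0, 1000, "midpoint"))  # much closer to 1/3
```

Since x² is monotonically increasing on [0, 1], the left sum underestimates and the right sum overestimates, exactly as the rules below predict.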
While not derived as a Riemann sum, taking the average of the left and right Riemann sums is the trapezoidal rule and gives a trapezoidal sum. It is one of the simplest of a very general class of methods for approximating integrals using weighted averages. This is followed in complexity by Simpson's rule and the Newton–Cotes formulas. Any Riemann sum on a given partition (that is, for any choice of x i ∗ {\displaystyle x_{i}^{*}} between x i − 1 {\displaystyle x_{i-1}} and x i {\displaystyle x_{i}} ) is contained between the lower and upper Darboux sums. This forms the basis of the Darboux integral, which is ultimately equivalent to the Riemann integral. == Riemann summation methods == The four Riemann summation methods are usually best approached with subintervals of equal size. The interval [a, b] is therefore divided into n {\displaystyle n} subintervals, each of length Δ x = b − a n . {\displaystyle \Delta x={\frac {b-a}{n}}.} The points in the partition will then be a , a + Δ x , a + 2 Δ x , … , a + ( n − 2 ) Δ x , a + ( n − 1 ) Δ x , b . {\displaystyle a,\;a+\Delta x,\;a+2\Delta x,\;\ldots ,\;a+(n-2)\Delta x,\;a+(n-1)\Delta x,\;b.} === Left rule === For the left rule, the function is approximated by its values at the left endpoints of the subintervals. This gives multiple rectangles with base Δx and height f(a + iΔx). Doing this for i = 0, 1, ..., n − 1, and summing the resulting areas gives S l e f t = Δ x [ f ( a ) + f ( a + Δ x ) + f ( a + 2 Δ x ) + ⋯ + f ( b − Δ x ) ] . {\displaystyle S_{\mathrm {left} }=\Delta x\left[f(a)+f(a+\Delta x)+f(a+2\Delta x)+\dots +f(b-\Delta x)\right].} The left Riemann sum amounts to an overestimation if f is monotonically decreasing on this interval, and an underestimation if it is monotonically increasing.
The error of this formula will be | ∫ a b f ( x ) d x − S l e f t | ≤ M 1 ( b − a ) 2 2 n , {\displaystyle \left\vert \int _{a}^{b}f(x)\,dx-S_{\mathrm {left} }\right\vert \leq {\frac {M_{1}(b-a)^{2}}{2n}},} where M 1 {\displaystyle M_{1}} is the maximum value of the absolute value of f ′ ( x ) {\displaystyle f^{\prime }(x)} over the interval. === Right rule === For the right rule, the function is approximated by its values at the right endpoints of the subintervals. This gives multiple rectangles with base Δx and height f(a + iΔx). Doing this for i = 1, ..., n, and summing the resulting areas gives S r i g h t = Δ x [ f ( a + Δ x ) + f ( a + 2 Δ x ) + ⋯ + f ( b ) ] . {\displaystyle S_{\mathrm {right} }=\Delta x\left[f(a+\Delta x)+f(a+2\Delta x)+\dots +f(b)\right].} The right Riemann sum amounts to an underestimation if f is monotonically decreasing, and an overestimation if it is monotonically increasing. The error of this formula will be | ∫ a b f ( x ) d x − S r i g h t | ≤ M 1 ( b − a ) 2 2 n , {\displaystyle \left\vert \int _{a}^{b}f(x)\,dx-S_{\mathrm {right} }\right\vert \leq {\frac {M_{1}(b-a)^{2}}{2n}},} where M 1 {\displaystyle M_{1}} is the maximum value of the absolute value of f ′ ( x ) {\displaystyle f^{\prime }(x)} over the interval. === Midpoint rule === For the midpoint rule, the function is approximated by its values at the midpoints of the subintervals. This gives f(a + Δx/2) for the first subinterval, f(a + 3Δx/2) for the next one, and so on until f(b − Δx/2). Summing the resulting areas gives S m i d = Δ x [ f ( a + Δ x 2 ) + f ( a + 3 Δ x 2 ) + ⋯ + f ( b − Δ x 2 ) ] . 
{\displaystyle S_{\mathrm {mid} }=\Delta x\left[f\left(a+{\tfrac {\Delta x}{2}}\right)+f\left(a+{\tfrac {3\Delta x}{2}}\right)+\dots +f\left(b-{\tfrac {\Delta x}{2}}\right)\right].} The error of this formula will be | ∫ a b f ( x ) d x − S m i d | ≤ M 2 ( b − a ) 3 24 n 2 , {\displaystyle \left\vert \int _{a}^{b}f(x)\,dx-S_{\mathrm {mid} }\right\vert \leq {\frac {M_{2}(b-a)^{3}}{24n^{2}}},} where M 2 {\displaystyle M_{2}} is the maximum value of the absolute value of f ′ ′ ( x ) {\displaystyle f^{\prime \prime }(x)} over the interval. This error bound is half that of the trapezoidal sum; as such, the midpoint Riemann sum is generally the most accurate of these basic Riemann sum approaches. ==== Generalized midpoint rule ==== A generalized midpoint rule formula, also known as the enhanced midpoint integration, is given by ∫ 0 1 f ( x ) d x = 2 ∑ m = 1 M ∑ n = 0 ∞ 1 ( 2 M ) 2 n + 1 ( 2 n + 1 ) ! f ( 2 n ) ( x ) | x = m − 1 / 2 M , {\displaystyle \int _{0}^{1}f(x)\,dx=2\sum _{m=1}^{M}{\sum _{n=0}^{\infty }{{\frac {1}{{\left(2M\right)^{2n+1}}\left({2n+1}\right)!}}{{\left.f^{(2n)}(x)\right|}_{x={\frac {m-1/2}{M}}}}}}\,\,,} where f ( 2 n ) {\displaystyle f^{(2n)}} denotes the derivative of f of even order 2n. For a function g ( t ) {\displaystyle g(t)} defined over interval ( a , b ) {\displaystyle (a,b)} , its integral is ∫ a b g ( t ) d t = ∫ 0 b − a g ( τ + a ) d τ = ( b − a ) ∫ 0 1 g ( ( b − a ) x + a ) d x . {\displaystyle \int _{a}^{b}g(t)\,dt=\int _{0}^{b-a}g(\tau +a)\,d\tau =(b-a)\int _{0}^{1}g((b-a)x+a)\,dx.} Therefore, we can apply this generalized midpoint integration formula by assuming that f ( x ) = ( b − a ) g ( ( b − a ) x + a ) {\displaystyle f(x)=(b-a)\,g((b-a)x+a)} . This formula is particularly efficient for numerical integration when the integrand f ( x ) {\displaystyle f(x)} is highly oscillatory. === Trapezoidal rule === For the trapezoidal rule, the function is approximated by the average of its values at the left and right endpoints of the subintervals.
Using the area formula 1 2 h ( b 1 + b 2 ) {\displaystyle {\tfrac {1}{2}}h(b_{1}+b_{2})} for a trapezium with parallel sides b1 and b2, and height h, and summing the resulting areas gives S t r a p = 1 2 Δ x [ f ( a ) + 2 f ( a + Δ x ) + 2 f ( a + 2 Δ x ) + ⋯ + f ( b ) ] . {\displaystyle S_{\mathrm {trap} }={\tfrac {1}{2}}\Delta x\left[f(a)+2f(a+\Delta x)+2f(a+2\Delta x)+\dots +f(b)\right].} The error of this formula will be | ∫ a b f ( x ) d x − S t r a p | ≤ M 2 ( b − a ) 3 12 n 2 , {\displaystyle \left\vert \int _{a}^{b}f(x)\,dx-S_{\mathrm {trap} }\right\vert \leq {\frac {M_{2}(b-a)^{3}}{12n^{2}}},} where M 2 {\displaystyle M_{2}} is the maximum value of the absolute value of f ″ ( x ) {\displaystyle f''(x)} . The approximation obtained with the trapezoidal sum for a function is the same as the average of the left hand and right hand sums of that function. == Connection with integration == For a one-dimensional Riemann sum over domain [ a , b ] {\displaystyle [a,b]} , as the maximum size of a subinterval shrinks to zero (that is, as the norm of the partition goes to zero), some functions will have all Riemann sums converge to the same value. This limiting value, if it exists, is defined as the definite Riemann integral of the function over the domain, ∫ a b f ( x ) d x = lim ‖ Δ x ‖ → 0 ∑ i = 1 n f ( x i ∗ ) Δ x i . {\displaystyle \int _{a}^{b}f(x)\,dx=\lim _{\|\Delta x\|\rightarrow 0}\sum _{i=1}^{n}f(x_{i}^{*})\,\Delta x_{i}.} For a finite-sized domain, if the maximum size of a subinterval shrinks to zero, this implies that the number of subintervals goes to infinity. For finite partitions, Riemann sums are always approximations to the limiting value and this approximation gets better as the partition gets finer.
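The four summation rules described above, and the convergence just discussed, can be illustrated with a short sketch. This is a minimal illustration, not taken from the text; the integrand exp on [0, 1] (exact integral e − 1) is an arbitrary test choice:

```python
import math

def riemann_sums(f, a, b, n):
    """Left, right, midpoint and trapezoidal sums of f over [a, b] with n equal subintervals."""
    dx = (b - a) / n
    lefts = [a + i * dx for i in range(n)]   # left endpoint of each subinterval
    left = dx * sum(f(x) for x in lefts)
    right = dx * sum(f(x + dx) for x in lefts)
    mid = dx * sum(f(x + dx / 2) for x in lefts)
    trap = (left + right) / 2                # trapezoidal sum = average of left and right
    return left, right, mid, trap

# f(x) = e^x on [0, 1]; the exact integral is e - 1
exact = math.e - 1
for n in (10, 100, 1000):
    left, right, mid, trap = riemann_sums(math.exp, 0.0, 1.0, n)
    # all four errors shrink as the partition is refined
    print(n, left - exact, right - exact, mid - exact, trap - exact)
```

For this increasing convex integrand the left sum underestimates, the right sum overestimates, and the midpoint error is roughly half the trapezoidal error, in line with the error bounds quoted above.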
The following animations help demonstrate how increasing the number of subintervals (while lowering the maximum subinterval size) better approximates the "area" under the curve: Since the red function here is assumed to be a smooth function, all three Riemann sums will converge to the same value as the number of subintervals goes to infinity. == Example == As an example, the area under the curve y = x2 over [0, 2] can be computed procedurally using Riemann's method. The interval [0, 2] is first divided into n subintervals, each of which is given a width of 2 n {\displaystyle {\tfrac {2}{n}}} ; these are the widths of the Riemann rectangles (hereafter "boxes"). Because the right Riemann sum is to be used, the sequence of x coordinates for the boxes will be x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\ldots ,x_{n}} . Therefore, the sequence of the heights of the boxes will be x 1 2 , x 2 2 , … , x n 2 {\displaystyle x_{1}^{2},x_{2}^{2},\ldots ,x_{n}^{2}} . It is an important fact that x i = 2 i n {\displaystyle x_{i}={\tfrac {2i}{n}}} , and x n = 2 {\displaystyle x_{n}=2} . The area of each box will be 2 n × x i 2 {\displaystyle {\tfrac {2}{n}}\times x_{i}^{2}} and therefore the nth right Riemann sum will be: S = 2 n ( 2 n ) 2 + ⋯ + 2 n ( 2 i n ) 2 + ⋯ + 2 n ( 2 n n ) 2 = 8 n 3 ( 1 + ⋯ + i 2 + ⋯ + n 2 ) = 8 n 3 ( n ( n + 1 ) ( 2 n + 1 ) 6 ) = 8 n 3 ( 2 n 3 + 3 n 2 + n 6 ) = 8 3 + 4 n + 4 3 n 2 .
{\displaystyle {\begin{aligned}S&={\frac {2}{n}}\left({\frac {2}{n}}\right)^{2}+\dots +{\frac {2}{n}}\left({\frac {2i}{n}}\right)^{2}+\dots +{\frac {2}{n}}\left({\frac {2n}{n}}\right)^{2}\\[1ex]&={\frac {8}{n^{3}}}\left(1+\dots +i^{2}+\dots +n^{2}\right)\\[1ex]&={\frac {8}{n^{3}}}\left({\frac {n(n+1)(2n+1)}{6}}\right)\\[1ex]&={\frac {8}{n^{3}}}\left({\frac {2n^{3}+3n^{2}+n}{6}}\right)\\[1ex]&={\frac {8}{3}}+{\frac {4}{n}}+{\frac {4}{3n^{2}}}.\end{aligned}}} Taking the limit as n → ∞ shows that the approximation approaches the actual value of the area under the curve as the number of boxes increases. Hence: lim n → ∞ S = lim n → ∞ ( 8 3 + 4 n + 4 3 n 2 ) = 8 3 . {\displaystyle \lim _{n\to \infty }S=\lim _{n\to \infty }\left({\frac {8}{3}}+{\frac {4}{n}}+{\frac {4}{3n^{2}}}\right)={\frac {8}{3}}.} This method agrees with the definite integral as calculated in more mechanical ways: ∫ 0 2 x 2 d x = 8 3 . {\displaystyle \int _{0}^{2}x^{2}\,dx={\frac {8}{3}}.} Because the function is continuous and monotonically increasing over the interval, a right Riemann sum overestimates the integral by the largest amount (while a left Riemann sum would underestimate the integral by the largest amount). This fact, which is intuitively clear from the diagrams, shows how the nature of the function determines how accurately the integral is estimated. While simple, right and left Riemann sums are often less accurate than more advanced techniques of estimating an integral such as the Trapezoidal rule or Simpson's rule. The example function has an easy-to-find antiderivative, so estimating the integral by Riemann sums is mostly an academic exercise; however, not all functions have closed-form antiderivatives, so estimating their integrals by summation is practically important.
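The closed form 8/3 + 4/n + 4/(3n²) derived above can be checked against a direct term-by-term summation; a small sketch (the function name is hypothetical):

```python
def right_riemann_x_squared(n):
    """n-th right Riemann sum for f(x) = x**2 on [0, 2], summed term by term."""
    dx = 2 / n
    return sum(dx * (i * dx) ** 2 for i in range(1, n + 1))

for n in (4, 10, 1000):
    direct = right_riemann_x_squared(n)
    closed_form = 8 / 3 + 4 / n + 4 / (3 * n ** 2)   # the expression derived above
    assert abs(direct - closed_form) < 1e-9
print(right_riemann_x_squared(1000))   # approaches the exact value 8/3 as n grows
```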
== Higher dimensions == The basic idea behind a Riemann sum is to "break up" the domain via a partition into pieces, multiply the "size" of each piece by some value the function takes on that piece, and sum all these products. This can be generalized to allow Riemann sums for functions over domains of more than one dimension. While the process of partitioning the domain is intuitively easy to grasp, the technical details of how the domain may be partitioned get much more complicated than in the one-dimensional case and involve aspects of the geometrical shape of the domain. === Two dimensions === In two dimensions, the domain A {\displaystyle A} may be divided into a number of two-dimensional cells A i {\displaystyle A_{i}} such that A = ⋃ i A i {\textstyle A=\bigcup _{i}A_{i}} . Each cell then can be interpreted as having an "area" denoted by Δ A i {\displaystyle \Delta A_{i}} . The two-dimensional Riemann sum is S = ∑ i = 1 n f ( x i ∗ , y i ∗ ) Δ A i , {\displaystyle S=\sum _{i=1}^{n}f(x_{i}^{*},y_{i}^{*})\,\Delta A_{i},} where ( x i ∗ , y i ∗ ) ∈ A i {\displaystyle (x_{i}^{*},y_{i}^{*})\in A_{i}} . === Three dimensions === In three dimensions, the domain V {\displaystyle V} is partitioned into a number of three-dimensional cells V i {\displaystyle V_{i}} such that V = ⋃ i V i {\textstyle V=\bigcup _{i}V_{i}} . Each cell then can be interpreted as having a "volume" denoted by Δ V i {\displaystyle \Delta V_{i}} . The three-dimensional Riemann sum is S = ∑ i = 1 n f ( x i ∗ , y i ∗ , z i ∗ ) Δ V i , {\displaystyle S=\sum _{i=1}^{n}f(x_{i}^{*},y_{i}^{*},z_{i}^{*})\,\Delta V_{i},} where ( x i ∗ , y i ∗ , z i ∗ ) ∈ V i {\displaystyle (x_{i}^{*},y_{i}^{*},z_{i}^{*})\in V_{i}} . === Arbitrary number of dimensions === Higher dimensional Riemann sums follow a similar pattern.
An n-dimensional Riemann sum is S = ∑ i f ( P i ∗ ) Δ V i , {\displaystyle S=\sum _{i}f(P_{i}^{*})\,\Delta V_{i},} where P i ∗ ∈ V i {\displaystyle P_{i}^{*}\in V_{i}} , that is, it is a point in the n-dimensional cell V i {\displaystyle V_{i}} with n-dimensional volume Δ V i {\displaystyle \Delta V_{i}} . === Generalization === In full generality, Riemann sums can be written S = ∑ i f ( P i ∗ ) μ ( V i ) , {\displaystyle S=\sum _{i}f(P_{i}^{*})\mu (V_{i}),} where P i ∗ {\displaystyle P_{i}^{*}} stands for any arbitrary point contained in the set V i {\displaystyle V_{i}} and μ {\displaystyle \mu } is a measure on the underlying set. Roughly speaking, a measure is a function that gives a "size" of a set, in this case the size of the set V i {\displaystyle V_{i}} ; in one dimension this can often be interpreted as a length, in two dimensions as an area, in three dimensions as a volume, and so on. == See also == Antiderivative Euler method and midpoint method, related methods for solving differential equations Lebesgue integration Riemann integral, limit of Riemann sums as the partition becomes infinitely fine Simpson's rule, a numerical method more powerful than basic Riemann sums or even the Trapezoidal rule Trapezoidal rule, numerical method based on the average of the left and right Riemann sum == References == == External links == Weisstein, Eric W. "Riemann Sum". MathWorld. A simulation showing the convergence of Riemann sums
Numerical methods for ordinary differential equations are methods used to find numerical approximations to the solutions of ordinary differential equations (ODEs). Their use is also known as "numerical integration", although this term can also refer to the computation of integrals. Many differential equations cannot be solved exactly. For practical purposes, however – such as in engineering – a numeric approximation to the solution is often sufficient. The algorithms studied here can be used to compute such an approximation. An alternative method is to use techniques from calculus to obtain a series expansion of the solution. Ordinary differential equations occur in many scientific disciplines, including physics, chemistry, biology, and economics. In addition, some methods in numerical partial differential equations convert the partial differential equation into an ordinary differential equation, which must then be solved. == The problem == A first-order differential equation is an initial value problem (IVP) of the form y ′ ( t ) = f ( t , y ( t ) ) , y ( t 0 ) = y 0 , ( 1 ) {\displaystyle y'(t)=f(t,y(t)),\quad y(t_{0})=y_{0},\qquad (1)} where f {\displaystyle f} is a function f : [ t 0 , ∞ ) × R d → R d {\displaystyle f:[t_{0},\infty )\times \mathbb {R} ^{d}\to \mathbb {R} ^{d}} , and the initial condition y 0 ∈ R d {\displaystyle y_{0}\in \mathbb {R} ^{d}} is a given vector. First-order means that only the first derivative of y appears in the equation, and higher derivatives are absent. Without loss of generality to higher-order systems, we restrict ourselves to first-order differential equations, because a higher-order ODE can be converted into a larger system of first-order equations by introducing extra variables. For example, the second-order equation y′′ = −y can be rewritten as two first-order equations: y′ = z and z′ = −y. In this section, we describe numerical methods for IVPs, and remark that boundary value problems (BVPs) require a different set of tools. In a BVP, one defines values, or components of the solution y at more than one point.
Because of this, different methods need to be used to solve BVPs. For example, the shooting method (and its variants) or global methods like finite differences, Galerkin methods, or collocation methods are appropriate for that class of problems. The Picard–Lindelöf theorem states that there is a unique solution, provided f is Lipschitz-continuous. == Methods == Numerical methods for solving first-order IVPs often fall into one of two large categories: linear multistep methods, or Runge–Kutta methods. A further division can be realized by dividing methods into those that are explicit and those that are implicit. For example, implicit linear multistep methods include Adams–Moulton methods and backward differentiation formulas (BDF), whereas implicit Runge–Kutta methods include diagonally implicit Runge–Kutta (DIRK), singly diagonally implicit Runge–Kutta (SDIRK), and Gauss–Radau (based on Gaussian quadrature) numerical methods. Explicit examples from the linear multistep family include the Adams–Bashforth methods, and any Runge–Kutta method with a strictly lower-triangular Butcher tableau is explicit. A loose rule of thumb dictates that stiff differential equations require the use of implicit schemes, whereas non-stiff problems can be solved more efficiently with explicit schemes. The so-called general linear methods (GLMs) are a generalization of the above two large classes of methods. === Euler method === From any point on a curve, you can find an approximation of a nearby point on the curve by moving a short distance along a line tangent to the curve. Starting with the differential equation (1), we replace the derivative y′ by the finite difference approximation y ′ ( t ) ≈ y ( t + h ) − y ( t ) h , ( 2 ) {\displaystyle y'(t)\approx {\frac {y(t+h)-y(t)}{h}},\qquad (2)} which when re-arranged yields the following formula y ( t + h ) ≈ y ( t ) + h y ′ ( t ) {\displaystyle y(t+h)\approx y(t)+hy'(t)} and using (1) gives: y ( t + h ) ≈ y ( t ) + h f ( t , y ( t ) ) . ( 3 ) {\displaystyle y(t+h)\approx y(t)+hf(t,y(t)).\qquad (3)} This formula is usually applied in the following way. We choose a step size h, and we construct the sequence t 0 , t 1 = t 0 + h , t 2 = t 0 + 2 h , . . .
{\displaystyle t_{0},t_{1}=t_{0}+h,t_{2}=t_{0}+2h,...} We denote by y n {\displaystyle y_{n}} a numerical estimate of the exact solution y ( t n ) {\displaystyle y(t_{n})} . Motivated by (3), we compute these estimates by the following recursive scheme y n + 1 = y n + h f ( t n , y n ) . ( 4 ) {\displaystyle y_{n+1}=y_{n}+hf(t_{n},y_{n}).\qquad (4)} This is the Euler method (or forward Euler method, in contrast with the backward Euler method, to be described below). The method is named after Leonhard Euler who described it in 1768. The Euler method is an example of an explicit method. This means that the new value yn+1 is defined in terms of things that are already known, like yn. === Backward Euler method === If, instead of (2), we use the approximation y ′ ( t + h ) ≈ y ( t + h ) − y ( t ) h , ( 5 ) {\displaystyle y'(t+h)\approx {\frac {y(t+h)-y(t)}{h}},\qquad (5)} we get the backward Euler method: y n + 1 = y n + h f ( t n + 1 , y n + 1 ) . ( 6 ) {\displaystyle y_{n+1}=y_{n}+hf(t_{n+1},y_{n+1}).\qquad (6)} The backward Euler method is an implicit method, meaning that we have to solve an equation to find yn+1. One often uses fixed-point iteration or (some modification of) the Newton–Raphson method to achieve this. Solving this equation costs more time than taking an explicit step; this cost must be taken into consideration when one selects the method to use. The advantage of implicit methods such as (6) is that they are usually more stable for solving a stiff equation, meaning that a larger step size h can be used. === First-order exponential integrator method === Exponential integrators describe a large class of integrators that have recently seen a lot of development. They date back to at least the 1960s. In place of (1), we assume the differential equation is either of the form d y d t = − A y + N ( y ) , ( 7 ) {\displaystyle {\frac {\mathrm {d} y}{\mathrm {d} t}}=-Ay+{\mathcal {N}}(y),\qquad (7)} or it has been locally linearized about a background state to produce a linear term − A y {\displaystyle -Ay} and a nonlinear term N ( y ) {\displaystyle {\mathcal {N}}(y)} . Exponential integrators are constructed by multiplying (7) by e A t {\textstyle e^{At}} , and exactly integrating the result over a time interval [ t n , t n + 1 = t n + h ] {\displaystyle [t_{n},t_{n+1}=t_{n}+h]} : y n + 1 = e − A h y n + ∫ 0 h e − ( h − τ ) A N ( y ( t n + τ ) ) d τ .
{\displaystyle y_{n+1}=e^{-Ah}y_{n}+\int _{0}^{h}e^{-(h-\tau )A}{\mathcal {N}}\left(y\left(t_{n}+\tau \right)\right)\,d\tau .} This integral equation is exact, but it doesn't define the integral. The first-order exponential integrator can be realized by holding N ( y ( t n + τ ) ) {\displaystyle {\mathcal {N}}(y(t_{n}+\tau ))} constant over the full interval: y n + 1 = e − A h y n + A − 1 ( 1 − e − A h ) N ( y ( t n ) ) . {\displaystyle y_{n+1}=e^{-Ah}y_{n}+A^{-1}\left(1-e^{-Ah}\right){\mathcal {N}}(y(t_{n})).} === Generalizations === The Euler method is often not accurate enough. In more precise terms, it only has order one (the concept of order is explained below). This caused mathematicians to look for higher-order methods. One possibility is to use not only the previously computed value yn to determine yn+1, but to make the solution depend on more past values. This yields a so-called multistep method. Perhaps the simplest is the leapfrog method which is second order and (roughly speaking) relies on two time values. Almost all practical multistep methods fall within the family of linear multistep methods, which have the form α k y n + k + α k − 1 y n + k − 1 + ⋯ + α 0 y n = h [ β k f ( t n + k , y n + k ) + β k − 1 f ( t n + k − 1 , y n + k − 1 ) + ⋯ + β 0 f ( t n , y n ) ] . {\displaystyle {\begin{aligned}&{}\alpha _{k}y_{n+k}+\alpha _{k-1}y_{n+k-1}+\cdots +\alpha _{0}y_{n}\\&{}\quad =h\left[\beta _{k}f(t_{n+k},y_{n+k})+\beta _{k-1}f(t_{n+k-1},y_{n+k-1})+\cdots +\beta _{0}f(t_{n},y_{n})\right].\end{aligned}}} Another possibility is to use more points in the interval [ t n , t n + 1 ] {\displaystyle [t_{n},t_{n+1}]} . This leads to the family of Runge–Kutta methods, named after Carl Runge and Martin Kutta. One of their fourth-order methods is especially popular. === Advanced features === A good implementation of one of these methods for solving an ODE entails more than the time-stepping formula. It is often inefficient to use the same step size all the time, so variable step-size methods have been developed. Usually, the step size is chosen such that the (local) error per step is below some tolerance level.
This means that the methods must also compute an error indicator, an estimate of the local error. An extension of this idea is to choose dynamically between different methods of different orders (this is called a variable order method). Methods based on Richardson extrapolation, such as the Bulirsch–Stoer algorithm, are often used to construct various methods of different orders. Other desirable features include: dense output: cheap numerical approximations for the whole integration interval, and not only at the points t0, t1, t2, ... event location: finding the times where, say, a particular function vanishes. This typically requires the use of a root-finding algorithm. support for parallel computing. when used for integrating with respect to time, time reversibility === Alternative methods === Many methods do not fall within the framework discussed here. Some classes of alternative methods are: multiderivative methods, which use not only the function f but also its derivatives. This class includes Hermite–Obreschkoff methods and Fehlberg methods, as well as methods like the Parker–Sochacki method or Bychkov–Scherbakov method, which compute the coefficients of the Taylor series of the solution y recursively. methods for second order ODEs. We said that all higher-order ODEs can be transformed to first-order ODEs of the form (1). While this is certainly true, it may not be the best way to proceed. In particular, Nyström methods work directly with second-order equations. geometric integration methods are especially designed for special classes of ODEs (for example, symplectic integrators for the solution of Hamiltonian equations). They take care that the numerical solution respects the underlying structure or geometry of these classes. Quantized state systems methods are a family of ODE integration methods based on the idea of state quantization. They are efficient when simulating sparse systems with frequent discontinuities. 
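Returning to the multistep family described under Generalizations: the second-order leapfrog scheme mentioned there uses the update y_{n+1} = y_{n−1} + 2h f(t_n, y_n). A minimal sketch follows; bootstrapping the second value with a single Euler step and the test problem y′ = −y, y(0) = 1 are illustrative assumptions, not taken from the text:

```python
import math

def leapfrog(f, t0, y0, h, steps):
    """Leapfrog multistep method: y_{n+1} = y_{n-1} + 2*h*f(t_n, y_n)."""
    ys = [y0, y0 + h * f(t0, y0)]      # bootstrap the second value with one Euler step
    for n in range(1, steps):
        t_n = t0 + n * h
        ys.append(ys[n - 1] + 2 * h * f(t_n, ys[n]))
    return ys

# Test problem y' = -y, y(0) = 1, whose exact solution is e^{-t}
ys = leapfrog(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
print(ys[-1], math.exp(-1.0))   # approximation at t = 1 vs the exact value
```

Because the scheme needs two starting values, the accuracy of the bootstrap step matters; a second-order starting procedure would preserve the method's overall order more carefully than the plain Euler step used here.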
=== Parallel-in-time methods === Some IVPs require integration at such high temporal resolution and/or over such long time intervals that classical serial time-stepping methods become computationally infeasible to run in real-time (e.g. IVPs in numerical weather prediction, plasma modelling, and molecular dynamics). Parallel-in-time (PinT) methods have been developed in response to these issues in order to reduce simulation runtimes through the use of parallel computing. Early PinT methods (the earliest being proposed in the 1960s) were initially overlooked by researchers due to the fact that the parallel computing architectures that they required were not yet widely available. With more computing power available, interest was renewed in the early 2000s with the development of Parareal, a flexible, easy-to-use PinT algorithm that is suitable for solving a wide variety of IVPs. The advent of exascale computing has meant that PinT algorithms are attracting increasing research attention and are being developed in such a way that they can harness the world's most powerful supercomputers. The most popular methods as of 2023 include Parareal, PFASST, ParaDiag, and MGRIT. == Analysis == Numerical analysis is not only the design of numerical methods, but also their analysis. Three central concepts in this analysis are: convergence: whether the method approximates the solution, order: how well it approximates the solution, and stability: whether errors are damped out. === Convergence === A numerical method is said to be convergent if the numerical solution approaches the exact solution as the step size h goes to 0. More precisely, we require that for every ODE (1) with a Lipschitz function f and every t* > 0, lim h → 0 + max n = 0 , 1 , … , ⌊ t ∗ / h ⌋ ‖ y n , h − y ( t n ) ‖ = 0. {\displaystyle \lim _{h\to 0^{+}}\max _{n=0,1,\dots ,\lfloor t^{*}/h\rfloor }\left\|y_{n,h}-y(t_{n})\right\|=0.} All the methods mentioned above are convergent. 
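The convergence requirement above can be observed numerically. A minimal sketch using the forward Euler method on the test problem y′ = −y, y(0) = 1 over [0, 1] (an illustrative choice, not from the text): the maximum error over the interval shrinks as h → 0.

```python
import math

def euler_max_error(h):
    """Max error of forward Euler for y' = -y, y(0) = 1 over [0, 1] with step h."""
    t, y, worst = 0.0, 1.0, 0.0
    for _ in range(round(1.0 / h)):
        y += h * (-y)                          # forward Euler step for f(t, y) = -y
        t += h
        worst = max(worst, abs(y - math.exp(-t)))
    return worst

errors = [euler_max_error(h) for h in (0.1, 0.01, 0.001)]
print(errors)   # decreasing toward 0 as the step size shrinks
```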
=== Consistency and order === Suppose the numerical method is y n + k = Ψ ( t n + k ; y n , y n + 1 , … , y n + k − 1 ; h ) . {\displaystyle y_{n+k}=\Psi (t_{n+k};y_{n},y_{n+1},\dots ,y_{n+k-1};h).\,} The local (truncation) error of the method is the error committed by one step of the method. That is, it is the difference between the result given by the method, assuming that no error was made in earlier steps, and the exact solution: δ n + k h = Ψ ( t n + k ; y ( t n ) , y ( t n + 1 ) , … , y ( t n + k − 1 ) ; h ) − y ( t n + k ) . {\displaystyle \delta _{n+k}^{h}=\Psi \left(t_{n+k};y(t_{n}),y(t_{n+1}),\dots ,y(t_{n+k-1});h\right)-y(t_{n+k}).} The method is said to be consistent if lim h → 0 δ n + k h h = 0. {\displaystyle \lim _{h\to 0}{\frac {\delta _{n+k}^{h}}{h}}=0.} The method has order p {\displaystyle p} if δ n + k h = O ( h p + 1 ) as h → 0. {\displaystyle \delta _{n+k}^{h}=O(h^{p+1})\quad {\mbox{as }}h\to 0.} Hence a method is consistent if it has an order greater than 0. The (forward) Euler method (4) and the backward Euler method (6) introduced above both have order 1, so they are consistent. Most methods being used in practice attain higher order. Consistency is a necessary condition for convergence, but not sufficient; for a method to be convergent, it must be both consistent and zero-stable. A related concept is the global (truncation) error, the error sustained in all the steps one needs to reach a fixed time t {\displaystyle t} . Explicitly, the global error at time t {\displaystyle t} is y N − y ( t ) {\displaystyle y_{N}-y(t)} where N = ( t − t 0 ) / h {\displaystyle N=(t-t_{0})/h} . The global error of a p {\displaystyle p} th order one-step method is O ( h p ) {\displaystyle O(h^{p})} ; in particular, such a method is convergent. This statement is not necessarily true for multi-step methods. 
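The statement that a first-order method has local truncation error O(h^{p+1}) = O(h²) can be checked numerically for the forward Euler method: taking one step from exact initial data and halving h should divide the one-step error by roughly 4. The test problem y′ = −y is an assumption for illustration:

```python
import math

def euler_lte(h):
    """One-step (local truncation) error of forward Euler for y' = -y from y(0) = 1."""
    one_step = 1.0 + h * (-1.0)           # a single Euler step from the exact value
    return abs(one_step - math.exp(-h))   # compare with the exact solution at t = h

# LTE = O(h^2): halving h divides the one-step error by roughly 4
ratio = euler_lte(0.01) / euler_lte(0.005)
print(ratio)   # close to 4
```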
=== Stability and stiffness === For some differential equations, the application of standard methods—such as the Euler method, explicit Runge–Kutta methods, or multistep methods (for example, Adams–Bashforth methods)—exhibits instability in the solutions, though other methods may produce stable solutions. This "difficult behaviour" in the equation (which may not necessarily be complex itself) is described as stiffness, and is often caused by the presence of different time scales in the underlying problem. For example, a collision in a mechanical system, such as an impact oscillator, typically occurs on a much smaller time scale than the motion of the objects; this discrepancy makes for very "sharp turns" in the curves of the state parameters. Stiff problems are ubiquitous in chemical kinetics, control theory, solid mechanics, weather forecasting, biology, plasma physics, and electronics. One way to overcome stiffness is to extend the notion of differential equation to that of differential inclusion, which allows for and models non-smoothness. == History == Below is a timeline of some important developments in this field. 1768 - Leonhard Euler publishes his method. 1824 - Augustin Louis Cauchy proves convergence of the Euler method. In this proof, Cauchy uses the implicit Euler method. 1855 - First mention of the multistep methods of John Couch Adams in a letter written by Francis Bashforth. 1895 - Carl Runge publishes the first Runge–Kutta method. 1901 - Martin Kutta describes the popular fourth-order Runge–Kutta method. 1910 - Lewis Fry Richardson announces his extrapolation method, Richardson extrapolation. 1952 - Charles F. Curtiss and Joseph Oakland Hirschfelder coin the term stiff equations. 1963 - Germund Dahlquist introduces A-stability of integration methods.
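The stability contrast between explicit and implicit schemes described in the stiffness discussion above can be sketched on the standard linear test equation y′ = λy (an illustrative choice, not from the text). For this equation, forward Euler gives y_{n+1} = (1 + hλ)y_n and backward Euler gives y_{n+1} = y_n/(1 − hλ):

```python
def forward_euler(lam, h, steps):
    """Forward Euler for y' = lam*y, y(0) = 1: y_{n+1} = (1 + h*lam) * y_n."""
    y = 1.0
    for _ in range(steps):
        y *= 1.0 + h * lam
    return y

def backward_euler(lam, h, steps):
    """Backward Euler for y' = lam*y: y_{n+1} = y_n + h*lam*y_{n+1}, i.e. y_{n+1} = y_n / (1 - h*lam)."""
    y = 1.0
    for _ in range(steps):
        y /= 1.0 - h * lam
    return y

# Stiff test problem y' = -50*y, whose exact solution decays to 0.
# With h = 0.1, |1 + h*lam| = 4 > 1, so forward Euler blows up while backward Euler decays.
print(abs(forward_euler(-50.0, 0.1, 20)))   # grows without bound
print(abs(backward_euler(-50.0, 0.1, 20)))  # decays toward 0
```

Shrinking h below 2/50 = 0.04 would restore stability for forward Euler; the point of implicit schemes is that no such step-size restriction is needed.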
== Numerical solutions to second-order one-dimensional boundary value problems == Boundary value problems (BVPs) are usually solved numerically by solving an approximately equivalent matrix problem obtained by discretizing the original BVP. The most commonly used method for numerically solving BVPs in one dimension is called the Finite Difference Method. This method takes advantage of linear combinations of point values to construct finite difference coefficients that describe derivatives of the function. For example, the second-order central difference approximation to the first derivative is given by: u i + 1 − u i − 1 2 h = u ′ ( x i ) + O ( h 2 ) , {\displaystyle {\frac {u_{i+1}-u_{i-1}}{2h}}=u'(x_{i})+{\mathcal {O}}(h^{2}),} and the second-order central difference for the second derivative is given by: u i + 1 − 2 u i + u i − 1 h 2 = u ″ ( x i ) + O ( h 2 ) . {\displaystyle {\frac {u_{i+1}-2u_{i}+u_{i-1}}{h^{2}}}=u''(x_{i})+{\mathcal {O}}(h^{2}).} In both of these formulae, h = x i − x i − 1 {\displaystyle h=x_{i}-x_{i-1}} is the distance between neighbouring x values on the discretized domain. One then constructs a linear system that can then be solved by standard matrix methods. For example, suppose the equation to be solved is: d 2 u d x 2 − u = 0 , u ( 0 ) = 0 , u ( 1 ) = 1. {\displaystyle {\begin{aligned}&{}{\frac {d^{2}u}{dx^{2}}}-u=0,\\&{}u(0)=0,\\&{}u(1)=1.\end{aligned}}} The next step would be to discretize the problem and use linear derivative approximations such as u i ″ = u i + 1 − 2 u i + u i − 1 h 2 {\displaystyle u''_{i}={\frac {u_{i+1}-2u_{i}+u_{i-1}}{h^{2}}}} and solve the resulting system of linear equations. This would lead to equations such as: u i + 1 − 2 u i + u i − 1 h 2 − u i = 0 , ∀ i = 1 , 2 , 3 , . . . , n − 1 . 
{\displaystyle {\frac {u_{i+1}-2u_{i}+u_{i-1}}{h^{2}}}-u_{i}=0,\quad \forall i={1,2,3,...,n-1}.} At first glance, this system of equations appears problematic because every term seems to be multiplied by an unknown, which would make the system homogeneous, but in fact this is not the case. At i = 1 and n − 1 there is a term involving the boundary values u ( 0 ) = u 0 {\displaystyle u(0)=u_{0}} and u ( 1 ) = u n {\displaystyle u(1)=u_{n}} , and since these two values are known, one can simply substitute them into these equations; as a result, the system of linear equations is non-homogeneous and has non-trivial solutions. == See also == Courant–Friedrichs–Lewy condition Energy drift General linear methods List of numerical analysis topics#Numerical methods for ordinary differential equations Reversible reference system propagation algorithm Modelica Language and OpenModelica software == Notes == == References == Bradie, Brian (2006). A Friendly Introduction to Numerical Analysis. Upper Saddle River, New Jersey: Pearson Prentice Hall. ISBN 978-0-13-013054-9. J. C. Butcher, Numerical methods for ordinary differential equations, ISBN 0-471-96758-0 Hairer, E.; Nørsett, S. P.; Wanner, G. (1993). Solving Ordinary Differential Equations. I. Nonstiff Problems. Springer Series in Computational Mathematics. Vol. 8 (2nd ed.). Springer-Verlag, Berlin. ISBN 3-540-56670-8. MR 1227985. Ernst Hairer and Gerhard Wanner, Solving ordinary differential equations II: Stiff and differential-algebraic problems, second edition, Springer Verlag, Berlin, 1996. ISBN 3-540-60452-9. (This two-volume monograph systematically covers all aspects of the field.) Hochbruck, Marlis; Ostermann, Alexander (May 2010). "Exponential integrators". Acta Numerica. 19: 209–286. Bibcode:2010AcNum..19..209H. CiteSeerX 10.1.1.187.6794. doi:10.1017/S0962492910000048. S2CID 4841957. Arieh Iserles, A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, 1996.
ISBN 0-521-55376-8 (hardback), ISBN 0-521-55655-4 (paperback). (Textbook, targeting advanced undergraduate and postgraduate students in mathematics, which also discusses numerical partial differential equations.) John Denholm Lambert, Numerical Methods for Ordinary Differential Systems, John Wiley & Sons, Chichester, 1991. ISBN 0-471-92990-5. (Textbook, slightly more demanding than the book by Iserles.) == External links == Joseph W. Rudmin, Application of the Parker–Sochacki Method to Celestial Mechanics Archived 2016-05-16 at the Portuguese Web Archive, 1998. Dominique Tournès, L'intégration approchée des équations différentielles ordinaires (1671–1914), thèse de doctorat de l'université Paris 7 - Denis Diderot, juin 1996. Réimp. Villeneuve d'Ascq : Presses universitaires du Septentrion, 1997, 468 p. (Extensive online material on ODE numerical analysis history, for English-language material on the history of ODE numerical analysis, see, for example, the paper books by Chabert and Goldstine quoted by him.) Pchelintsev, A.N. (2020). "An accurate numerical method and algorithm for constructing solutions of chaotic systems". Journal of Applied Nonlinear Dynamics. 9 (2): 207–221. arXiv:2011.10664. doi:10.5890/JAND.2020.06.004. S2CID 225853788. kv on GitHub (C++ library with rigorous ODE solvers) INTLAB (A library made by MATLAB/GNU Octave which includes rigorous ODE solvers)
In numerical analysis and scientific computing, the backward Euler method (or implicit Euler method) is one of the most basic numerical methods for the solution of ordinary differential equations. It is similar to the (standard) Euler method, but differs in that it is an implicit method. The backward Euler method has error of order one in time. == Description == Consider the ordinary differential equation d y d t = f ( t , y ) {\displaystyle {\frac {\mathrm {d} y}{\mathrm {d} t}}=f(t,y)} with initial value y ( t 0 ) = y 0 . {\displaystyle y(t_{0})=y_{0}.} Here the function f {\displaystyle f} and the initial data t 0 {\displaystyle t_{0}} and y 0 {\displaystyle y_{0}} are known; the function y {\displaystyle y} depends on the real variable t {\displaystyle t} and is unknown. A numerical method produces a sequence y 0 , y 1 , y 2 , … {\displaystyle y_{0},y_{1},y_{2},\ldots } such that y k {\displaystyle y_{k}} approximates y ( t 0 + k h ) {\displaystyle y(t_{0}+kh)} , where h {\displaystyle h} is called the step size. The backward Euler method computes the approximations using y k + 1 = y k + h f ( t k + 1 , y k + 1 ) . {\displaystyle y_{k+1}=y_{k}+hf(t_{k+1},y_{k+1}).} This differs from the (forward) Euler method in that the forward method uses f ( t k , y k ) {\displaystyle f(t_{k},y_{k})} in place of f ( t k + 1 , y k + 1 ) {\displaystyle f(t_{k+1},y_{k+1})} . The backward Euler method is an implicit method: the new approximation y k + 1 {\displaystyle y_{k+1}} appears on both sides of the equation, and thus the method needs to solve an algebraic equation for the unknown y k + 1 {\displaystyle y_{k+1}} . For non-stiff problems, this can be done with fixed-point iteration: y k + 1 [ 0 ] = y k , y k + 1 [ i + 1 ] = y k + h f ( t k + 1 , y k + 1 [ i ] ) . 
{\displaystyle y_{k+1}^{[0]}=y_{k},\quad y_{k+1}^{[i+1]}=y_{k}+hf(t_{k+1},y_{k+1}^{[i]}).} If this sequence converges (within a given tolerance), then the method takes its limit as the new approximation y k + 1 {\displaystyle y_{k+1}} . Alternatively, one can use (some modification of) the Newton–Raphson method to solve the algebraic equation. == Derivation == Integrating the differential equation d y d t = f ( t , y ) {\displaystyle {\frac {\mathrm {d} y}{\mathrm {d} t}}=f(t,y)} from t n {\displaystyle t_{n}} to t n + 1 = t n + h {\displaystyle t_{n+1}=t_{n}+h} yields y ( t n + 1 ) − y ( t n ) = ∫ t n t n + 1 f ( t , y ( t ) ) d t . {\displaystyle y(t_{n+1})-y(t_{n})=\int _{t_{n}}^{t_{n+1}}f(t,y(t))\,\mathrm {d} t.} Now approximate the integral on the right by the right-hand rectangle method (with one rectangle): y ( t n + 1 ) − y ( t n ) ≈ h f ( t n + 1 , y ( t n + 1 ) ) . {\displaystyle y(t_{n+1})-y(t_{n})\approx hf(t_{n+1},y(t_{n+1})).} Finally, use that y n {\displaystyle y_{n}} is supposed to approximate y ( t n ) {\displaystyle y(t_{n})} and the formula for the backward Euler method follows. The same reasoning leads to the (standard) Euler method if the left-hand rectangle rule is used instead of the right-hand one. == Analysis == The local truncation error (defined as the error made in one step) of the backward Euler method is O ( h 2 ) {\displaystyle O(h^{2})} , using the big O notation. The error at a specific time t {\displaystyle t} is O ( h ) {\displaystyle O(h)} . This means that the method has order one. In general, a method with O ( h k + 1 ) {\displaystyle O(h^{k+1})} LTE (local truncation error) is said to be of kth order. The region of absolute stability for the backward Euler method is the complement in the complex plane of the disk with radius 1 centered at 1. This includes the whole left half of the complex plane, making it suitable for the solution of stiff equations. 
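The fixed-point iteration above is straightforward to implement; note that it converges only when h multiplied by the Lipschitz constant of f is below 1, which is why Newton-type iterations are preferred for genuinely stiff problems. The following sketch is an illustration of our own (the function name backward_euler is not from the article), applied to the linear test problem y′ = −5y:

```python
def backward_euler(f, t0, y0, h, steps, tol=1e-12, max_iter=100):
    """Backward Euler via the fixed-point iteration
    y_{k+1}^[i+1] = y_k + h*f(t_{k+1}, y_{k+1}^[i])."""
    t, y = t0, y0
    for _ in range(steps):
        t_next = t + h
        y_next = y  # initial guess: y_{k+1}^[0] = y_k
        for _ in range(max_iter):
            y_new = y + h * f(t_next, y_next)
            if abs(y_new - y_next) < tol:
                y_next = y_new
                break
            y_next = y_new
        t, y = t_next, y_next
    return y

# Linear test problem y' = -5*y: each implicit step satisfies
# y_next = y / (1 + 5*h), so ten steps of h = 0.1 give 1.5**-10.
y_num = backward_euler(lambda t, y: -5.0 * y, 0.0, 1.0, h=0.1, steps=10)
```

For this problem the fixed-point map is a contraction with factor h·5 = 0.5, so the inner iteration converges quickly.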
In fact, the backward Euler method is even L-stable. When the backward Euler method is used to discretize a continuous-time system, the stable left half of the s-plane is mapped onto the disk of radius 0.5 centered at (0.5, 0) in the z-plane. == Extensions and modifications == The backward Euler method is a variant of the (forward) Euler method. Other variants are the semi-implicit Euler method and the exponential Euler method. The backward Euler method can be seen as a Runge–Kutta method with one stage, described by the Butcher tableau: 1 1 1 {\displaystyle {\begin{array}{c|c}1&1\\\hline &1\\\end{array}}} The method can also be seen as a linear multistep method with one step. It is the first method of the family of Adams–Moulton methods, and also of the family of backward differentiation formulas. == See also == Crank–Nicolson method == Notes == == References == Butcher, John C. (2003), Numerical Methods for Ordinary Differential Equations, New York: John Wiley & Sons, ISBN 978-0-471-96758-3.
Wikipedia/Backward_Euler_method
The Euler number (Eu) is a dimensionless number used in fluid flow calculations. It expresses the relationship between a local pressure drop caused by a restriction and the kinetic energy per volume of the flow, and is used to characterize energy losses in the flow, where a perfect frictionless flow corresponds to an Euler number of 0. The inverse of the Euler number is referred to as the Ruark Number with the symbol Ru. The Euler number is defined as E u = pressure forces inertial forces = ( pressure ) ( area ) ( mass ) ( acceleration ) = ( p u − p d ) L 2 ( ρ L 3 ) ( v 2 / L ) = p u − p d ρ v 2 {\displaystyle \mathrm {Eu} ={\frac {\text{pressure forces}}{\text{inertial forces}}}={\frac {({\text{pressure}})({\text{area}})}{({\text{mass}})({\text{acceleration}})}}={\frac {(p_{u}-p_{d})\,L^{2}}{(\rho L^{3})(v^{2}/L)}}={\frac {p_{u}-p_{d}}{\rho v^{2}}}} where ρ {\displaystyle \rho } is the density of the fluid. p u {\displaystyle p_{u}} is the upstream pressure. p d {\displaystyle p_{d}} is the downstream pressure. v {\displaystyle v} is a characteristic velocity of the flow. An alternative definition of the Euler number is given by Shah and Sekulic E u = pressure drop dynamic head = Δ p ρ v 2 / 2 {\displaystyle \mathrm {Eu} ={\frac {\text{pressure drop}}{\text{dynamic head}}}={\frac {\Delta p}{\rho v^{2}/2}}} where Δ p {\displaystyle \Delta p} is the pressure drop = p u − p d {\displaystyle =p_{u}-p_{d}} == See also == Darcy–Weisbach equation is a different way of interpreting the Euler number Reynolds number for use in flow analysis and similarity of flows Cavitation number a similarly formulated number with different meaning == References == == Further reading == Batchelor, G. K. (1967). An Introduction to Fluid Dynamics. Cambridge University Press. ISBN 0-521-09817-3.
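As a quick numerical illustration (a sketch of our own; the function names are hypothetical), both definitions can be evaluated directly, and they differ only by the factor of 2 in the denominator:

```python
def euler_number(p_upstream, p_downstream, rho, v):
    """Eu = (p_u - p_d) / (rho * v**2): pressure forces over inertial forces."""
    return (p_upstream - p_downstream) / (rho * v ** 2)

def euler_number_shah(delta_p, rho, v):
    """Shah and Sekulic's variant: Eu = delta_p / (rho * v**2 / 2)."""
    return delta_p / (rho * v ** 2 / 2)

# Water (rho = 1000 kg/m^3) flowing at 2 m/s across a 4 kPa pressure drop:
eu = euler_number(104_000.0, 100_000.0, 1000.0, 2.0)
```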
Wikipedia/Euler_number_(physics)
In mathematics, the Euler–Poisson–Darboux (EPD) equation is the partial differential equation u x , y + N ( u x + u y ) x + y = 0. {\displaystyle u_{x,y}+{\frac {N(u_{x}+u_{y})}{x+y}}=0.} This equation is named for Siméon Poisson, Leonhard Euler, and Gaston Darboux. It plays an important role in solving the classical wave equation. This equation is related to u r r + m r u r − u t t = 0 , {\displaystyle u_{rr}+{\frac {m}{r}}u_{r}-u_{tt}=0,} by x = r + t {\displaystyle x=r+t} , y = r − t {\displaystyle y=r-t} , where N = m 2 {\displaystyle N={\frac {m}{2}}} , and some sources quote this equation when referring to the Euler–Poisson–Darboux equation. The EPD equation is the simplest linear hyperbolic equation in two independent variables whose coefficients exhibit singularities, and it is therefore of interest as a paradigm in relativity theory. A compactly supported self-similar solution of the EPD equation for thermal conduction has been derived starting from the modified Fourier–Cattaneo law. It is also possible to solve non-linear EPD equations with the method of generalized separation of variables. == References == == External links == Moroşanu, C. (2001) [1994], "Euler–Poisson–Darboux equation", Encyclopedia of Mathematics, EMS Press
Wikipedia/Euler–Poisson–Darboux_equation
In mathematics, the Euler function is given by ϕ ( q ) = ∏ k = 1 ∞ ( 1 − q k ) , | q | < 1. {\displaystyle \phi (q)=\prod _{k=1}^{\infty }(1-q^{k}),\quad |q|<1.} Named after Leonhard Euler, it is a model example of a q-series and provides the prototypical example of a relation between combinatorics and complex analysis. == Properties == The coefficient p ( k ) {\displaystyle p(k)} in the formal power series expansion for 1 / ϕ ( q ) {\displaystyle 1/\phi (q)} gives the number of partitions of k. That is, 1 ϕ ( q ) = ∑ k = 0 ∞ p ( k ) q k {\displaystyle {\frac {1}{\phi (q)}}=\sum _{k=0}^{\infty }p(k)q^{k}} where p {\displaystyle p} is the partition function. The Euler identity, also known as the Pentagonal number theorem, is ϕ ( q ) = ∑ n = − ∞ ∞ ( − 1 ) n q ( 3 n 2 − n ) / 2 . {\displaystyle \phi (q)=\sum _{n=-\infty }^{\infty }(-1)^{n}q^{(3n^{2}-n)/2}.} ( 3 n 2 − n ) / 2 {\displaystyle (3n^{2}-n)/2} is a pentagonal number. The Euler function is related to the Dedekind eta function as ϕ ( e 2 π i τ ) = e − π i τ / 12 η ( τ ) . {\displaystyle \phi (e^{2\pi i\tau })=e^{-\pi i\tau /12}\eta (\tau ).} The Euler function may be expressed as a q-Pochhammer symbol: ϕ ( q ) = ( q ; q ) ∞ . {\displaystyle \phi (q)=(q;q)_{\infty }.} The logarithm of the Euler function is the sum of the logarithms in the product expression, each of which may be expanded about q = 0, yielding ln ⁡ ( ϕ ( q ) ) = − ∑ n = 1 ∞ 1 n q n 1 − q n , {\displaystyle \ln(\phi (q))=-\sum _{n=1}^{\infty }{\frac {1}{n}}\,{\frac {q^{n}}{1-q^{n}}},} which is a Lambert series with coefficients -1/n. The logarithm of the Euler function may therefore be expressed as ln ⁡ ( ϕ ( q ) ) = ∑ n = 1 ∞ b n q n {\displaystyle \ln(\phi (q))=\sum _{n=1}^{\infty }b_{n}q^{n}} where b n = − ∑ d | n 1 d = {\displaystyle b_{n}=-\sum _{d|n}{\frac {1}{d}}=} -[1/1, 3/2, 4/3, 7/4, 6/5, 12/6, 8/7, 15/8, 13/9, 18/10, ...] 
(see OEIS A000203) On account of the identity σ ( n ) = ∑ d | n d = ∑ d | n n d {\displaystyle \sigma (n)=\sum _{d|n}d=\sum _{d|n}{\frac {n}{d}}} , where σ ( n ) {\displaystyle \sigma (n)} is the sum-of-divisors function, this may also be written as ln ⁡ ( ϕ ( q ) ) = − ∑ n = 1 ∞ σ ( n ) n q n {\displaystyle \ln(\phi (q))=-\sum _{n=1}^{\infty }{\frac {\sigma (n)}{n}}\ q^{n}} . Also if a , b ∈ R + {\displaystyle a,b\in \mathbb {R} ^{+}} and a b = π 2 {\displaystyle ab=\pi ^{2}} , then a 1 / 4 e − a / 12 ϕ ( e − 2 a ) = b 1 / 4 e − b / 12 ϕ ( e − 2 b ) . {\displaystyle a^{1/4}e^{-a/12}\phi (e^{-2a})=b^{1/4}e^{-b/12}\phi (e^{-2b}).} == Special values == The next identities come from Ramanujan's Notebooks: ϕ ( e − π ) = e π / 24 Γ ( 1 4 ) 2 7 / 8 π 3 / 4 {\displaystyle \phi (e^{-\pi })={\frac {e^{\pi /24}\Gamma \left({\frac {1}{4}}\right)}{2^{7/8}\pi ^{3/4}}}} ϕ ( e − 2 π ) = e π / 12 Γ ( 1 4 ) 2 π 3 / 4 {\displaystyle \phi (e^{-2\pi })={\frac {e^{\pi /12}\Gamma \left({\frac {1}{4}}\right)}{2\pi ^{3/4}}}} ϕ ( e − 4 π ) = e π / 6 Γ ( 1 4 ) 2 11 / 8 π 3 / 4 {\displaystyle \phi (e^{-4\pi })={\frac {e^{\pi /6}\Gamma \left({\frac {1}{4}}\right)}{2^{{11}/8}\pi ^{3/4}}}} ϕ ( e − 8 π ) = e π / 3 Γ ( 1 4 ) 2 29 / 16 π 3 / 4 ( 2 − 1 ) 1 / 4 {\displaystyle \phi (e^{-8\pi })={\frac {e^{\pi /3}\Gamma \left({\frac {1}{4}}\right)}{2^{29/16}\pi ^{3/4}}}({\sqrt {2}}-1)^{1/4}} Using the Pentagonal number theorem, exchanging sum and integral, and then invoking complex-analytic methods, one derives ∫ 0 1 ϕ ( q ) d q = 8 3 23 π sinh ⁡ ( 23 π 6 ) 2 cosh ⁡ ( 23 π 3 ) − 1 . {\displaystyle \int _{0}^{1}\phi (q)\,\mathrm {d} q={\frac {8{\sqrt {\frac {3}{23}}}\pi \sinh \left({\frac {{\sqrt {23}}\pi }{6}}\right)}{2\cosh \left({\frac {{\sqrt {23}}\pi }{3}}\right)-1}}.} == References == Apostol, Tom M. (1976), Introduction to analytic number theory, Undergraduate Texts in Mathematics, New York-Heidelberg: Springer-Verlag, ISBN 978-0-387-90163-3, MR 0434929, Zbl 0335.10001
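The product expansion and the partition identity above can be verified numerically. The following sketch (an illustration of our own, not from the article) computes the coefficients of φ(q) as a truncated power series and inverts the series to recover p(k); the coefficient pattern also exhibits the pentagonal number theorem:

```python
def euler_function_coeffs(n):
    """Power-series coefficients of phi(q) = prod_{k>=1} (1 - q^k), truncated at order n."""
    c = [0] * (n + 1)
    c[0] = 1
    for k in range(1, n + 1):
        for i in range(n, k - 1, -1):  # multiply by (1 - q^k) in place
            c[i] -= c[i - k]
    return c

def partition_numbers(n):
    """p(0), ..., p(n) from 1/phi(q): solve sum_{j=0}^{m} c_j * p_{m-j} = 0 for m >= 1."""
    c = euler_function_coeffs(n)
    p = [1] + [0] * n
    for m in range(1, n + 1):
        p[m] = -sum(c[j] * p[m - j] for j in range(1, m + 1))
    return p

# Pentagonal number theorem: the nonzero coefficients of phi(q) sit at the
# generalized pentagonal numbers 0, 1, 2, 5, 7, 12, 15, ... with values +-1.
coeffs = euler_function_coeffs(15)
```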
Wikipedia/Euler_function
In numerical analysis, the Runge–Kutta methods (English: RUUNG-ə-KUUT-tah) are a family of implicit and explicit iterative methods, which include the Euler method, used in temporal discretization for the approximate solutions of simultaneous nonlinear equations. These methods were developed around 1900 by the German mathematicians Carl Runge and Wilhelm Kutta. == The Runge–Kutta method == The most widely known member of the Runge–Kutta family is generally referred to as "RK4", the "classic Runge–Kutta method" or simply as "the Runge–Kutta method". Let an initial value problem be specified as follows: d y d t = f ( t , y ) , y ( t 0 ) = y 0 . {\displaystyle {\frac {dy}{dt}}=f(t,y),\quad y(t_{0})=y_{0}.} Here y {\displaystyle y} is an unknown function (scalar or vector) of time t {\displaystyle t} , which we would like to approximate; we are told that d y d t {\displaystyle {\frac {dy}{dt}}} , the rate at which y {\displaystyle y} changes, is a function of t {\displaystyle t} and of y {\displaystyle y} itself. At the initial time t 0 {\displaystyle t_{0}} the corresponding y {\displaystyle y} value is y 0 {\displaystyle y_{0}} . The function f {\displaystyle f} and the initial conditions t 0 {\displaystyle t_{0}} , y 0 {\displaystyle y_{0}} are given. Now we pick a step-size h > 0 and define: y n + 1 = y n + h 6 ( k 1 + 2 k 2 + 2 k 3 + k 4 ) , t n + 1 = t n + h {\displaystyle {\begin{aligned}y_{n+1}&=y_{n}+{\frac {h}{6}}\left(k_{1}+2k_{2}+2k_{3}+k_{4}\right),\\t_{n+1}&=t_{n}+h\\\end{aligned}}} for n = 0, 1, 2, 3, ..., using k 1 = f ( t n , y n ) , k 2 = f ( t n + h 2 , y n + h k 1 2 ) , k 3 = f ( t n + h 2 , y n + h k 2 2 ) , k 4 = f ( t n + h , y n + h k 3 ) . 
{\displaystyle {\begin{aligned}k_{1}&=\ f(t_{n},y_{n}),\\k_{2}&=\ f\!\left(t_{n}+{\frac {h}{2}},y_{n}+h{\frac {k_{1}}{2}}\right),\\k_{3}&=\ f\!\left(t_{n}+{\frac {h}{2}},y_{n}+h{\frac {k_{2}}{2}}\right),\\k_{4}&=\ f\!\left(t_{n}+h,y_{n}+hk_{3}\right).\end{aligned}}} (Note: the above equations have different but equivalent definitions in different texts.) Here y n + 1 {\displaystyle y_{n+1}} is the RK4 approximation of y ( t n + 1 ) {\displaystyle y(t_{n+1})} , and the next value ( y n + 1 {\displaystyle y_{n+1}} ) is determined by the present value ( y n {\displaystyle y_{n}} ) plus the weighted average of four increments, where each increment is the product of the size of the interval, h, and an estimated slope specified by function f on the right-hand side of the differential equation. k 1 {\displaystyle k_{1}} is the slope at the beginning of the interval, using y {\displaystyle y} (Euler's method); k 2 {\displaystyle k_{2}} is the slope at the midpoint of the interval, using y {\displaystyle y} and k 1 {\displaystyle k_{1}} ; k 3 {\displaystyle k_{3}} is again the slope at the midpoint, but now using y {\displaystyle y} and k 2 {\displaystyle k_{2}} ; k 4 {\displaystyle k_{4}} is the slope at the end of the interval, using y {\displaystyle y} and k 3 {\displaystyle k_{3}} . In averaging the four slopes, greater weight is given to the slopes at the midpoint. If f {\displaystyle f} is independent of y {\displaystyle y} , so that the differential equation is equivalent to a simple integral, then RK4 is Simpson's rule. The RK4 method is a fourth-order method, meaning that the local truncation error is on the order of O ( h 5 ) {\displaystyle O(h^{5})} , while the total accumulated error is on the order of O ( h 4 ) {\displaystyle O(h^{4})} . 
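The RK4 update rule translates directly into code. The following is an illustrative sketch of our own (the function name rk4_step is not from the article):

```python
import math

def rk4_step(f, t, y, h):
    """One step of the classic fourth-order Runge-Kutta method (RK4)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# y' = y, y(0) = 1: one step of size h reproduces exp(h) up to O(h^5).
y1 = rk4_step(lambda t, y: y, 0.0, 1.0, 0.1)
```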
In many practical applications the function f {\displaystyle f} is independent of t {\displaystyle t} (a so-called autonomous, or time-invariant, system, especially in physics); in that case the time arguments t n + c i h {\displaystyle t_{n}+c_{i}h} need not be computed or passed to f {\displaystyle f} , and only the final formula t n + 1 = t n + h {\displaystyle t_{n+1}=t_{n}+h} is used. == Explicit Runge–Kutta methods == The family of explicit Runge–Kutta methods is a generalization of the RK4 method mentioned above. It is given by y n + 1 = y n + h ∑ i = 1 s b i k i , {\displaystyle y_{n+1}=y_{n}+h\sum _{i=1}^{s}b_{i}k_{i},} where k 1 = f ( t n , y n ) , k 2 = f ( t n + c 2 h , y n + ( a 21 k 1 ) h ) , k 3 = f ( t n + c 3 h , y n + ( a 31 k 1 + a 32 k 2 ) h ) , ⋮ k s = f ( t n + c s h , y n + ( a s 1 k 1 + a s 2 k 2 + ⋯ + a s , s − 1 k s − 1 ) h ) . {\displaystyle {\begin{aligned}k_{1}&=f(t_{n},y_{n}),\\k_{2}&=f(t_{n}+c_{2}h,y_{n}+(a_{21}k_{1})h),\\k_{3}&=f(t_{n}+c_{3}h,y_{n}+(a_{31}k_{1}+a_{32}k_{2})h),\\&\ \ \vdots \\k_{s}&=f(t_{n}+c_{s}h,y_{n}+(a_{s1}k_{1}+a_{s2}k_{2}+\cdots +a_{s,s-1}k_{s-1})h).\end{aligned}}} (Note: the above equations may have different but equivalent definitions in some texts.) To specify a particular method, one needs to provide the integer s (the number of stages), and the coefficients aij (for 1 ≤ j < i ≤ s), bi (for i = 1, 2, ..., s) and ci (for i = 2, 3, ..., s). The matrix [aij] is called the Runge–Kutta matrix, while the bi and ci are known as the weights and the nodes. These data are usually arranged in a mnemonic device, known as a Butcher tableau (after John C. Butcher): A Taylor series expansion shows that the Runge–Kutta method is consistent if and only if ∑ i = 1 s b i = 1. {\displaystyle \sum _{i=1}^{s}b_{i}=1.} There are also accompanying requirements if one requires the method to have a certain order p, meaning that the local truncation error is O(hp+1). These can be derived from the definition of the truncation error itself. 
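The general explicit scheme can be implemented once and driven by any Butcher tableau (A, b, c). The sketch below is our own illustration; it reproduces RK4 when given the classic tableau, and the final assertion checks the consistency condition on the weights:

```python
def explicit_rk_step(f, t, y, h, A, b, c):
    """One step of the explicit Runge-Kutta method given by Butcher tableau (A, b, c).

    A is strictly lower triangular, b holds the weights, c the nodes.
    """
    k = []
    for i in range(len(b)):
        # stage value uses only the previously computed slopes k_1 .. k_{i-1}
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i] * h, yi))
    return y + h * sum(bi * ki for bi, ki in zip(b, k))

# The classic RK4 tableau:
A = [[0, 0, 0, 0],
     [1 / 2, 0, 0, 0],
     [0, 1 / 2, 0, 0],
     [0, 0, 1, 0]]
b = [1 / 6, 1 / 3, 1 / 3, 1 / 6]
c = [0, 1 / 2, 1 / 2, 1]
```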
For example, a two-stage method has order 2 if b1 + b2 = 1, b2c2 = 1/2, and b2a21 = 1/2. Note that a popular condition for determining coefficients is ∑ j = 1 i − 1 a i j = c i for i = 2 , … , s . {\displaystyle \sum _{j=1}^{i-1}a_{ij}=c_{i}{\text{ for }}i=2,\ldots ,s.} This condition alone, however, is neither sufficient nor necessary for consistency. In general, if an explicit s {\displaystyle s} -stage Runge–Kutta method has order p {\displaystyle p} , then it can be proven that the number of stages must satisfy s ≥ p {\displaystyle s\geq p} and if p ≥ 5 {\displaystyle p\geq 5} , then s ≥ p + 1 {\displaystyle s\geq p+1} . However, it is not known whether these bounds are sharp in all cases. In some cases, it is proven that the bound cannot be achieved. For instance, Butcher proved that for p > 6 {\displaystyle p>6} , there is no explicit method with s = p + 1 {\displaystyle s=p+1} stages. Butcher also proved that for p > 7 {\displaystyle p>7} , there is no explicit Runge–Kutta method with p + 2 {\displaystyle p+2} stages. In general, however, it remains an open problem what the precise minimum number of stages s {\displaystyle s} is for an explicit Runge–Kutta method to have order p {\displaystyle p} . Some values which are known are: p 1 2 3 4 5 6 7 8 min s 1 2 3 4 6 7 9 11 {\displaystyle {\begin{array}{c|cccccccc}p&1&2&3&4&5&6&7&8\\\hline \min s&1&2&3&4&6&7&9&11\end{array}}} The provable bounds above imply that we cannot find methods of orders p = 1 , 2 , … , 6 {\displaystyle p=1,2,\ldots ,6} that require fewer stages than the methods we already know for these orders. The work of Butcher also proves that 7th and 8th order methods have a minimum of 9 and 11 stages, respectively. An example of an explicit method of order 6 with 7 stages can be found in Ref. Explicit methods of order 7 with 9 stages and explicit methods of order 8 with 11 stages are also known. See Refs. for a summary. === Examples === The RK4 method falls in this framework. 
Its tableau is A slight variation of "the" Runge–Kutta method is also due to Kutta in 1901 and is called the 3/8-rule. The primary advantage this method has is that almost all of the error coefficients are smaller than in the popular method, but it requires slightly more FLOPs (floating-point operations) per time step. Its Butcher tableau is However, the simplest Runge–Kutta method is the (forward) Euler method, given by the formula y n + 1 = y n + h f ( t n , y n ) {\displaystyle y_{n+1}=y_{n}+hf(t_{n},y_{n})} . This is the only consistent explicit Runge–Kutta method with one stage. The corresponding tableau is === Second-order methods with two stages === An example of a second-order method with two stages is provided by the explicit midpoint method: y n + 1 = y n + h f ( t n + 1 2 h , y n + 1 2 h f ( t n , y n ) ) . {\displaystyle y_{n+1}=y_{n}+hf\left(t_{n}+{\frac {1}{2}}h,y_{n}+{\frac {1}{2}}hf(t_{n},\ y_{n})\right).} The corresponding tableau is The midpoint method is not the only second-order Runge–Kutta method with two stages; there is a family of such methods, parameterized by α and given by the formula y n + 1 = y n + h ( ( 1 − 1 2 α ) f ( t n , y n ) + 1 2 α f ( t n + α h , y n + α h f ( t n , y n ) ) ) . {\displaystyle y_{n+1}=y_{n}+h{\bigl (}(1-{\tfrac {1}{2\alpha }})f(t_{n},y_{n})+{\tfrac {1}{2\alpha }}f(t_{n}+\alpha h,y_{n}+\alpha hf(t_{n},y_{n})){\bigr )}.} Its Butcher tableau is In this family, α = 1 2 {\displaystyle \alpha ={\tfrac {1}{2}}} gives the midpoint method, α = 1 {\displaystyle \alpha =1} is Heun's method, and α = 2 3 {\displaystyle \alpha ={\tfrac {2}{3}}} is Ralston's method. == Use == As an example, consider the two-stage second-order Runge–Kutta method with α = 2/3, also known as Ralston method. It is given by the tableau with the corresponding equations k 1 = f ( t n , y n ) , k 2 = f ( t n + 2 3 h , y n + 2 3 h k 1 ) , y n + 1 = y n + h ( 1 4 k 1 + 3 4 k 2 ) . 
{\displaystyle {\begin{aligned}k_{1}&=f(t_{n},\ y_{n}),\\k_{2}&=f(t_{n}+{\tfrac {2}{3}}h,\ y_{n}+{\tfrac {2}{3}}hk_{1}),\\y_{n+1}&=y_{n}+h\left({\tfrac {1}{4}}k_{1}+{\tfrac {3}{4}}k_{2}\right).\end{aligned}}} This method is used to solve the initial-value problem d y d t = tan ⁡ ( y ) + 1 , y 0 = 1 , t ∈ [ 1 , 1.1 ] {\displaystyle {\frac {dy}{dt}}=\tan(y)+1,\quad y_{0}=1,\ t\in [1,1.1]} with step size h = 0.025, so the method needs to take four steps. == Adaptive Runge–Kutta methods == Adaptive methods are designed to produce an estimate of the local truncation error of a single Runge–Kutta step. This is done by having two methods, one with order p {\displaystyle p} and one with order p − 1 {\displaystyle p-1} . These methods are interwoven, i.e., they have common intermediate steps. Thanks to this, estimating the error has little or negligible computational cost compared to a step with the higher-order method. During the integration, the step size is adapted such that the estimated error stays below a user-defined threshold: if the error is too high, a step is repeated with a lower step size; if the error is much smaller, the step size is increased to save time. This results in an (almost) optimal step size, which saves computation time. Moreover, the user does not have to spend time on finding an appropriate step size. The lower-order step is given by y n + 1 ∗ = y n + h ∑ i = 1 s b i ∗ k i , {\displaystyle y_{n+1}^{*}=y_{n}+h\sum _{i=1}^{s}b_{i}^{*}k_{i},} where k i {\displaystyle k_{i}} are the same as for the higher-order method. Then the error is e n + 1 = y n + 1 − y n + 1 ∗ = h ∑ i = 1 s ( b i − b i ∗ ) k i , {\displaystyle e_{n+1}=y_{n+1}-y_{n+1}^{*}=h\sum _{i=1}^{s}(b_{i}-b_{i}^{*})k_{i},} which is O ( h p ) {\displaystyle O(h^{p})} . 
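One concrete embedded pair shares its stages between Heun's method (order 2) and the Euler method (order 1). The sketch below is our own illustration; the step-size controller with safety factor 0.9 is a common textbook choice, not prescribed by the article:

```python
def heun_euler_step(f, t, y, h, tol):
    """One adaptive step of the embedded Heun(order 2)/Euler(order 1) pair.

    Returns (t_new, y_new, h_next); the step is retried with half the step
    size while the embedded error estimate exceeds tol.
    """
    while True:
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_high = y + h * (k1 + k2) / 2  # Heun's method, order 2
        y_low = y + h * k1              # Euler method, order 1
        err = abs(y_high - y_low)       # estimate of the local error
        if err <= tol or h < 1e-12:
            # common step-size controller with safety factor 0.9
            h_next = 0.9 * h * (tol / err) ** 0.5 if err > 0 else 2 * h
            return t + h, y_high, h_next
        h *= 0.5  # error too large: reject the step and retry

t1, y1, h_next = heun_euler_step(lambda t, y: -y, 0.0, 1.0, 0.1, 1e-2)
```

For y′ = −y the first step is accepted and, since the error estimate is well below the tolerance, the proposed next step size grows.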
The Butcher tableau for this kind of method is extended to give the values of b i ∗ {\displaystyle b_{i}^{*}} : The Runge–Kutta–Fehlberg method has two methods of orders 5 and 4. Its extended Butcher tableau is: However, the simplest adaptive Runge–Kutta method involves combining Heun's method, which is order 2, with the Euler method, which is order 1. Its extended Butcher tableau is: Other adaptive Runge–Kutta methods are the Bogacki–Shampine method (orders 3 and 2), the Cash–Karp method and the Dormand–Prince method (both with orders 5 and 4). == Nonconfluent Runge–Kutta methods == A Runge–Kutta method is said to be nonconfluent if all the c i , i = 1 , 2 , … , s {\displaystyle c_{i},\,i=1,2,\ldots ,s} are distinct. == Runge–Kutta–Nyström methods == Runge–Kutta–Nyström methods are specialized Runge–Kutta methods that are optimized for second-order differential equations. A general Runge–Kutta–Nyström method for a second-order ODE system y ¨ i = f i ( y 1 , y 2 , … , y n ) {\displaystyle {\ddot {y}}_{i}=f_{i}(y_{1},y_{2},\ldots ,y_{n})} with s {\displaystyle s} stages takes the form { g i = y m + c i h y ˙ m + h 2 ∑ j = 1 s a i j f ( g j ) , i = 1 , 2 , … , s y m + 1 = y m + h y ˙ m + h 2 ∑ j = 1 s b ¯ j f ( g j ) y ˙ m + 1 = y ˙ m + h ∑ j = 1 s b j f ( g j ) {\displaystyle {\begin{cases}g_{i}=y_{m}+c_{i}h{\dot {y}}_{m}+h^{2}\sum _{j=1}^{s}a_{ij}f(g_{j}),&i=1,2,\ldots ,s\\y_{m+1}=y_{m}+h{\dot {y}}_{m}+h^{2}\sum _{j=1}^{s}{\bar {b}}_{j}f(g_{j})\\{\dot {y}}_{m+1}={\dot {y}}_{m}+h\sum _{j=1}^{s}b_{j}f(g_{j})\end{cases}}} which forms a Butcher table with the form c 1 a 11 a 12 … a 1 s c 2 a 21 a 22 … a 2 s ⋮ ⋮ ⋮ ⋱ ⋮ c s a s 1 a s 2 … a s s b ¯ 1 b ¯ 2 … b ¯ s b 1 b 2 … b s = c A b ¯ ⊤ b ⊤ {\displaystyle {\begin{array}{c|cccc}c_{1}&a_{11}&a_{12}&\dots &a_{1s}\\c_{2}&a_{21}&a_{22}&\dots &a_{2s}\\\vdots &\vdots &\vdots &\ddots &\vdots \\c_{s}&a_{s1}&a_{s2}&\dots &a_{ss}\\\hline &{\bar {b}}_{1}&{\bar {b}}_{2}&\dots &{\bar {b}}_{s}\\&b_{1}&b_{2}&\dots 
&b_{s}\end{array}}={\begin{array}{c|c}\mathbf {c} &\mathbf {A} \\\hline &\mathbf {\bar {b}} ^{\top }\\&\mathbf {b} ^{\top }\end{array}}} Two fourth-order explicit RKN methods are given by the following Butcher tables: c i a i j 3 + 3 6 0 0 0 3 − 3 6 2 − 3 12 0 0 3 + 3 6 0 3 6 0 b i ¯ 5 − 3 3 24 3 + 3 12 1 + 3 24 b i 3 − 2 3 12 1 2 3 + 2 3 12 {\displaystyle {\begin{array}{c|ccc}c_{i}&&a_{ij}&\\{\frac {3+{\sqrt {3}}}{6}}&0&0&0\\{\frac {3-{\sqrt {3}}}{6}}&{\frac {2-{\sqrt {3}}}{12}}&0&0\\{\frac {3+{\sqrt {3}}}{6}}&0&{\frac {\sqrt {3}}{6}}&0\\\hline {\overline {b_{i}}}&{\frac {5-3{\sqrt {3}}}{24}}&{\frac {3+{\sqrt {3}}}{12}}&{\frac {1+{\sqrt {3}}}{24}}\\\hline b_{i}&{\frac {3-2{\sqrt {3}}}{12}}&{\frac {1}{2}}&{\frac {3+2{\sqrt {3}}}{12}}\end{array}}} c i a i j 3 − 3 6 0 0 0 3 + 3 6 2 + 3 12 0 0 3 − 3 6 0 − 3 6 0 b i ¯ 5 + 3 3 24 3 − 3 12 1 − 3 24 b i 3 + 2 3 12 1 2 3 − 2 3 12 {\displaystyle {\begin{array}{c|ccc}c_{i}&&a_{ij}&\\{\frac {3-{\sqrt {3}}}{6}}&0&0&0\\{\frac {3+{\sqrt {3}}}{6}}&{\frac {2+{\sqrt {3}}}{12}}&0&0\\{\frac {3-{\sqrt {3}}}{6}}&0&-{\frac {\sqrt {3}}{6}}&0\\\hline {\overline {b_{i}}}&{\frac {5+3{\sqrt {3}}}{24}}&{\frac {3-{\sqrt {3}}}{12}}&{\frac {1-{\sqrt {3}}}{24}}\\\hline b_{i}&{\frac {3+2{\sqrt {3}}}{12}}&{\frac {1}{2}}&{\frac {3-2{\sqrt {3}}}{12}}\end{array}}} These two schemes also have the symplectic-preserving properties when the original equation is derived from a conservative classical mechanical system, i.e. when f i ( x 1 , … , x n ) = ∂ V ∂ x i ( x 1 , … , x n ) {\displaystyle f_{i}(x_{1},\ldots ,x_{n})={\frac {\partial V}{\partial x_{i}}}(x_{1},\ldots ,x_{n})} for some scalar function V {\displaystyle V} . == Implicit Runge–Kutta methods == All Runge–Kutta methods mentioned up to now are explicit methods. Explicit Runge–Kutta methods are generally unsuitable for the solution of stiff equations because their region of absolute stability is small; in particular, it is bounded. 
This issue is especially important in the solution of partial differential equations. The instability of explicit Runge–Kutta methods motivates the development of implicit methods. An implicit Runge–Kutta method has the form y n + 1 = y n + h ∑ i = 1 s b i k i , {\displaystyle y_{n+1}=y_{n}+h\sum _{i=1}^{s}b_{i}k_{i},} where k i = f ( t n + c i h , y n + h ∑ j = 1 s a i j k j ) , i = 1 , … , s . {\displaystyle k_{i}=f\left(t_{n}+c_{i}h,\ y_{n}+h\sum _{j=1}^{s}a_{ij}k_{j}\right),\quad i=1,\ldots ,s.} The difference with an explicit method is that in an explicit method, the sum over j only goes up to i − 1. This also shows up in the Butcher tableau: the coefficient matrix a i j {\displaystyle a_{ij}} of an explicit method is lower triangular. In an implicit method, the sum over j goes up to s and the coefficient matrix is not triangular, yielding a Butcher tableau of the form c 1 a 11 a 12 … a 1 s c 2 a 21 a 22 … a 2 s ⋮ ⋮ ⋮ ⋱ ⋮ c s a s 1 a s 2 … a s s b 1 b 2 … b s b 1 ∗ b 2 ∗ … b s ∗ = c A b T {\displaystyle {\begin{array}{c|cccc}c_{1}&a_{11}&a_{12}&\dots &a_{1s}\\c_{2}&a_{21}&a_{22}&\dots &a_{2s}\\\vdots &\vdots &\vdots &\ddots &\vdots \\c_{s}&a_{s1}&a_{s2}&\dots &a_{ss}\\\hline &b_{1}&b_{2}&\dots &b_{s}\\&b_{1}^{*}&b_{2}^{*}&\dots &b_{s}^{*}\\\end{array}}={\begin{array}{c|c}\mathbf {c} &A\\\hline &\mathbf {b^{T}} \\\end{array}}} See Adaptive Runge-Kutta methods above for the explanation of the b ∗ {\displaystyle b^{*}} row. The consequence of this difference is that at every step, a system of algebraic equations has to be solved. This increases the computational cost considerably. If a method with s stages is used to solve a differential equation with m components, then the system of algebraic equations has ms components. 
This can be contrasted with implicit linear multistep methods (the other big family of methods for ODEs): an implicit s-step linear multistep method needs to solve a system of algebraic equations with only m components, so the size of the system does not increase as the number of steps increases. === Examples === The simplest example of an implicit Runge–Kutta method is the backward Euler method: y n + 1 = y n + h f ( t n + h , y n + 1 ) . {\displaystyle y_{n+1}=y_{n}+hf(t_{n}+h,\ y_{n+1}).\,} The Butcher tableau for this is simply: 1 1 1 {\displaystyle {\begin{array}{c|c}1&1\\\hline &1\\\end{array}}} This Butcher tableau corresponds to the formulae k 1 = f ( t n + h , y n + h k 1 ) and y n + 1 = y n + h k 1 , {\displaystyle k_{1}=f(t_{n}+h,\ y_{n}+hk_{1})\quad {\text{and}}\quad y_{n+1}=y_{n}+hk_{1},} which can be re-arranged to get the formula for the backward Euler method listed above. Another example for an implicit Runge–Kutta method is the trapezoidal rule. Its Butcher tableau is: 0 0 0 1 1 2 1 2 1 2 1 2 1 0 {\displaystyle {\begin{array}{c|cc}0&0&0\\1&{\frac {1}{2}}&{\frac {1}{2}}\\\hline &{\frac {1}{2}}&{\frac {1}{2}}\\&1&0\\\end{array}}} The trapezoidal rule is a collocation method (as discussed in that article). All collocation methods are implicit Runge–Kutta methods, but not all implicit Runge–Kutta methods are collocation methods. The Gauss–Legendre methods form a family of collocation methods based on Gauss quadrature. A Gauss–Legendre method with s stages has order 2s (thus, methods with arbitrarily high order can be constructed). 
The method with two stages (and thus order four) has Butcher tableau: 1 2 − 1 6 3 1 4 1 4 − 1 6 3 1 2 + 1 6 3 1 4 + 1 6 3 1 4 1 2 1 2 1 2 + 1 2 3 1 2 − 1 2 3 {\displaystyle {\begin{array}{c|cc}{\frac {1}{2}}-{\frac {1}{6}}{\sqrt {3}}&{\frac {1}{4}}&{\frac {1}{4}}-{\frac {1}{6}}{\sqrt {3}}\\{\frac {1}{2}}+{\frac {1}{6}}{\sqrt {3}}&{\frac {1}{4}}+{\frac {1}{6}}{\sqrt {3}}&{\frac {1}{4}}\\\hline &{\frac {1}{2}}&{\frac {1}{2}}\\&{\frac {1}{2}}+{\frac {1}{2}}{\sqrt {3}}&{\frac {1}{2}}-{\frac {1}{2}}{\sqrt {3}}\end{array}}} === Stability === The advantage of implicit Runge–Kutta methods over explicit ones is their greater stability, especially when applied to stiff equations. Consider the linear test equation y ′ = λ y {\displaystyle y'=\lambda y} . A Runge–Kutta method applied to this equation reduces to the iteration y n + 1 = r ( h λ ) y n {\displaystyle y_{n+1}=r(h\lambda )\,y_{n}} , with r given by r ( z ) = 1 + z b T ( I − z A ) − 1 e = det ( I − z A + z e b T ) det ( I − z A ) , {\displaystyle r(z)=1+zb^{T}(I-zA)^{-1}e={\frac {\det(I-zA+zeb^{T})}{\det(I-zA)}},} where e stands for the vector of ones. The function r is called the stability function. It follows from the formula that r is the quotient of two polynomials of degree s if the method has s stages. Explicit methods have a strictly lower triangular matrix A, which implies that det(I − zA) = 1 and that the stability function is a polynomial. The numerical solution to the linear test equation decays to zero if | r(z) | < 1 with z = hλ. The set of such z is called the domain of absolute stability. In particular, the method is said to be A-stable if all z with Re(z) < 0 are in the domain of absolute stability. The stability function of an explicit Runge–Kutta method is a polynomial, so explicit Runge–Kutta methods can never be A-stable. 
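Because one step of a Runge–Kutta method applied to the linear test equation y′ = λy with y0 = 1 and h = 1 yields exactly r(λ), the stability function can be evaluated without forming the determinant formula. The sketch below (our own illustration) does this for RK4 and shows why a polynomial stability function precludes A-stability:

```python
def rk4_stability(z):
    """Stability function r(z) of classic RK4: one step on y' = z*y with h = 1, y0 = 1."""
    k1 = z * 1.0
    k2 = z * (1.0 + k1 / 2)
    k3 = z * (1.0 + k2 / 2)
    k4 = z * (1.0 + k3)
    return 1.0 + (k1 + 2 * k2 + 2 * k3 + k4) / 6

# For RK4, r(z) = 1 + z + z^2/2 + z^3/6 + z^4/24, a polynomial, so |r(z)|
# grows without bound as Re(z) -> -infinity: the method cannot be A-stable.
```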
If the method has order p, then the stability function satisfies r ( z ) = e z + O ( z p + 1 ) {\displaystyle r(z)={\textrm {e}}^{z}+O(z^{p+1})} as z → 0 {\displaystyle z\to 0} . Thus, it is of interest to study quotients of polynomials of given degrees that approximate the exponential function the best. These are known as Padé approximants. A Padé approximant with numerator of degree m and denominator of degree n is A-stable if and only if m ≤ n ≤ m + 2. The Gauss–Legendre method with s stages has order 2s, so its stability function is the Padé approximant with m = n = s. It follows that the method is A-stable. This shows that A-stable Runge–Kutta can have arbitrarily high order. In contrast, the order of A-stable linear multistep methods cannot exceed two. == B-stability == The A-stability concept for the solution of differential equations is related to the linear autonomous equation y ′ = λ y {\displaystyle y'=\lambda y} . Dahlquist (1963) proposed the investigation of stability of numerical schemes when applied to nonlinear systems that satisfy a monotonicity condition. The corresponding concepts were defined as G-stability for multistep methods (and the related one-leg methods) and B-stability (Butcher, 1975) for Runge–Kutta methods. A Runge–Kutta method applied to the non-linear system y ′ = f ( y ) {\displaystyle y'=f(y)} , which verifies ⟨ f ( y ) − f ( z ) , y − z ⟩ ≤ 0 {\displaystyle \langle f(y)-f(z),\ y-z\rangle \leq 0} , is called B-stable, if this condition implies ‖ y n + 1 − z n + 1 ‖ ≤ ‖ y n − z n ‖ {\displaystyle \|y_{n+1}-z_{n+1}\|\leq \|y_{n}-z_{n}\|} for two numerical solutions. Let B {\displaystyle B} , M {\displaystyle M} and Q {\displaystyle Q} be three s × s {\displaystyle s\times s} matrices defined by B = diag ⁡ ( b 1 , b 2 , … , b s ) , M = B A + A T B − b b T , Q = B A − 1 + A − T B − A − T b b T A − 1 . 
{\displaystyle {\begin{aligned}B&=\operatorname {diag} (b_{1},b_{2},\ldots ,b_{s}),\\[4pt]M&=BA+A^{T}B-bb^{T},\\[4pt]Q&=BA^{-1}+A^{-T}B-A^{-T}bb^{T}A^{-1}.\end{aligned}}} A Runge–Kutta method is said to be algebraically stable if the matrices B {\displaystyle B} and M {\displaystyle M} are both non-negative definite. A sufficient condition for B-stability is: B {\displaystyle B} and Q {\displaystyle Q} are non-negative definite. == Derivation of the Runge–Kutta fourth-order method == In general, an s {\displaystyle s} -stage Runge–Kutta method of order s {\displaystyle s} can be written as: y t + h = y t + h ⋅ ∑ i = 1 s a i k i + O ( h s + 1 ) , {\displaystyle y_{t+h}=y_{t}+h\cdot \sum _{i=1}^{s}a_{i}k_{i}+{\mathcal {O}}(h^{s+1}),} where: k i = ∑ j = 1 s β i j f ( k j , t n + α i h ) {\displaystyle k_{i}=\sum _{j=1}^{s}\beta _{ij}f(k_{j},\ t_{n}+\alpha _{i}h)} are increments obtained by evaluating the derivative of y t {\displaystyle y_{t}} at the i {\displaystyle i} -th stage. We develop the derivation for the Runge–Kutta fourth-order method using the general formula with s = 4 {\displaystyle s=4} evaluated, as explained above, at the starting point, the midpoint and the end point of any interval ( t , t + h ) {\displaystyle (t,\ t+h)} ; thus, we choose: α i β i j α 1 = 0 β 21 = 1 2 α 2 = 1 2 β 32 = 1 2 α 3 = 1 2 β 43 = 1 α 4 = 1 {\displaystyle {\begin{aligned}&\alpha _{i}&&\beta _{ij}\\\alpha _{1}&=0&\beta _{21}&={\frac {1}{2}}\\\alpha _{2}&={\frac {1}{2}}&\beta _{32}&={\frac {1}{2}}\\\alpha _{3}&={\frac {1}{2}}&\beta _{43}&=1\\\alpha _{4}&=1&&\\\end{aligned}}} and β i j = 0 {\displaystyle \beta _{ij}=0} otherwise.
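The algebraic-stability condition above can be checked numerically for the two-stage Gauss–Legendre method whose tableau was given earlier. A sketch assuming NumPy; for Gauss methods the matrix M turns out to vanish identically, which is in particular non-negative definite:

```python
import numpy as np

s3 = np.sqrt(3.0)
# Two-stage Gauss–Legendre tableau (order four)
A = np.array([[1/4,        1/4 - s3/6],
              [1/4 + s3/6, 1/4       ]])
b = np.array([1/2, 1/2])

B = np.diag(b)
M = B @ A + A.T @ B - np.outer(b, b)

# b_i > 0 and M non-negative definite => algebraically stable (hence B-stable)
print(np.all(b > 0), np.linalg.eigvalsh(M).min() >= -1e-12)  # True True
```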
We begin by defining the following quantities: y t + h 1 = y t + h f ( y t , t ) y t + h 2 = y t + h f ( y t + h / 2 1 , t + h 2 ) y t + h 3 = y t + h f ( y t + h / 2 2 , t + h 2 ) {\displaystyle {\begin{aligned}y_{t+h}^{1}&=y_{t}+hf\left(y_{t},\ t\right)\\y_{t+h}^{2}&=y_{t}+hf\left(y_{t+h/2}^{1},\ t+{\frac {h}{2}}\right)\\y_{t+h}^{3}&=y_{t}+hf\left(y_{t+h/2}^{2},\ t+{\frac {h}{2}}\right)\end{aligned}}} where y t + h / 2 1 = y t + y t + h 1 2 {\displaystyle y_{t+h/2}^{1}={\dfrac {y_{t}+y_{t+h}^{1}}{2}}} and y t + h / 2 2 = y t + y t + h 2 2 . {\displaystyle y_{t+h/2}^{2}={\dfrac {y_{t}+y_{t+h}^{2}}{2}}.} If we define: k 1 = f ( y t , t ) k 2 = f ( y t + h / 2 1 , t + h 2 ) = f ( y t + h 2 k 1 , t + h 2 ) k 3 = f ( y t + h / 2 2 , t + h 2 ) = f ( y t + h 2 k 2 , t + h 2 ) k 4 = f ( y t + h 3 , t + h ) = f ( y t + h k 3 , t + h ) {\displaystyle {\begin{aligned}k_{1}&=f(y_{t},\ t)\\k_{2}&=f\left(y_{t+h/2}^{1},\ t+{\frac {h}{2}}\right)=f\left(y_{t}+{\frac {h}{2}}k_{1},\ t+{\frac {h}{2}}\right)\\k_{3}&=f\left(y_{t+h/2}^{2},\ t+{\frac {h}{2}}\right)=f\left(y_{t}+{\frac {h}{2}}k_{2},\ t+{\frac {h}{2}}\right)\\k_{4}&=f\left(y_{t+h}^{3},\ t+h\right)=f\left(y_{t}+hk_{3},\ t+h\right)\end{aligned}}} and for the previous relations we can show that the following equalities hold up to O ( h 2 ) {\displaystyle {\mathcal {O}}(h^{2})} : k 2 = f ( y t + h / 2 1 , t + h 2 ) = f ( y t + h 2 k 1 , t + h 2 ) = f ( y t , t ) + h 2 d d t f ( y t , t ) k 3 = f ( y t + h / 2 2 , t + h 2 ) = f ( y t + h 2 f ( y t + h 2 k 1 , t + h 2 ) , t + h 2 ) = f ( y t , t ) + h 2 d d t [ f ( y t , t ) + h 2 d d t f ( y t , t ) ] k 4 = f ( y t + h 3 , t + h ) = f ( y t + h f ( y t + h 2 k 2 , t + h 2 ) , t + h ) = f ( y t + h f ( y t + h 2 f ( y t + h 2 f ( y t , t ) , t + h 2 ) , t + h 2 ) , t + h ) = f ( y t , t ) + h d d t [ f ( y t , t ) + h 2 d d t [ f ( y t , t ) + h 2 d d t f ( y t , t ) ] ] {\displaystyle {\begin{aligned}k_{2}&=f\left(y_{t+h/2}^{1},\ t+{\frac {h}{2}}\right)=f\left(y_{t}+{\frac 
{h}{2}}k_{1},\ t+{\frac {h}{2}}\right)\\&=f\left(y_{t},\ t\right)+{\frac {h}{2}}{\frac {d}{dt}}f\left(y_{t},\ t\right)\\k_{3}&=f\left(y_{t+h/2}^{2},\ t+{\frac {h}{2}}\right)=f\left(y_{t}+{\frac {h}{2}}f\left(y_{t}+{\frac {h}{2}}k_{1},\ t+{\frac {h}{2}}\right),\ t+{\frac {h}{2}}\right)\\&=f\left(y_{t},\ t\right)+{\frac {h}{2}}{\frac {d}{dt}}\left[f\left(y_{t},\ t\right)+{\frac {h}{2}}{\frac {d}{dt}}f\left(y_{t},\ t\right)\right]\\k_{4}&=f\left(y_{t+h}^{3},\ t+h\right)=f\left(y_{t}+hf\left(y_{t}+{\frac {h}{2}}k_{2},\ t+{\frac {h}{2}}\right),\ t+h\right)\\&=f\left(y_{t}+hf\left(y_{t}+{\frac {h}{2}}f\left(y_{t}+{\frac {h}{2}}f\left(y_{t},\ t\right),\ t+{\frac {h}{2}}\right),\ t+{\frac {h}{2}}\right),\ t+h\right)\\&=f\left(y_{t},\ t\right)+h{\frac {d}{dt}}\left[f\left(y_{t},\ t\right)+{\frac {h}{2}}{\frac {d}{dt}}\left[f\left(y_{t},\ t\right)+{\frac {h}{2}}{\frac {d}{dt}}f\left(y_{t},\ t\right)\right]\right]\end{aligned}}} where: d d t f ( y t , t ) = ∂ ∂ y f ( y t , t ) y ˙ t + ∂ ∂ t f ( y t , t ) = f y ( y t , t ) y ˙ t + f t ( y t , t ) := y ¨ t {\displaystyle {\frac {d}{dt}}f(y_{t},\ t)={\frac {\partial }{\partial y}}f(y_{t},\ t){\dot {y}}_{t}+{\frac {\partial }{\partial t}}f(y_{t},\ t)=f_{y}(y_{t},\ t){\dot {y}}_{t}+f_{t}(y_{t},\ t):={\ddot {y}}_{t}} is the total derivative of f {\displaystyle f} with respect to time. 
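The four increments k₁, …, k₄ defined above translate directly into code, using the weights 1/6, 1/3, 1/3, 1/6 that this derivation produces. A sketch of one step plus an empirical order check on y′ = y, y(0) = 1 (exact solution e at t = 1); the helper names are illustrative:

```python
import math

def rk4_step(f, y, t, h):
    # The four stage derivatives of the classical fourth-order method
    k1 = f(y, t)
    k2 = f(y + h/2 * k1, t + h/2)
    k3 = f(y + h/2 * k2, t + h/2)
    k4 = f(y + h * k3, t + h)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def integrate(f, y0, t0, t1, n):
    y, t, h = y0, t0, (t1 - t0) / n
    for _ in range(n):
        y = rk4_step(f, y, t, h)
        t += h
    return y

f = lambda y, t: y                       # y' = y, exact solution e^t
err1 = abs(integrate(f, 1.0, 0.0, 1.0, 10) - math.e)
err2 = abs(integrate(f, 1.0, 0.0, 1.0, 20) - math.e)
print(math.log2(err1 / err2))  # ≈ 4: halving h divides the error by ~16
```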
If we now express the general formula using what we just derived we obtain: y t + h = y t + h { a ⋅ f ( y t , t ) + b ⋅ [ f ( y t , t ) + h 2 d d t f ( y t , t ) ] + + c ⋅ [ f ( y t , t ) + h 2 d d t [ f ( y t , t ) + h 2 d d t f ( y t , t ) ] ] + + d ⋅ [ f ( y t , t ) + h d d t [ f ( y t , t ) + h 2 d d t [ f ( y t , t ) + h 2 d d t f ( y t , t ) ] ] ] } + O ( h 5 ) = y t + a ⋅ h f t + b ⋅ h f t + b ⋅ h 2 2 d f t d t + c ⋅ h f t + c ⋅ h 2 2 d f t d t + + c ⋅ h 3 4 d 2 f t d t 2 + d ⋅ h f t + d ⋅ h 2 d f t d t + d ⋅ h 3 2 d 2 f t d t 2 + d ⋅ h 4 4 d 3 f t d t 3 + O ( h 5 ) {\displaystyle {\begin{aligned}y_{t+h}={}&y_{t}+h\left\lbrace a\cdot f(y_{t},\ t)+b\cdot \left[f(y_{t},\ t)+{\frac {h}{2}}{\frac {d}{dt}}f(y_{t},\ t)\right]\right.+\\&{}+c\cdot \left[f(y_{t},\ t)+{\frac {h}{2}}{\frac {d}{dt}}\left[f\left(y_{t},\ t\right)+{\frac {h}{2}}{\frac {d}{dt}}f(y_{t},\ t)\right]\right]+\\&{}+d\cdot \left[f(y_{t},\ t)+h{\frac {d}{dt}}\left[f(y_{t},\ t)+{\frac {h}{2}}{\frac {d}{dt}}\left[f(y_{t},\ t)+\left.{\frac {h}{2}}{\frac {d}{dt}}f(y_{t},\ t)\right]\right]\right]\right\rbrace +{\mathcal {O}}(h^{5})\\={}&y_{t}+a\cdot hf_{t}+b\cdot hf_{t}+b\cdot {\frac {h^{2}}{2}}{\frac {df_{t}}{dt}}+c\cdot hf_{t}+c\cdot {\frac {h^{2}}{2}}{\frac {df_{t}}{dt}}+\\&{}+c\cdot {\frac {h^{3}}{4}}{\frac {d^{2}f_{t}}{dt^{2}}}+d\cdot hf_{t}+d\cdot h^{2}{\frac {df_{t}}{dt}}+d\cdot {\frac {h^{3}}{2}}{\frac {d^{2}f_{t}}{dt^{2}}}+d\cdot {\frac {h^{4}}{4}}{\frac {d^{3}f_{t}}{dt^{3}}}+{\mathcal {O}}(h^{5})\end{aligned}}} and comparing this with the Taylor series of y t + h {\displaystyle y_{t+h}} around t {\displaystyle t} : y t + h = y t + h y ˙ t + h 2 2 y ¨ t + h 3 6 y t ( 3 ) + h 4 24 y t ( 4 ) + O ( h 5 ) = = y t + h f ( y t , t ) + h 2 2 d d t f ( y t , t ) + h 3 6 d 2 d t 2 f ( y t , t ) + h 4 24 d 3 d t 3 f ( y t , t ) {\displaystyle {\begin{aligned}y_{t+h}&=y_{t}+h{\dot {y}}_{t}+{\frac {h^{2}}{2}}{\ddot {y}}_{t}+{\frac {h^{3}}{6}}y_{t}^{(3)}+{\frac {h^{4}}{24}}y_{t}^{(4)}+{\mathcal 
{O}}(h^{5})=\\&=y_{t}+hf(y_{t},\ t)+{\frac {h^{2}}{2}}{\frac {d}{dt}}f(y_{t},\ t)+{\frac {h^{3}}{6}}{\frac {d^{2}}{dt^{2}}}f(y_{t},\ t)+{\frac {h^{4}}{24}}{\frac {d^{3}}{dt^{3}}}f(y_{t},\ t)\end{aligned}}} we obtain a system of constraints on the coefficients: { a + b + c + d = 1 1 2 b + 1 2 c + d = 1 2 1 4 c + 1 2 d = 1 6 1 4 d = 1 24 {\displaystyle {\begin{cases}&a+b+c+d=1\\[6pt]&{\frac {1}{2}}b+{\frac {1}{2}}c+d={\frac {1}{2}}\\[6pt]&{\frac {1}{4}}c+{\frac {1}{2}}d={\frac {1}{6}}\\[6pt]&{\frac {1}{4}}d={\frac {1}{24}}\end{cases}}} which when solved gives a = 1 6 , b = 1 3 , c = 1 3 , d = 1 6 {\displaystyle a={\frac {1}{6}},b={\frac {1}{3}},c={\frac {1}{3}},d={\frac {1}{6}}} as stated above. == See also == Euler's method List of Runge–Kutta methods Numerical methods for ordinary differential equations Runge–Kutta method (SDE) General linear methods Lie group integrator == Notes == == References == Runge, Carl David Tolmé (1895), "Über die numerische Auflösung von Differentialgleichungen", Mathematische Annalen, 46 (2), Springer: 167–178, doi:10.1007/BF01446807, S2CID 119924854. Kutta, Wilhelm (1901), "Beitrag zur näherungsweisen Integration totaler Differentialgleichungen", Zeitschrift für Mathematik und Physik, 46: 435–453. Ascher, Uri M.; Petzold, Linda R. (1998), Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, Philadelphia: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-412-8. Atkinson, Kendall A. (1989), An Introduction to Numerical Analysis (2nd ed.), New York: John Wiley & Sons, ISBN 978-0-471-50023-0. Butcher, John C. (May 1963), "Coefficients for the study of Runge-Kutta integration processes", Journal of the Australian Mathematical Society, 3 (2): 185–201, doi:10.1017/S1446788700027932. Butcher, John C. (May 1964), "On Runge-Kutta processes of high order", Journal of the Australian Mathematical Society, 4 (2): 179–194, doi:10.1017/S1446788700023387 Butcher, John C. 
(1975), "A stability property of implicit Runge-Kutta methods", BIT, 15 (4): 358–361, doi:10.1007/bf01931672, S2CID 120854166. Butcher, John C. (2000), "Numerical methods for ordinary differential equations in the 20th century", J. Comput. Appl. Math., 125 (1–2): 1–29, Bibcode:2000JCoAM.125....1B, doi:10.1016/S0377-0427(00)00455-6. Butcher, John C. (2008), Numerical Methods for Ordinary Differential Equations, New York: John Wiley & Sons, ISBN 978-0-470-72335-7. Cellier, F.; Kofman, E. (2006), Continuous System Simulation, Springer Verlag, ISBN 0-387-26102-8. Dahlquist, Germund (1963), "A special stability problem for linear multistep methods", BIT, 3: 27–43, doi:10.1007/BF01963532, hdl:10338.dmlcz/103497, ISSN 0006-3835, S2CID 120241743. Forsythe, George E.; Malcolm, Michael A.; Moler, Cleve B. (1977), Computer Methods for Mathematical Computations, Prentice-Hall (see Chapter 6). Hairer, Ernst; Nørsett, Syvert Paul; Wanner, Gerhard (1993), Solving ordinary differential equations I: Nonstiff problems, Berlin, New York: Springer-Verlag, ISBN 978-3-540-56670-0. Hairer, Ernst; Wanner, Gerhard (1996), Solving ordinary differential equations II: Stiff and differential-algebraic problems (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-60452-5. Iserles, Arieh (1996), A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, Bibcode:1996fcna.book.....I, ISBN 978-0-521-55655-2. Lambert, J.D (1991), Numerical Methods for Ordinary Differential Systems. The Initial Value Problem, John Wiley & Sons, ISBN 0-471-92990-5 Kaw, Autar; Kalu, Egwu (2008), Numerical Methods with Applications (1st ed.), autarkaw.com. Press, William H.; Teukolsky, Saul A.; Vetterling, William T.; Flannery, Brian P. (2007), "Section 17.1 Runge-Kutta Method", Numerical Recipes: The Art of Scientific Computing (3rd ed.), Cambridge University Press, ISBN 978-0-521-88068-8. Also, Section 17.2. Adaptive Stepsize Control for Runge-Kutta. 
Stoer, Josef; Bulirsch, Roland (2002), Introduction to Numerical Analysis (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-95452-3. Süli, Endre; Mayers, David (2003), An Introduction to Numerical Analysis, Cambridge University Press, ISBN 0-521-00794-1. Tan, Delin; Chen, Zheng (2012), "On A General Formula of Fourth Order Runge-Kutta Method" (PDF), Journal of Mathematical Science & Mathematics Education, 7 (2): 1–10. Advanced Discrete Mathematics, IGNOU reference book (code MCS-033). John C. Butcher: "B-Series : Algebraic Analysis of Numerical Methods", Springer (SSCM, volume 55), ISBN 978-3030709556 (April, 2021). Butcher, J.C. (1985), "The non-existence of ten stage eighth order explicit Runge-Kutta methods", BIT Numerical Mathematics, 25 (3): 521–540, doi:10.1007/BF01935372. Butcher, J.C. (1965), "On the attainable order of Runge-Kutta methods", Mathematics of Computation, 19 (91): 408–417, doi:10.1090/S0025-5718-1965-0179943-X. Curtis, A.R. (1970), "An eighth order Runge-Kutta process with eleven function evaluations per step", Numerische Mathematik, 16 (3): 268–277, doi:10.1007/BF02219778. Cooper, G.J.; Verner, J.H. (1972), "Some Explicit Runge–Kutta Methods of High Order", SIAM Journal on Numerical Analysis, 9 (3): 389–405, Bibcode:1972SJNA....9..389C, doi:10.1137/0709037. Butcher, J.C. (1996), "A History of Runge-Kutta Methods", Applied Numerical Mathematics, 20 (3): 247–260, doi:10.1016/0168-9274(95)00108-5. == External links == "Runge-Kutta method", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Runge–Kutta 4th-Order Method Tracker Component Library Implementation in Matlab — Implements 32 embedded Runge Kutta algorithms in RungeKStep, 24 embedded Runge-Kutta Nyström algorithms in RungeKNystroemSStep and 4 general Runge-Kutta Nyström algorithms in RungeKNystroemGStep.
Wikipedia/Runge–Kutta_method
In the study of age-structured population growth, probably one of the most important equations is the Euler–Lotka equation. Based on the age demographic of females in the population and female births (since in many cases it is the females that are more limited in the ability to reproduce), this equation allows for an estimation of how a population is growing. The field of mathematical demography was largely developed by Alfred J. Lotka in the early 20th century, building on the earlier work of Leonhard Euler. The Euler–Lotka equation, derived and discussed below, is often attributed to either of its origins: Euler, who derived a special form in 1760, or Lotka, who derived a more general continuous version. The equation in discrete time is given by 1 = ∑ a = 1 ω λ − a ℓ ( a ) b ( a ) {\displaystyle 1=\sum _{a=1}^{\omega }\lambda ^{-a}\ell (a)b(a)} where λ {\displaystyle \lambda } is the discrete growth rate, ℓ(a) is the fraction of individuals surviving to age a and b(a) is the number of offspring born to an individual of age a during the time step. The sum is taken over the entire life span of the organism. == Derivations == === Lotka's continuous model === A.J. Lotka in 1911 developed a continuous model of population dynamics as follows. This model tracks only the females in the population. Let B(t)dt be the number of births during the time interval from t to t+dt. Also define the survival function ℓ(a), the fraction of individuals surviving to age a. Finally define b(a) to be the birth rate for mothers of age a. The product B(t-a)ℓ(a) therefore denotes the number density of individuals born at t-a and still alive at t, while B(t-a)ℓ(a)b(a) denotes the number of births in this cohort, which suggests the following Volterra integral equation for B: B ( t ) = ∫ 0 t B ( t − a ) ℓ ( a ) b ( a ) d a . {\displaystyle B(t)=\int _{0}^{t}B(t-a)\ell (a)b(a)\,da.} We integrate over all possible ages to find the total rate of births at time t.
We are in effect finding the contributions of all individuals of age up to t. We need not consider individuals born before the start of this analysis since we can just set the base point low enough to incorporate all of them. Let us then guess an exponential solution of the form B(t) = Qe^{rt}. Plugging this into the integral equation gives: Q e r t = ∫ 0 t Q e r ( t − a ) ℓ ( a ) b ( a ) d a {\displaystyle Qe^{rt}=\int _{0}^{t}Qe^{r(t-a)}\ell (a)b(a)\,da} or 1 = ∫ 0 t e − r a ℓ ( a ) b ( a ) d a . {\displaystyle 1=\int _{0}^{t}e^{-ra}\ell (a)b(a)\,da.} This can be rewritten in the discrete case by turning the integral into a sum, producing 1 = ∑ a = α β e − r a ℓ ( a ) b ( a ) {\displaystyle 1=\sum _{a=\alpha }^{\beta }e^{-ra}\ell (a)b(a)} where α {\displaystyle \alpha } and β {\displaystyle \beta } are the boundary ages for reproduction. Defining the discrete growth rate λ = e^{r}, we obtain the discrete time equation derived above: 1 = ∑ a = 1 ω λ − a ℓ ( a ) b ( a ) {\displaystyle 1=\sum _{a=1}^{\omega }\lambda ^{-a}\ell (a)b(a)} where ω {\displaystyle \omega } is the maximum age; the limits of the sum can be extended since b(a) vanishes beyond the reproductive boundaries. === From the Leslie matrix === Let us write the Leslie matrix as: [ f 0 f 1 f 2 f 3 … f ω − 1 s 0 0 0 0 … 0 0 s 1 0 0 … 0 0 0 s 2 0 … 0 0 0 0 ⋱ … 0 0 0 0 … s ω − 2 0 ] {\displaystyle {\begin{bmatrix}f_{0}&f_{1}&f_{2}&f_{3}&\ldots &f_{\omega -1}\\s_{0}&0&0&0&\ldots &0\\0&s_{1}&0&0&\ldots &0\\0&0&s_{2}&0&\ldots &0\\0&0&0&\ddots &\ldots &0\\0&0&0&\ldots &s_{\omega -2}&0\end{bmatrix}}} where s i {\displaystyle s_{i}} and f i {\displaystyle f_{i}} are survival to the next age class and per capita fecundity respectively.
Note that s i = ℓ i + 1 / ℓ i {\displaystyle s_{i}=\ell _{i+1}/\ell _{i}} where ℓ i is the probability of surviving to age i {\displaystyle i} , and f i = s i b i + 1 {\displaystyle f_{i}=s_{i}b_{i+1}} , the number of births at age i + 1 {\displaystyle i+1} weighted by the probability of surviving to age i + 1 {\displaystyle i+1} . Now if we have stable growth the growth of the system is an eigenvalue of the matrix since n i + 1 = L n i = λ n i {\displaystyle \mathbf {n_{i+1}} =\mathbf {Ln_{i}} =\lambda \mathbf {n_{i}} } . Therefore, we can use this relationship row by row to derive expressions for n i {\displaystyle n_{i}} in terms of the values in the matrix and λ {\displaystyle \lambda } . Introducing notation n i , t {\displaystyle n_{i,t}} the population in age class i {\displaystyle i} at time t {\displaystyle t} , we have n 1 , t + 1 = λ n 1 , t {\displaystyle n_{1,t+1}=\lambda n_{1,t}} . However also n 1 , t + 1 = s 0 n 0 , t {\displaystyle n_{1,t+1}=s_{0}n_{0,t}} . This implies that n 1 , t = s 0 λ n 0 , t . {\displaystyle n_{1,t}={\frac {s_{0}}{\lambda }}n_{0,t}.\,} By the same argument we find that n 2 , t = s 1 λ n 1 , t = s 0 s 1 λ 2 n 0 , t . {\displaystyle n_{2,t}={\frac {s_{1}}{\lambda }}n_{1,t}={\frac {s_{0}s_{1}}{\lambda ^{2}}}n_{0,t}.} Continuing inductively we conclude that generally n i , t = s 0 ⋯ s i − 1 λ i n 0 , t . {\displaystyle n_{i,t}={\frac {s_{0}\cdots s_{i-1}}{\lambda ^{i}}}n_{0,t}.} Considering the top row, we get n 0 , t + 1 = f 0 n 0 , t + ⋯ + f ω − 1 n ω − 1 , t = λ n 0 , t . {\displaystyle n_{0,t+1}=f_{0}n_{0,t}+\cdots +f_{\omega -1}n_{\omega -1,t}=\lambda n_{0,t}.} Now we may substitute our previous work for the n i , t {\displaystyle n_{i,t}} terms and obtain: λ n 0 , t = ( f 0 + f 1 s 0 λ + ⋯ + f ω − 1 s 0 ⋯ s ω − 2 λ ω − 1 ) n ( 0 , t ) . 
{\displaystyle \lambda n_{0,t}=\left(f_{0}+f_{1}{\frac {s_{0}}{\lambda }}+\cdots +f_{\omega -1}{\frac {s_{0}\cdots s_{\omega -2}}{\lambda ^{\omega -1}}}\right)n_{(0,t)}.} First substitute the definition of the per-capita fertility and divide through by the left hand side: 1 = s 0 b 1 λ + s 0 s 1 b 2 λ 2 + ⋯ + s 0 ⋯ s ω − 1 b ω λ ω . {\displaystyle 1={\frac {s_{0}b_{1}}{\lambda }}+{\frac {s_{0}s_{1}b_{2}}{\lambda ^{2}}}+\cdots +{\frac {s_{0}\cdots s_{\omega -1}b_{\omega }}{\lambda ^{\omega }}}.} Now we note the following simplification. Since s i = ℓ i + 1 / ℓ i {\displaystyle s_{i}=\ell _{i+1}/\ell _{i}} we note that s 0 … s i = ℓ 1 ℓ 0 ℓ 2 ℓ 1 ⋯ ℓ i + 1 ℓ i = ℓ i + 1 . {\displaystyle s_{0}\ldots s_{i}={\frac {\ell _{1}}{\ell _{0}}}{\frac {\ell _{2}}{\ell _{1}}}\cdots {\frac {\ell _{i+1}}{\ell _{i}}}=\ell _{i+1}.} This sum collapses to: ∑ i = 1 ω ℓ i b i λ i = 1 , {\displaystyle \sum _{i=1}^{\omega }{\frac {\ell _{i}b_{i}}{\lambda ^{i}}}=1,} which is the desired result. == Analysis of expression == From the above analysis we see that the Euler–Lotka equation is in fact the characteristic polynomial of the Leslie matrix. We can analyze its solutions to find information about the eigenvalues of the Leslie matrix (which has implications for the stability of populations). Considering the continuous expression f as a function of r, we can examine its roots. We notice that at negative infinity the function grows to positive infinity and at positive infinity the function approaches 0. The first derivative is clearly −af and the second derivative is a^{2}f. This function is then decreasing, concave up and takes on all positive values. It is also continuous by construction, so by the intermediate value theorem it takes the value 1 exactly once. Therefore, there is exactly one real solution, which is the dominant eigenvalue of the matrix, the equilibrium growth rate. This same derivation applies to the discrete case.
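Both routes to λ can be checked against each other numerically: the unique real root of the Euler–Lotka equation must coincide with the dominant eigenvalue of the corresponding Leslie matrix. A sketch with NumPy, using hypothetical survival and birth schedules (the numbers are illustrative, not from the text):

```python
import numpy as np

s = [0.5, 0.5]   # survival s_0, s_1 to the next age class
b = [1.0, 4.0]   # births b_1, b_2 at ages 1 and 2

# Leslie matrix with fecundities f_i = s_i * b_{i+1}
L = np.array([[s[0] * b[0], s[1] * b[1], 0.0],
              [s[0],        0.0,         0.0],
              [0.0,         s[1],        0.0]])
lam_leslie = max(np.linalg.eigvals(L).real)

# Euler–Lotka: sum_a ell_a b_a / lam^a = 1, with ell_1 = s_0, ell_2 = s_0 s_1
ell = [s[0], s[0] * s[1]]
g = lambda lam: ell[0] * b[0] / lam + ell[1] * b[1] / lam**2 - 1.0

# Bisection: g is monotone decreasing in lam, with g(0.5) > 0 > g(4.0)
lo, hi = 0.5, 4.0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)

print(lam_leslie, (lo + hi) / 2)  # both ≈ 1.2808
```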
== Relationship to replacement rate of populations == If we let λ = 1 the discrete formula becomes the replacement rate of the population. == Further reading == Coale, Ansley J. (1972). The Growth and Structure of Human Populations. Princeton: Princeton University Press. pp. 61–70. ISBN 0-691-09357-1. Hoppensteadt, Frank (1975). Mathematical Theories of Populations : Demographics, Genetics and Epidemics. Philadelphia: SIAM. pp. 1–5. ISBN 0-89871-017-0. Kot, M. (2001). "The Lotka integral equation". Elements of Mathematical Ecology. Cambridge: Cambridge University Press. pp. 353–64. ISBN 0-521-80213-X. Pollard, J. H. (1973). "The deterministic population models of T. Malthus, A. J. Lotka, and F. R. Sharpe and A. J. Lotka". Mathematical models for the growth of human populations. Cambridge University Press. pp. 22–36. ISBN 0-521-20111-X.
Wikipedia/Euler–Lotka_equation
In mathematics and computational science, Heun's method may refer to the improved or modified Euler's method (that is, the explicit trapezoidal rule), or a similar two-stage Runge–Kutta method. It is named after Karl Heun and is a numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. Both variants can be seen as extensions of the Euler method into two-stage second-order Runge–Kutta methods. The procedure for calculating the numerical solution to the initial value problem: y ′ ( t ) = f ( t , y ( t ) ) , y ( t 0 ) = y 0 , {\displaystyle y'(t)=f(t,y(t)),\qquad \qquad y(t_{0})=y_{0},} by way of Heun's method, is to first calculate the intermediate value y ~ i + 1 {\displaystyle {\tilde {y}}_{i+1}} and then the final approximation y i + 1 {\displaystyle y_{i+1}} at the next integration point. y ~ i + 1 = y i + h f ( t i , y i ) {\displaystyle {\tilde {y}}_{i+1}=y_{i}+hf(t_{i},y_{i})} y i + 1 = y i + h 2 [ f ( t i , y i ) + f ( t i + 1 , y ~ i + 1 ) ] , {\displaystyle y_{i+1}=y_{i}+{\frac {h}{2}}[f(t_{i},y_{i})+f(t_{i+1},{\tilde {y}}_{i+1})],} where h {\displaystyle h} is the step size and t i + 1 = t i + h {\displaystyle t_{i+1}=t_{i}+h} . == Description == Euler's method is used as the foundation for Heun's method. Euler's method uses the line tangent to the function at the beginning of the interval as an estimate of the slope of the function over the interval, assuming that if the step size is small, the error will be small. However, even when extremely small step sizes are used, over a large number of steps the error starts to accumulate and the estimate diverges from the actual functional value. Where the solution curve is concave up, its tangent line will underestimate the vertical coordinate of the next point and vice versa for a concave down solution. The ideal prediction line would hit the curve at its next predicted point. 
In reality, there is no way to know whether the solution is concave-up or concave-down, and hence whether the next predicted point will overestimate or underestimate its vertical value. The concavity of the curve cannot be guaranteed to remain consistent either, and the prediction may overestimate and underestimate at different points in the domain of the solution. Heun's method addresses this problem by considering the interval spanned by the tangent line segment as a whole. Taking a concave-up example, the left tangent prediction line underestimates the slope of the curve for the entire width of the interval from the current point to the next predicted point. If the tangent line at the right end point is considered (which can be estimated using Euler's method), it has the opposite problem. The points along the tangent line of the left end point have vertical coordinates which all underestimate those that lie on the solution curve, including the right end point of the interval under consideration. The solution is therefore to make the prediction slope greater by some amount. Heun's method considers the tangent lines to the solution curve at both ends of the interval, one of which overestimates and one of which underestimates the ideal vertical coordinates. A prediction line constructed from the right end point tangent's slope alone, approximated using Euler's method, and passed through the left end point of the interval, is evidently too steep to be used as an ideal prediction line and overestimates the ideal point. Therefore, the ideal point lies approximately halfway between the erroneous overestimation and underestimation, at the average of the two slopes. Euler's method is used to roughly estimate the coordinates of the next point in the solution, and with this knowledge, the original estimate is re-predicted or corrected.
Assuming that the quantity f ( x , y ) {\displaystyle \textstyle f(x,y)} on the right hand side of the equation can be thought of as the slope of the solution sought at any point ( x , y ) {\displaystyle \textstyle (x,y)} , this can be combined with the Euler estimate of the next point to give the slope of the tangent line at the right end-point. Next the average of both slopes is used to find the corrected coordinates of the right end interval. == Derivation == Slope left = f ( x i , y i ) {\displaystyle {\text{Slope}}_{\text{left}}=f(x_{i},y_{i})} Slope right = f ( x i + h , y i + h f ( x i , y i ) ) {\displaystyle {\text{Slope}}_{\text{right}}=f(x_{i}+h,y_{i}+hf(x_{i},y_{i}))} Slope ideal = 1 2 ( Slope left + Slope right ) {\displaystyle {\text{Slope}}_{\text{ideal}}={\frac {1}{2}}({\text{Slope}}_{\text{left}}+{\text{Slope}}_{\text{right}})} Using the principle that the slope of a line equates to the rise/run, the coordinates at the end of the interval can be found using the following formula: Slope ideal = Δ y h {\displaystyle {\text{Slope}}_{\text{ideal}}={\frac {\Delta y}{h}}} Δ y = h ( Slope ideal ) {\displaystyle \Delta y=h({\text{Slope}}_{\text{ideal}})} x i + 1 = x i + h {\displaystyle x_{i+1}=x_{i}+h} , y i + 1 = y i + Δ y {\displaystyle \textstyle y_{i+1}=y_{i}+\Delta y} y i + 1 = y i + h Slope ideal {\displaystyle y_{i+1}=y_{i}+h{\text{Slope}}_{\text{ideal}}} y i + 1 = y i + 1 2 h ( Slope left + Slope right ) {\displaystyle y_{i+1}=y_{i}+{\frac {1}{2}}h({\text{Slope}}_{\text{left}}+{\text{Slope}}_{\text{right}})} y i + 1 = y i + h 2 ( f ( x i , y i ) + f ( x i + h , y i + h f ( x i , y i ) ) ) {\displaystyle y_{i+1}=y_{i}+{\frac {h}{2}}(f(x_{i},y_{i})+f(x_{i}+h,y_{i}+hf(x_{i},y_{i})))} The accuracy of the Euler method improves only linearly as the step size is decreased, whereas Heun's method improves accuracy quadratically.
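The slope-averaging derivation above translates directly into a predictor-corrector implementation, and the quadratic improvement can be observed empirically. A sketch on y′ = y, y(0) = 1; the helper names are illustrative:

```python
import math

def heun_step(f, x, y, h):
    slope_left = f(x, y)
    y_tilde = y + h * slope_left            # predictor: Euler estimate
    slope_right = f(x + h, y_tilde)
    return y + h / 2 * (slope_left + slope_right)   # corrector: average slopes

def solve(f, y0, x0, x1, n):
    y, x, h = y0, x0, (x1 - x0) / n
    for _ in range(n):
        y = heun_step(f, x, y, h)
        x += h
    return y

f = lambda x, y: y                          # y' = y, exact solution e^x
err1 = abs(solve(f, 1.0, 0.0, 1.0, 50) - math.e)
err2 = abs(solve(f, 1.0, 0.0, 1.0, 100) - math.e)
print(math.log2(err1 / err2))  # ≈ 2: halving h quarters the error
```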
The scheme can be compared with the implicit trapezoidal method, but with f ( t i + 1 , y i + 1 ) {\displaystyle f(t_{i+1},y_{i+1})} replaced by f ( t i + 1 , y ~ i + 1 ) {\displaystyle f(t_{i+1},{\tilde {y}}_{i+1})} in order to make it explicit. y ~ i + 1 {\displaystyle {\tilde {y}}_{i+1}} is the result of one step of Euler's method on the same initial value problem. So, Heun's method is a predictor-corrector method with forward Euler's method as predictor and trapezoidal method as corrector. == Runge–Kutta method == The improved Euler's method is a two-stage Runge–Kutta method, and can be written using the Butcher tableau (after John C. Butcher): {\displaystyle {\begin{array}{c|cc}0&&\\1&1&\\\hline &{\frac {1}{2}}&{\frac {1}{2}}\end{array}}} The other method referred to as Heun's method (also known as Ralston's method) has the Butcher tableau: {\displaystyle {\begin{array}{c|cc}0&&\\{\frac {2}{3}}&{\frac {2}{3}}&\\\hline &{\frac {1}{4}}&{\frac {3}{4}}\end{array}}} This method minimizes the truncation error. == References ==
Wikipedia/Heun's_method
Euler calculus is a methodology from applied algebraic topology and integral geometry that integrates constructible functions and more recently definable functions by integrating with respect to the Euler characteristic as a finitely-additive measure. In the presence of a metric, it can be extended to continuous integrands via the Gauss–Bonnet theorem. It was introduced independently by Pierre Schapira and Oleg Viro in 1988, and is useful for enumeration problems in computational geometry and sensor networks. == See also == Topological data analysis == References == Van den Dries, Lou. Tame Topology and O-minimal Structures, Cambridge University Press, 1998. ISBN 978-0-521-59838-5 Arnold, V. I.; Goryunov, V. V.; Lyashko, O. V. Singularity Theory, Volume 1, Springer, 1998, p. 219. ISBN 978-3-540-63711-0 == External links == Ghrist, Robert. Euler Calculus video presentation, June 2009. published 30 July 2009.
Wikipedia/Euler_calculus
In numerical analysis and scientific computing, the trapezoidal rule is a numerical method to solve ordinary differential equations derived from the trapezoidal rule for computing integrals. The trapezoidal rule is an implicit second-order method, which can be considered as both a Runge–Kutta method and a linear multistep method. == Method == Suppose that we want to solve the differential equation y ′ = f ( t , y ) . {\displaystyle y'=f(t,y).} The trapezoidal rule is given by the formula y n + 1 = y n + 1 2 h ( f ( t n , y n ) + f ( t n + 1 , y n + 1 ) ) , {\displaystyle y_{n+1}=y_{n}+{\tfrac {1}{2}}h{\Big (}f(t_{n},y_{n})+f(t_{n+1},y_{n+1}){\Big )},} where h = t n + 1 − t n {\displaystyle h=t_{n+1}-t_{n}} is the step size. This is an implicit method: the value y n + 1 {\displaystyle y_{n+1}} appears on both sides of the equation, and to actually calculate it, we have to solve an equation which will usually be nonlinear. One possible method for solving this equation is Newton's method. We can use the Euler method to get a fairly good estimate for the solution, which can be used as the initial guess of Newton's method. In fact, using only the Euler guess, without performing any Newton iterations, is equivalent to performing Heun's method. == Motivation == Integrating the differential equation from t n {\displaystyle t_{n}} to t n + 1 {\displaystyle t_{n+1}} , we find that y ( t n + 1 ) − y ( t n ) = ∫ t n t n + 1 f ( t , y ( t ) ) d t . {\displaystyle y(t_{n+1})-y(t_{n})=\int _{t_{n}}^{t_{n+1}}f(t,y(t))\,\mathrm {d} t.} The trapezoidal rule states that the integral on the right-hand side can be approximated as ∫ t n t n + 1 f ( t , y ( t ) ) d t ≈ 1 2 h ( f ( t n , y ( t n ) ) + f ( t n + 1 , y ( t n + 1 ) ) ) .
{\displaystyle \int _{t_{n}}^{t_{n+1}}f(t,y(t))\,\mathrm {d} t\approx {\tfrac {1}{2}}h{\Big (}f(t_{n},y(t_{n}))+f(t_{n+1},y(t_{n+1})){\Big )}.} Now combine both formulas and use that y n ≈ y ( t n ) {\displaystyle y_{n}\approx y(t_{n})} and y n + 1 ≈ y ( t n + 1 ) {\displaystyle y_{n+1}\approx y(t_{n+1})} to get the trapezoidal rule for solving ordinary differential equations. == Error analysis == It follows from the error analysis of the trapezoidal rule for quadrature that the local truncation error τ n {\displaystyle \tau _{n}} of the trapezoidal rule for solving differential equations can be bounded as: | τ n | ≤ 1 12 h 3 max t | y ‴ ( t ) | . {\displaystyle |\tau _{n}|\leq {\tfrac {1}{12}}h^{3}\max _{t}|y'''(t)|.} Thus, the trapezoidal rule is a second-order method. This result can be used to show that the global error is O ( h 2 ) {\displaystyle O(h^{2})} as the step size h {\displaystyle h} tends to zero (see big O notation for the meaning of this). == Stability == The region of absolute stability for the trapezoidal rule is { z ∈ C ∣ Re ⁡ ( z ) < 0 } . {\displaystyle \{z\in \mathbb {C} \mid \operatorname {Re} (z)<0\}.} This includes the left-half plane, so the trapezoidal rule is A-stable. The second Dahlquist barrier states that the trapezoidal rule is the most accurate amongst the A-stable linear multistep methods. More precisely, a linear multistep method that is A-stable has at most order two, and the error constant of a second-order A-stable linear multistep method cannot be better than the error constant of the trapezoidal rule. In fact, the region of absolute stability for the trapezoidal rule is precisely the left-half plane. This means that if the trapezoidal rule is applied to the linear test equation y' = λy, the numerical solution decays to zero if and only if the exact solution does. However, the decay of the numerical solution can be many orders of magnitude slower than that of the true solution. 
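For a scalar equation, the implicit update can be solved with Newton's method seeded by the Euler predictor, as described above. A sketch; on the linear test equation y′ = λy one trapezoidal step reduces to multiplication by (1 + hλ/2)/(1 − hλ/2), whose modulus is below one exactly when Re(λ) < 0, so the code can be checked against that factor:

```python
def trapezoidal_step(f, dfdy, t, y, h, newton_iters=10):
    """Solve g(z) = z - y - h/2*(f(t,y) + f(t+h,z)) = 0 for y_{n+1}."""
    z = y + h * f(t, y)                     # Euler predictor as initial guess
    for _ in range(newton_iters):
        g = z - y - h / 2 * (f(t, y) + f(t + h, z))
        dg = 1 - h / 2 * dfdy(t + h, z)
        z -= g / dg                         # Newton update
    return z

# Linear test equation y' = lam*y; Newton converges in one iteration here
lam = -2.0
f = lambda t, y: lam * y
dfdy = lambda t, y: lam
h = 0.1
y1 = trapezoidal_step(f, dfdy, 0.0, 1.0, h)
print(y1, (1 + h * lam / 2) / (1 - h * lam / 2))  # both ≈ 0.81818
```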
== Notes == == References == Iserles, Arieh (1996), A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, ISBN 978-0-521-55655-2. Süli, Endre; Mayers, David (2003), An Introduction to Numerical Analysis, Cambridge University Press, ISBN 0521007941. == See also == Crank–Nicolson method
Wikipedia/Trapezoidal_rule_(differential_equations)
In Itô calculus, the Euler–Maruyama method (also simply called the Euler method) is a method for the approximate numerical solution of a stochastic differential equation (SDE). It is an extension of the Euler method for ordinary differential equations to stochastic differential equations named after Leonhard Euler and Gisiro Maruyama. The same generalization cannot be done for any arbitrary deterministic method. == Definition == Consider the stochastic differential equation (see Itô calculus) d X t = a ( X t , t ) d t + b ( X t , t ) d W t , {\displaystyle \mathrm {d} X_{t}=a(X_{t},t)\,\mathrm {d} t+b(X_{t},t)\,\mathrm {d} W_{t},} with initial condition X0 = x0, where Wt denotes the Wiener process, and suppose that we wish to solve this SDE on some interval of time [0, T]. Then the Euler–Maruyama approximation to the true solution X is the Markov chain Y defined as follows: Partition the interval [0, T] into N equal subintervals of width Δ t > 0 {\displaystyle \Delta t>0} : 0 = τ 0 < τ 1 < ⋯ < τ N = T and Δ t = T / N ; {\displaystyle 0=\tau _{0}<\tau _{1}<\cdots <\tau _{N}=T{\text{ and }}\Delta t=T/N;} Set Y0 = x0 Recursively define Yn for 0 ≤ n ≤ N-1 by Y n + 1 = Y n + a ( Y n , τ n ) Δ t + b ( Y n , τ n ) Δ W n , {\displaystyle \,Y_{n+1}=Y_{n}+a(Y_{n},\tau _{n})\,\Delta t+b(Y_{n},\tau _{n})\,\Delta W_{n},} where Δ W n = W τ n + 1 − W τ n . {\displaystyle \Delta W_{n}=W_{\tau _{n+1}}-W_{\tau _{n}}.} The random variables ΔWn are independent and identically distributed normal random variables with expected value zero and variance Δt. 
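The recursion above translates directly into code. In the following Python sketch, the Ornstein–Uhlenbeck drift and diffusion and all parameter values are illustrative choices, not taken from the article:

```python
import math
import random

def euler_maruyama(a, b, x0, T, N, rng):
    """Euler-Maruyama sample path for dX_t = a(X_t, t) dt + b(X_t, t) dW_t.

    Returns the list [Y_0, Y_1, ..., Y_N]; the Brownian increments dW_n
    are i.i.d. normal with mean 0 and variance dt.
    """
    dt = T / N
    sqrt_dt = math.sqrt(dt)
    t, x = 0.0, x0
    path = [x0]
    for _ in range(N):
        dW = rng.gauss(0.0, sqrt_dt)      # N(0, dt) increment
        x = x + a(x, t) * dt + b(x, t) * dW
        t += dt
        path.append(x)
    return path

# Ornstein-Uhlenbeck process dX = -X dt + 0.3 dW (illustrative parameters).
rng = random.Random(0)
path = euler_maruyama(lambda x, t: -x, lambda x, t: 0.3, 1.0, 5.0, 1000, rng)
```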
=== Derivation === The Euler-Maruyama formula can be derived by considering the integral form of the Itô SDE X τ n + 1 = X τ n + ∫ τ n τ n + 1 a ( X s , s ) d s + ∫ τ n τ n + 1 b ( X s , s ) d W s {\displaystyle X_{\tau _{n+1}}=X_{\tau _{n}}+\int _{\tau _{n}}^{\tau _{n+1}}a(X_{s},s)\,ds+\int _{\tau _{n}}^{\tau _{n+1}}b(X_{s},s)\,dW_{s}} and approximating a ( X s , s ) ≈ a ( X n , τ n ) {\displaystyle a(X_{s},s)\approx a(X_{n},\tau _{n})} and b ( X s , s ) ≈ b ( X n , τ n ) {\displaystyle b(X_{s},s)\approx b(X_{n},\tau _{n})} on the small time interval [ τ n , τ n + 1 ] {\displaystyle [\tau _{n},\tau _{n+1}]} . == Strong and weak convergence == Like other approximation methods, the accuracy of the Euler–Maruyama scheme is analyzed through comparison to an underlying continuous solution. Let X {\displaystyle X} denote an Itô process over [ 0 , T ] {\displaystyle [0,T]} , equal to X t = X 0 + ∫ 0 t μ ( s , X s ) d s + ∫ 0 t σ ( s , X s ) d W s {\displaystyle X_{t}=X_{0}+\int _{0}^{t}\mu (s,X_{s})ds+\int _{0}^{t}\sigma (s,X_{s})dW_{s}} at time t ∈ [ 0 , T ] {\displaystyle t\in [0,T]} , where μ {\displaystyle \mu } and σ {\displaystyle \sigma } denote deterministic "drift" and "diffusion" functions, respectively, and W t {\displaystyle W_{t}} is the Wiener process. As discrete approximations of continuous processes are typically assessed through comparison between their respective final states at T > 0 {\displaystyle T>0} , a natural convergence criterion for such discrete processes is lim N → ∞ E [ | X ^ N − X T | ] = 0. {\displaystyle \lim _{N\to \infty }\mathbb {E} \left[\left|{\hat {X}}_{N}-X_{T}\right|\right]=0.} Here, X ^ N {\displaystyle {\hat {X}}_{N}} corresponds to the final state of the discrete process X ^ {\displaystyle {\hat {X}}} , which approximates X T {\displaystyle X_{T}} by taking N {\displaystyle N} steps of length Δ t = T / N {\displaystyle \Delta t=T/N} . 
Iterative schemes satisfying the above condition are said to strongly converge to the continuous process X {\displaystyle X} , which automatically implies their satisfaction of the weak convergence criterion, lim N → ∞ E [ | g ( X ^ N ) − g ( X T ) | ] = 0 , {\displaystyle \lim _{N\to \infty }\mathbb {E} \left[\left|g({\hat {X}}_{N})-g(X_{T})\right|\right]=0,} for any smooth function g {\displaystyle g} . More specifically, if there exists a constant K {\displaystyle K} and γ s , δ 0 > 0 {\displaystyle \gamma _{s},\delta _{0}>0} such that E [ | X ^ N − X T | ] ≤ K δ 0 γ s {\displaystyle \mathbb {E} \left[\left|{\hat {X}}_{N}-X_{T}\right|\right]\leq K\delta _{0}^{\gamma _{s}}} for any δ ∈ ( 0 , δ 0 ) {\displaystyle \delta \in (0,\delta _{0})} , the approximation converges strongly with order γ s {\displaystyle \gamma _{s}} to the continuous process X {\displaystyle X} ; likewise, X ^ {\displaystyle {\hat {X}}} converges weakly to X {\displaystyle X} with order γ w {\displaystyle \gamma _{w}} if the same inequality holds with g ( X ^ N ) − g ( X T ) {\displaystyle g({\hat {X}}_{N})-g(X_{T})} in place of X ^ N − X T {\displaystyle {\hat {X}}_{N}-X_{T}} . Strong order γ s {\displaystyle \gamma _{s}} convergence implies weak order γ w ≥ γ s {\displaystyle \gamma _{w}\geq \gamma _{s}} convergence: exemplifying this, it was shown in 1972 that the Euler–Maruyama method strongly converges with order γ s = 1 / 2 {\displaystyle \gamma _{s}=1/2} to any Itô process, provided μ , σ {\displaystyle \mu ,\sigma } satisfy Lipschitz continuity and linear growth conditions with respect to x {\displaystyle x} , and in 1974, the Euler–Maruyama scheme was proven to converge weakly with order γ w = 1 {\displaystyle \gamma _{w}=1} to Itô processes governed by the same such μ , σ {\displaystyle \mu ,\sigma } , provided that their derivatives also satisfy similar conditions. 
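The strong order 1/2 can be observed numerically. The sketch below (Python; the function name and all parameters are illustrative) uses geometric Brownian motion, whose exact solution is available in closed form (see the next section), driving both the exact solution and its Euler–Maruyama approximation with the same Brownian increments:

```python
import math
import random

def gbm_strong_error(lam, sigma, x0, T, N, n_paths, seed=0):
    """Monte-Carlo estimate of the strong error E|X_T - Y_N| for
    geometric Brownian motion dX = lam*X dt + sigma*X dW, comparing
    Euler-Maruyama against the exact solution driven by the same
    Brownian increments."""
    rng = random.Random(seed)
    dt = T / N
    sqrt_dt = math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        y, W = x0, 0.0
        for _ in range(N):
            dW = rng.gauss(0.0, sqrt_dt)
            y += lam * y * dt + sigma * y * dW   # Euler-Maruyama update
            W += dW                              # accumulate the Wiener path
        exact = x0 * math.exp((lam - 0.5 * sigma ** 2) * T + sigma * W)
        total += abs(exact - y)
    return total / n_paths

# Halving the step size should shrink the error by roughly sqrt(2).
e_coarse = gbm_strong_error(1.0, 0.5, 1.0, 1.0, 50, 400)
e_fine = gbm_strong_error(1.0, 0.5, 1.0, 1.0, 100, 400)
```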
== Example with geometric Brownian motion == A simple case to analyze is geometric Brownian motion, which satisfies the SDE d X t = λ X t d t + σ X t d W t {\displaystyle dX_{t}=\lambda X_{t}\,dt+\sigma X_{t}\,dW_{t}} for fixed λ {\displaystyle \lambda } and σ {\displaystyle \sigma } . Applying Itô’s lemma to ln ⁡ X t {\displaystyle \ln X_{t}} yields the closed-form solution X t = X 0 exp ⁡ ( ( λ − 1 2 σ 2 ) t + σ W t ) {\displaystyle X_{t}=X_{0}\exp \left(\left(\lambda -{\tfrac {1}{2}}\sigma ^{2}\right)t+\sigma W_{t}\right)} Discretising with Euler–Maruyama gives the time-step updates Y n + 1 = ( 1 + λ Δ t + σ Δ W n ) Y n = Y 0 ∏ k = 0 n ( 1 + λ Δ t + σ Δ W k ) {\displaystyle Y_{n+1}=\left(1+\lambda \Delta t+\sigma \Delta W_{n}\right)Y_{n}=Y_{0}\prod _{k=0}^{n}\left(1+\lambda \Delta t+\sigma \Delta W_{k}\right)} By using a Taylor series expansion of the exponential function in the analytic solution, we can get a formula for the exact update in a time-step. X τ k + 1 = X τ k exp ⁡ ( ( λ − 1 2 σ 2 ) Δ t + σ Δ W k ) = X τ k [ 1 + λ Δ t + σ Δ W k + 1 2 σ 2 ( ( Δ W k ) 2 − Δ t ) + O ( Δ t 3 / 2 ) ] {\displaystyle {\begin{aligned}X_{\tau _{k+1}}&=X_{\tau _{k}}\exp \left((\lambda -{\tfrac {1}{2}}\sigma ^{2})\Delta t+\sigma \Delta W_{k}\right)\\&=X_{\tau _{k}}\left[1+\lambda \Delta t+\sigma \Delta W_{k}+{\tfrac {1}{2}}\sigma ^{2}\left((\Delta W_{k})^{2}-\Delta t\right)+O\left(\Delta t^{3/2}\right)\right]\\\end{aligned}}} Summing the local errors between the analytic and Euler-Maruyama solutions over each of the N = T / Δ t {\displaystyle N=T/\Delta t} steps gives the strong error estimate E [ | X T − Y N | ] = O ( Δ t ) {\displaystyle \mathbb {E} \left[\,|X_{T}-Y_{N}|\,\right]=O\left({\sqrt {\Delta t}}\right)} confirming strong order 1 / 2 {\displaystyle 1/2} convergence. Another numerical aspect to consider is stability. 
The path's second moment is E | X t | 2 ∝ exp ⁡ ( ( 2 λ + σ 2 ) t ) {\displaystyle \mathbb {E} |X_{t}|^{2}\propto \exp \left((2\lambda +\sigma ^{2})t\right)} , so long-time decay of the solution occurs only when 2 λ + σ 2 < 0 {\displaystyle 2\lambda +\sigma ^{2}<0} . The Euler–Maruyama scheme preserves variance decay in this case provided that Δ t ≤ − 1 λ 2 ( 2 λ + σ 2 ) {\displaystyle \Delta t\leq {\frac {-1}{\lambda ^{2}}}\left(2\lambda +\sigma ^{2}\right)} . == Application == An area that has benefited significantly from SDEs is mathematical biology. As many biological processes are both stochastic and continuous in nature, numerical methods of solving SDEs are highly valuable in the field. == References ==
Wikipedia/Euler–Maruyama_method
In numerical analysis, a branch of applied mathematics, the midpoint method is a one-step method for numerically solving the differential equation, y ′ ( t ) = f ( t , y ( t ) ) , y ( t 0 ) = y 0 . {\displaystyle y'(t)=f(t,y(t)),\quad y(t_{0})=y_{0}.} The explicit midpoint method is given by the formula y n + 1 = y n + h f ( t n + h 2 , y n + h 2 f ( t n , y n ) ) , {\displaystyle y_{n+1}=y_{n}+hf\left(t_{n}+{\frac {h}{2}},y_{n}+{\frac {h}{2}}f(t_{n},y_{n})\right),} (1e) the implicit midpoint method by y n + 1 = y n + h f ( t n + h 2 , 1 2 ( y n + y n + 1 ) ) , {\displaystyle y_{n+1}=y_{n}+hf\left(t_{n}+{\frac {h}{2}},{\frac {1}{2}}(y_{n}+y_{n+1})\right),} (1i) for n = 0 , 1 , 2 , … {\displaystyle n=0,1,2,\dots } Here, h {\displaystyle h} is the step size — a small positive number, t n = t 0 + n h , {\displaystyle t_{n}=t_{0}+nh,} and y n {\displaystyle y_{n}} is the computed approximate value of y ( t n ) . {\displaystyle y(t_{n}).} The explicit midpoint method is sometimes also known as the modified Euler method, the implicit method is the simplest collocation method, and, applied to Hamiltonian dynamics, a symplectic integrator. Note that the modified Euler method can refer to Heun's method, for further clarity see List of Runge–Kutta methods. The name of the method comes from the fact that in the formula above, the function f {\displaystyle f} giving the slope of the solution is evaluated at t = t n + h / 2 = t n + t n + 1 2 , {\displaystyle t=t_{n}+h/2={\tfrac {t_{n}+t_{n+1}}{2}},} the midpoint between t n {\displaystyle t_{n}} at which the value of y ( t ) {\displaystyle y(t)} is known and t n + 1 {\displaystyle t_{n+1}} at which the value of y ( t ) {\displaystyle y(t)} needs to be found. A geometric interpretation may give a better intuitive understanding of the method (see figure at right). In the basic Euler's method, the tangent of the curve at ( t n , y n ) {\displaystyle (t_{n},y_{n})} is computed using f ( t n , y n ) {\displaystyle f(t_{n},y_{n})} . The next value y n + 1 {\displaystyle y_{n+1}} is found where the tangent intersects the vertical line t = t n + 1 {\displaystyle t=t_{n+1}} . 
However, if the second derivative is only positive between t n {\displaystyle t_{n}} and t n + 1 {\displaystyle t_{n+1}} , or only negative (as in the diagram), the curve will increasingly veer away from the tangent, leading to larger errors as h {\displaystyle h} increases. The diagram illustrates that the tangent at the midpoint (upper, green line segment) would most likely give a more accurate approximation of the curve in that interval. However, this midpoint tangent could not be accurately calculated because we do not know the curve (that is what is to be calculated). Instead, this tangent is estimated by using the original Euler's method to estimate the value of y ( t ) {\displaystyle y(t)} at the midpoint, then computing the slope of the tangent with f ( ) {\displaystyle f()} . Finally, the improved tangent is used to calculate the value of y n + 1 {\displaystyle y_{n+1}} from y n {\displaystyle y_{n}} . This last step is represented by the red chord in the diagram. Note that the red chord is not exactly parallel to the green segment (the true tangent), due to the error in estimating the value of y ( t ) {\displaystyle y(t)} at the midpoint. The local error at each step of the midpoint method is of order O ( h 3 ) {\displaystyle O\left(h^{3}\right)} , giving a global error of order O ( h 2 ) {\displaystyle O\left(h^{2}\right)} . Thus, while more computationally intensive than Euler's method, the midpoint method's error generally decreases faster as h → 0 {\displaystyle h\to 0} . The methods are examples of a class of higher-order methods known as Runge–Kutta methods. == Derivation of the midpoint method == The midpoint method is a refinement of the Euler method y n + 1 = y n + h f ( t n , y n ) , {\displaystyle y_{n+1}=y_{n}+hf(t_{n},y_{n}),\,} and is derived in a similar manner. The key to deriving Euler's method is the approximate equality y ( t + h ) ≈ y ( t ) + h f ( t , y ( t ) ) {\displaystyle y(t+h)\approx y(t)+hf(t,y(t))} (2) which is obtained from the slope formula y ′ ( t ) ≈ y ( t + h ) − y ( t ) h {\displaystyle y'(t)\approx {\frac {y(t+h)-y(t)}{h}}} (3) and keeping in mind that y ′ = f ( t , y ) . 
{\displaystyle y'=f(t,y).} For the midpoint methods, one replaces (3) with the more accurate y ′ ( t + h 2 ) ≈ y ( t + h ) − y ( t ) h {\displaystyle y'\left(t+{\frac {h}{2}}\right)\approx {\frac {y(t+h)-y(t)}{h}}} when instead of (2) we find y ( t + h ) ≈ y ( t ) + h f ( t + h 2 , y ( t + h 2 ) ) . {\displaystyle y(t+h)\approx y(t)+hf\left(t+{\frac {h}{2}},y\left(t+{\frac {h}{2}}\right)\right).} (4) One cannot use this equation to find y ( t + h ) {\displaystyle y(t+h)} as one does not know y {\displaystyle y} at t + h / 2 {\displaystyle t+h/2} . The solution is then to use a Taylor series expansion exactly as if using the Euler method to solve for y ( t + h / 2 ) {\displaystyle y(t+h/2)} : y ( t + h 2 ) ≈ y ( t ) + h 2 y ′ ( t ) = y ( t ) + h 2 f ( t , y ( t ) ) , {\displaystyle y\left(t+{\frac {h}{2}}\right)\approx y(t)+{\frac {h}{2}}y'(t)=y(t)+{\frac {h}{2}}f(t,y(t)),} which, when plugged into (4), gives us y ( t + h ) ≈ y ( t ) + h f ( t + h 2 , y ( t ) + h 2 f ( t , y ( t ) ) ) {\displaystyle y(t+h)\approx y(t)+hf\left(t+{\frac {h}{2}},y(t)+{\frac {h}{2}}f(t,y(t))\right)} and the explicit midpoint method (1e). The implicit method (1i) is obtained by approximating the value at the half step t + h / 2 {\displaystyle t+h/2} by the midpoint of the line segment from y ( t ) {\displaystyle y(t)} to y ( t + h ) {\displaystyle y(t+h)} y ( t + h 2 ) ≈ 1 2 ( y ( t ) + y ( t + h ) ) {\displaystyle y\left(t+{\frac {h}{2}}\right)\approx {\frac {1}{2}}{\bigl (}y(t)+y(t+h){\bigr )}} and thus y ( t + h ) − y ( t ) h ≈ y ′ ( t + h 2 ) ≈ k = f ( t + h 2 , 1 2 ( y ( t ) + y ( t + h ) ) ) {\displaystyle {\frac {y(t+h)-y(t)}{h}}\approx y'\left(t+{\frac {h}{2}}\right)\approx k=f\left(t+{\frac {h}{2}},{\frac {1}{2}}{\bigl (}y(t)+y(t+h){\bigr )}\right)} Inserting the approximation y n + h k {\displaystyle y_{n}+h\,k} for y ( t n + h ) {\displaystyle y(t_{n}+h)} results in the implicit Runge-Kutta method k = f ( t n + h 2 , y n + h 2 k ) y n + 1 = y n + h k {\displaystyle {\begin{aligned}k&=f\left(t_{n}+{\frac {h}{2}},y_{n}+{\frac {h}{2}}k\right)\\y_{n+1}&=y_{n}+h\,k\end{aligned}}} which contains the implicit Euler method with step size h / 2 
{\displaystyle h/2} as its first part. Because of the time symmetry of the implicit method, all terms of even degree in h {\displaystyle h} of the local error cancel, so that the local error is automatically of order O ( h 3 ) {\displaystyle {\mathcal {O}}(h^{3})} . Replacing the implicit with the explicit Euler method in the determination of k {\displaystyle k} results again in the explicit midpoint method. == See also == Rectangle method Heun's method Leapfrog integration and Verlet integration == Notes == == References == Griffiths, D. V.; Smith, I. M. (1991). Numerical methods for engineers: a programming approach. Boca Raton: CRC Press. p. 218. ISBN 0-8493-8610-1. Süli, Endre; Mayers, David (2003), An Introduction to Numerical Analysis, Cambridge University Press, ISBN 0-521-00794-1. Burden, Richard; Faires, John (2010). Numerical Analysis. Richard Stratton. p. 286. ISBN 978-0-538-73351-9.
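As a concrete supplement (not part of the original article), the explicit midpoint method can be implemented in a few lines of Python; halving the step size should divide the global error by roughly four, consistent with the O(h²) global error discussed above.

```python
import math

def explicit_midpoint(f, t0, y0, h, n_steps):
    """Integrate y' = f(t, y) with the explicit midpoint method:
    y_{n+1} = y_n + h * f(t_n + h/2, y_n + (h/2) * f(t_n, y_n))."""
    t, y = t0, y0
    for _ in range(n_steps):
        y_mid = y + 0.5 * h * f(t, y)        # Euler half-step to the midpoint
        y = y + h * f(t + 0.5 * h, y_mid)    # full step using the midpoint slope
        t += h
    return y

# y' = y, y(0) = 1, integrated to t = 1 (exact answer is e).
e1 = abs(explicit_midpoint(lambda t, y: y, 0.0, 1.0, 0.1, 10) - math.e)
e2 = abs(explicit_midpoint(lambda t, y: y, 0.0, 1.0, 0.05, 20) - math.e)
```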
Wikipedia/Midpoint_method
In numerical analysis and scientific computing, the Gauss–Legendre methods are a family of numerical methods for ordinary differential equations. Gauss–Legendre methods are implicit Runge–Kutta methods. More specifically, they are collocation methods based on the points of Gauss–Legendre quadrature. The Gauss–Legendre method based on s points has order 2s. All Gauss–Legendre methods are A-stable. The Gauss–Legendre method of order two is the implicit midpoint rule. Its Butcher tableau is:

 1/2 | 1/2
-----+-----
     |  1

The Gauss–Legendre method of order four has Butcher tableau:

 1/2 − √3/6 | 1/4          1/4 − √3/6
 1/2 + √3/6 | 1/4 + √3/6   1/4
------------+------------------------
            | 1/2          1/2

The Gauss–Legendre method of order six has Butcher tableau:

 1/2 − √15/10 | 5/36            2/9 − √15/15   5/36 − √15/30
 1/2          | 5/36 + √15/24   2/9            5/36 − √15/24
 1/2 + √15/10 | 5/36 + √15/30   2/9 + √15/15   5/36
--------------+---------------------------------------------
              | 5/18            4/9            5/18

The computational cost of higher-order Gauss–Legendre methods is usually excessive, and thus, they are rarely used. == Intuition == Gauss-Legendre Runge-Kutta (GLRK) methods solve an ordinary differential equation x ˙ = f ( x ) {\displaystyle {\dot {x}}=f(x)} with x ( 0 ) = x 0 {\displaystyle x(0)=x_{0}} . The distinguishing feature of GLRK is the estimation of x ( h ) − x 0 = ∫ 0 h f ( x ( t ) ) d t {\textstyle x(h)-x_{0}=\int _{0}^{h}f(x(t))\,dt} with Gaussian quadrature. x ( h ) = x ( 0 ) + h 2 ∑ i = 1 ℓ w i k i + O ( h 2 ℓ ) , {\displaystyle x(h)=x(0)+{\frac {h}{2}}\sum _{i=1}^{\ell }w_{i}k_{i}+O(h^{2\ell }),} where k i = f ( x ( h c i ) ) {\displaystyle k_{i}=f(x(hc_{i}))} are the sampled velocities, w i {\displaystyle w_{i}} are the quadrature weights, c i = 1 2 ( 1 + r i ) {\textstyle c_{i}={\frac {1}{2}}(1+r_{i})} are the abscissas, and r i {\displaystyle r_{i}} are the roots P ℓ ( r i ) = 0 {\displaystyle P_{\ell }(r_{i})=0} of the Legendre polynomial of degree ℓ {\displaystyle \ell } . A further approximation is needed, as k i {\displaystyle k_{i}} is still impossible to evaluate. To maintain truncation error of order O ( h 2 ℓ ) {\displaystyle O(h^{2\ell })} , we only need k i {\displaystyle k_{i}} to order O ( h 2 ℓ − 1 ) {\displaystyle O(h^{2\ell -1})} . 
The Runge-Kutta implicit definition k i = f ( x 0 + h ∑ j a i j k j ) {\textstyle k_{i}=f{\left(x_{0}+h\sum _{j}a_{ij}k_{j}\right)}} is invoked to accomplish this. This is an implicit constraint that must be solved by a root-finding algorithm like Newton's method. The values of the Runge-Kutta parameters a i j {\displaystyle a_{ij}} can be determined from a Taylor series expansion in h {\displaystyle h} . == Practical example == The Gauss-Legendre methods are implicit, so in general they cannot be applied exactly. Instead one makes an educated guess of k i {\displaystyle k_{i}} , and then uses Newton's method to converge arbitrarily close to the true solution. The Gauss–Legendre method of order four can be implemented compactly, for example as a short Matlab function. This algorithm is surprisingly cheap. The error in k i {\displaystyle k_{i}} can fall below 10 − 12 {\displaystyle 10^{-12}} in as few as 2 Newton steps. The only extra work compared to explicit Runge-Kutta methods is the computation of the Jacobian. == Time-symmetric variants == At the cost of adding an additional implicit relation, these methods can be adapted to have time reversal symmetry. In these methods, the averaged position ( x f + x i ) / 2 {\displaystyle (x_{f}+x_{i})/2} is used in computing k i {\displaystyle k_{i}} instead of just the initial position x i {\displaystyle x_{i}} in standard Runge-Kutta methods. The method of order 2 is just an implicit midpoint method. k 1 = f ( x f + x i 2 ) {\displaystyle k_{1}=f\left({\frac {x_{f}+x_{i}}{2}}\right)} x f = x i + h k 1 {\displaystyle x_{f}=x_{i}+hk_{1}} The method of order 4 with 2 stages is as follows. 
k 1 = f ( x f + x i 2 − 3 6 h k 2 ) {\displaystyle k_{1}=f\left({\frac {x_{f}+x_{i}}{2}}-{\frac {\sqrt {3}}{6}}hk_{2}\right)} k 2 = f ( x f + x i 2 + 3 6 h k 1 ) {\displaystyle k_{2}=f\left({\frac {x_{f}+x_{i}}{2}}+{\frac {\sqrt {3}}{6}}hk_{1}\right)} x f = x i + h 2 ( k 1 + k 2 ) {\displaystyle x_{f}=x_{i}+{\frac {h}{2}}(k_{1}+k_{2})} The method of order 6 with 3 stages is as follows. k 1 = f ( x f + x i 2 − 15 15 h k 2 − 15 30 h k 3 ) {\displaystyle k_{1}=f\left({\frac {x_{f}+x_{i}}{2}}-{\frac {\sqrt {15}}{15}}hk_{2}-{\frac {\sqrt {15}}{30}}hk_{3}\right)} k 2 = f ( x f + x i 2 + 15 24 h k 1 − 15 24 h k 3 ) {\displaystyle k_{2}=f\left({\frac {x_{f}+x_{i}}{2}}+{\frac {\sqrt {15}}{24}}hk_{1}-{\frac {\sqrt {15}}{24}}hk_{3}\right)} k 3 = f ( x f + x i 2 + 15 30 h k 1 + 15 15 h k 2 ) {\displaystyle k_{3}=f\left({\frac {x_{f}+x_{i}}{2}}+{\frac {\sqrt {15}}{30}}hk_{1}+{\frac {\sqrt {15}}{15}}hk_{2}\right)} x f = x i + h 18 ( 5 k 1 + 8 k 2 + 5 k 3 ) {\displaystyle x_{f}=x_{i}+{\frac {h}{18}}(5k_{1}+8k_{2}+5k_{3})} == Notes == == References == Iserles, Arieh (1996), A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, ISBN 978-0-521-55655-2.
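The Matlab listing referred to in the Practical example section is not reproduced in this text. As a rough substitute, here is a Python sketch of the order-four method built from its Butcher tableau; it uses plain fixed-point sweeps instead of Newton's method, a simplification that is adequate for non-stiff problems:

```python
import math

SQRT3 = math.sqrt(3.0)
# Butcher tableau of the order-four Gauss-Legendre method.
A = ((0.25, 0.25 - SQRT3 / 6.0),
     (0.25 + SQRT3 / 6.0, 0.25))
B = (0.5, 0.5)
C = (0.5 - SQRT3 / 6.0, 0.5 + SQRT3 / 6.0)

def gl4_step(f, t, x, h, sweeps=10):
    """One step of the order-four Gauss-Legendre method for x' = f(t, x).

    The implicit stage equations k_i = f(t + c_i h, x + h * sum_j a_ij k_j)
    are solved by fixed-point sweeps; a stiff solver would use Newton's
    method here instead, as described in the article."""
    k1 = k2 = f(t, x)                 # initial guess: the Euler slope
    for _ in range(sweeps):
        k1 = f(t + C[0] * h, x + h * (A[0][0] * k1 + A[0][1] * k2))
        k2 = f(t + C[1] * h, x + h * (A[1][0] * k1 + A[1][1] * k2))
    return x + h * (B[0] * k1 + B[1] * k2)

# x' = x, x(0) = 1: ten steps of h = 0.1 should reproduce e very accurately.
x, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    x = gl4_step(lambda t, x: x, t, x, h)
    t += h
```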
Wikipedia/Gauss–Legendre_method
Beeman's algorithm is a method for numerically integrating ordinary differential equations of order 2, more specifically Newton's equations of motion x ¨ = A ( x ) {\displaystyle {\ddot {x}}=A(x)} . It was designed to allow simulations of molecular dynamics with large numbers of particles. The method has a direct (explicit) variant and an implicit variant. The direct variant was published by Schofield in 1973 as a personal communication from Beeman. This is what is commonly known as Beeman's method. It is a variant of the Verlet integration method. It produces identical positions, but uses a different formula for the velocities. Beeman in 1976 published a class of implicit (predictor–corrector) multi-step methods, where Beeman's method is the direct variant of the third-order method in this class. == Equation == The formula used to compute the positions at time t + Δ t {\displaystyle t+\Delta t} in the full predictor-corrector scheme is: Predict x ( t + Δ t ) {\displaystyle x(t+\Delta t)} from data at times t and t − Δ t {\displaystyle t{\text{ and }}t-\Delta t} x ( t + Δ t ) = x ( t ) + v ( t ) Δ t + 1 6 ( 4 a ( t ) − a ( t − Δ t ) ) Δ t 2 + O ( Δ t 4 ) {\displaystyle x(t+\Delta t)=x(t)+v(t)\Delta t+{\frac {1}{6}}{\Bigl (}4a(t)-a(t-\Delta t){\Bigr )}\Delta t^{2}+O(\Delta t^{4})} . 
Correct position and velocities at time t + Δ t {\displaystyle t+\Delta t} from data at times t and t + Δ t {\displaystyle t{\text{ and }}t+\Delta t} by repeated evaluation of the differential equation to get the acceleration a ( t + Δ t ) {\displaystyle a(t+\Delta t)} and of the equations of the implicit system x ( t + Δ t ) = x ( t ) + v ( t ) Δ t + 1 6 ( a ( t + Δ t ) + 2 a ( t ) ) Δ t 2 + O ( Δ t 4 ) ; v ( t + Δ t ) Δ t = x ( t + Δ t ) − x ( t ) + 1 6 ( 2 a ( t + Δ t ) + a ( t ) ) Δ t 2 + O ( Δ t 4 ) ; {\displaystyle {\begin{aligned}x(t+\Delta t)&=x(t)+v(t)\Delta t+{\frac {1}{6}}{\Bigl (}a(t+\Delta t)+2a(t){\Bigr )}\Delta t^{2}+O(\Delta t^{4});\\v(t+\Delta t)\Delta t&=x(t+\Delta t)-x(t)+{\frac {1}{6}}{\Bigl (}2a(t+\Delta t)+a(t){\Bigr )}\Delta t^{2}+O(\Delta t^{4});\end{aligned}}} In tests it was found that this corrector step needs to be repeated at most twice. The values on the right are the old values of the last iterations, resulting in the new values on the left. Using only the predictor formula and the corrector for the velocities one obtains a direct or explicit method which is a variant of the Verlet integration method: x ( t + Δ t ) = x ( t ) + v ( t ) Δ t + 1 6 ( 4 a ( t ) − a ( t − Δ t ) ) Δ t 2 + O ( Δ t 4 ) v ( t + Δ t ) = v ( t ) + 1 6 ( 2 a ( t + Δ t ) + 5 a ( t ) − a ( t − Δ t ) ) Δ t + O ( Δ t 3 ) ; {\displaystyle {\begin{aligned}x(t+\Delta t)&=x(t)+v(t)\Delta t+{\frac {1}{6}}{\Bigl (}4a(t)-a(t-\Delta t){\Bigr )}\Delta t^{2}+O(\Delta t^{4})\\v(t+\Delta t)&=v(t)+{\frac {1}{6}}{\Bigl (}2a(t+\Delta t)+5a(t)-a(t-\Delta t){\Bigr )}\Delta t+O(\Delta t^{3});\end{aligned}}} This is the variant that is usually understood as Beeman's method. 
Beeman also proposed to alternatively replace the velocity update in the last equation by the second order Adams–Moulton method: v ( t + Δ t ) = v ( t ) + 1 12 ( 5 a ( t + Δ t ) + 8 a ( t ) − a ( t − Δ t ) ) Δ t + O ( Δ t 3 ) {\displaystyle v(t+\Delta t)=v(t)+{\frac {1}{12}}{\Bigl (}5a(t+\Delta t)+8a(t)-a(t-\Delta t){\Bigr )}\Delta t+O(\Delta t^{3})} where t {\displaystyle t} is present time (i.e.: independent variable) Δ t {\displaystyle \Delta t} is the time step size x ( t ) {\displaystyle x(t)} is the position at time t v ( t ) {\displaystyle v(t)} is the velocity at time t a ( t ) {\displaystyle a(t)} is the acceleration at time t, computed as a function of x ( t ) {\displaystyle x(t)} the last term is the error term, using the big O notation == Predictor–corrector modifications == In systems where the forces are a function of velocity in addition to position, the above equations need to be modified into a predictor–corrector form whereby the velocities at time t + Δ t {\displaystyle t+\Delta t} are predicted and the forces calculated, before producing a corrected form of the velocities. An example is: x ( t + Δ t ) = x ( t ) + v ( t ) Δ t + 2 3 a ( t ) Δ t 2 − 1 6 a ( t − Δ t ) Δ t 2 + O ( Δ t 4 ) . {\displaystyle x(t+\Delta t)=x(t)+v(t)\Delta t+{\frac {2}{3}}a(t)\Delta t^{2}-{\frac {1}{6}}a(t-\Delta t)\Delta t^{2}+O(\Delta t^{4}).} The velocities at time t = t + Δ t {\displaystyle t=t+\Delta t} are then calculated (predicted) from the positions. v ( t + Δ t ) (predicted) = v ( t ) + 3 2 a ( t ) Δ t − 1 2 a ( t − Δ t ) Δ t + O ( Δ t 3 ) . {\displaystyle v(t+\Delta t)~{\text{(predicted)}}=v(t)+{\frac {3}{2}}a(t)\Delta t-{\frac {1}{2}}a(t-\Delta t)\Delta t+O(\Delta t^{3}).} The accelerations a ( t + Δ t ) {\displaystyle a(t+\Delta t)} at time t = t + Δ t {\displaystyle t=t+\Delta t} are then calculated from the positions and predicted velocities, and the velocities are corrected. 
v ( t + Δ t ) (corrected) = v ( t ) + 5 12 a ( t + Δ t ) Δ t + 2 3 a ( t ) Δ t − 1 12 a ( t − Δ t ) Δ t + O ( Δ t 3 ) . {\displaystyle v(t+\Delta t)~{\text{(corrected)}}=v(t)+{\frac {5}{12}}a(t+\Delta t)\Delta t+{\frac {2}{3}}a(t)\Delta t-{\frac {1}{12}}a(t-\Delta t)\Delta t+O(\Delta t^{3}).} == Error term == As shown above, the local error term is O ( Δ t 4 ) {\displaystyle O(\Delta t^{4})} for position and O ( Δ t 3 ) {\displaystyle O(\Delta t^{3})} for velocity, resulting in a global error of O ( Δ t 3 ) {\displaystyle O(\Delta t^{3})} . In comparison, Verlet is O ( Δ t 2 ) {\displaystyle O(\Delta t^{2})} for position and velocity. In exchange for greater accuracy, Beeman's algorithm is moderately more expensive computationally. == Memory requirements == The simulation must keep track of position, velocity, acceleration and previous acceleration vectors per particle (though some clever workarounds for storing the previous acceleration vector are possible), keeping its memory requirements on par with velocity Verlet and slightly more expensive than the original Verlet method. == References == Sadus, Richard J. (2002), Molecular Theory of Fluids: Theory, Algorithms and Object-Orientation, Elsevier, p. 231, ISBN 0-444-51082-6
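As an illustration (not from the original article), the direct variant can be sketched in Python for a one-dimensional harmonic oscillator; note that bootstrapping the unavailable a(t − Δt) at the first step with a(0) is an ad hoc choice made here for simplicity:

```python
import math

def beeman(accel, x0, v0, dt, n_steps):
    """Direct (explicit) Beeman integration of x'' = A(x) in one dimension.

    The scheme needs a(t - dt); at the very first step we bootstrap it
    with a(0), which introduces only a small one-time error."""
    x, v = x0, v0
    a = accel(x)
    a_prev = a                          # bootstrap: pretend a(-dt) = a(0)
    traj = [x]
    for _ in range(n_steps):
        x = x + v * dt + (dt * dt / 6.0) * (4.0 * a - a_prev)
        a_new = accel(x)                # acceleration at the new position
        v = v + (dt / 6.0) * (2.0 * a_new + 5.0 * a - a_prev)
        a_prev, a = a, a_new
        traj.append(x)
    return traj

# Harmonic oscillator x'' = -x with x(0) = 1, v(0) = 0, so x(t) = cos(t).
traj = beeman(lambda x: -x, 1.0, 0.0, 0.01, 100)
err = abs(traj[-1] - math.cos(1.0))
```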
Wikipedia/Beeman's_algorithm
The Euler pump and turbine equations are the most fundamental equations in the field of turbomachinery. These equations govern the power, efficiencies and other factors that contribute to the design of turbomachines. With the help of these equations the head developed by a pump and the head utilised by a turbine can be easily determined. As the name suggests, these equations were formulated by Leonhard Euler in the eighteenth century. These equations can be derived from the moment of momentum equation when applied to a pump or a turbine. == Conservation of angular momentum == A consequence of Newton's second law of mechanics is the conservation of the angular momentum (or the “moment of momentum”) which is fundamental to all turbomachines. Accordingly, the change of the angular momentum is equal to the sum of the external moments. The variation of angular momentum ρ ⋅ Q ⋅ r ⋅ c u {\displaystyle \rho \cdot Q\cdot r\cdot c_{u}} at inlet and outlet, an external torque M {\displaystyle M} and friction moments due to shear stresses M τ {\displaystyle M_{\tau }} act on an impeller or a diffuser. Since no pressure forces are created on cylindrical surfaces in the circumferential direction, it is possible to write: ρ Q ( c 2 u r 2 − c 1 u r 1 ) = M + M τ {\displaystyle \rho Q(c_{2u}r_{2}-c_{1u}r_{1})=M+M_{\tau }\,} (1.13) c 2 u = c 2 cos ⁡ α 2 {\displaystyle c_{2u}=c_{2}\cos \alpha _{2}\,} c 1 u = c 1 cos ⁡ α 1 . {\displaystyle c_{1u}=c_{1}\cos \alpha _{1}.\,} == Velocity triangles == The colored triangles formed by the velocity vectors u, c and w are called velocity triangles and are helpful in explaining how pumps work. c 1 {\displaystyle c_{1}\,} and c 2 {\displaystyle c_{2}\,} are the absolute velocities of the fluid at the inlet and outlet respectively. w 1 {\displaystyle w_{1}\,} and w 2 {\displaystyle w_{2}\,} are the relative velocities of the fluid with respect to the blade at the inlet and outlet respectively. 
u 1 {\displaystyle u_{1}\,} and u 2 {\displaystyle u_{2}\,} are the velocities of the blade at the inlet and outlet respectively. ω {\displaystyle \omega } is angular velocity. Figures 'a' and 'b' show impellers with backward and forward-curved vanes respectively. == Euler's pump equation == Based on Eq.(1.13), Euler developed the equation for the pressure head created by an impeller: Y t h = H t ⋅ g = c 2 u u 2 − c 1 u u 1 {\displaystyle Y_{th}=H_{t}\cdot g=c_{2u}u_{2}-c_{1u}u_{1}} (1) Y t h = 1 / 2 ( u 2 2 − u 1 2 + w 1 2 − w 2 2 + c 2 2 − c 1 2 ) {\displaystyle Y_{th}=1/2(u_{2}^{2}-u_{1}^{2}+w_{1}^{2}-w_{2}^{2}+c_{2}^{2}-c_{1}^{2})} (2) Yth : theoretical specific supply; Ht : theoretical head pressure; g: gravitational acceleration For the case of a Pelton turbine the static component of the head is zero, hence the equation reduces to: H = 1 2 g ( V 1 2 − V 2 2 ) . {\displaystyle H={1 \over {2g}}(V_{1}^{2}-V_{2}^{2}).\,} == Usage == Euler’s pump and turbine equations can be used to predict the effect that changing the impeller geometry has on the head. Qualitative estimations can be made from the impeller geometry about the performance of the turbine/pump. This equation can be written as rothalpy invariance: I = h 0 − u c u {\displaystyle I=h_{0}-uc_{u}} where I {\displaystyle I} is constant across the rotor blade. == See also == Euler equations (fluid dynamics) List of topics named after Leonhard Euler Rothalpy == References ==
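To make Euler's pump equation concrete, here is a small Python computation of the theoretical head; all numbers are hypothetical and chosen purely for illustration:

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def euler_head(u2, c2, alpha2, u1=0.0, c1=0.0, alpha1=0.0):
    """Theoretical head H_t = (c_2u * u_2 - c_1u * u_1) / g from Euler's
    pump equation, with tangential components c_iu = c_i * cos(alpha_i).
    Angles are in radians; velocities in m/s."""
    c2u = c2 * math.cos(alpha2)
    c1u = c1 * math.cos(alpha1)
    return (c2u * u2 - c1u * u1) / g

# Hypothetical impeller: purely radial inflow (c_1u = 0), blade speed
# u_2 = 20 m/s, absolute outlet velocity c_2 = 15 m/s at 30 degrees
# from the tangential direction.
H = euler_head(20.0, 15.0, math.radians(30.0))
```

For these made-up numbers the theoretical head comes out to roughly 26.5 m; a real pump develops less because of slip and losses.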
Wikipedia/Euler's_pump_and_turbine_equation
In mathematics, a stiff equation is a differential equation for which certain numerical methods for solving the equation are numerically unstable, unless the step size is taken to be extremely small. It has proven difficult to formulate a precise definition of stiffness, but the main idea is that the equation includes some terms that can lead to rapid variation in the solution. When integrating a differential equation numerically, one would expect the requisite step size to be relatively small in a region where the solution curve displays much variation and to be relatively large where the solution curve straightens out to approach a line with slope nearly zero. For some problems this is not the case. In order for a numerical method to give a reliable solution to the differential system, sometimes the step size is required to be at an unacceptably small level in a region where the solution curve is very smooth. The phenomenon is known as stiffness. In some cases there may be two different problems with the same solution, yet one is not stiff and the other is. The phenomenon cannot therefore be a property of the exact solution, since this is the same for both problems, and must be a property of the differential system itself. Such systems are thus known as stiff systems. == Motivating example == Consider the initial value problem y ′ ( t ) = − 15 y ( t ) , t ≥ 0 , y ( 0 ) = 1. {\displaystyle y'(t)=-15y(t),\quad t\geq 0,\quad y(0)=1.} (1) The exact solution (shown in cyan) is y ( t ) = e − 15 t , {\displaystyle y(t)=e^{-15t},} (2) which decays to zero as t → ∞ . {\displaystyle t\to \infty .} We seek a numerical solution that exhibits the same behavior. The figure (right) illustrates the numerical issues for various numerical integrators applied on the equation. One of the most prominent examples of the stiff ordinary differential equations (ODEs) is a system that describes the chemical reaction of Robertson: x ˙ = − 0.04 x + 10 4 y z y ˙ = 0.04 x − 10 4 y z − 3 ⋅ 10 7 y 2 z ˙ = 3 ⋅ 10 7 y 2 {\displaystyle {\begin{aligned}{\dot {x}}&=-0.04x+10^{4}yz\\{\dot {y}}&=0.04x-10^{4}yz-3\cdot 10^{7}y^{2}\\{\dot {z}}&=3\cdot 10^{7}y^{2}\end{aligned}}} (3) If one treats this system on a short interval, for example, t ∈ [ 0 , 40 ] {\displaystyle t\in [0,40]} there is no problem in numerical integration. However, if the interval is very large ( 10 11 {\displaystyle 10^{11}} say), then many standard codes fail to integrate it correctly. 
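The step-size restriction that characterizes stiffness can be seen directly on the linear test problem y′ = −15y, y(0) = 1 (a standard stiff example; the step size below is deliberately chosen to violate the explicit Euler stability bound h < 2/15):

```python
def explicit_euler(lam, y0, h, n):
    """Explicit Euler for y' = lam * y: multiplies by (1 + h*lam) each step."""
    y = y0
    for _ in range(n):
        y = (1.0 + h * lam) * y
    return y

def implicit_euler(lam, y0, h, n):
    """Implicit (backward) Euler for y' = lam * y: the implicit equation
    y_{n+1} = y_n + h*lam*y_{n+1} has the closed-form solve below."""
    y = y0
    for _ in range(n):
        y = y / (1.0 - h * lam)
    return y

# y' = -15*y, y(0) = 1 on [0, 1]; the exact solution decays to about 3e-7.
# With h = 1/4 the explicit method is unstable (|1 + h*lam| = 2.75 > 1)
# and its iterates grow, while implicit Euler decays for any h > 0.
h, n = 0.25, 4
ye = explicit_euler(-15.0, 1.0, h, n)
yi = implicit_euler(-15.0, 1.0, h, n)
```

The explicit iterates blow up while the implicit ones decay, which is exactly the behavior the stability discussion below formalizes in terms of eigenvalues with large negative real part.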
== Stiffness ratio == Consider the linear constant coefficient inhomogeneous system y ′ = A y + f ( x ) , {\displaystyle \mathbf {y} '=\mathbf {A} \mathbf {y} +\mathbf {f} (x),} (5) where y , f ∈ R n {\displaystyle \mathbf {y} ,\mathbf {f} \in \mathbb {R} ^{n}} and A {\displaystyle \mathbf {A} } is a constant, diagonalizable, n × n {\displaystyle n\times n} matrix with eigenvalues λ t ∈ C , t = 1 , 2 , … , n {\displaystyle \lambda _{t}\in \mathbb {C} ,t=1,2,\ldots ,n} (assumed distinct) and corresponding eigenvectors c t ∈ C n , t = 1 , 2 , … , n {\displaystyle \mathbf {c} _{t}\in \mathbb {C} ^{n},t=1,2,\ldots ,n} . The general solution of (5) takes the form y ( x ) = ∑ t = 1 n κ t e λ t x c t + g ( x ) , {\displaystyle \mathbf {y} (x)=\sum _{t=1}^{n}\kappa _{t}e^{\lambda _{t}x}\mathbf {c} _{t}+\mathbf {g} (x),} (6) where the κ t {\displaystyle \kappa _{t}} are arbitrary constants and g ( x ) {\displaystyle \mathbf {g} (x)} is a particular integral. Now let us suppose that Re ⁡ ( λ t ) < 0 , t = 1 , 2 , … , n , {\displaystyle \operatorname {Re} (\lambda _{t})<0,\qquad t=1,2,\ldots ,n,} (7) which implies that each of the terms e λ t x c t → 0 {\displaystyle e^{\lambda _{t}x}\mathbf {c} _{t}\to 0} as x → ∞ {\displaystyle x\to \infty } , so that the solution y ( x ) {\displaystyle \mathbf {y} (x)} approaches g ( x ) {\displaystyle \mathbf {g} (x)} asymptotically as x → ∞ {\displaystyle x\to \infty } ; the term e λ t x c t {\displaystyle e^{\lambda _{t}x}\mathbf {c} _{t}} will decay monotonically if λ t {\displaystyle \lambda _{t}} is real and sinusoidally if λ t {\displaystyle \lambda _{t}} is complex. Interpreting x {\displaystyle x} to be time (as it often is in physical problems), ∑ t = 1 n κ t e λ t x c t {\textstyle \sum _{t=1}^{n}\kappa _{t}e^{\lambda _{t}x}\mathbf {c} _{t}} is called the transient solution and g ( x ) {\displaystyle \mathbf {g} (x)} the steady-state solution. 
If | Re ⁡ ( λ t ) | {\displaystyle \left|\operatorname {Re} (\lambda _{t})\right|} is large, then the corresponding term κ t e λ t x c t {\displaystyle \kappa _{t}e^{\lambda _{t}x}\mathbf {c} _{t}} will decay quickly as x {\displaystyle x} increases and is thus called a fast transient; if | Re ⁡ ( λ t ) | {\displaystyle \left|\operatorname {Re} (\lambda _{t})\right|} is small, the corresponding term κ t e λ t x c t {\displaystyle \kappa _{t}e^{\lambda _{t}x}\mathbf {c} _{t}} decays slowly and is called a slow transient. Let λ ¯ , λ _ ∈ { λ t , t = 1 , 2 , … , n } {\displaystyle {\overline {\lambda }},{\underline {\lambda }}\in \{\lambda _{t},t=1,2,\ldots ,n\}} be defined by | Re ⁡ ( λ ¯ ) | ≥ | Re ⁡ ( λ t ) | ≥ | Re ⁡ ( λ _ ) | , t = 1 , 2 , … , n , {\displaystyle \left|\operatorname {Re} ({\overline {\lambda }})\right|\geq \left|\operatorname {Re} (\lambda _{t})\right|\geq \left|\operatorname {Re} ({\underline {\lambda }})\right|,\quad t=1,2,\ldots ,n,} so that κ t e λ ¯ x c t {\displaystyle \kappa _{t}e^{{\overline {\lambda }}x}\mathbf {c} _{t}} is the fastest transient and κ t e λ _ x c t {\displaystyle \kappa _{t}e^{{\underline {\lambda }}x}\mathbf {c} _{t}} the slowest. We now define the stiffness ratio as | Re ⁡ ( λ ¯ ) | | Re ⁡ ( λ _ ) | . {\displaystyle {\frac {\left|\operatorname {Re} ({\overline {\lambda }})\right|}{\left|\operatorname {Re} ({\underline {\lambda }})\right|}}.} == Characterization of stiffness == In this section we consider various aspects of the phenomenon of stiffness. "Phenomenon" is probably a more appropriate word than "property", since the latter rather implies that stiffness can be defined in precise mathematical terms; it turns out not to be possible to do this in a satisfactory manner, even for the restricted class of linear constant coefficient systems. We shall also see several qualitative statements that can be (and mostly have been) made in an attempt to encapsulate the notion of stiffness, and state what is probably the most satisfactory of these as a "definition" of stiffness. J. D. Lambert defines stiffness as follows: If a numerical method with a finite region of absolute stability, applied to a system with any initial conditions, is forced to use in a certain interval of integration a step length which is excessively small in relation to the smoothness of the exact solution in that interval, then the system is said to be stiff in that interval.
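The stiffness ratio just defined is easy to compute from a list of eigenvalues; a small sketch (names illustrative, not from the article):

```python
# Stiffness ratio |Re(fastest)| / |Re(slowest)| for eigenvalues with
# negative real part.

def stiffness_ratio(eigenvalues):
    rates = [abs(ev.real) for ev in eigenvalues]
    return max(rates) / min(rates)

# A system with eigenvalues -1000 and -1 mixes a fast and a slow transient.
ratio = stiffness_ratio([-1000.0, -1.0])
```

A ratio near 1 indicates all transients decay at comparable rates; a large ratio is one of the common indicators of stiffness discussed below.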
There are other characteristics which are exhibited by many examples of stiff problems, but for each there are counterexamples, so these characteristics do not make good definitions of stiffness. Nonetheless, definitions based upon these characteristics are in common use by some authors and are good clues as to the presence of stiffness. Lambert refers to these as "statements" rather than definitions, for the aforementioned reasons. A few of these are: 1. A linear constant coefficient system is stiff if all of its eigenvalues have negative real part and the stiffness ratio is large. 2. Stiffness occurs when stability requirements, rather than those of accuracy, constrain the step length. 3. Stiffness occurs when some components of the solution decay much more rapidly than others. == Etymology == The origin of the term "stiffness" has not been clearly established. According to Joseph Oakland Hirschfelder, the term "stiff" is used because such systems correspond to tight coupling between the driver and driven in servomechanisms. According to Richard L. Burden and J. Douglas Faires, Significant difficulties can occur when standard numerical techniques are applied to approximate the solution of a differential equation when the exact solution contains terms of the form e λ t {\displaystyle e^{\lambda t}} , where λ {\displaystyle \lambda } is a complex number with negative real part. . . . Problems involving rapidly decaying transient solutions occur naturally in a wide variety of applications, including the study of spring and damping systems, the analysis of control systems, and problems in chemical kinetics. These are all examples of a class of problems called stiff (mathematical stiffness) systems of differential equations, due to their application in analyzing the motion of spring and mass systems having large spring constants (physical stiffness).
For example, the initial value problem m x ¨ + c x ˙ + k x = 0 , x ( 0 ) = x 0 , x ˙ ( 0 ) = 0 , {\displaystyle m{\ddot {x}}+c{\dot {x}}+kx=0,\quad x(0)=x_{0},\quad {\dot {x}}(0)=0,} (10) with m = 1 {\displaystyle m=1} , c = 1001 {\displaystyle c=1001} , k = 1000 {\displaystyle k=1000} , can be written in the form (5) with n = 2 {\displaystyle n=2} and A = ( 0 1 − 1000 − 1001 ) , f ( t ) = 0 , {\displaystyle \mathbf {A} ={\begin{pmatrix}0&1\\-1000&-1001\end{pmatrix}},\quad \mathbf {f} (t)=\mathbf {0} ,} and has eigenvalues λ ¯ = − 1000 , λ _ = − 1 {\displaystyle {\overline {\lambda }}=-1000,{\underline {\lambda }}=-1} . Both eigenvalues have negative real part and the stiffness ratio is | − 1000 | | − 1 | = 1000 , {\displaystyle {\frac {|-1000|}{|-1|}}=1000,} which is fairly large. System (10) then certainly satisfies statements 1 and 3. Here the spring constant k {\displaystyle k} is large and the damping constant c {\displaystyle c} is even larger. (While "large" is not a precisely defined term, the larger these quantities are, the more pronounced the effect of stiffness will be.) The exact solution to (10) is x ( t ) = x 0 ( − 1 999 e − 1000 t + 1000 999 e − t ) . {\displaystyle x(t)=x_{0}\left(-{\tfrac {1}{999}}e^{-1000t}+{\tfrac {1000}{999}}e^{-t}\right).} (13) Equation (13) behaves quite similarly to a simple exponential x 0 e − t {\displaystyle x_{0}e^{-t}} , but the presence of the e − 1000 t {\displaystyle e^{-1000t}} term, even with a small coefficient, is enough to make the numerical computation very sensitive to step size. Stable integration of (10) requires a very small step size until well into the smooth part of the solution curve, resulting in an error much smaller than required for accuracy. Thus the system also satisfies statement 2 and Lambert's definition. == A-stability == The behaviour of numerical methods on stiff problems can be analyzed by applying these methods to the test equation y ′ = k y {\displaystyle y'=ky} subject to the initial condition y ( 0 ) = 1 {\displaystyle y(0)=1} with k ∈ C {\displaystyle k\in \mathbb {C} } . The solution of this equation is y ( t ) = e k t {\displaystyle y(t)=e^{kt}} . This solution approaches zero as t → ∞ {\displaystyle t\to \infty } when Re ⁡ ( k ) < 0. {\displaystyle \operatorname {Re} (k)<0.} If the numerical method also exhibits this behaviour (for a fixed step size), then the method is said to be A-stable.
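The eigenvalues quoted for the mass–spring example above can be checked numerically; a minimal sketch (not from the article), using the fact that the first-order form of m x″ + c x′ + k x = 0 has the companion matrix [[0, 1], [−k, −c]] (for m = 1), whose eigenvalues solve λ² + cλ + k = 0:

```python
import math

# Characteristic equation lambda^2 + c*lambda + k = 0 for m = 1,
# c = 1001, k = 1000.
c, k = 1001.0, 1000.0
disc = math.sqrt(c * c - 4.0 * k)
lam_fast = (-c - disc) / 2.0   # fastest transient
lam_slow = (-c + disc) / 2.0   # slowest transient
stiffness_ratio = abs(lam_fast) / abs(lam_slow)
```

The two decay rates differ by three orders of magnitude, which is exactly what forces an explicit integrator to track a long-dead transient.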
A numerical method that is L-stable (see below) has the stronger property that the solution approaches zero in a single step as the step size goes to infinity. A-stable methods do not exhibit the instability problems as described in the motivating example. == Runge–Kutta methods == Runge–Kutta methods applied to the test equation y ′ = k ⋅ y {\displaystyle y'=k\cdot y} take the form y n + 1 = ϕ ( h k ) ⋅ y n {\displaystyle y_{n+1}=\phi (hk)\cdot y_{n}} , and, by induction, y n = ( ϕ ( h k ) ) n ⋅ y 0 {\displaystyle y_{n}={\bigl (}\phi (hk){\bigr )}^{n}\cdot y_{0}} . The function ϕ {\displaystyle \phi } is called the stability function. Thus, the condition that y n → 0 {\displaystyle y_{n}\to 0} as n → ∞ {\displaystyle n\to \infty } is equivalent to | ϕ ( h k ) | < 1 {\displaystyle |\phi (hk)|<1} . This motivates the definition of the region of absolute stability (sometimes referred to simply as stability region), which is the set { z ∈ C | | ϕ ( z ) | < 1 } {\displaystyle {\bigl \{}z\in \mathbb {C} \,{\big |}\,|\phi (z)|<1{\bigr \}}} . The method is A-stable if the region of absolute stability contains the set { z ∈ C | Re ⁡ ( z ) < 0 } {\displaystyle {\bigl \{}z\in \mathbb {C} \,{\big |}\,\operatorname {Re} (z)<0{\bigr \}}} , that is, the left half plane. === Example: The Euler methods === Consider the Euler methods above. The explicit Euler method applied to the test equation y ′ = k ⋅ y {\displaystyle y'=k\cdot y} is y n + 1 = y n + h ⋅ f ( t n , y n ) = y n + h ⋅ ( k y n ) = y n + h ⋅ k ⋅ y n = ( 1 + h ⋅ k ) y n . {\displaystyle y_{n+1}=y_{n}+h\cdot f(t_{n},y_{n})=y_{n}+h\cdot (ky_{n})=y_{n}+h\cdot k\cdot y_{n}=(1+h\cdot k)y_{n}.} Hence, y n = ( 1 + h k ) n ⋅ y 0 {\displaystyle y_{n}=(1+hk)^{n}\cdot y_{0}} with ϕ ( z ) = 1 + z {\displaystyle \phi (z)=1+z} . The region of absolute stability for this method is thus { z ∈ C | | 1 + z | < 1 } {\displaystyle {\bigl \{}z\in \mathbb {C} \,{\big |}\,|1+z|<1{\bigr \}}} which is the disk depicted on the right. 
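Membership in the explicit Euler stability region is just the disk test |1 + z| < 1; a quick numeric check (illustrative, not from the article):

```python
# Stability region of explicit Euler: the open disk |1 + z| < 1.

def in_euler_region(z):
    return abs(1 + z) < 1

stable_small = in_euler_region(-1.0)          # well inside the disk
unstable_real = in_euler_region(-3.75)        # negative real axis, outside
unstable_imag = in_euler_region(-0.1 + 10j)   # left half-plane, outside the disk
```

The last point shows why the method is not A-stable: it lies in the left half-plane (the exact solution decays) yet falls outside the disk.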
The Euler method is not A-stable. The motivating example had k = − 15 {\displaystyle k=-15} . The value of z when taking step size h = 1 4 {\displaystyle h={\tfrac {1}{4}}} is z = − 15 × 1 4 = − 3.75 {\displaystyle z=-15\times {\tfrac {1}{4}}=-3.75} , which is outside the stability region. Indeed, the numerical results do not converge to zero. However, with step size h = 1 8 {\displaystyle h={\tfrac {1}{8}}} , we have z = − 1.875 {\displaystyle z=-1.875} which is just inside the stability region and the numerical results converge to zero, albeit rather slowly. === Example: Trapezoidal method === Consider the trapezoidal method y n + 1 = y n + 1 2 h ⋅ ( f ( t n , y n ) + f ( t n + 1 , y n + 1 ) ) , {\displaystyle y_{n+1}=y_{n}+{\tfrac {1}{2}}h\cdot {\bigl (}f(t_{n},y_{n})+f(t_{n+1},y_{n+1}){\bigr )},} when applied to the test equation y ′ = k ⋅ y {\displaystyle y'=k\cdot y} , is y n + 1 = y n + 1 2 h ⋅ ( k y n + k y n + 1 ) . {\displaystyle y_{n+1}=y_{n}+{\tfrac {1}{2}}h\cdot \left(ky_{n}+ky_{n+1}\right).} Solving for y n + 1 {\displaystyle y_{n+1}} yields y n + 1 = 1 + 1 2 h k 1 − 1 2 h k ⋅ y n . {\displaystyle y_{n+1}={\frac {1+{\frac {1}{2}}hk}{1-{\frac {1}{2}}hk}}\cdot y_{n}.} Thus, the stability function is ϕ ( z ) = 1 + 1 2 z 1 − 1 2 z {\displaystyle \phi (z)={\frac {1+{\frac {1}{2}}z}{1-{\frac {1}{2}}z}}} and the region of absolute stability is { z ∈ C | | 1 + 1 2 z 1 − 1 2 z | < 1 } . {\displaystyle \left\{z\in \mathbb {C} \ \left|\ \left|{\frac {1+{\frac {1}{2}}z}{1-{\frac {1}{2}}z}}\right|<1\right.\right\}.} This region contains the left half-plane, so the trapezoidal method is A-stable. In fact, the stability region is identical to the left half-plane, and thus the numerical solution of y ′ = k ⋅ y {\displaystyle y'=k\cdot y} converges to zero if and only if the exact solution does. 
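The A-stability of the trapezoidal method can be probed numerically as well; a sketch (not from the article) that evaluates its stability function at sample points of the left half-plane, including the value z = −3.75 that defeated explicit Euler:

```python
# Trapezoidal stability function phi(z) = (1 + z/2) / (1 - z/2);
# |phi(z)| < 1 exactly on the left half-plane.

def phi_trap(z):
    return (1 + z / 2) / (1 - z / 2)

samples_left = [-0.01, -3.75, -1000.0, -2 + 30j, -0.5 - 5j]
all_stable = all(abs(phi_trap(z)) < 1 for z in samples_left)
right_point_unstable = abs(phi_trap(1.0)) > 1
```

Note also that phi_trap(z) tends to −1 in modulus as z → −∞, which is the mild damping of fast transients that motivates L-stability in the next paragraph of the article.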
Nevertheless, the trapezoidal method does not have perfect behavior: it does damp all decaying components, but rapidly decaying components are damped only very mildly, because ϕ ( z ) → 1 {\displaystyle \phi (z)\to 1} as z → − ∞ {\displaystyle z\to -\infty } . This led to the concept of L-stability: a method is L-stable if it is A-stable and | ϕ ( z ) | → 0 {\displaystyle |\phi (z)|\to 0} as z → ∞ {\displaystyle z\to \infty } . The trapezoidal method is A-stable but not L-stable. The implicit Euler method is an example of an L-stable method. === General theory === The stability function of a Runge–Kutta method with coefficients A {\displaystyle \mathbf {A} } and b {\displaystyle \mathbf {b} } is given by ϕ ( z ) = det ( I − z A + z e b T ) det ( I − z A ) , {\displaystyle \phi (z)={\frac {\det \left(\mathbf {I} -z\mathbf {A} +z\mathbf {e} \mathbf {b} ^{\mathsf {T}}\right)}{\det(\mathbf {I} -z\mathbf {A} )}},} where e {\displaystyle \mathbf {e} } denotes the vector with all ones. This is a rational function (one polynomial divided by another). Explicit Runge–Kutta methods have a strictly lower triangular coefficient matrix A {\displaystyle \mathbf {A} } and thus, their stability function is a polynomial. It follows that explicit Runge–Kutta methods cannot be A-stable. The stability function of implicit Runge–Kutta methods is often analyzed using order stars. The order star for a method with stability function ϕ {\displaystyle \phi } is defined to be the set { z ∈ C | | ϕ ( z ) | > | e z | } {\displaystyle {\bigl \{}z\in \mathbb {C} \,{\big |}\,|\phi (z)|>|e^{z}|{\bigr \}}} . A method is A-stable if and only if its stability function has no poles in the left-hand plane and its order star contains no purely imaginary numbers. == Multistep methods == Linear multistep methods have the form y n + 1 = ∑ i = 0 s a i y n − i + h ∑ j = − 1 s b j f ( t n − j , y n − j ) . 
{\displaystyle y_{n+1}=\sum _{i=0}^{s}a_{i}y_{n-i}+h\sum _{j=-1}^{s}b_{j}f\left(t_{n-j},y_{n-j}\right).} Applied to the test equation, they become y n + 1 = ∑ i = 0 s a i y n − i + h k ∑ j = − 1 s b j y n − j , {\displaystyle y_{n+1}=\sum _{i=0}^{s}a_{i}y_{n-i}+hk\sum _{j=-1}^{s}b_{j}y_{n-j},} which can be simplified to ( 1 − b − 1 z ) y n + 1 − ∑ j = 0 s ( a j + b j z ) y n − j = 0 {\displaystyle \left(1-b_{-1}z\right)y_{n+1}-\sum _{j=0}^{s}\left(a_{j}+b_{j}z\right)y_{n-j}=0} where z = h k {\displaystyle z=hk} . This is a linear recurrence relation. The method is A-stable if all solutions { y n } {\displaystyle \{y_{n}\}} of the recurrence relation converge to zero when Re ⁡ ( z ) < 0 {\displaystyle \operatorname {Re} (z)<0} . The characteristic polynomial is Φ ( z , w ) = w s + 1 − ∑ i = 0 s a i w s − i − z ∑ j = − 1 s b j w s − j . {\displaystyle \Phi (z,w)=w^{s+1}-\sum _{i=0}^{s}a_{i}w^{s-i}-z\sum _{j=-1}^{s}b_{j}w^{s-j}.} All solutions converge to zero for a given value of z {\displaystyle z} if all solutions w {\displaystyle w} of Φ ( z , w ) = 0 {\displaystyle \Phi (z,w)=0} lie strictly inside the unit circle. The region of absolute stability for a multistep method of the above form is then the set of all z ∈ C {\displaystyle z\in \mathbb {C} } for which all w {\displaystyle w} such that Φ ( z , w ) = 0 {\displaystyle \Phi (z,w)=0} satisfy | w | < 1 {\displaystyle |w|<1} . Again, if this set contains the left half-plane, the multi-step method is said to be A-stable. === Example: The second-order Adams–Bashforth method === Let us determine the region of absolute stability for the two-step Adams–Bashforth method y n + 1 = y n + h ( 3 2 f ( t n , y n ) − 1 2 f ( t n − 1 , y n − 1 ) ) . 
{\displaystyle y_{n+1}=y_{n}+h\left({\tfrac {3}{2}}f(t_{n},y_{n})-{\tfrac {1}{2}}f(t_{n-1},y_{n-1})\right).} The characteristic polynomial is Φ ( w , z ) = w 2 − ( 1 + 3 2 z ) w + 1 2 z = 0 {\displaystyle \Phi (w,z)=w^{2}-\left(1+{\tfrac {3}{2}}z\right)w+{\tfrac {1}{2}}z=0} which has roots w = 1 2 ( 1 + 3 2 z ± 1 + z + 9 4 z 2 ) , {\displaystyle w={\tfrac {1}{2}}\left(1+{\tfrac {3}{2}}z\pm {\sqrt {1+z+{\tfrac {9}{4}}z^{2}}}\right),} thus the region of absolute stability is { z ∈ C | | 1 2 ( 1 + 3 2 z ± 1 + z + 9 4 z 2 ) | < 1 } . {\displaystyle \left\{z\in \mathbb {C} \ \left|\ \left|{\tfrac {1}{2}}\left(1+{\tfrac {3}{2}}z\pm {\sqrt {1+z+{\tfrac {9}{4}}z^{2}}}\right)\right|<1\right.\right\}.} This region is shown on the right. It does not include the whole left half-plane (in fact it includes only the interval − 1 ≤ z ≤ 0 {\displaystyle -1\leq z\leq 0} of the real axis), so the Adams–Bashforth method is not A-stable. === General theory === Explicit multistep methods can never be A-stable, just like explicit Runge–Kutta methods. Implicit multistep methods can only be A-stable if their order is at most 2. The latter result is known as the second Dahlquist barrier; it restricts the usefulness of linear multistep methods for stiff equations. An example of a second-order A-stable method is the trapezoidal rule mentioned above, which can also be considered as a linear multistep method. == See also == Backward differentiation formula, a family of implicit methods especially used for the solution of stiff differential equations Condition number Differential inclusion, an extension of the notion of differential equation that allows discontinuities, in part as a way to sidestep some stiffness issues Explicit and implicit methods == Notes == == References == Burden, Richard L.; Faires, J. Douglas (1993), Numerical Analysis (5th ed.), Boston: Prindle, Weber and Schmidt, ISBN 0-534-93219-3. 
Dahlquist, Germund (1963), "A special stability problem for linear multistep methods", BIT, 3 (1): 27–43, doi:10.1007/BF01963532, hdl:10338.dmlcz/103497, S2CID 120241743. Eberly, David (2008), Stability analysis for systems of differential equations (PDF). Ehle, B. L. (1969), On Padé approximations to the exponential function and A-stable methods for the numerical solution of initial value problems (PDF), University of Waterloo. Gear, C. W. (1971), Numerical Initial-Value Problems in Ordinary Differential Equations, Englewood Cliffs: Prentice Hall, Bibcode:1971nivp.book.....G. Gear, C. W. (1981), "Numerical solution of ordinary differential equations: Is there anything left to do?", SIAM Review, 23 (1): 10–24, doi:10.1137/1023002. Hairer, Ernst; Wanner, Gerhard (1996), Solving ordinary differential equations II: Stiff and differential-algebraic problems (second ed.), Berlin: Springer-Verlag, ISBN 978-3-540-60452-5. Hirshfelder, J. O. (1963), "Applied Mathematics as used in Theoretical Chemistry", American Mathematical Society Symposium: 367–376. Iserles, Arieh; Nørsett, Syvert (1991), Order Stars, Chapman & Hall, ISBN 978-0-412-35260-7. Kreyszig, Erwin (1972), Advanced Engineering Mathematics (3rd ed.), New York: Wiley, ISBN 0-471-50728-8. Lambert, J. D. (1977), D. Jacobs (ed.), "The initial value problem for ordinary differential equations", The State of the Art in Numerical Analysis, New York: Academic Press: 451–501. Lambert, J. D. (1992), Numerical Methods for Ordinary Differential Systems, New York: Wiley, ISBN 978-0-471-92990-1. Mathews, John; Fink, Kurtis (1992), Numerical methods using MATLAB. Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 17.5. Stiff Sets of Equations". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. Archived from the original on 2011-08-11. Retrieved 2011-08-17. Shampine, L. F.; Gear, C. W. 
(1979), "A user's view of solving stiff ordinary differential equations", SIAM Review, 21 (1): 1–17, doi:10.1137/1021001. Wanner, Gerhard; Hairer, Ernst; Nørsett, Syvert (1978), "Order stars and stability theory", BIT, 18 (4): 475–489, doi:10.1007/BF01932026, S2CID 8824105. Stability of Runge-Kutta Methods == External links == An Introduction to Physically Based Modeling: Energy Functions and Stiffness Stiff systems Lawrence F. Shampine and Skip Thompson Scholarpedia, 2(3):2855. doi:10.4249/scholarpedia.2855
Wikipedia/Stiff_equation
Linear multistep methods are used for the numerical solution of ordinary differential equations. Conceptually, a numerical method starts from an initial point and then takes a short step forward in time to find the next solution point. The process continues with subsequent steps to map out the solution. Single-step methods (such as Euler's method) refer to only one previous point and its derivative to determine the current value. Methods such as Runge–Kutta take some intermediate steps (for example, a half-step) to obtain a higher order method, but then discard all previous information before taking a second step. Multistep methods attempt to gain efficiency by keeping and using the information from previous steps rather than discarding it. Consequently, multistep methods refer to several previous points and derivative values. In the case of linear multistep methods, a linear combination of the previous points and derivative values is used. == Definitions == Numerical methods for ordinary differential equations approximate solutions to initial value problems of the form y ′ = f ( t , y ) , y ( t 0 ) = y 0 . {\displaystyle y'=f(t,y),\quad y(t_{0})=y_{0}.} The result is approximations for the value of y ( t ) {\displaystyle y(t)} at discrete times t i {\displaystyle t_{i}} : y i ≈ y ( t i ) where t i = t 0 + i h , {\displaystyle y_{i}\approx y(t_{i})\quad {\text{where}}\quad t_{i}=t_{0}+ih,} where h {\displaystyle h} is the time step (sometimes referred to as Δ t {\displaystyle \Delta t} ) and i {\displaystyle i} is an integer. Multistep methods use information from the previous s {\displaystyle s} steps to calculate the next value. In particular, a linear multistep method uses a linear combination of y i {\displaystyle y_{i}} and f ( t i , y i ) {\displaystyle f(t_{i},y_{i})} to calculate the value of y {\displaystyle y} for the desired current step. 
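The general form just described, a linear combination of previous values and previous derivative values, can be sketched directly; the following is a hedged illustration (helper name `explicit_lms_step` is invented here, not from any library) of the explicit case:

```python
# One step of an explicit linear multistep method with a_s = 1 and b_s = 0:
#   y_{n+s} = -sum_j a_j y_{n+j} + h * sum_j b_j f(t_{n+j}, y_{n+j})

def explicit_lms_step(a, b, f, ts, ys, h):
    # a = [a_0, ..., a_{s-1}], b = [b_0, ..., b_{s-1}];
    # ts, ys hold the s previous times and values.
    s = len(a)
    return (-sum(a[j] * ys[j] for j in range(s))
            + h * sum(b[j] * f(ts[j], ys[j]) for j in range(s)))

# Euler's method is the one-step case a = [-1], b = [1]; on y' = y, y(0) = 1:
y1 = explicit_lms_step([-1.0], [1.0], lambda t, y: y, [0.0], [1.0], 0.5)
# The two-step Adams-Bashforth method is a = [0, -1], b = [-1/2, 3/2].
```

Choosing different coefficient lists `a` and `b` recovers the families of methods defined below.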
Thus, a linear multistep method is a method of the form y n + s + a s − 1 ⋅ y n + s − 1 + a s − 2 ⋅ y n + s − 2 + ⋯ + a 0 ⋅ y n = h ⋅ ( b s ⋅ f ( t n + s , y n + s ) + b s − 1 ⋅ f ( t n + s − 1 , y n + s − 1 ) + ⋯ + b 0 ⋅ f ( t n , y n ) ) ⇔ ∑ j = 0 s a j y n + j = h ∑ j = 0 s b j f ( t n + j , y n + j ) , {\displaystyle {\begin{aligned}&y_{n+s}+a_{s-1}\cdot y_{n+s-1}+a_{s-2}\cdot y_{n+s-2}+\cdots +a_{0}\cdot y_{n}\\&\qquad {}=h\cdot \left(b_{s}\cdot f(t_{n+s},y_{n+s})+b_{s-1}\cdot f(t_{n+s-1},y_{n+s-1})+\cdots +b_{0}\cdot f(t_{n},y_{n})\right)\\&\Leftrightarrow \sum _{j=0}^{s}a_{j}y_{n+j}=h\sum _{j=0}^{s}b_{j}f(t_{n+j},y_{n+j}),\end{aligned}}} with a s = 1 {\displaystyle a_{s}=1} . The coefficients a 0 , … , a s − 1 {\displaystyle a_{0},\dotsc ,a_{s-1}} and b 0 , … , b s {\displaystyle b_{0},\dotsc ,b_{s}} determine the method. The designer of the method chooses the coefficients, balancing the need to get a good approximation to the true solution against the desire to get a method that is easy to apply. Often, many coefficients are zero to simplify the method. One can distinguish between explicit and implicit methods. If b s = 0 {\displaystyle b_{s}=0} , then the method is called "explicit", since the formula can directly compute y n + s {\displaystyle y_{n+s}} . If b s ≠ 0 {\displaystyle b_{s}\neq 0} then the method is called "implicit", since the value of y n + s {\displaystyle y_{n+s}} depends on the value of f ( t n + s , y n + s ) {\displaystyle f(t_{n+s},y_{n+s})} , and the equation must be solved for y n + s {\displaystyle y_{n+s}} . Iterative methods such as Newton's method are often used to solve the implicit formula. Sometimes an explicit multistep method is used to "predict" the value of y n + s {\displaystyle y_{n+s}} . That value is then used in an implicit formula to "correct" the value. The result is a predictor–corrector method. == Examples == Consider for an example the problem y ′ = f ( t , y ) = y , y ( 0 ) = 1. 
{\displaystyle y'=f(t,y)=y,\quad y(0)=1.} The exact solution is y ( t ) = e t {\displaystyle y(t)=e^{t}} . === One-step Euler === A simple numerical method is Euler's method: y n + 1 = y n + h f ( t n , y n ) . {\displaystyle y_{n+1}=y_{n}+hf(t_{n},y_{n}).} Euler's method can be viewed as an explicit multistep method for the degenerate case of one step. This method, applied with step size h = 1 2 {\displaystyle h={\tfrac {1}{2}}} on the problem y ′ = y {\displaystyle y'=y} , gives the following results: y 1 = y 0 + h f ( t 0 , y 0 ) = 1 + 1 2 ⋅ 1 = 1.5 , y 2 = y 1 + h f ( t 1 , y 1 ) = 1.5 + 1 2 ⋅ 1.5 = 2.25 , y 3 = y 2 + h f ( t 2 , y 2 ) = 2.25 + 1 2 ⋅ 2.25 = 3.375 , y 4 = y 3 + h f ( t 3 , y 3 ) = 3.375 + 1 2 ⋅ 3.375 = 5.0625. {\displaystyle {\begin{aligned}y_{1}&=y_{0}+hf(t_{0},y_{0})=1+{\tfrac {1}{2}}\cdot 1=1.5,\\y_{2}&=y_{1}+hf(t_{1},y_{1})=1.5+{\tfrac {1}{2}}\cdot 1.5=2.25,\\y_{3}&=y_{2}+hf(t_{2},y_{2})=2.25+{\tfrac {1}{2}}\cdot 2.25=3.375,\\y_{4}&=y_{3}+hf(t_{3},y_{3})=3.375+{\tfrac {1}{2}}\cdot 3.375=5.0625.\end{aligned}}} === Two-step Adams–Bashforth === Euler's method is a one-step method. A simple multistep method is the two-step Adams–Bashforth method y n + 2 = y n + 1 + 3 2 h f ( t n + 1 , y n + 1 ) − 1 2 h f ( t n , y n ) . {\displaystyle y_{n+2}=y_{n+1}+{\tfrac {3}{2}}hf(t_{n+1},y_{n+1})-{\tfrac {1}{2}}hf(t_{n},y_{n}).} This method needs two values, y n + 1 {\displaystyle y_{n+1}} and y n {\displaystyle y_{n}} , to compute the next value, y n + 2 {\displaystyle y_{n+2}} . However, the initial value problem provides only one value, y 0 = 1 {\displaystyle y_{0}=1} . One possibility to resolve this issue is to use the y 1 {\displaystyle y_{1}} computed by Euler's method as the second value. 
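The worked values above can be reproduced with a few lines of code; a sketch (not from the article):

```python
# Explicit Euler: y_{k+1} = y_k + h f(t_k, y_k), applied to y' = y, y(0) = 1.

def euler(f, t0, y0, h, n):
    t, y, values = t0, y0, []
    for _ in range(n):
        y = y + h * f(t, y)
        t = t + h
        values.append(y)
    return values

values = euler(lambda t, y: y, 0.0, 1.0, 0.5, 4)   # h = 1/2, four steps
```

Each step multiplies the current value by (1 + h) = 1.5, matching the table in the text.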
With this choice, the Adams–Bashforth method yields (rounded to four digits): y 2 = y 1 + 3 2 h f ( t 1 , y 1 ) − 1 2 h f ( t 0 , y 0 ) = 1.5 + 3 2 ⋅ 1 2 ⋅ 1.5 − 1 2 ⋅ 1 2 ⋅ 1 = 2.375 , y 3 = y 2 + 3 2 h f ( t 2 , y 2 ) − 1 2 h f ( t 1 , y 1 ) = 2.375 + 3 2 ⋅ 1 2 ⋅ 2.375 − 1 2 ⋅ 1 2 ⋅ 1.5 = 3.7812 , y 4 = y 3 + 3 2 h f ( t 3 , y 3 ) − 1 2 h f ( t 2 , y 2 ) = 3.7812 + 3 2 ⋅ 1 2 ⋅ 3.7812 − 1 2 ⋅ 1 2 ⋅ 2.375 = 6.0234. {\displaystyle {\begin{aligned}y_{2}&=y_{1}+{\tfrac {3}{2}}hf(t_{1},y_{1})-{\tfrac {1}{2}}hf(t_{0},y_{0})=1.5+{\tfrac {3}{2}}\cdot {\tfrac {1}{2}}\cdot 1.5-{\tfrac {1}{2}}\cdot {\tfrac {1}{2}}\cdot 1=2.375,\\y_{3}&=y_{2}+{\tfrac {3}{2}}hf(t_{2},y_{2})-{\tfrac {1}{2}}hf(t_{1},y_{1})=2.375+{\tfrac {3}{2}}\cdot {\tfrac {1}{2}}\cdot 2.375-{\tfrac {1}{2}}\cdot {\tfrac {1}{2}}\cdot 1.5=3.7812,\\y_{4}&=y_{3}+{\tfrac {3}{2}}hf(t_{3},y_{3})-{\tfrac {1}{2}}hf(t_{2},y_{2})=3.7812+{\tfrac {3}{2}}\cdot {\tfrac {1}{2}}\cdot 3.7812-{\tfrac {1}{2}}\cdot {\tfrac {1}{2}}\cdot 2.375=6.0234.\end{aligned}}} The exact solution at t = t 4 = 2 {\displaystyle t=t_{4}=2} is e 2 = 7.3891 … {\displaystyle e^{2}=7.3891\ldots } , so the two-step Adams–Bashforth method is more accurate than Euler's method. This is always the case if the step size is small enough. == Families of multistep methods == Three families of linear multistep methods are commonly used: Adams–Bashforth methods, Adams–Moulton methods, and the backward differentiation formulas (BDFs). === Adams–Bashforth methods === The Adams–Bashforth methods are explicit methods. The coefficients are a s − 1 = − 1 {\displaystyle a_{s-1}=-1} and a s − 2 = ⋯ = a 0 = 0 {\displaystyle a_{s-2}=\cdots =a_{0}=0} , while the b j {\displaystyle b_{j}} are chosen such that the methods have order s (this determines the methods uniquely). The Adams–Bashforth methods with s = 1, 2, 3, 4, 5 are (Hairer, Nørsett & Wanner 1993, §III.1; Butcher 2003, p. 
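The two-step Adams–Bashforth computation above can likewise be reproduced; a sketch (not from the article), using the Euler value y₁ = 1.5 as the second starting value:

```python
# Two-step Adams-Bashforth: y_{k+2} = y_{k+1} + h*(3/2 f_{k+1} - 1/2 f_k).

def adams_bashforth_2(f, t_start, y0, y1, h, n):
    ys = [y0, y1]
    for k in range(n):
        t_k, t_k1 = t_start + k * h, t_start + (k + 1) * h
        ys.append(ys[-1] + h * (1.5 * f(t_k1, ys[-1]) - 0.5 * f(t_k, ys[-2])))
    return ys

# y' = y with h = 1/2; the unrounded values are 2.375, 3.78125, 6.0234375.
ys = adams_bashforth_2(lambda t, y: y, 0.0, 1.0, 1.5, 0.5, 3)
```

The final value 6.0234375 is closer to e² ≈ 7.3891 than Euler's 5.0625, as the text observes.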
103): y n + 1 = y n + h f ( t n , y n ) , (This is the Euler method) y n + 2 = y n + 1 + h ( 3 2 f ( t n + 1 , y n + 1 ) − 1 2 f ( t n , y n ) ) , y n + 3 = y n + 2 + h ( 23 12 f ( t n + 2 , y n + 2 ) − 16 12 f ( t n + 1 , y n + 1 ) + 5 12 f ( t n , y n ) ) , y n + 4 = y n + 3 + h ( 55 24 f ( t n + 3 , y n + 3 ) − 59 24 f ( t n + 2 , y n + 2 ) + 37 24 f ( t n + 1 , y n + 1 ) − 9 24 f ( t n , y n ) ) , y n + 5 = y n + 4 + h ( 1901 720 f ( t n + 4 , y n + 4 ) − 2774 720 f ( t n + 3 , y n + 3 ) + 2616 720 f ( t n + 2 , y n + 2 ) − 1274 720 f ( t n + 1 , y n + 1 ) + 251 720 f ( t n , y n ) ) . {\displaystyle {\begin{aligned}y_{n+1}&=y_{n}+hf(t_{n},y_{n}),\qquad {\text{(This is the Euler method)}}\\y_{n+2}&=y_{n+1}+h\left({\frac {3}{2}}f(t_{n+1},y_{n+1})-{\frac {1}{2}}f(t_{n},y_{n})\right),\\y_{n+3}&=y_{n+2}+h\left({\frac {23}{12}}f(t_{n+2},y_{n+2})-{\frac {16}{12}}f(t_{n+1},y_{n+1})+{\frac {5}{12}}f(t_{n},y_{n})\right),\\y_{n+4}&=y_{n+3}+h\left({\frac {55}{24}}f(t_{n+3},y_{n+3})-{\frac {59}{24}}f(t_{n+2},y_{n+2})+{\frac {37}{24}}f(t_{n+1},y_{n+1})-{\frac {9}{24}}f(t_{n},y_{n})\right),\\y_{n+5}&=y_{n+4}+h\left({\frac {1901}{720}}f(t_{n+4},y_{n+4})-{\frac {2774}{720}}f(t_{n+3},y_{n+3})+{\frac {2616}{720}}f(t_{n+2},y_{n+2})-{\frac {1274}{720}}f(t_{n+1},y_{n+1})+{\frac {251}{720}}f(t_{n},y_{n})\right).\end{aligned}}} The coefficients b j {\displaystyle b_{j}} can be determined as follows. Use polynomial interpolation to find the polynomial p of degree s − 1 {\displaystyle s-1} such that p ( t n + i ) = f ( t n + i , y n + i ) , for i = 0 , … , s − 1. {\displaystyle p(t_{n+i})=f(t_{n+i},y_{n+i}),\qquad {\text{for }}i=0,\ldots ,s-1.} The Lagrange formula for polynomial interpolation yields p ( t ) = ∑ j = 0 s − 1 ( − 1 ) s − j − 1 f ( t n + j , y n + j ) j ! ( s − j − 1 ) ! h s − 1 ∏ i = 0 i ≠ j s − 1 ( t − t n + i ) . 
{\displaystyle p(t)=\sum _{j=0}^{s-1}{\frac {(-1)^{s-j-1}f(t_{n+j},y_{n+j})}{j!(s-j-1)!h^{s-1}}}\prod _{i=0 \atop i\neq j}^{s-1}(t-t_{n+i}).} The polynomial p is locally a good approximation of the right-hand side of the differential equation y ′ = f ( t , y ) {\displaystyle y'=f(t,y)} that is to be solved, so consider the equation y ′ = p ( t ) {\displaystyle y'=p(t)} instead. This equation can be solved exactly; the solution is simply the integral of p. This suggests taking y n + s = y n + s − 1 + ∫ t n + s − 1 t n + s p ( t ) d t . {\displaystyle y_{n+s}=y_{n+s-1}+\int _{t_{n+s-1}}^{t_{n+s}}p(t)\,\mathrm {d} t.} The Adams–Bashforth method arises when the formula for p is substituted. The coefficients b j {\displaystyle b_{j}} turn out to be given by b s − j − 1 = ( − 1 ) j j ! ( s − j − 1 ) ! ∫ 0 1 ∏ i = 0 i ≠ j s − 1 ( u + i ) d u , for j = 0 , … , s − 1. {\displaystyle b_{s-j-1}={\frac {(-1)^{j}}{j!(s-j-1)!}}\int _{0}^{1}\prod _{i=0 \atop i\neq j}^{s-1}(u+i)\,\mathrm {d} u,\qquad {\text{for }}j=0,\ldots ,s-1.} Replacing f ( t , y ) {\displaystyle f(t,y)} by its interpolant p incurs an error of order h s {\displaystyle h^{s}} , and it follows that the s-step Adams–Bashforth method has indeed order s (Iserles 1996, §2.1). The Adams–Bashforth methods were designed by John Couch Adams to solve a differential equation modelling capillary action due to Francis Bashforth. Bashforth (1883) published his theory and Adams' numerical method (Goldstine 1977). === Adams–Moulton methods === The Adams–Moulton methods are similar to the Adams–Bashforth methods in that they also have a s − 1 = − 1 {\displaystyle a_{s-1}=-1} and a s − 2 = ⋯ = a 0 = 0 {\displaystyle a_{s-2}=\cdots =a_{0}=0} . Again the b coefficients are chosen to obtain the highest order possible. However, the Adams–Moulton methods are implicit methods. 
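The integral formula for the Adams–Bashforth coefficients can be evaluated in exact rational arithmetic; a sketch (helper names are illustrative) that recovers the tabulated coefficients:

```python
from fractions import Fraction as F
from math import factorial

def poly_mul(p, q):
    # multiply polynomials given as coefficient lists (lowest degree first)
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, ci in enumerate(p):
        for j, cj in enumerate(q):
            r[i + j] += ci * cj
    return r

def integral_0_1(p):
    # exact integral over [0, 1] of sum_k p[k] u^k
    return sum(c / (k + 1) for k, c in enumerate(p))

def adams_bashforth_coefficients(s):
    # b_{s-j-1} = (-1)^j / (j! (s-j-1)!) * \int_0^1 \prod_{i != j} (u + i) du
    bs = [F(0)] * s
    for j in range(s):
        p = [F(1)]
        for i in range(s):
            if i != j:
                p = poly_mul(p, [F(i), F(1)])   # multiply by the factor (u + i)
        bs[s - j - 1] = (F((-1) ** j, factorial(j) * factorial(s - j - 1))
                         * integral_0_1(p))
    return bs   # [b_0, b_1, ..., b_{s-1}]
```

For s = 2 this yields (−1/2, 3/2) and for s = 3 it yields (5/12, −16/12, 23/12), matching the methods listed above.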
By removing the restriction that b s = 0 {\displaystyle b_{s}=0} , an s-step Adams–Moulton method can reach order s + 1 {\displaystyle s+1} , while an s-step Adams–Bashforth method has only order s. The Adams–Moulton methods with s = 0, 1, 2, 3, 4 are listed below (Hairer, Nørsett & Wanner 1993, §III.1; Quarteroni, Sacco & Saleri 2000); the first two methods are the backward Euler method and the trapezoidal rule, respectively: y n = y n − 1 + h f ( t n , y n ) , y n + 1 = y n + 1 2 h ( f ( t n + 1 , y n + 1 ) + f ( t n , y n ) ) , y n + 2 = y n + 1 + h ( 5 12 f ( t n + 2 , y n + 2 ) + 8 12 f ( t n + 1 , y n + 1 ) − 1 12 f ( t n , y n ) ) , y n + 3 = y n + 2 + h ( 9 24 f ( t n + 3 , y n + 3 ) + 19 24 f ( t n + 2 , y n + 2 ) − 5 24 f ( t n + 1 , y n + 1 ) + 1 24 f ( t n , y n ) ) , y n + 4 = y n + 3 + h ( 251 720 f ( t n + 4 , y n + 4 ) + 646 720 f ( t n + 3 , y n + 3 ) − 264 720 f ( t n + 2 , y n + 2 ) + 106 720 f ( t n + 1 , y n + 1 ) − 19 720 f ( t n , y n ) ) . {\displaystyle {\begin{aligned}y_{n}&=y_{n-1}+hf(t_{n},y_{n}),\\y_{n+1}&=y_{n}+{\frac {1}{2}}h\left(f(t_{n+1},y_{n+1})+f(t_{n},y_{n})\right),\\y_{n+2}&=y_{n+1}+h\left({\frac {5}{12}}f(t_{n+2},y_{n+2})+{\frac {8}{12}}f(t_{n+1},y_{n+1})-{\frac {1}{12}}f(t_{n},y_{n})\right),\\y_{n+3}&=y_{n+2}+h\left({\frac {9}{24}}f(t_{n+3},y_{n+3})+{\frac {19}{24}}f(t_{n+2},y_{n+2})-{\frac {5}{24}}f(t_{n+1},y_{n+1})+{\frac {1}{24}}f(t_{n},y_{n})\right),\\y_{n+4}&=y_{n+3}+h\left({\frac {251}{720}}f(t_{n+4},y_{n+4})+{\frac {646}{720}}f(t_{n+3},y_{n+3})-{\frac {264}{720}}f(t_{n+2},y_{n+2})+{\frac {106}{720}}f(t_{n+1},y_{n+1})-{\frac {19}{720}}f(t_{n},y_{n})\right).\end{aligned}}} The derivation of the Adams–Moulton methods is similar to that of the Adams–Bashforth method; however, the interpolating polynomial uses not only the points t n − 1 , … , t n − s {\displaystyle t_{n-1},\dots ,t_{n-s}} , as above, but also t n {\displaystyle t_{n}} . The coefficients are given by b s − j = ( − 1 ) j j ! ( s − j ) ! 
∫ 0 1 ∏ i = 0 i ≠ j s ( u + i − 1 ) d u , for j = 0 , … , s . {\displaystyle b_{s-j}={\frac {(-1)^{j}}{j!(s-j)!}}\int _{0}^{1}\prod _{i=0 \atop i\neq j}^{s}(u+i-1)\,\mathrm {d} u,\qquad {\text{for }}j=0,\ldots ,s.} The Adams–Moulton methods are solely due to John Couch Adams, like the Adams–Bashforth methods. The name of Forest Ray Moulton became associated with these methods because he realized that they could be used in tandem with the Adams–Bashforth methods as a predictor-corrector pair (Moulton 1926); Milne (1926) had the same idea. Adams used Newton's method to solve the implicit equation (Hairer, Nørsett & Wanner 1993, §III.1). === Backward differentiation formulas (BDF) === The BDF methods are implicit methods with b s − 1 = ⋯ = b 0 = 0 {\displaystyle b_{s-1}=\cdots =b_{0}=0} and the other coefficients chosen such that the method attains order s (the maximum possible). These methods are especially used for the solution of stiff differential equations. == Analysis == The central concepts in the analysis of linear multistep methods, and indeed any numerical method for differential equations, are convergence, order, and stability. === Consistency and order === The first question is whether the method is consistent: is the difference equation a s y n + s + a s − 1 y n + s − 1 + a s − 2 y n + s − 2 + ⋯ + a 0 y n = h ( b s f ( t n + s , y n + s ) + b s − 1 f ( t n + s − 1 , y n + s − 1 ) + ⋯ + b 0 f ( t n , y n ) ) , {\displaystyle {\begin{aligned}&a_{s}y_{n+s}+a_{s-1}y_{n+s-1}+a_{s-2}y_{n+s-2}+\cdots +a_{0}y_{n}\\&\qquad {}=h{\bigl (}b_{s}f(t_{n+s},y_{n+s})+b_{s-1}f(t_{n+s-1},y_{n+s-1})+\cdots +b_{0}f(t_{n},y_{n}){\bigr )},\end{aligned}}} a good approximation of the differential equation y ′ = f ( t , y ) {\displaystyle y'=f(t,y)} ? 
More precisely, a multistep method is consistent if the local truncation error goes to zero faster than the step size h as h goes to zero, where the local truncation error is defined to be the difference between the result y n + s {\displaystyle y_{n+s}} of the method, assuming that all the previous values y n + s − 1 , … , y n {\displaystyle y_{n+s-1},\ldots ,y_{n}} are exact, and the exact solution of the equation at time t n + s {\displaystyle t_{n+s}} . A computation using Taylor series shows that a linear multistep method is consistent if and only if ∑ k = 0 s − 1 a k = − 1 and ∑ k = 0 s b k = s + ∑ k = 0 s − 1 k a k . {\displaystyle \sum _{k=0}^{s-1}a_{k}=-1\quad {\text{and}}\quad \sum _{k=0}^{s}b_{k}=s+\sum _{k=0}^{s-1}ka_{k}.} All the methods mentioned above are consistent (Hairer, Nørsett & Wanner 1993, §III.2). If the method is consistent, then the next question is how well the difference equation defining the numerical method approximates the differential equation. A multistep method is said to have order p if the local error is of order O ( h p + 1 ) {\displaystyle O(h^{p+1})} as h goes to zero. This is equivalent to the following condition on the coefficients of the methods: ∑ k = 0 s − 1 a k = − 1 and q ∑ k = 0 s k q − 1 b k = s q + ∑ k = 0 s − 1 k q a k for q = 1 , … , p . {\displaystyle \sum _{k=0}^{s-1}a_{k}=-1\quad {\text{and}}\quad q\sum _{k=0}^{s}k^{q-1}b_{k}=s^{q}+\sum _{k=0}^{s-1}k^{q}a_{k}{\text{ for }}q=1,\ldots ,p.} The s-step Adams–Bashforth method has order s, while the s-step Adams–Moulton method has order s + 1 {\displaystyle s+1} (Hairer, Nørsett & Wanner 1993, §III.2). These conditions are often formulated using the characteristic polynomials ρ ( z ) = z s + ∑ k = 0 s − 1 a k z k and σ ( z ) = ∑ k = 0 s b k z k . 
{\displaystyle \rho (z)=z^{s}+\sum _{k=0}^{s-1}a_{k}z^{k}\quad {\text{and}}\quad \sigma (z)=\sum _{k=0}^{s}b_{k}z^{k}.} In terms of these polynomials, the above condition for the method to have order p becomes ρ ( e h ) − h σ ( e h ) = O ( h p + 1 ) as h → 0. {\displaystyle \rho (e^{h})-h\sigma (e^{h})=O(h^{p+1})\quad {\text{as }}h\to 0.} In particular, the method is consistent if it has order at least one, which is the case if ρ ( 1 ) = 0 {\displaystyle \rho (1)=0} and ρ ′ ( 1 ) = σ ( 1 ) {\displaystyle \rho '(1)=\sigma (1)} . === Stability and convergence === The numerical solution of a one-step method depends on the initial condition y 0 {\displaystyle y_{0}} , but the numerical solution of an s-step method depends on all the s starting values, y 0 , y 1 , … , y s − 1 {\displaystyle y_{0},y_{1},\ldots ,y_{s-1}} . It is thus of interest whether the numerical solution is stable with respect to perturbations in the starting values. A linear multistep method is zero-stable for a certain differential equation on a given time interval if a perturbation in the starting values of size ε causes the numerical solution over that time interval to change by no more than Kε for some value of K which does not depend on the step size h. This is called "zero-stability" because it is enough to check the condition for the differential equation y ′ = 0 {\displaystyle y'=0} (Süli & Mayers 2003, p. 332). If the roots of the characteristic polynomial ρ all have modulus less than or equal to 1 and the roots of modulus 1 are of multiplicity 1, we say that the root condition is satisfied. A linear multistep method is zero-stable if and only if the root condition is satisfied (Süli & Mayers 2003, p. 335). 
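The root condition lends itself to a direct numerical check. The following sketch is our own illustration (the function name and tolerance are arbitrary choices, not from the sources cited here); it uses NumPy to compute the roots of ρ:

```python
import numpy as np

def satisfies_root_condition(rho_coeffs, tol=1e-7):
    """Root condition: all roots of rho have modulus <= 1, and roots
    of modulus 1 are simple.  Coefficients are listed from the highest
    power down, e.g. rho(z) = z^3 - z^2 is [1, -1, 0, 0]."""
    roots = np.roots(rho_coeffs)
    if np.any(np.abs(roots) > 1 + tol):
        return False
    unit = [r for r in roots if abs(abs(r) - 1) < tol]
    for i, r in enumerate(unit):          # unit-circle roots must not repeat
        for s in unit[i + 1:]:
            if abs(r - s) < tol:
                return False
    return True

# rho(z) = z^3 - z^2 (three-step Adams-Bashforth): zero-stable
print(satisfies_root_condition([1, -1, 0, 0]))   # True
# rho(z) = (z - 1)^2: double root on the unit circle, not zero-stable
print(satisfies_root_condition([1, -2, 1]))      # False
```

The tolerance-based multiplicity test is a pragmatic stand-in for exact root counting; for hand-sized methods the roots can of course also be checked symbolically.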
Now suppose that a consistent linear multistep method is applied to a sufficiently smooth differential equation and that the starting values y 1 , … , y s − 1 {\displaystyle y_{1},\ldots ,y_{s-1}} all converge to the initial value y 0 {\displaystyle y_{0}} as h → 0 {\displaystyle h\to 0} . Then, the numerical solution converges to the exact solution as h → 0 {\displaystyle h\to 0} if and only if the method is zero-stable. This result is known as the Dahlquist equivalence theorem, named after Germund Dahlquist; this theorem is similar in spirit to the Lax equivalence theorem for finite difference methods. Furthermore, if the method has order p, then the global error (the difference between the numerical solution and the exact solution at a fixed time) is O ( h p ) {\displaystyle O(h^{p})} (Süli & Mayers 2003, p. 340). Furthermore, if the method is convergent, the method is said to be strongly stable if z = 1 {\displaystyle z=1} is the only root of modulus 1. If it is convergent and all roots of modulus 1 are not repeated, but there is more than one such root, it is said to be relatively stable. Note that 1 must be a root for the method to be convergent; thus convergent methods are always one of these two. To assess the performance of linear multistep methods on stiff equations, consider the linear test equation y' = λy. A multistep method applied to this differential equation with step size h yields a linear recurrence relation with characteristic polynomial π ( z ; h λ ) = ( 1 − h λ β s ) z s + ∑ k = 0 s − 1 ( α k − h λ β k ) z k = ρ ( z ) − h λ σ ( z ) . {\displaystyle \pi (z;h\lambda )=(1-h\lambda \beta _{s})z^{s}+\sum _{k=0}^{s-1}(\alpha _{k}-h\lambda \beta _{k})z^{k}=\rho (z)-h\lambda \sigma (z).} This polynomial is called the stability polynomial of the multistep method. 
If all of its roots have modulus less than one then the numerical solution of the multistep method will converge to zero and the multistep method is said to be absolutely stable for that value of hλ. The method is said to be A-stable if it is absolutely stable for all hλ with negative real part. The region of absolute stability is the set of all hλ for which the multistep method is absolutely stable (Süli & Mayers 2003, pp. 347 & 348). For more details, see the section on stiff equations and multistep methods. === Example === Consider the Adams–Bashforth three-step method y n + 3 = y n + 2 + h ( 23 12 f ( t n + 2 , y n + 2 ) − 4 3 f ( t n + 1 , y n + 1 ) + 5 12 f ( t n , y n ) ) . {\displaystyle y_{n+3}=y_{n+2}+h\left({23 \over 12}f(t_{n+2},y_{n+2})-{4 \over 3}f(t_{n+1},y_{n+1})+{5 \over 12}f(t_{n},y_{n})\right).} One characteristic polynomial is thus ρ ( z ) = z 3 − z 2 = z 2 ( z − 1 ) {\displaystyle \rho (z)=z^{3}-z^{2}=z^{2}(z-1)} which has roots z = 0 , 1 {\displaystyle z=0,1} , and the conditions above are satisfied. As z = 1 {\displaystyle z=1} is the only root of modulus 1, the method is strongly stable. The other characteristic polynomial is σ ( z ) = 23 12 z 2 − 4 3 z + 5 12 {\displaystyle \sigma (z)={\frac {23}{12}}z^{2}-{\frac {4}{3}}z+{\frac {5}{12}}} == First and second Dahlquist barriers == These two results were proved by Germund Dahlquist and represent an important bound for the order of convergence and for the A-stability of a linear multistep method. The first Dahlquist barrier was proved in Dahlquist (1956) and the second in Dahlquist (1963). === First Dahlquist barrier === The first Dahlquist barrier states that a zero-stable and linear q-step multistep method cannot attain an order of convergence greater than q + 1 if q is odd and greater than q + 2 if q is even. If the method is also explicit, then it cannot attain an order greater than q (Hairer, Nørsett & Wanner 1993, Thm III.3.5). 
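The order conditions stated earlier can be verified mechanically with exact rational arithmetic. The sketch below is our own illustration; it confirms that the three-step Adams–Bashforth method satisfies the conditions exactly for q = 1, 2, 3 and fails at q = 4, so its order is 3 = q, in agreement with the first barrier for explicit methods:

```python
from fractions import Fraction as F

def lmm_order(a, b, q_max=12):
    """Order of a linear multistep method via the order conditions.

    a = [a_0, ..., a_{s-1}] (with a_s = 1 implied), b = [b_0, ..., b_s].
    Note 0**0 == 1 in Python, the convention needed for k = 0, q = 1.
    """
    s = len(b) - 1
    if sum(a) != -1:
        return 0                      # not even consistent
    p = 0
    for q in range(1, q_max + 1):
        lhs = q * sum(k**(q - 1) * b[k] for k in range(s + 1))
        rhs = s**q + sum(k**q * a[k] for k in range(s))
        if lhs != rhs:
            break
        p = q
    return p

# three-step Adams-Bashforth: a = (0, 0, -1), b = (5/12, -16/12, 23/12, 0)
print(lmm_order([F(0), F(0), F(-1)],
                [F(5, 12), F(-16, 12), F(23, 12), F(0)]))   # 3
```

Running the same check on the trapezoidal rule (a = (−1), b = (1/2, 1/2)) gives order 2, which is the second-barrier maximum for A-stable methods.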
=== Second Dahlquist barrier === The second Dahlquist barrier states that no explicit linear multistep methods are A-stable. Further, the maximal order of an (implicit) A-stable linear multistep method is 2. Among the A-stable linear multistep methods of order 2, the trapezoidal rule has the smallest error constant (Dahlquist 1963, Thm 2.1 and 2.2). == See also == Digital energy gain == References == Bashforth, Francis (1883), An Attempt to test the Theories of Capillary Action by comparing the theoretical and measured forms of drops of fluid. With an explanation of the method of integration employed in constructing the tables which give the theoretical forms of such drops, by J. C. Adams, Cambridge{{citation}}: CS1 maint: location missing publisher (link). Butcher, John C. (2003), Numerical Methods for Ordinary Differential Equations, John Wiley, ISBN 978-0-471-96758-3. Dahlquist, Germund (1956), "Convergence and stability in the numerical integration of ordinary differential equations", Mathematica Scandinavica, 4: 33–53, doi:10.7146/math.scand.a-10454. Dahlquist, Germund (1963), "A special stability problem for linear multistep methods", BIT, 3: 27–43, doi:10.1007/BF01963532, ISSN 0006-3835, S2CID 120241743. Goldstine, Herman H. (1977), A History of Numerical Analysis from the 16th through the 19th Century, New York: Springer-Verlag, ISBN 978-0-387-90277-7. Hairer, Ernst; Nørsett, Syvert Paul; Wanner, Gerhard (1993), Solving ordinary differential equations I: Nonstiff problems (2nd ed.), Berlin: Springer Verlag, ISBN 978-3-540-56670-0. Hairer, Ernst; Wanner, Gerhard (1996), Solving ordinary differential equations II: Stiff and differential-algebraic problems (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-60452-5. Iserles, Arieh (1996), A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, Bibcode:1996fcna.book.....I, ISBN 978-0-521-55655-2. Milne, W. E. 
(1926), "Numerical integration of ordinary differential equations", American Mathematical Monthly, 33 (9), Mathematical Association of America: 455–460, doi:10.2307/2299609, JSTOR 2299609. Moulton, Forest R. (1926), New methods in exterior ballistics, University of Chicago Press. Quarteroni, Alfio; Sacco, Riccardo; Saleri, Fausto (2000), Matematica Numerica, Springer Verlag, ISBN 978-88-470-0077-3. Süli, Endre; Mayers, David (2003), An Introduction to Numerical Analysis, Cambridge University Press, ISBN 0-521-00794-1. == External links == Weisstein, Eric W. "Adams Method". MathWorld.
Wikipedia/Linear_multistep_method
In mathematics, the Euler–Tricomi equation is a linear partial differential equation useful in the study of transonic flow. It is named after mathematicians Leonhard Euler and Francesco Giacomo Tricomi. u x x + x u y y = 0. {\displaystyle u_{xx}+xu_{yy}=0.\,} It is elliptic in the half plane x > 0, parabolic at x = 0 and hyperbolic in the half plane x < 0. Its characteristics are x d x 2 + d y 2 = 0 , {\displaystyle x\,dx^{2}+dy^{2}=0,\,} which have the integral y ± 2 3 x 3 / 2 = C , {\displaystyle y\pm {\frac {2}{3}}x^{3/2}=C,} where C is a constant of integration. The characteristics thus comprise two families of semicubical parabolas, with cusps on the line x = 0, the curves lying on the right hand side of the y-axis. == Particular solutions == A general expression for particular solutions to the Euler–Tricomi equations is: u k , p , q = ∑ i = 0 k ( − 1 ) i x m i y n i c i {\displaystyle u_{k,p,q}=\sum _{i=0}^{k}(-1)^{i}{\frac {x^{m_{i}}y^{n_{i}}}{c_{i}}}\,} where k ∈ N {\displaystyle k\in \mathbb {N} } p , q ∈ { 0 , 1 } {\displaystyle p,q\in \{0,1\}} m i = 3 i + p {\displaystyle m_{i}=3i+p} n i = 2 ( k − i ) + q {\displaystyle n_{i}=2(k-i)+q} c i = m i ! ! ! ⋅ ( m i − 1 ) ! ! ! ⋅ n i ! ! ⋅ ( n i − 1 ) ! ! {\displaystyle c_{i}=m_{i}!!!\cdot (m_{i}-1)!!!\cdot n_{i}!!\cdot (n_{i}-1)!!} These can be linearly combined to form further solutions such as: for k = 0: u = A + B x + C y + D x y {\displaystyle u=A+Bx+Cy+Dxy\,} for k = 1: u = A ( 1 2 y 2 − 1 6 x 3 ) + B ( 1 2 x y 2 − 1 12 x 4 ) + C ( 1 6 y 3 − 1 6 x 3 y ) + D ( 1 6 x y 3 − 1 12 x 4 y ) {\displaystyle u=A({\tfrac {1}{2}}y^{2}-{\tfrac {1}{6}}x^{3})+B({\tfrac {1}{2}}xy^{2}-{\tfrac {1}{12}}x^{4})+C({\tfrac {1}{6}}y^{3}-{\tfrac {1}{6}}x^{3}y)+D({\tfrac {1}{6}}xy^{3}-{\tfrac {1}{12}}x^{4}y)\,} etc. The Euler–Tricomi equation is a limiting form of Chaplygin's equation. == See also == Burgers equation Chaplygin's equation == Bibliography == A. D. 
Polyanin, Handbook of Linear Partial Differential Equations for Engineers and Scientists, Chapman & Hall/CRC Press, 2002. == External links == Tricomi and Generalized Tricomi Equations at EqWorld: The World of Mathematical Equations.
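As a quick sanity check (our own, not from the sources above), the k = 1 particular solution with A = 1, u = y²/2 − x³/6, can be verified numerically. Central second differences are exact for these low-degree polynomials up to rounding, so the residual u_xx + x·u_yy should vanish to near machine precision:

```python
def u(x, y):                      # the k = 1, A = 1 particular solution
    return 0.5 * y**2 - x**3 / 6.0

def residual(x, y, h=1e-4):
    """u_xx + x*u_yy via central second differences (exact here up to rounding)."""
    u_xx = (u(x + h, y) - 2.0 * u(x, y) + u(x - h, y)) / h**2
    u_yy = (u(x, y + h) - 2.0 * u(x, y) + u(x, y - h)) / h**2
    return u_xx + x * u_yy

# sample points in the elliptic (x > 0) and hyperbolic (x < 0) regions
for x, y in [(2.0, 1.0), (-1.5, 0.7), (0.25, -3.0)]:
    assert abs(residual(x, y)) < 1e-5
print("u_xx + x*u_yy vanishes at the sample points")
```

Here u_xx = −x and u_yy = 1, so the residual is −x + x·1 = 0 identically, regardless of sign of x.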
Wikipedia/Euler–Tricomi_equation
Explicit and implicit methods are approaches used in numerical analysis for obtaining numerical approximations to the solutions of time-dependent ordinary and partial differential equations, as is required in computer simulations of physical processes. Explicit methods calculate the state of a system at a later time from the state of the system at the current time, while implicit methods find a solution by solving an equation involving both the current state of the system and the later one. Mathematically, if Y ( t ) {\displaystyle Y(t)} is the current system state and Y ( t + Δ t ) {\displaystyle Y(t+\Delta t)} is the state at the later time ( Δ t {\displaystyle \Delta t} is a small time step), then, for an explicit method Y ( t + Δ t ) = F ( Y ( t ) ) {\displaystyle Y(t+\Delta t)=F(Y(t))\,} while for an implicit method one solves an equation G ( Y ( t ) , Y ( t + Δ t ) ) = 0 ( 1 ) {\displaystyle G{\Big (}Y(t),Y(t+\Delta t){\Big )}=0\qquad (1)\,} to find Y ( t + Δ t ) . {\displaystyle Y(t+\Delta t).} == Computation == Implicit methods require an extra computation (solving the above equation), and they can be much harder to implement. Implicit methods are used because many problems arising in practice are stiff, for which the use of an explicit method requires impractically small time steps Δ t {\displaystyle \Delta t} to keep the error in the result bounded (see numerical stability). For such problems, to achieve given accuracy, it takes much less computational time to use an implicit method with larger time steps, even taking into account that one needs to solve an equation of the form (1) at each time step. That said, whether one should use an explicit or implicit method depends upon the problem to be solved. 
Since the implicit method cannot be carried out for each kind of differential operator, it is sometimes advisable to make use of the so-called operator splitting method, which means that the differential operator is rewritten as the sum of two complementary operators Y ( t + Δ t ) = F ( Y ( t + Δ t ) ) + G ( Y ( t ) ) , {\displaystyle Y(t+\Delta t)=F(Y(t+\Delta t))+G(Y(t)),\,} where one operator is treated explicitly and the other implicitly. For usual applications the implicit term is chosen to be linear while the explicit term can be nonlinear. This combination is called an implicit–explicit method (IMEX for short). == Illustration using the forward and backward Euler methods == Consider the ordinary differential equation d y d t = − y 2 , t ∈ [ 0 , a ] ( 2 ) {\displaystyle {\frac {dy}{dt}}=-y^{2},\ t\in [0,a]\quad \quad (2)} with the initial condition y ( 0 ) = 1. {\displaystyle y(0)=1.} Consider a grid t k = a k n {\displaystyle t_{k}=a{\frac {k}{n}}} for 0 ≤ k ≤ n, that is, the time step is Δ t = a / n , {\displaystyle \Delta t=a/n,} and denote y k = y ( t k ) {\displaystyle y_{k}=y(t_{k})} for each k {\displaystyle k} . Discretize this equation using the simplest explicit and implicit methods, which are the forward Euler and backward Euler methods (see numerical ordinary differential equations), and compare the obtained schemes. Forward Euler method The forward Euler method ( d y d t ) k ≈ y k + 1 − y k Δ t = − y k 2 {\displaystyle \left({\frac {dy}{dt}}\right)_{k}\approx {\frac {y_{k+1}-y_{k}}{\Delta t}}=-y_{k}^{2}} yields y k + 1 = y k − Δ t y k 2 ( 3 ) {\displaystyle y_{k+1}=y_{k}-\Delta ty_{k}^{2}\quad \quad \quad (3)\,} for each k = 0 , 1 , … , n . {\displaystyle k=0,1,\dots ,n.} This is an explicit formula for y k + 1 {\displaystyle y_{k+1}} . 
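Scheme (3) is straightforward to march forward in time. The sketch below is our own (the function name is arbitrary); it uses the fact that the exact solution of (2) is y(t) = 1/(1 + t), so y(1) = 0.5:

```python
def forward_euler(a=1.0, n=100):
    """March scheme (3), y_{k+1} = y_k - dt*y_k^2, from y(0) = 1 to t = a."""
    dt = a / n
    y = 1.0
    for _ in range(n):
        y = y - dt * y * y     # explicit: y_{k+1} computed directly from y_k
    return y

# exact solution of (2) is y(t) = 1/(1 + t), so y(1) = 0.5
print(abs(forward_euler(1.0, 100) - 0.5))    # small, and O(dt): shrinks as n grows
```

Doubling n roughly halves the error, as expected for a first-order method.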
Backward Euler method With the backward Euler method y k + 1 − y k Δ t = − y k + 1 2 {\displaystyle {\frac {y_{k+1}-y_{k}}{\Delta t}}=-y_{k+1}^{2}} one finds the implicit equation y k + 1 + Δ t y k + 1 2 = y k {\displaystyle y_{k+1}+\Delta ty_{k+1}^{2}=y_{k}} for y k + 1 {\displaystyle y_{k+1}} (compare this with formula (3) where y k + 1 {\displaystyle y_{k+1}} was given explicitly rather than as an unknown in an equation). This is a quadratic equation, having one negative and one positive root. The positive root is picked because in the original equation the initial condition is positive, and then y {\displaystyle y} at the next time step is given by y k + 1 = − 1 + 1 + 4 Δ t y k 2 Δ t . ( 4 ) {\displaystyle y_{k+1}={\frac {-1+{\sqrt {1+4\Delta ty_{k}}}}{2\Delta t}}.\quad \quad (4)} In the vast majority of cases, the equation to be solved when using an implicit scheme is much more complicated than a quadratic equation, and no analytical solution exists. Then one uses root-finding algorithms, such as Newton's method, to find the numerical solution. Crank-Nicolson method With the Crank-Nicolson method y k + 1 − y k Δ t = − 1 2 y k + 1 2 − 1 2 y k 2 {\displaystyle {\frac {y_{k+1}-y_{k}}{\Delta t}}=-{\frac {1}{2}}y_{k+1}^{2}-{\frac {1}{2}}y_{k}^{2}} one finds the implicit equation y k + 1 + 1 2 Δ t y k + 1 2 = y k − 1 2 Δ t y k 2 {\displaystyle y_{k+1}+{\frac {1}{2}}{\Delta t}y_{k+1}^{2}=y_{k}-{\frac {1}{2}}\Delta ty_{k}^{2}} for y k + 1 {\displaystyle y_{k+1}} (compare this with formula (3) where y k + 1 {\displaystyle y_{k+1}} was given explicitly rather than as an unknown in an equation). This can be numerically solved using root-finding algorithms, such as Newton's method, to obtain y k + 1 {\displaystyle y_{k+1}} . Crank-Nicolson can be viewed as a form of more general IMEX (Implicit-Explicit) schemes. 
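For this particular equation the implicit update has the closed form (4), so the backward Euler scheme costs barely more than the explicit one. A sketch (ours; in general a Newton iteration would replace the square root):

```python
import math

def backward_euler(a=1.0, n=100):
    """March the implicit scheme using the positive root (4) at each step."""
    dt = a / n
    y = 1.0
    for _ in range(n):
        # y_new solves y_new + dt*y_new^2 = y  (positive root of the quadratic)
        y = (-1.0 + math.sqrt(1.0 + 4.0 * dt * y)) / (2.0 * dt)
    return y

print(abs(backward_euler(1.0, 100) - 0.5))   # small: again first-order accurate
```

Comparing against the exact value y(1) = 0.5 shows the same O(Δt) convergence as forward Euler; the payoff of the implicit scheme appears only for stiff problems, where it tolerates much larger steps.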
Forward-Backward Euler method In order to apply the IMEX-scheme, consider a slightly different differential equation: d y d t = y − y 2 , t ∈ [ 0 , a ] ( 5 ) {\displaystyle {\frac {dy}{dt}}=y-y^{2},\ t\in [0,a]\quad \quad (5)} Treating the linear term y {\displaystyle y} implicitly and the quadratic term − y 2 {\displaystyle -y^{2}} explicitly gives y k + 1 − y k Δ t = y k + 1 − y k 2 {\displaystyle {\frac {y_{k+1}-y_{k}}{\Delta t}}=y_{k+1}-y_{k}^{2}} and therefore y k + 1 = y k ( 1 − y k Δ t ) 1 − Δ t ( 6 ) {\displaystyle y_{k+1}={\frac {y_{k}(1-y_{k}\Delta t)}{1-\Delta t}}\quad \quad (6)} for each k = 0 , 1 , … , n . {\displaystyle k=0,1,\dots ,n.} == See also == Courant–Friedrichs–Lewy condition SIMPLE algorithm, a semi-implicit method for pressure-linked equations == Sources ==
Wikipedia/Explicit_and_implicit_methods
General linear methods (GLMs) are a large class of numerical methods used to obtain numerical solutions to ordinary differential equations. They include multistage Runge–Kutta methods that use intermediate collocation points, as well as linear multistep methods that save a finite time history of the solution. John C. Butcher originally coined this term for these methods and has written a series of review papers, a book chapter, and a textbook on the topic. His collaborator, Zdzislaw Jackiewicz, also has an extensive textbook on the topic. This class of methods was originally proposed by Butcher (1965), Gear (1965) and Gragg and Stetter (1964). == Some definitions == Numerical methods for first-order ordinary differential equations approximate solutions to initial value problems of the form y ′ = f ( t , y ) , y ( t 0 ) = y 0 . {\displaystyle y'=f(t,y),\quad y(t_{0})=y_{0}.} The result is approximations for the value of y ( t ) {\displaystyle y(t)} at discrete times t i {\displaystyle t_{i}} : y i ≈ y ( t i ) where t i = t 0 + i h , {\displaystyle y_{i}\approx y(t_{i})\quad {\text{where}}\quad t_{i}=t_{0}+ih,} where h is the time step (sometimes referred to as Δ t {\displaystyle \Delta t} ). == A description of the method == We follow Butcher (2006), pp. 189–190 for our description, although we note that this method can be found elsewhere. General linear methods make use of two integers: r {\displaystyle r} – the number of time points in history, and s {\displaystyle s} – the number of collocation points. In the case of r = 1 {\displaystyle r=1} , these methods reduce to classical Runge–Kutta methods, and in the case of s = 1 {\displaystyle s=1} , these methods reduce to linear multistep methods. 
Stage values Y i {\displaystyle Y_{i}} and stage derivatives F i , i = 1 , 2 , … s {\displaystyle F_{i},\ i=1,2,\dots s} are computed from approximations y i [ n − 1 ] , i = 1 , … , r {\displaystyle y_{i}^{[n-1]},\ i=1,\dots ,r} at time step n {\displaystyle n} : y [ n − 1 ] = [ y 1 [ n − 1 ] y 2 [ n − 1 ] ⋮ y r [ n − 1 ] ] , y [ n ] = [ y 1 [ n ] y 2 [ n ] ⋮ y r [ n ] ] , Y = [ Y 1 Y 2 ⋮ Y s ] , F = [ F 1 F 2 ⋮ F s ] = [ f ( Y 1 ) f ( Y 2 ) ⋮ f ( Y s ) ] . {\displaystyle y^{[n-1]}=\left[{\begin{matrix}y_{1}^{[n-1]}\\y_{2}^{[n-1]}\\\vdots \\y_{r}^{[n-1]}\\\end{matrix}}\right],\quad y^{[n]}=\left[{\begin{matrix}y_{1}^{[n]}\\y_{2}^{[n]}\\\vdots \\y_{r}^{[n]}\\\end{matrix}}\right],\quad Y=\left[{\begin{matrix}Y_{1}\\Y_{2}\\\vdots \\Y_{s}\end{matrix}}\right],\quad F=\left[{\begin{matrix}F_{1}\\F_{2}\\\vdots \\F_{s}\end{matrix}}\right]=\left[{\begin{matrix}f(Y_{1})\\f(Y_{2})\\\vdots \\f(Y_{s})\end{matrix}}\right].} The stage values are defined by two matrices A = [ a i j ] {\displaystyle A=[a_{ij}]} and U = [ u i j ] {\displaystyle U=[u_{ij}]} : Y i = ∑ j = 1 s a i j h F j + ∑ j = 1 r u i j y j [ n − 1 ] , i = 1 , 2 , … , s , {\displaystyle Y_{i}=\sum _{j=1}^{s}a_{ij}hF_{j}+\sum _{j=1}^{r}u_{ij}y_{j}^{[n-1]},\qquad i=1,2,\dots ,s,} and the update to time t n {\displaystyle t^{n}} is defined by two matrices B = [ b i j ] {\displaystyle B=[b_{ij}]} and V = [ v i j ] {\displaystyle V=[v_{ij}]} : y i [ n ] = ∑ j = 1 s b i j h F j + ∑ j = 1 r v i j y j [ n − 1 ] , i = 1 , 2 , … , r . 
{\displaystyle y_{i}^{[n]}=\sum _{j=1}^{s}b_{ij}hF_{j}+\sum _{j=1}^{r}v_{ij}y_{j}^{[n-1]},\qquad i=1,2,\dots ,r.} Given the four matrices A , U , B {\displaystyle A,U,B} and V {\displaystyle V} , one can compactly write the analogue of a Butcher tableau as [ Y y [ n ] ] = [ A ⊗ I U ⊗ I B ⊗ I V ⊗ I ] [ h F y [ n − 1 ] ] , {\displaystyle \left[{\begin{matrix}Y\\y^{[n]}\end{matrix}}\right]=\left[{\begin{matrix}A\otimes I&U\otimes I\\B\otimes I&V\otimes I\end{matrix}}\right]\left[{\begin{matrix}hF\\y^{[n-1]}\end{matrix}}\right],} where ⊗ {\displaystyle \otimes } stands for the Kronecker product. == Examples == We present an example described in (Butcher, 1996). This method consists of a single "predicted" step and "corrected" step, which uses extra information about the time history, as well as a single intermediate stage value. An intermediate stage value is defined as something that looks like it came from a linear multistep method: y n − 1 / 2 ∗ = y n − 2 + h ( 9 8 f ( y n − 1 ) + 3 8 f ( y n − 2 ) ) . {\displaystyle y_{n-1/2}^{*}=y_{n-2}+h\left({\frac {9}{8}}f(y_{n-1})+{\frac {3}{8}}f(y_{n-2})\right).} An initial "predictor" y n ∗ {\displaystyle y_{n}^{*}} uses the stage value y n − 1 / 2 ∗ {\displaystyle y_{n-1/2}^{*}} together with two pieces of time history: y n ∗ = 28 5 y n − 1 − 23 5 y n − 2 + h ( 32 15 f ( y n − 1 / 2 ∗ ) − 4 f ( y n − 1 ) − 26 15 f ( y n − 2 ) ) , {\displaystyle y_{n}^{*}={\frac {28}{5}}y_{n-1}-{\frac {23}{5}}y_{n-2}+h\left({\frac {32}{15}}f(y_{n-1/2}^{*})-4f(y_{n-1})-{\frac {26}{15}}f(y_{n-2})\right),} and the final update is given by y n = 32 31 y n − 1 − 1 31 y n − 2 + h ( 5 31 f ( y n ∗ ) + 64 93 f ( y n − 1 / 2 ∗ ) + 4 31 f ( y n − 1 ) − 1 93 f ( y n − 2 ) ) . 
{\displaystyle y_{n}={\frac {32}{31}}y_{n-1}-{\frac {1}{31}}y_{n-2}+h\left({\frac {5}{31}}f(y_{n}^{*})+{\frac {64}{93}}f(y_{n-1/2}^{*})+{\frac {4}{31}}f(y_{n-1})-{\frac {1}{93}}f(y_{n-2})\right).} The concise table representation for this method is given by [ 0 0 0 0 1 9 8 3 8 32 15 0 0 28 5 − 23 5 − 4 − 26 15 64 93 5 31 0 32 31 − 1 31 4 31 − 1 93 64 93 5 31 0 32 31 − 1 31 4 31 − 1 93 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 ] . {\displaystyle \left[{\begin{array}{ccc|cccc}0&0&0&0&1&{\frac {9}{8}}&{\frac {3}{8}}\\{\frac {32}{15}}&0&0&{\frac {28}{5}}&-{\frac {23}{5}}&-4&-{\frac {26}{15}}\\{\frac {64}{93}}&{\frac {5}{31}}&0&{\frac {32}{31}}&-{\frac {1}{31}}&{\frac {4}{31}}&-{\frac {1}{93}}\\\hline {\frac {64}{93}}&{\frac {5}{31}}&0&{\frac {32}{31}}&-{\frac {1}{31}}&{\frac {4}{31}}&-{\frac {1}{93}}\\0&0&0&1&0&0&0\\0&0&1&0&0&0&0\\0&0&0&0&0&1&0\\\end{array}}\right].} == See also == Runge–Kutta methods Linear multistep methods Numerical methods for ordinary differential equations == Notes == == References == Butcher, John C. (January 1965). "A Modified Multistep Method for the Numerical Integration of Ordinary Differential Equations". Journal of the ACM. 12 (1): 124–135. doi:10.1145/321250.321261. S2CID 36463504. Gear, C. W. (1965). "Hybrid Methods for Initial Value Problems in Ordinary Differential Equations". Journal of the Society for Industrial and Applied Mathematics, Series B: Numerical Analysis. 2 (1): 69–86. Bibcode:1965SJNA....2...69G. doi:10.1137/0702006. hdl:2027/uiuo.ark:/13960/t4rj60q8s. S2CID 122744897. Gragg, William B.; Hans J. Stetter (April 1964). "Generalized Multistep Predictor-Corrector Methods". Journal of the ACM. 11 (2): 188–209. doi:10.1145/321217.321223. S2CID 17118462. Hairer, Ernst; Wanner, Gerhard (1973), "Multistep-multistage-multiderivative methods for ordinary differential equations", Computing, 11 (3): 287–303, doi:10.1007/BF02252917, S2CID 25549771. == External links == General Linear Methods
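The example method can be exercised directly. The sketch below is our own illustration (the function names and the test problem y′ = −y are ours, not from the sources above); it feeds exact starting values and advances with the stage value, predictor and corrector exactly as written:

```python
from math import exp

def glm_step(f, y_nm1, y_nm2, h):
    """One step of the example general linear method above.

    Takes the two history values y_{n-1}, y_{n-2} and returns y_n.
    """
    f1, f2 = f(y_nm1), f(y_nm2)
    y_half = y_nm2 + h * (9.0 / 8.0 * f1 + 3.0 / 8.0 * f2)     # stage value
    y_pred = (28.0 / 5.0 * y_nm1 - 23.0 / 5.0 * y_nm2          # predictor
              + h * (32.0 / 15.0 * f(y_half) - 4.0 * f1 - 26.0 / 15.0 * f2))
    return (32.0 / 31.0 * y_nm1 - 1.0 / 31.0 * y_nm2           # corrector
            + h * (5.0 / 31.0 * f(y_pred) + 64.0 / 93.0 * f(y_half)
                   + 4.0 / 31.0 * f1 - 1.0 / 93.0 * f2))

# test problem y' = -y, y(0) = 1, with exact starting values y_0 and y_1
f = lambda y: -y
h, steps = 0.01, 100
ys = [1.0, exp(-h)]
for _ in range(steps - 1):
    ys.append(glm_step(f, ys[-1], ys[-2], h))
print(abs(ys[-1] - exp(-h * steps)))      # small global error at t = 1
```

Note that only the final update is propagated between steps, so the large predictor coefficients (28/5, −23/5) do not affect zero-stability; the propagated recursion has roots 1 and 1/31.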
Wikipedia/General_linear_methods
Euler's factorization method is a technique for factoring a number by writing it as a sum of two squares in two different ways. For example, the number 1000009 {\displaystyle 1000009} can be written as 1000 2 + 3 2 {\displaystyle 1000^{2}+3^{2}} or as 972 2 + 235 2 {\displaystyle 972^{2}+235^{2}} and Euler's method gives the factorization 1000009 = 293 ⋅ 3413 {\displaystyle 1000009=293\cdot 3413} . The idea that two distinct representations of an odd positive integer may lead to a factorization was apparently first proposed by Marin Mersenne. However, it was not put to use extensively until one hundred years later by Euler. His most celebrated use of the method that now bears his name was to factor the number 1000009 {\displaystyle 1000009} , which apparently was previously thought to be prime even though it is not a pseudoprime by any major primality test. Euler's factorization method is more effective than Fermat's for integers whose factors are not close together, and potentially much more efficient than trial division if one can find representations of numbers as sums of two squares reasonably easily. The methods used to find representations of numbers as sums of two squares are essentially the same as with finding differences of squares in Fermat's factorization method. == Disadvantage and limitation == The great disadvantage of Euler's factorization method is that it cannot be applied to factoring an integer with any prime factor of the form 4k + 3 occurring to an odd power in its prime factorization, as such a number can never be the sum of two squares. Even odd composite numbers of the form 4k + 1 are often the product of two primes of the form 4k + 3 (e.g. 3053 = 43 × 71) and again cannot be factored by Euler's method. 
This restricted applicability has made Euler's factorization method disfavoured for computer factoring algorithms, since any user attempting to factor a random integer is unlikely to know whether Euler's method can actually be applied to the integer in question. It is only relatively recently that there have been attempts to develop Euler's method into computer algorithms for use on specialised numbers where it is known Euler's method can be applied. == Theoretical basis == The Brahmagupta–Fibonacci identity states that the product of two sums of two squares is a sum of two squares. Euler's method relies on this identity, but it can be viewed as the converse: given n = a 2 + b 2 = c 2 + d 2 {\displaystyle n=a^{2}+b^{2}=c^{2}+d^{2}} , we find n {\displaystyle n} as a product of sums of two squares. First deduce that a 2 − c 2 = d 2 − b 2 {\displaystyle a^{2}-c^{2}=d^{2}-b^{2}} and factor both sides to get ( a − c ) ( a + c ) = ( d − b ) ( d + b ) {\displaystyle (a-c)(a+c)=(d-b)(d+b)} (1) Now let k = gcd ⁡ ( a − c , d − b ) {\displaystyle k=\operatorname {gcd} (a-c,d-b)} and h = gcd ⁡ ( a + c , d + b ) {\displaystyle h=\operatorname {gcd} (a+c,d+b)} so that there exist constants l , m , l ′ , m ′ {\displaystyle l,m,l',m'} satisfying ( a − c ) = k l {\displaystyle (a-c)=kl} , ( d − b ) = k m {\displaystyle (d-b)=km} , gcd ⁡ ( l , m ) = 1 {\displaystyle \operatorname {gcd} (l,m)=1} ( a + c ) = h m ′ {\displaystyle (a+c)=hm'} , ( d + b ) = h l ′ {\displaystyle (d+b)=hl'} , gcd ⁡ ( l ′ , m ′ ) = 1 {\displaystyle \operatorname {gcd} (l',m')=1} Substituting these into equation (1) gives k l h m ′ = k m h l ′ {\displaystyle klhm'=kmhl'} Canceling common factors yields l m ′ = l ′ m {\displaystyle lm'=l'm} Now using the fact that ( l , m ) {\displaystyle (l,m)} and ( l ′ , m ′ ) {\displaystyle \left(l',m'\right)} are pairs of relatively prime numbers, we find that l = l ′ {\displaystyle l=l'} m = m ′ {\displaystyle m=m'} So ( a − c ) = k l {\displaystyle (a-c)=kl} ( d − b ) = k m {\displaystyle (d-b)=km} ( a + c ) = h m {\displaystyle (a+c)=hm} ( d + b ) = h l {\displaystyle (d+b)=hl} We now see that m = gcd ⁡ ( a + c , d − b ) {\displaystyle m=\operatorname {gcd} (a+c,d-b)} and l = gcd ⁡ ( a − c , d + b ) {\displaystyle l=\operatorname {gcd} (a-c,d+b)} Applying the Brahmagupta–Fibonacci identity we get ( k 2 + h 2 ) ( l 2 + m 2 ) = ( k l + h m ) 2 + ( k m − h l ) 2 = ( ( a − c ) + ( a + c ) ) 2 + ( ( d − b ) − ( d + b ) ) 2 = ( 2 a ) 2 + ( 2 b ) 2 = 4 n . {\displaystyle \left(k^{2}+h^{2}\right)\left(l^{2}+m^{2}\right)=(kl+hm)^{2}+(km-hl)^{2}={\bigl (}(a-c)+(a+c){\bigr )}^{2}+{\bigl (}(d-b)-(d+b){\bigr )}^{2}=(2a)^{2}+(2b)^{2}=4n.} As each factor is a sum of two squares, one of these must contain both even numbers: either ( k , h ) {\displaystyle (k,h)} or ( l , m ) {\displaystyle (l,m)} . Without loss of generality, assume that pair ( k , h ) {\displaystyle (k,h)} is even. The factorization then becomes n = ( ( k 2 ) 2 + ( h 2 ) 2 ) ( l 2 + m 2 ) . {\displaystyle n=\left(\left({\tfrac {k}{2}}\right)^{2}+\left({\tfrac {h}{2}}\right)^{2}\right)\left(l^{2}+m^{2}\right).\,} == Worked example == Since: 1000009 = 1000 2 + 3 2 = 972 2 + 235 2 {\displaystyle \ 1000009=1000^{2}+3^{2}=972^{2}+235^{2}} we have, from the formulas above with a = 1000, b = 3, c = 972 and d = 235: a − c = 28, a + c = 1972, d − b = 232, d + b = 238, and the four greatest common divisors gcd(28, 232) = 4, gcd(1972, 238) = 34, gcd(28, 238) = 14, gcd(1972, 232) = 116. Thus, 1000009 = [ ( 4 2 ) 2 + ( 34 2 ) 2 ] ⋅ [ ( 14 2 ) 2 + ( 116 2 ) 2 ] {\displaystyle 1000009=\left[\left({\frac {4}{2}}\right)^{2}+\left({\frac {34}{2}}\right)^{2}\right]\cdot \left[\left({\frac {14}{2}}\right)^{2}+\left({\frac {116}{2}}\right)^{2}\right]\,} = ( 2 2 + 17 2 ) ⋅ ( 7 2 + 58 2 ) {\displaystyle =\left(2^{2}+17^{2}\right)\cdot \left(7^{2}+58^{2}\right)\,} = ( 4 + 289 ) ⋅ ( 49 + 3364 ) {\displaystyle =(4+289)\cdot (49+3364)\,} = 293 ⋅ 3413 {\displaystyle =293\cdot 3413\,} == Pseudocode ==
function Euler_factorize(int n) -> list[int]
    if is_prime(n) then
        print("Number is not factorable")
        exit function
    for-loop from a=1 to a=ceiling(sqrt(n))
        b2 = n - a*a
        b = floor(sqrt(b2))
        if b*b==b2 then break loop preserving a,b
    if a*a+b*b!=n then
        print("Failed to find any expression for n as sum of squares")
        exit function
    for-loop from c=a+1 to c=ceiling(sqrt(n))
        d2 = n - c*c
        d = floor(sqrt(d2))
        if d*d==d2 then break loop preserving c,d
    if c*c+d*d!=n then
        print("Failed to find a second expression for n as sum of squares")
        exit function
    A = c-a, B = c+a
    C = b-d, D = b+d
    k = GCD(A,C)//2, h = GCD(B,D)//2
    l = GCD(A,D)//2, m = GCD(B,C)//2
    factor1 = k*k + h*h
    factor2 = l*l + m*m
    return list[ factor1, factor2 ]
== References == Ore, Oystein (1988). "Euler's Factorization Method". Number Theory and Its History. Courier Corporation. pp. 59–64. ISBN 978-0-486-65620-5. McKee, James (1996). "Turning Euler's Factoring Method into a Factoring Algorithm". Bulletin of the London Mathematical Society. 4 (28): 351–355. doi:10.1112/blms/28.4.351.
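The pseudocode translates almost line for line into Python. The sketch below is ours; it adds one guard not present in the pseudocode (swapping c and d when a and c differ in parity, so that all four gcds are even before the halving):

```python
from math import gcd, isqrt

def euler_factorize(n):
    """Euler's factorization via two representations n = a^2+b^2 = c^2+d^2.

    Returns two nontrivial factors, or None if two essentially different
    representations are not found (the method is then not applicable).
    """
    reps = []
    for x in range(1, isqrt(n) + 1):
        y2 = n - x * x
        y = isqrt(y2)
        if y * y == y2 and x <= y:          # x <= y avoids mirrored duplicates
            reps.append((x, y))
            if len(reps) == 2:
                break
    if len(reps) < 2:
        return None
    (a, b), (c, d) = reps
    if (a - c) % 2 != 0:                    # our guard: make a and c share parity
        c, d = d, c
    A, B = c - a, c + a
    C, D = b - d, b + d
    k, h = gcd(A, C) // 2, gcd(B, D) // 2
    l, m = gcd(A, D) // 2, gcd(B, C) // 2
    return k * k + h * h, l * l + m * m

print(euler_factorize(1000009))   # (293, 3413)
print(euler_factorize(65))        # (5, 13)
```

For 1000009 the scan finds (3, 1000) and (235, 972), reproducing the worked example; for 65 = 1² + 8² = 4² + 7² the parity guard swaps (4, 7) to (7, 4) before the gcds are taken.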
Wikipedia/Euler's_factorization_method
In numerical analysis, predictor–corrector methods belong to a class of algorithms designed to integrate ordinary differential equations – to find an unknown function that satisfies a given differential equation. All such algorithms proceed in two steps: The initial, "prediction" step, starts from a function fitted to the function-values and derivative-values at a preceding set of points to extrapolate ("anticipate") this function's value at a subsequent, new point. The next, "corrector" step refines the initial approximation by using the predicted value of the function and another method to interpolate that unknown function's value at the same subsequent point. == Predictor–corrector methods for solving ODEs == When considering the numerical solution of ordinary differential equations (ODEs), a predictor–corrector method typically uses an explicit method for the predictor step and an implicit method for the corrector step. === Example: Euler method with the trapezoidal rule === A simple predictor–corrector method (known as Heun's method) can be constructed from the Euler method (an explicit method) and the trapezoidal rule (an implicit method). Consider the differential equation y ′ = f ( t , y ) , y ( t 0 ) = y 0 , {\displaystyle y'=f(t,y),\quad y(t_{0})=y_{0},} and denote the step size by h {\displaystyle h} . First, the predictor step: starting from the current value y i {\displaystyle y_{i}} , calculate an initial guess value y ~ i + 1 {\displaystyle {\tilde {y}}_{i+1}} via the Euler method, y ~ i + 1 = y i + h f ( t i , y i ) . {\displaystyle {\tilde {y}}_{i+1}=y_{i}+hf(t_{i},y_{i}).} Next, the corrector step: improve the initial guess using trapezoidal rule, y i + 1 = y i + 1 2 h ( f ( t i , y i ) + f ( t i + 1 , y ~ i + 1 ) ) . {\displaystyle y_{i+1}=y_{i}+{\tfrac {1}{2}}h{\bigl (}f(t_{i},y_{i})+f(t_{i+1},{\tilde {y}}_{i+1}){\bigr )}.} That value is used as the next step. 
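A sketch of this predictor–corrector pair in Python (ours; the step-function name is arbitrary), applied to y′ = y so the result at t = 1 can be compared against e:

```python
import math

def heun_step(f, t, y, h):
    """Euler predictor followed by one trapezoidal correction (Heun's method)."""
    y_pred = y + h * f(t, y)                              # predict
    return y + 0.5 * h * (f(t, y) + f(t + h, y_pred))     # correct

f = lambda t, y: y              # y' = y, y(0) = 1, exact solution e^t
t, y, h = 0.0, 1.0, 0.01
for _ in range(100):
    y = heun_step(f, t, y, h)
    t += h
print(abs(y - math.e))          # ~4.5e-5: second-order accurate at t = 1
```

Quartering the error when h is halved confirms the second-order accuracy of the pair, compared with first order for the Euler predictor alone.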
=== PEC mode and PECE mode === There are different variants of a predictor–corrector method, depending on how often the corrector method is applied. The Predict–Evaluate–Correct–Evaluate (PECE) mode refers to the variant in the above example: y ~ i + 1 = y i + h f ( t i , y i ) , y i + 1 = y i + 1 2 h ( f ( t i , y i ) + f ( t i + 1 , y ~ i + 1 ) ) . {\displaystyle {\begin{aligned}{\tilde {y}}_{i+1}&=y_{i}+hf(t_{i},y_{i}),\\y_{i+1}&=y_{i}+{\tfrac {1}{2}}h{\bigl (}f(t_{i},y_{i})+f(t_{i+1},{\tilde {y}}_{i+1}){\bigr )}.\end{aligned}}} It is also possible to evaluate the function f only once per step by using the method in Predict–Evaluate–Correct (PEC) mode: y ~ i + 1 = y i + h f ( t i , y ~ i ) , y i + 1 = y i + 1 2 h ( f ( t i , y ~ i ) + f ( t i + 1 , y ~ i + 1 ) ) . {\displaystyle {\begin{aligned}{\tilde {y}}_{i+1}&=y_{i}+hf(t_{i},{\tilde {y}}_{i}),\\y_{i+1}&=y_{i}+{\tfrac {1}{2}}h{\bigl (}f(t_{i},{\tilde {y}}_{i})+f(t_{i+1},{\tilde {y}}_{i+1}){\bigr )}.\end{aligned}}} Additionally, the corrector step can be repeated in the hope that this achieves an even better approximation to the true solution. If the corrector method is run twice, this yields the PECECE mode: y ~ i + 1 = y i + h f ( t i , y i ) , y ^ i + 1 = y i + 1 2 h ( f ( t i , y i ) + f ( t i + 1 , y ~ i + 1 ) ) , y i + 1 = y i + 1 2 h ( f ( t i , y i ) + f ( t i + 1 , y ^ i + 1 ) ) . {\displaystyle {\begin{aligned}{\tilde {y}}_{i+1}&=y_{i}+hf(t_{i},y_{i}),\\{\hat {y}}_{i+1}&=y_{i}+{\tfrac {1}{2}}h{\bigl (}f(t_{i},y_{i})+f(t_{i+1},{\tilde {y}}_{i+1}){\bigr )},\\y_{i+1}&=y_{i}+{\tfrac {1}{2}}h{\bigl (}f(t_{i},y_{i})+f(t_{i+1},{\hat {y}}_{i+1}){\bigr )}.\end{aligned}}} The PECEC mode has one fewer function evaluation than PECECE mode. More generally, if the corrector is run k times, the method is in P(EC)k or P(EC)kE mode. If the corrector method is iterated until it converges, this could be called PE(CE)∞. 
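The P(EC)kE family can be sketched by parameterizing the number of corrector passes. This is a hypothetical helper built on the Euler/trapezoidal pair from the example above, not standard library code:

```python
def pecke_step(f, t, y, h, k=1):
    """One P(EC)^k E step: Euler predictor, then k trapezoidal corrections,
    re-evaluating f after each correction. k=1 reproduces PECE mode."""
    f_old = f(t, y)           # evaluation at the current point
    eta = y + h * f_old       # P: Euler predictor
    for _ in range(k):        # (EC)^k: correct, then re-evaluate
        eta = y + 0.5 * h * (f_old + f(t + h, eta))
    return eta

# As k grows, the iterates approach the fixed point of the implicit
# trapezoidal rule; for y' = y that fixed point is y * (1 + h/2) / (1 - h/2).
y1 = pecke_step(lambda t, y: y, 0.0, 1.0, 0.1, k=30)
```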
== See also == Backward differentiation formula Beeman's algorithm Heun's method Mehrotra predictor–corrector method Numerical continuation == Notes == == References == Butcher, John C. (2003), Numerical Methods for Ordinary Differential Equations, New York: John Wiley & Sons, ISBN 978-0-471-96758-3. Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 17.6. Multistep, Multivalue, and Predictor-Corrector Methods". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. == External links == Weisstein, Eric W. "Predictor-Corrector Methods". MathWorld. Predictor–corrector methods for differential equations
Wikipedia/Predictor–corrector_method
In mathematics, the semi-implicit Euler method, also called symplectic Euler, semi-explicit Euler, Euler–Cromer, and Newton–Størmer–Verlet (NSV), is a modification of the Euler method for solving Hamilton's equations, a system of ordinary differential equations that arises in classical mechanics. It is a symplectic integrator and hence it yields better results than the standard Euler method. == Origin == The method has been discovered and forgotten many times, dating back to Newton's Principia, as recalled by Richard Feynman in his Feynman Lectures (Vol. 1, Sec. 9.6). In modern times, the method was rediscovered in a 1956 preprint by René De Vogelaere that, although never formally published, influenced subsequent work on higher-order symplectic methods. == Setting == The semi-implicit Euler method can be applied to a pair of differential equations of the form d x d t = f ( t , v ) d v d t = g ( t , x ) , {\displaystyle {\begin{aligned}{dx \over dt}&=f(t,v)\\{dv \over dt}&=g(t,x),\end{aligned}}} where f and g are given functions. Here, x and v may be either scalars or vectors. The equations of motion in Hamiltonian mechanics take this form if the Hamiltonian is of the form H = T ( t , v ) + V ( t , x ) . {\displaystyle H=T(t,v)+V(t,x).\,} The differential equations are to be solved with the initial condition x ( t 0 ) = x 0 , v ( t 0 ) = v 0 . {\displaystyle x(t_{0})=x_{0},\qquad v(t_{0})=v_{0}.} == The method == The semi-implicit Euler method produces an approximate discrete solution by iterating v n + 1 = v n + g ( t n , x n ) Δ t x n + 1 = x n + f ( t n , v n + 1 ) Δ t {\displaystyle {\begin{aligned}v_{n+1}&=v_{n}+g(t_{n},x_{n})\,\Delta t\\[0.3em]x_{n+1}&=x_{n}+f(t_{n},v_{n+1})\,\Delta t\end{aligned}}} where Δt is the time step and tn = t0 + nΔt is the time after n steps. The difference with the standard Euler method is that the semi-implicit Euler method uses vn+1 in the equation for xn+1, while the Euler method uses vn.
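The iteration above can be sketched in a few lines of Python (function and variable names are illustrative):

```python
def semi_implicit_euler(f, g, x0, v0, t0, dt, n_steps):
    """Iterate v_{n+1} = v_n + g(t_n, x_n) dt, then
    x_{n+1} = x_n + f(t_n, v_{n+1}) dt."""
    t, x, v = t0, x0, v0
    for _ in range(n_steps):
        v = v + g(t, x) * dt
        x = x + f(t, v) * dt   # note: uses the already-updated v
        t = t + dt
    return x, v

# Harmonic oscillator: dx/dt = v, dv/dt = -x, starting at x = 1, v = 0.
# After t ~ 2*pi the trajectory should return near (1, 0), with the
# "energy" x^2 + v^2 staying close to 1 rather than growing.
x, v = semi_implicit_euler(lambda t, v: v, lambda t, x: -x,
                           1.0, 0.0, 0.0, 0.01, 628)
```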
Applying the method with negative time step to the computation of ( x n , v n ) {\displaystyle (x_{n},v_{n})} from ( x n + 1 , v n + 1 ) {\displaystyle (x_{n+1},v_{n+1})} and rearranging leads to the second variant of the semi-implicit Euler method x n + 1 = x n + f ( t n , v n ) Δ t v n + 1 = v n + g ( t n , x n + 1 ) Δ t {\displaystyle {\begin{aligned}x_{n+1}&=x_{n}+f(t_{n},v_{n})\,\Delta t\\[0.3ex]v_{n+1}&=v_{n}+g(t_{n},x_{n+1})\,\Delta t\end{aligned}}} which has similar properties. The semi-implicit Euler is a first-order integrator, just as the standard Euler method. This means that it commits a global error of the order of Δt. However, the semi-implicit Euler method is a symplectic integrator, unlike the standard method. As a consequence, the semi-implicit Euler method almost conserves the energy (when the Hamiltonian is time-independent). Often, the energy increases steadily when the standard Euler method is applied, making it far less accurate. Alternating between the two variants of the semi-implicit Euler method leads in one simplification to the Störmer-Verlet integration and in a slightly different simplification to the leapfrog integration, increasing both the order of the error and the order of preservation of energy. The stability region of the semi-implicit method was presented by Niiranen, although the semi-implicit Euler was misleadingly called symmetric Euler in his paper. The semi-implicit method models the simulated system correctly if the complex roots of the characteristic equation lie within a circle in the complex root plane. For real roots the stability region extends outside the circle, for which the criterion is s > − 2 / Δ t {\displaystyle s>-2/\Delta t} . As can be seen, the semi-implicit method can simulate correctly both stable systems that have their roots in the left half plane and unstable systems that have their roots in the right half plane. This is a clear advantage over forward (standard) Euler and backward Euler.
Forward Euler tends to have less damping than the real system when the negative real parts of the roots get near the imaginary axis and backward Euler may show the system be stable even when the roots are in the right half plane. == Example == The motion of a spring satisfying Hooke's law is given by d x d t = v ( t ) d v d t = − k m x = − ω 2 x . {\displaystyle {\begin{aligned}{\frac {dx}{dt}}&=v(t)\\[0.2em]{\frac {dv}{dt}}&=-{\frac {k}{m}}\,x=-\omega ^{2}\,x.\end{aligned}}} The semi-implicit Euler for this equation is v n + 1 = v n − ω 2 x n Δ t x n + 1 = x n + v n + 1 Δ t . {\displaystyle {\begin{aligned}v_{n+1}&=v_{n}-\omega ^{2}\,x_{n}\,\Delta t\\[0.2em]x_{n+1}&=x_{n}+v_{n+1}\,\Delta t.\end{aligned}}} Substituting v n + 1 {\displaystyle v_{n+1}} in the second equation with the expression given by the first equation, the iteration can be expressed in the following matrix form [ x n + 1 v n + 1 ] = [ 1 − ω 2 Δ t 2 Δ t − ω 2 Δ t 1 ] [ x n v n ] , {\displaystyle {\begin{bmatrix}x_{n+1}\\v_{n+1}\end{bmatrix}}={\begin{bmatrix}1-\omega ^{2}\Delta t^{2}&\Delta t\\-\omega ^{2}\Delta t&1\end{bmatrix}}{\begin{bmatrix}x_{n}\\v_{n}\end{bmatrix}},} and since the determinant of the matrix is 1 the transformation is area-preserving. The iteration preserves the modified energy functional E h ( x , v ) = 1 2 ( v 2 + ω 2 x 2 − ω 2 Δ t v x ) {\displaystyle E_{h}(x,v)={\tfrac {1}{2}}\left(v^{2}+\omega ^{2}\,x^{2}-\omega ^{2}\Delta t\,vx\right)} exactly, leading to stable periodic orbits (for sufficiently small step size) that deviate by O ( Δ t ) {\displaystyle O(\Delta t)} from the exact orbits. The exact circular frequency ω {\displaystyle \omega } increases in the numerical approximation by a factor of 1 + 1 24 ω 2 Δ t 2 + O ( Δ t 4 ) {\displaystyle 1+{\tfrac {1}{24}}\omega ^{2}\Delta t^{2}+O(\Delta t^{4})} . == References == Nikolic, Branislav K. "Euler-Cromer method". University of Delaware. Retrieved 2021-09-29. Vesely, Franz J. (2001). 
Computational Physics: An Introduction (2nd ed.). Springer. pp. 117. ISBN 978-0-306-46631-1. Giordano, Nicholas J.; Hisao Nakanishi (July 2005). Computational Physics (2nd ed.). Benjamin Cummings. ISBN 0-13-146990-8.
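The claims in the spring example above, that the update matrix has unit determinant and that the modified energy functional E_h is preserved by the iteration, can be checked numerically. A sketch, assuming ω = 1 and Δt = 0.05:

```python
w, h = 1.0, 0.05   # omega and time step

def step(x, v):
    """One semi-implicit Euler step for the spring: dv/dt = -w^2 x, dx/dt = v."""
    v = v - w * w * x * h
    x = x + v * h
    return x, v

def E_h(x, v):
    """Modified energy functional preserved by the iteration."""
    return 0.5 * (v * v + w * w * x * x - w * w * h * v * x)

# Determinant of the update matrix [[1 - w^2 h^2, h], [-w^2 h, 1]].
det = (1 - w * w * h * h) * 1 - h * (-w * w * h)

x, v = 1.0, 0.0
E0 = E_h(x, v)
for _ in range(100_000):
    x, v = step(x, v)
drift = abs(E_h(x, v) - E0)   # stays at round-off level over 100k steps
```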
Wikipedia/Semi-implicit_Euler_method
In number theory, Euler's conjecture is a disproved conjecture related to Fermat's Last Theorem. It was proposed by Leonhard Euler in 1769. It states that for all integers n and k greater than 1, if the sum of n many kth powers of positive integers is itself a kth power, then n is greater than or equal to k: a 1 k + a 2 k + ⋯ + a n k = b k ⟹ n ≥ k {\displaystyle a_{1}^{k}+a_{2}^{k}+\dots +a_{n}^{k}=b^{k}\implies n\geq k} The conjecture represents an attempt to generalize Fermat's Last Theorem, which is the special case n = 2: if a 1 k + a 2 k = b k , {\displaystyle a_{1}^{k}+a_{2}^{k}=b^{k},} then 2 ≥ k. Although the conjecture holds for the case k = 3 (which follows from Fermat's Last Theorem for the third powers), it was disproved for k = 4 and k = 5. It is unknown whether the conjecture fails or holds for any value k ≥ 6. == Background == Euler was aware of the equality 59^4 + 158^4 = 133^4 + 134^4 involving sums of four fourth powers; this, however, is not a counterexample because no term is isolated on one side of the equation. He also provided a complete solution to the four cubes problem as in Plato's number 3^3 + 4^3 + 5^3 = 6^3 or the taxicab number 1729. The general solution of the equation x 1 3 + x 2 3 = x 3 3 + x 4 3 {\displaystyle x_{1}^{3}+x_{2}^{3}=x_{3}^{3}+x_{4}^{3}} is x 1 = λ ( 1 − ( a − 3 b ) ( a 2 + 3 b 2 ) ) x 2 = λ ( ( a + 3 b ) ( a 2 + 3 b 2 ) − 1 ) x 3 = λ ( ( a + 3 b ) − ( a 2 + 3 b 2 ) 2 ) x 4 = λ ( ( a 2 + 3 b 2 ) 2 − ( a − 3 b ) ) {\displaystyle {\begin{aligned}x_{1}&=\lambda (1-(a-3b)(a^{2}+3b^{2}))\\[2pt]x_{2}&=\lambda ((a+3b)(a^{2}+3b^{2})-1)\\[2pt]x_{3}&=\lambda ((a+3b)-(a^{2}+3b^{2})^{2})\\[2pt]x_{4}&=\lambda ((a^{2}+3b^{2})^{2}-(a-3b))\end{aligned}}} where a, b and λ {\displaystyle {\lambda }} are any rational numbers. == Counterexamples == Euler's conjecture was disproven by L. J. Lander and T. R. Parkin in 1966 when, through a direct computer search on a CDC 6600, they found a counterexample for k = 5.
This was published in a paper comprising just two sentences. A total of three primitive (that is, in which the summands do not all have a common factor) counterexamples are known: 144 5 = 27 5 + 84 5 + 110 5 + 133 5 14132 5 = ( − 220 ) 5 + 5027 5 + 6237 5 + 14068 5 85359 5 = 55 5 + 3183 5 + 28969 5 + 85282 5 {\displaystyle {\begin{aligned}144^{5}&=27^{5}+84^{5}+110^{5}+133^{5}\\14132^{5}&=(-220)^{5}+5027^{5}+6237^{5}+14068^{5}\\85359^{5}&=55^{5}+3183^{5}+28969^{5}+85282^{5}\end{aligned}}} (Lander & Parkin, 1966); (Scher & Seidl, 1996); (Frye, 2004). In 1988, Noam Elkies published a method to construct an infinite sequence of counterexamples for the k = 4 case. His smallest counterexample was 20615673 4 = 2682440 4 + 15365639 4 + 18796760 4 . {\displaystyle 20615673^{4}=2682440^{4}+15365639^{4}+18796760^{4}.} A particular case of Elkies' solutions can be reduced to the identity ( 85 v 2 + 484 v − 313 ) 4 + ( 68 v 2 − 586 v + 10 ) 4 + ( 2 u ) 4 = ( 357 v 2 − 204 v + 363 ) 4 , {\displaystyle (85v^{2}+484v-313)^{4}+(68v^{2}-586v+10)^{4}+(2u)^{4}=(357v^{2}-204v+363)^{4},} where u 2 = 22030 + 28849 v − 56158 v 2 + 36941 v 3 − 31790 v 4 . {\displaystyle u^{2}=22030+28849v-56158v^{2}+36941v^{3}-31790v^{4}.} This is an elliptic curve with a rational point at v1 = −⁠31/467⁠. From this initial rational point, one can compute an infinite collection of others. Substituting v1 into the identity and removing common factors gives the numerical example cited above. In 1988, Roger Frye found the smallest possible counterexample 95800 4 + 217519 4 + 414560 4 = 422481 4 {\displaystyle 95800^{4}+217519^{4}+414560^{4}=422481^{4}} for k = 4 by a direct computer search using techniques suggested by Elkies. This solution is the only one with values of the variables below 1,000,000. == Generalizations == In 1967, L. J. Lander, T. R. 
Parkin, and John Selfridge conjectured that if ∑ i = 1 n a i k = ∑ j = 1 m b j k {\displaystyle \sum _{i=1}^{n}a_{i}^{k}=\sum _{j=1}^{m}b_{j}^{k}} , where ai ≠ bj are positive integers for all 1 ≤ i ≤ n and 1 ≤ j ≤ m, then m + n ≥ k. In the special case m = 1, the conjecture states that if ∑ i = 1 n a i k = b k {\displaystyle \sum _{i=1}^{n}a_{i}^{k}=b^{k}} (under the conditions given above) then n ≥ k − 1. The special case may be described as the problem of giving a partition of a perfect power into few like powers. For k = 4, 5, 7, 8 and n = k or k − 1, there are many known solutions. Some of these are listed below. See OEIS: A347773 for more data. === k = 3 === 3 3 + 4 3 + 5 3 = 6 3 {\displaystyle 3^{3}+4^{3}+5^{3}=6^{3}} (Plato's number 216) This is the case a = 1, b = 0 of Srinivasa Ramanujan's formula ( 3 a 2 + 5 a b − 5 b 2 ) 3 + ( 4 a 2 − 4 a b + 6 b 2 ) 3 + ( 5 a 2 − 5 a b − 3 b 2 ) 3 = ( 6 a 2 − 4 a b + 4 b 2 ) 3 {\displaystyle (3a^{2}+5ab-5b^{2})^{3}+(4a^{2}-4ab+6b^{2})^{3}+(5a^{2}-5ab-3b^{2})^{3}=(6a^{2}-4ab+4b^{2})^{3}} A cube as the sum of three cubes can also be parameterized in one of two ways: a 3 ( a 3 + b 3 ) 3 = b 3 ( a 3 + b 3 ) 3 + a 3 ( a 3 − 2 b 3 ) 3 + b 3 ( 2 a 3 − b 3 ) 3 a 3 ( a 3 + 2 b 3 ) 3 = a 3 ( a 3 − b 3 ) 3 + b 3 ( a 3 − b 3 ) 3 + b 3 ( 2 a 3 + b 3 ) 3 . {\displaystyle {\begin{aligned}a^{3}(a^{3}+b^{3})^{3}&=b^{3}(a^{3}+b^{3})^{3}+a^{3}(a^{3}-2b^{3})^{3}+b^{3}(2a^{3}-b^{3})^{3}\\[6pt]a^{3}(a^{3}+2b^{3})^{3}&=a^{3}(a^{3}-b^{3})^{3}+b^{3}(a^{3}-b^{3})^{3}+b^{3}(2a^{3}+b^{3})^{3}.\end{aligned}}} The number 2,100,0003 can be expressed as the sum of three positive cubes in nine different ways. === k = 4 === 422481 4 = 95800 4 + 217519 4 + 414560 4 353 4 = 30 4 + 120 4 + 272 4 + 315 4 {\displaystyle {\begin{aligned}422481^{4}&=95800^{4}+217519^{4}+414560^{4}\\[4pt]353^{4}&=30^{4}+120^{4}+272^{4}+315^{4}\end{aligned}}} (R. Frye, 1988); (R. Norrie, smallest, 1911). 
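The identities quoted above are easy to verify with exact integer arithmetic, for example in Python:

```python
# Lander & Parkin's k = 5 counterexample (four fifth powers)
assert 27**5 + 84**5 + 110**5 + 133**5 == 144**5

# Frye's smallest k = 4 counterexample (three fourth powers)
assert 95800**4 + 217519**4 + 414560**4 == 422481**4

# Norrie's 1911 decomposition of 353^4 into four fourth powers
assert 30**4 + 120**4 + 272**4 + 315**4 == 353**4

# Plato's number: 3^3 + 4^3 + 5^3 = 6^3 = 216
assert 3**3 + 4**3 + 5**3 == 6**3 == 216
```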
=== k = 5 === 144 5 = 27 5 + 84 5 + 110 5 + 133 5 72 5 = 19 5 + 43 5 + 46 5 + 47 5 + 67 5 94 5 = 21 5 + 23 5 + 37 5 + 79 5 + 84 5 107 5 = 7 5 + 43 5 + 57 5 + 80 5 + 100 5 {\displaystyle {\begin{aligned}144^{5}&=27^{5}+84^{5}+110^{5}+133^{5}\\[2pt]72^{5}&=19^{5}+43^{5}+46^{5}+47^{5}+67^{5}\\[2pt]94^{5}&=21^{5}+23^{5}+37^{5}+79^{5}+84^{5}\\[2pt]107^{5}&=7^{5}+43^{5}+57^{5}+80^{5}+100^{5}\end{aligned}}} (Lander & Parkin, 1966); (Lander, Parkin, Selfridge, smallest, 1967); (Lander, Parkin, Selfridge, second smallest, 1967); (Sastry, 1934, third smallest). === k = 6 === It has been known since 2002 that there are no solutions for k = 6 whose final term is ≤ 730000. === k = 7 === 568 7 = 127 7 + 258 7 + 266 7 + 413 7 + 430 7 + 439 7 + 525 7 {\displaystyle 568^{7}=127^{7}+258^{7}+266^{7}+413^{7}+430^{7}+439^{7}+525^{7}} (M. Dodrill, 1999). === k = 8 === 1409 8 = 90 8 + 223 8 + 478 8 + 524 8 + 748 8 + 1088 8 + 1190 8 + 1324 8 {\displaystyle 1409^{8}=90^{8}+223^{8}+478^{8}+524^{8}+748^{8}+1088^{8}+1190^{8}+1324^{8}} (S. Chase, 2000). == See also == Jacobi–Madden equation Prouhet–Tarry–Escott problem Beal conjecture Pythagorean quadruple Generalized taxicab number Sums of powers, a list of related conjectures and theorems == References == == External links == Tito Piezas III, A Collection of Algebraic Identities Archived 2011-10-01 at the Wayback Machine Jaroslaw Wroblewski, Equal Sums of Like Powers Ed Pegg Jr., Math Games, Power Sums James Waldby, A Table of Fifth Powers equal to a Fifth Power (2009) R. Gerbicz, J.-C. Meyrignac, U. Beckert, All solutions of the Diophantine equation a6 + b6 = c6 + d6 + e6 + f6 + g6 for a,b,c,d,e,f,g < 250000 found with a distributed Boinc project EulerNet: Computing Minimal Equal Sums Of Like Powers Weisstein, Eric W. "Euler's Sum of Powers Conjecture". MathWorld. Weisstein, Eric W. "Euler Quartic Conjecture". MathWorld. Weisstein, Eric W. "Diophantine Equation--4th Powers". MathWorld. 
Euler's Conjecture at library.thinkquest.org A simple explanation of Euler's Conjecture at Maths Is Good For You!
Wikipedia/Euler's_sum_of_powers_conjecture
In fluid dynamics, the Euler equations are a set of partial differential equations governing adiabatic and inviscid flow. They are named after Leonhard Euler. In particular, they correspond to the Navier–Stokes equations with zero viscosity and zero thermal conductivity. The Euler equations can be applied to incompressible and compressible flows. The incompressible Euler equations consist of Cauchy equations for conservation of mass and balance of momentum, together with the incompressibility condition that the flow velocity is divergence-free. The compressible Euler equations consist of equations for conservation of mass, balance of momentum, and balance of energy, together with a suitable constitutive equation for the specific energy density of the fluid. Historically, only the equations of conservation of mass and balance of momentum were derived by Euler. However, fluid dynamics literature often refers to the full set of the compressible Euler equations – including the energy equation – as "the compressible Euler equations". The mathematical characters of the incompressible and compressible Euler equations are rather different. For constant fluid density, the incompressible equations can be written as a quasilinear advection equation for the fluid velocity together with an elliptic Poisson's equation for the pressure. On the other hand, the compressible Euler equations form a quasilinear hyperbolic system of conservation equations. The Euler equations can be formulated in a "convective form" (also called the "Lagrangian form") or a "conservation form" (also called the "Eulerian form"). The convective form emphasizes changes to the state in a frame of reference moving with the fluid. The conservation form emphasizes the mathematical interpretation of the equations as conservation equations for a control volume fixed in space (which is useful from a numerical point of view). 
== History == The Euler equations first appeared in published form in Euler's article "Principes généraux du mouvement des fluides", published in Mémoires de l'Académie des Sciences de Berlin in 1757 (although Euler had previously presented his work to the Berlin Academy in 1752). Prior work included contributions from the Bernoulli family as well as from Jean le Rond d'Alembert. The Euler equations were among the first partial differential equations to be written down, after the wave equation. In Euler's original work, the system of equations consisted of the momentum and continuity equations, and thus was underdetermined except in the case of an incompressible flow. An additional equation, which was called the adiabatic condition, was supplied by Pierre-Simon Laplace in 1816. During the second half of the 19th century, it was found that the equation related to the balance of energy must at all times be kept for compressible flows, and the adiabatic condition is a consequence of the fundamental laws in the case of smooth solutions. With the discovery of the special theory of relativity, the concepts of energy density, momentum density, and stress were unified into the concept of the stress–energy tensor, and energy and momentum were likewise unified into a single concept, the energy–momentum vector. 
== Incompressible Euler equations with constant and uniform density == In convective form (i.e., the form with the convective operator made explicit in the momentum equation), the incompressible Euler equations in case of density constant in time and uniform in space are: D u D t = − ∇ w + g , ∇ ⋅ u = 0 , {\displaystyle {\begin{aligned}{\frac {D\mathbf {u} }{Dt}}&=-\nabla w+\mathbf {g} ,\\\nabla \cdot \mathbf {u} &=0,\end{aligned}}} where: u {\displaystyle \mathbf {u} } is the flow velocity vector, with components in an N-dimensional space u 1 , u 2 , … , u N {\displaystyle u_{1},u_{2},\dots ,u_{N}} , D Φ D t = ∂ Φ ∂ t + v ⋅ ∇ Φ {\displaystyle {\frac {D{\boldsymbol {\Phi }}}{Dt}}={\frac {\partial {\boldsymbol {\Phi }}}{\partial t}}+\mathbf {v} \cdot \nabla {\boldsymbol {\Phi }}} , for a generic function (or field) Φ {\displaystyle {\boldsymbol {\Phi }}} denotes its material derivative in time with respect to the advective field v {\displaystyle \mathbf {v} } and ∇ w {\displaystyle \nabla w} is the gradient of the specific (with the sense of per unit mass) thermodynamic work, the internal source term, and ∇ ⋅ u {\displaystyle \nabla \cdot \mathbf {u} } is the flow velocity divergence. g {\displaystyle \mathbf {g} } represents body accelerations (per unit mass) acting on the continuum, for example gravity, inertial accelerations, electric field acceleration, and so on. The first equation is the Euler momentum equation with uniform density (for this equation alone, the density need not be constant in time). By expanding the material derivative, the equations become: ∂ u ∂ t + ( u ⋅ ∇ ) u = − ∇ w + g , ∇ ⋅ u = 0. {\displaystyle {\begin{aligned}{\partial \mathbf {u} \over \partial t}+(\mathbf {u} \cdot \nabla )\mathbf {u} &=-\nabla w+\mathbf {g} ,\\\nabla \cdot \mathbf {u} &=0.\end{aligned}}} In fact, for a flow with uniform density ρ 0 {\displaystyle \rho _{0}} the following identity holds: ∇ w ≡ ∇ ( p ρ 0 ) = 1 ρ 0 ∇ p , {\displaystyle \nabla w\equiv \nabla \left({\frac {p}{\rho _{0}}}\right)={\frac {1}{\rho _{0}}}\nabla p,} where p {\displaystyle p} is the mechanical pressure.
The second equation is the incompressible constraint, stating the flow velocity is a solenoidal field (the order of the equations is not causal, but underlines the fact that the incompressible constraint is not a degenerate form of the continuity equation, but rather of the energy equation, as it will become clear in the following). Notably, the continuity equation would be required also in this incompressible case as an additional third equation in case of density varying in time or varying in space. For example, with density nonuniform in space but constant in time, the continuity equation to be added to the above set would correspond to: ∂ ρ ∂ t = 0. {\displaystyle {\frac {\partial \rho }{\partial t}}=0.} So the case of constant and uniform density is the only one not requiring the continuity equation as additional equation regardless of the presence or absence of the incompressible constraint. In fact, the case of incompressible Euler equations with constant and uniform density discussed here is a toy model featuring only two simplified equations, so it is ideal for didactical purposes even if with limited physical relevance. The equations above thus represent respectively conservation of mass (1 scalar equation) and momentum (1 vector equation containing N {\displaystyle N} scalar components, where N {\displaystyle N} is the physical dimension of the space of interest). Flow velocity and pressure are the so-called physical variables. In a coordinate system given by ( x 1 , … , x N ) {\displaystyle \left(x_{1},\dots ,x_{N}\right)} the velocity and external force vectors u {\displaystyle \mathbf {u} } and g {\displaystyle \mathbf {g} } have components ( u 1 , … , u N ) {\displaystyle (u_{1},\dots ,u_{N})} and ( g 1 , … , g N ) {\displaystyle \left(g_{1},\dots ,g_{N}\right)} , respectively. Then the equations may be expressed in subscript notation as: ∂ u i ∂ t + ∑ j = 1 N ∂ ( u i u j + w δ i j ) ∂ x j = g i , ∑ i = 1 N ∂ u i ∂ x i = 0. 
{\displaystyle {\begin{aligned}{\partial u_{i} \over \partial t}+\sum _{j=1}^{N}{\partial \left(u_{i}u_{j}+w\delta _{ij}\right) \over \partial x_{j}}&=g_{i},\\\sum _{i=1}^{N}{\partial u_{i} \over \partial x_{i}}&=0.\end{aligned}}} where the i {\displaystyle i} and j {\displaystyle j} subscripts label the N-dimensional space components, and δ i j {\displaystyle \delta _{ij}} is the Kronecker delta. The use of Einstein notation (where the sum is implied by repeated indices instead of sigma notation) is also frequent. === Properties === Although Euler first presented these equations in 1755, many fundamental questions or concepts about them remain unanswered. In three space dimensions, in certain simplified scenarios, the Euler equations produce singularities. Smooth solutions of the free equations (i.e., with no source term: g = 0) satisfy the conservation of specific kinetic energy: ∂ ∂ t ( 1 2 u 2 ) + ∇ ⋅ ( 1 2 u 2 u + w u ) = 0. {\displaystyle {\partial \over \partial t}\left({\frac {1}{2}}u^{2}\right)+\nabla \cdot \left({\frac {1}{2}}u^{2}\mathbf {u} +w\mathbf {u} \right)=0.} In the one-dimensional case without the source term (both pressure gradient and external force), the momentum equation becomes the inviscid Burgers' equation: ∂ u ∂ t + u ∂ u ∂ x = 0. {\displaystyle {\partial u \over \partial t}+u{\partial u \over \partial x}=0.} This model equation gives many insights into the Euler equations. === Nondimensionalisation === In order to make the equations dimensionless, a characteristic length r 0 {\displaystyle r_{0}} and a characteristic velocity u 0 {\displaystyle u_{0}} need to be defined. These should be chosen such that the dimensionless variables are all of order one. The following dimensionless variables are thus obtained: u ∗ ≡ u u 0 , r ∗ ≡ r r 0 , t ∗ ≡ u 0 r 0 t , p ∗ ≡ w u 0 2 , ∇ ∗ ≡ r 0 ∇ .
{\displaystyle {\begin{aligned}u^{*}&\equiv {\frac {u}{u_{0}}},&r^{*}&\equiv {\frac {r}{r_{0}}},\\[5pt]t^{*}&\equiv {\frac {u_{0}}{r_{0}}}t,&p^{*}&\equiv {\frac {w}{u_{0}^{2}}},\\[5pt]\nabla ^{*}&\equiv r_{0}\nabla .\end{aligned}}} and of the field unit vector: g ^ ≡ g g . {\displaystyle {\hat {\mathbf {g} }}\equiv {\frac {\mathbf {g} }{g}}.} Substitution of these inversed relations in Euler equations, defining the Froude number, yields (omitting the * at apix): Euler equations in the Froude limit (no external field) are named free equations and are conservative. The limit of high Froude numbers (low external field) is thus notable and can be studied with perturbation theory. === Conservation form === The conservation form emphasizes the mathematical properties of Euler equations, and especially the contracted form is often the most convenient one for computational fluid dynamics simulations. Computationally, there are some advantages in using the conserved variables. This gives rise to a large class of numerical methods called conservative methods. The free Euler equations are conservative, in the sense they are equivalent to a conservation equation: ∂ y ∂ t + ∇ ⋅ F = 0 , {\displaystyle {\frac {\partial \mathbf {y} }{\partial t}}+\nabla \cdot \mathbf {F} ={\mathbf {0} },} or simply in Einstein notation: ∂ y j ∂ t + ∂ f i j ∂ r i = 0 i , {\displaystyle {\frac {\partial y_{j}}{\partial t}}+{\frac {\partial f_{ij}}{\partial r_{i}}}=0_{i},} where the conservation quantity y {\displaystyle \mathbf {y} } in this case is a vector, and F {\displaystyle \mathbf {F} } is a flux matrix. This can be simply proved. At last Euler equations can be recast into the particular equation: === Spatial dimensions === For certain problems, especially when used to analyze compressible flow in a duct or in case the flow is cylindrically or spherically symmetric, the one-dimensional Euler equations are a useful first approximation. 
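The one-dimensional reduction to the inviscid Burgers' equation mentioned earlier can be integrated exactly along characteristics while the solution stays smooth: u is constant along the lines x = x0 + u0(x0)t. A sketch, assuming the initial profile u0 = sin and a time before the characteristics cross:

```python
import math

def burgers_exact(x, t, u0=math.sin):
    """Solve x0 + u0(x0)*t = x for the characteristic foot x0 by bisection
    (valid while 1 + u0'(x0)*t > 0, i.e. before the wave breaks),
    then return u(x, t) = u0(x0)."""
    lo, hi = x - abs(t) - 1.0, x + abs(t) + 1.0   # bracket, since |sin| <= 1
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid + u0(mid) * t < x:
            lo = mid
        else:
            hi = mid
    return u0(0.5 * (lo + hi))

# u stays constant along a characteristic: pick x0, follow it to time t.
x0, t = 0.5, 0.5
u_val = burgers_exact(x0 + math.sin(x0) * t, t)
```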
Generally, the Euler equations are solved by Riemann's method of characteristics. This involves finding curves in plane of independent variables (i.e., x {\displaystyle x} and t {\displaystyle t} ) along which partial differential equations (PDEs) degenerate into ordinary differential equations (ODEs). Numerical solutions of the Euler equations rely heavily on the method of characteristics. == Incompressible Euler equations == In convective form the incompressible Euler equations in case of density variable in space are: where the additional variables are: ρ {\displaystyle \rho } is the fluid mass density, p {\displaystyle p} is the pressure, p = ρ w {\displaystyle p=\rho w} . The first equation, which is the new one, is the incompressible continuity equation. In fact the general continuity equation would be: ∂ ρ ∂ t + u ⋅ ∇ ρ + ρ ∇ ⋅ u = 0 , {\displaystyle {\partial \rho \over \partial t}+\mathbf {u} \cdot \nabla \rho +\rho \nabla \cdot \mathbf {u} =0,} but here the last term is identically zero for the incompressibility constraint. === Conservation form === The incompressible Euler equations in the Froude limit are equivalent to a single conservation equation with conserved quantity and associated flux respectively: y = ( ρ ρ u 0 ) ; F = ( ρ u ρ u ⊗ u + p I u ) . {\displaystyle \mathbf {y} ={\begin{pmatrix}\rho \\\rho \mathbf {u} \\0\end{pmatrix}};\qquad {\mathbf {F} }={\begin{pmatrix}\rho \mathbf {u} \\\rho \mathbf {u} \otimes \mathbf {u} +p\mathbf {I} \\\mathbf {u} \end{pmatrix}}.} Here y {\displaystyle \mathbf {y} } has length N + 2 {\displaystyle N+2} and F {\displaystyle \mathbf {F} } has size ( N + 2 ) N {\displaystyle (N+2)N} . In general (not only in the Froude limit) Euler equations are expressible as: ∂ ∂ t ( ρ ρ u 0 ) + ∇ ⋅ ( ρ u ρ u ⊗ u + p I u ) = ( 0 ρ g 0 ) . 
{\displaystyle {\frac {\partial }{\partial t}}{\begin{pmatrix}\rho \\\rho \mathbf {u} \\0\end{pmatrix}}+\nabla \cdot {\begin{pmatrix}\rho \mathbf {u} \\\rho \mathbf {u} \otimes \mathbf {u} +p\mathbf {I} \\\mathbf {u} \end{pmatrix}}={\begin{pmatrix}0\\\rho \mathbf {g} \\0\end{pmatrix}}.} === Conservation variables === The variables for the equations in conservation form are not yet optimised. In fact we could define: y = ( ρ j 0 ) ; F = ( j j ⊗ 1 ρ j + p I j ρ ) , {\displaystyle {\mathbf {y} }={\begin{pmatrix}\rho \\\mathbf {j} \\0\end{pmatrix}};\qquad {\mathbf {F} }={\begin{pmatrix}\mathbf {j} \\\mathbf {j} \otimes {\frac {1}{\rho }}\,\mathbf {j} +p\mathbf {I} \\{\frac {\mathbf {j} }{\rho }}\end{pmatrix}},} where j = ρ u {\displaystyle \mathbf {j} =\rho \mathbf {u} } is the momentum density, a conservation variable. where f = ρ g {\displaystyle \mathbf {f} =\rho \mathbf {g} } is the force density, a conservation variable. == Compressible Euler equations == In differential convective form, the compressible (and most general) Euler equations can be written shortly with the material derivative notation: D ρ D t = − ρ ∇ ⋅ u , D u D t = − ∇ p ρ + g , D e D t = − p ρ ∇ ⋅ u , {\displaystyle {\begin{aligned}{\frac {D\rho }{Dt}}&=-\rho \nabla \cdot \mathbf {u} ,\\{\frac {D\mathbf {u} }{Dt}}&=-{\frac {\nabla p}{\rho }}+\mathbf {g} ,\\{\frac {De}{Dt}}&=-{\frac {p}{\rho }}\nabla \cdot \mathbf {u} ,\end{aligned}}} where the additional variable here is: e {\displaystyle e} is the specific internal energy (internal energy per unit mass). The equations above thus represent conservation of mass, momentum, and energy: the energy equation expressed in the variable internal energy allows one to understand the link with the incompressible case, but it is not in the simplest form. Mass density, flow velocity and pressure are the so-called convective variables (or physical variables, or Lagrangian variables), while mass density, momentum density and total energy density are the so-called conserved variables (also called Eulerian, or mathematical variables). If one expands the material derivative, the equations above become: ∂ ρ ∂ t + u ⋅ ∇ ρ + ρ ∇ ⋅ u = 0 , ∂ u ∂ t + u ⋅ ∇ u + ∇ p ρ = g , ∂ e ∂ t + u ⋅ ∇ e + p ρ ∇ ⋅ u = 0.
{\displaystyle {\begin{aligned}{\partial \rho \over \partial t}+\mathbf {u} \cdot \nabla \rho +\rho \nabla \cdot \mathbf {u} &=0,\\[1.2ex]{\frac {\partial \mathbf {u} }{\partial t}}+\mathbf {u} \cdot \nabla \mathbf {u} +{\frac {\nabla p}{\rho }}&=\mathbf {g} ,\\[1.2ex]{\partial e \over \partial t}+\mathbf {u} \cdot \nabla e+{\frac {p}{\rho }}\nabla \cdot \mathbf {u} &=0.\end{aligned}}} === Incompressible constraint (revisited) === Coming back to the incompressible case, it now becomes apparent that the incompressible constraint typical of the former cases actually is a particular form valid for incompressible flows of the energy equation, and not of the mass equation. In particular, the incompressible constraint corresponds to the following very simple energy equation: D e D t = 0. {\displaystyle {\frac {De}{Dt}}=0.} Thus for an incompressible inviscid fluid the specific internal energy is constant along the flow lines, also in a time-dependent flow. The pressure in an incompressible flow acts like a Lagrange multiplier, being the multiplier of the incompressible constraint in the energy equation, and consequently in incompressible flows it has no thermodynamic meaning. In fact, thermodynamics is typical of compressible flows and degenerates in incompressible flows. Basing on the mass conservation equation, one can put this equation in the conservation form: ∂ ρ e ∂ t + ∇ ⋅ ( ρ e u ) = 0 , {\displaystyle {\partial \rho e \over \partial t}+\nabla \cdot (\rho e\mathbf {u} )=0,} meaning that for an incompressible inviscid nonconductive flow a continuity equation holds for the internal energy. === Enthalpy conservation === Since by definition the specific enthalpy is: h = e + p ρ . {\displaystyle h=e+{\frac {p}{\rho }}.} The material derivative of the specific internal energy can be expressed as: D e D t = D h D t − 1 ρ ( D p D t − p ρ D ρ D t ) . 
{\displaystyle {De \over Dt}={Dh \over Dt}-{\frac {1}{\rho }}\left({Dp \over Dt}-{\frac {p}{\rho }}{D\rho \over Dt}\right).} Then by substituting the mass conservation equation in this expression, one obtains: {\displaystyle {De \over Dt}={Dh \over Dt}-{\frac {1}{\rho }}\left(p\nabla \cdot \mathbf {u} +{Dp \over Dt}\right).} And by substituting the latter in the energy equation, one obtains the enthalpy form of the Euler energy equation: {\displaystyle {Dh \over Dt}={\frac {1}{\rho }}{Dp \over Dt}.} In a reference frame moving with an inviscid and nonconductive flow, the variation of enthalpy directly corresponds to a variation of pressure. === Thermodynamics of ideal fluids === In thermodynamics the independent variables are the specific volume and the specific entropy, while the specific energy is a function of state of these two variables. For a thermodynamic fluid, the compressible Euler equations are consequently best written as: where: {\displaystyle v} is the specific volume {\displaystyle \mathbf {u} } is the flow velocity vector {\displaystyle s} is the specific entropy In the general case, and not only in the incompressible case, the energy equation means that for an inviscid thermodynamic fluid the specific entropy is constant along the flow lines, even in a time-dependent flow. Based on the mass conservation equation, one can put this equation in conservation form: {\displaystyle {\partial \rho s \over \partial t}+\nabla \cdot (\rho s\mathbf {u} )=0,} meaning that for an inviscid nonconductive flow a continuity equation holds for the entropy. On the other hand, the two second-order partial derivatives of the specific internal energy in the momentum equation require the specification of the fundamental equation of state of the material considered, i.e.
of the specific internal energy as a function of the two variables specific volume and specific entropy: {\displaystyle e=e(v,s).} The fundamental equation of state contains all the thermodynamic information about the system (Callen, 1985), exactly like the pair formed by a thermal equation of state together with a caloric equation of state. === Conservation form === The Euler equations in the Froude limit are equivalent to a single conservation equation with conserved quantity and associated flux respectively: {\displaystyle \mathbf {y} ={\begin{pmatrix}\rho \\\mathbf {j} \\E^{t}\end{pmatrix}};\qquad {\mathbf {F} }={\begin{pmatrix}\mathbf {j} \\{\frac {1}{\rho }}\mathbf {j} \otimes \mathbf {j} +p\mathbf {I} \\\left(E^{t}+p\right){\frac {1}{\rho }}\mathbf {j} \end{pmatrix}},} where: {\displaystyle \mathbf {j} =\rho \mathbf {u} } is the momentum density, a conservation variable. {\textstyle E^{t}=\rho e+{\frac {1}{2}}\rho u^{2}} is the total energy density (total energy per unit volume). Here {\displaystyle \mathbf {y} } has length N + 2 and {\displaystyle \mathbf {F} } has size N(N + 2). In general (not only in the Froude limit) Euler equations are expressible as: where {\displaystyle \mathbf {f} =\rho \mathbf {g} } is the force density, a conservation variable. We remark that the Euler equations, even when conservative (no external field, Froude limit), have no Riemann invariants in general; some further assumptions are required. However, we already mentioned that for a thermodynamic fluid the equation for the total energy density is equivalent to the conservation equation:
{\displaystyle {\partial \over \partial t}(\rho s)+\nabla \cdot (\rho s\mathbf {u} )=0.} Then the conservation equations in the case of a thermodynamic fluid are more simply expressed as: where {\displaystyle S=\rho s} is the entropy density, a thermodynamic conservation variable. Another possible form for the energy equation, particularly useful for isobaric processes, is: {\displaystyle {\frac {\partial H^{t}}{\partial t}}+\nabla \cdot \left(H^{t}\mathbf {u} \right)=\mathbf {u} \cdot \mathbf {f} -{\frac {\partial p}{\partial t}},} where {\textstyle H^{t}=E^{t}+p=\rho e+p+{\frac {1}{2}}\rho u^{2}} is the total enthalpy density. == Quasilinear form and characteristic equations == Expanding the fluxes can be an important part of constructing numerical solvers, for example by exploiting (approximate) solutions to the Riemann problem. In regions where the state vector y varies smoothly, the equations in conservative form can be put in quasilinear form: {\displaystyle {\frac {\partial \mathbf {y} }{\partial t}}+\mathbf {A} _{i}{\frac {\partial \mathbf {y} }{\partial r_{i}}}={\mathbf {0} }.} where {\displaystyle \mathbf {A} _{i}} are called the flux Jacobians, defined as the matrices: {\displaystyle \mathbf {A} _{i}(\mathbf {y} )={\frac {\partial \mathbf {f} _{i}(\mathbf {y} )}{\partial \mathbf {y} }}.} This Jacobian does not exist where the state variables are discontinuous, as at contact discontinuities or shocks. === Characteristic equations === The compressible Euler equations can be decoupled into a set of N+2 wave equations that describes sound in an Eulerian continuum if they are expressed in characteristic variables instead of conserved variables. In fact the tensor A is always diagonalizable.
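The diagonalisation just mentioned can be illustrated numerically. The following minimal sketch (with an assumed constant-coefficient linearised acoustic system standing in for a flux Jacobian A, and illustrative values for u0, rho0, c0) checks that P⁻¹AP is diagonal and that the eigenvalues are the characteristic speeds:

```python
import numpy as np

# Assumed linearised (constant-coefficient) 1-D acoustic system as a
# stand-in for a flux Jacobian A, with state y = (density, velocity):
u0, rho0, c0 = 50.0, 1.2, 340.0  # illustrative background state
A = np.array([[u0, rho0],
              [c0**2 / rho0, u0]])

# Diagonalise: columns of P are right eigenvectors, lam the eigenvalues.
lam, P = np.linalg.eig(A)

# P^{-1} A P is the diagonal matrix of eigenvalues.
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag(lam))

# The eigenvalues are the characteristic speeds u0 - c0 and u0 + c0.
print(sorted(lam))
```

The same eigen-decomposition underlies the characteristic variables w = P⁻¹y used below.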
If the eigenvalues are all real (as is the case for the Euler equations), the system is called hyperbolic, and physically the eigenvalues represent the speeds of propagation of information. If they are moreover all distinct, the system is called strictly hyperbolic (this will be proved to be the case for the one-dimensional Euler equations). Furthermore, diagonalisation of the compressible Euler equations is easier when the energy equation is expressed in the variable entropy (i.e. with equations for thermodynamic fluids) than in other energy variables. This will become clear by considering the 1D case. If {\displaystyle \mathbf {p} _{i}} is the right eigenvector of the matrix {\displaystyle \mathbf {A} } corresponding to the eigenvalue {\displaystyle \lambda _{i}} , one can build the projection matrix: {\displaystyle \mathbf {P} =\left[\mathbf {p} _{1},\mathbf {p} _{2},...,\mathbf {p} _{n}\right].} One can finally find the characteristic variables as: {\displaystyle \mathbf {w} =\mathbf {P} ^{-1}\mathbf {y} .} Since A is constant, multiplying the original 1-D equation in flux-Jacobian form with P−1 yields the characteristic equations: {\displaystyle {\frac {\partial w_{i}}{\partial t}}+\lambda _{i}{\frac {\partial w_{i}}{\partial x}}=0.} The original equations have been decoupled into N+2 characteristic equations, each describing a simple wave, with the eigenvalues being the wave speeds. The variables wi are called the characteristic variables and are obtained from the conservative variables through the matrix P. The solution of the initial value problem in terms of characteristic variables is finally very simple. In one spatial dimension it is:
{\displaystyle w_{i}(x,t)=w_{i}\left(x-\lambda _{i}t,0\right).} Then the solution in terms of the original conservative variables is obtained by transforming back: {\displaystyle \mathbf {y} =\mathbf {P} \mathbf {w} ,} this computation can be written explicitly as a linear combination of the eigenvectors: {\displaystyle \mathbf {y} (x,t)=\sum _{i=1}^{m}w_{i}\left(x-\lambda _{i}t,0\right)\mathbf {p} _{i}.} Now it becomes apparent that the characteristic variables act as weights in the linear combination of the Jacobian eigenvectors. The solution can be seen as a superposition of waves, each of which is advected independently without change in shape. Each i-th wave has shape wipi and speed of propagation λi. In the following we show a very simple example of this solution procedure. === Waves in 1D inviscid, nonconductive thermodynamic fluid === If one considers the Euler equations for a thermodynamic fluid with the two further assumptions of one spatial dimension and free (no external field: g = 0): {\displaystyle {\begin{aligned}{\partial v \over \partial t}+u{\partial v \over \partial x}-v{\partial u \over \partial x}&=0,\\[1.2ex]{\partial u \over \partial t}+u{\partial u \over \partial x}-e_{vv}v{\partial v \over \partial x}-e_{vs}v{\partial s \over \partial x}&=0,\\[1.2ex]{\partial s \over \partial t}+u{\partial s \over \partial x}&=0.\end{aligned}}} If one defines the vector of variables: {\displaystyle \mathbf {y} ={\begin{pmatrix}v\\u\\s\end{pmatrix}},} recalling that {\displaystyle v} is the specific volume, {\displaystyle u} the flow speed, {\displaystyle s} the specific entropy, the corresponding Jacobian matrix is:
{\displaystyle {\mathbf {A} }={\begin{pmatrix}u&-v&0\\-e_{vv}v&u&-e_{vs}v\\0&0&u\end{pmatrix}}.} At first one must find the eigenvalues of this matrix by solving the characteristic equation: {\displaystyle \det(\mathbf {A} (\mathbf {y} )-\lambda (\mathbf {y} )\mathbf {I} )=0,} that is explicitly: {\displaystyle \det {\begin{bmatrix}u-\lambda &-v&0\\-e_{vv}v&u-\lambda &-e_{vs}v\\0&0&u-\lambda \end{bmatrix}}=0.} This determinant is very simple: the fastest computation starts on the last row, since it has the highest number of zero elements. {\displaystyle (u-\lambda )\det {\begin{bmatrix}u-\lambda &-v\\-e_{vv}v&u-\lambda \end{bmatrix}}=0.} Now by computing the 2×2 determinant: {\displaystyle (u-\lambda )\left((u-\lambda )^{2}-e_{vv}v^{2}\right)=0,} by defining the parameter: {\displaystyle a(v,s)\equiv v{\sqrt {e_{vv}}},} or equivalently in mechanical variables, as: {\displaystyle a(\rho ,p)\equiv {\sqrt {\partial p \over \partial \rho }}.} This parameter is always real according to the second law of thermodynamics. In fact the second law of thermodynamics can be expressed by several postulates. The most elementary of them in mathematical terms is the statement of convexity of the fundamental equation of state, i.e. that the Hessian matrix of the specific energy, expressed as a function of specific volume and specific entropy: {\displaystyle {\begin{pmatrix}e_{vv}&e_{vs}\\e_{vs}&e_{ss}\end{pmatrix}},} is positive definite. This statement corresponds to the two conditions: {\displaystyle \left\{{\begin{aligned}e_{vv}&>0\\[1.2ex]e_{vv}e_{ss}-e_{vs}^{2}&>0\end{aligned}}\right.} The first condition ensures that the parameter a is real.
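These two convexity conditions can be checked numerically. A minimal sketch, assuming as a concrete example the polytropic fundamental equation of state e(v, s) = e0 exp((γ−1)m(s−s0)) (v0/v)^(γ−1) given later in the article, with illustrative parameter values, estimates the Hessian of e(v, s) by finite differences and also verifies a = v√(e_vv) = √(γpv):

```python
import numpy as np

# Assumed polytropic fundamental equation of state (illustrative units).
gamma, m = 1.4, 1.0
e0, v0, s0 = 1.0, 1.0, 0.0

def e(v, s):
    return e0 * np.exp((gamma - 1.0) * m * (s - s0)) * (v0 / v)**(gamma - 1.0)

# Second derivatives by central finite differences at a sample state.
v, s, h = 0.8, 0.1, 1e-5
e_vv = (e(v + h, s) - 2 * e(v, s) + e(v - h, s)) / h**2
e_ss = (e(v, s + h) - 2 * e(v, s) + e(v, s - h)) / h**2
e_vs = (e(v + h, s + h) - e(v + h, s - h)
        - e(v - h, s + h) + e(v - h, s - h)) / (4 * h**2)

assert e_vv > 0.0                       # first convexity condition
assert e_vv * e_ss - e_vs**2 > 0.0      # positive Hessian determinant

# The wave speed a = v*sqrt(e_vv) matches sqrt(gamma*p*v), with p = -e_v.
p = -(e(v + h, s) - e(v - h, s)) / (2 * h)
assert np.isclose(v * np.sqrt(e_vv), np.sqrt(gamma * p * v), rtol=1e-4)
```

The analytic derivatives (e_vv = γ(γ−1)e/v², etc.) give the same conclusion; the finite-difference version only serves to make the check independent of hand algebra.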
The characteristic equation finally results in: {\displaystyle (u-\lambda )\left((u-\lambda )^{2}-a^{2}\right)=0} This has three real solutions: {\displaystyle \lambda _{1}(v,u,s)=u-a(v,s),\qquad \lambda _{2}(u)=u,\qquad \lambda _{3}(v,u,s)=u+a(v,s).} Then the matrix has three real eigenvalues, all distinct: the 1D Euler equations are a strictly hyperbolic system. At this point one should determine the three eigenvectors: each one is obtained by substituting one eigenvalue in the eigenvalue equation and then solving it. By substituting the first eigenvalue λ1 one obtains: {\displaystyle {\begin{pmatrix}a&-v&0\\-e_{vv}v&a&-e_{vs}v\\0&0&a\end{pmatrix}}{\begin{pmatrix}v_{1}\\u_{1}\\s_{1}\end{pmatrix}}=0.} Based on the third equation, which simply has the solution s1 = 0, the system reduces to: {\displaystyle {\begin{pmatrix}a&-v\\-a^{2}/v&a\end{pmatrix}}{\begin{pmatrix}v_{1}\\u_{1}\end{pmatrix}}=0} The two equations are redundant as usual, so the eigenvector is defined up to a multiplicative constant. We choose as right eigenvector: {\displaystyle \mathbf {p} _{1}={\begin{pmatrix}v\\a\\0\end{pmatrix}}.} The other two eigenvectors can be found by an analogous procedure as: {\displaystyle \mathbf {p} _{2}={\begin{pmatrix}e_{vs}\\0\\-\left({\frac {a}{v}}\right)^{2}\end{pmatrix}},\qquad \mathbf {p} _{3}={\begin{pmatrix}v\\-a\\0\end{pmatrix}}.} Then the projection matrix can be built:
{\displaystyle \mathbf {P} (v,u,s)=(\mathbf {p} _{1},\mathbf {p} _{2},\mathbf {p} _{3})={\begin{pmatrix}v&e_{vs}&v\\a&0&-a\\0&-\left({\frac {a}{v}}\right)^{2}&0\end{pmatrix}}.} Finally it becomes apparent that the real parameter a previously defined is the speed at which information propagates in the hyperbolic system made of the Euler equations, i.e. it is the wave speed. It remains to be shown that the sound speed corresponds to the particular case of an isentropic transformation: {\displaystyle a_{s}\equiv {\sqrt {\left({\partial p \over \partial \rho }\right)_{s}}}.} === Compressibility and sound speed === The sound speed is defined as the wave speed of an isentropic transformation: {\displaystyle a_{s}(\rho ,p)\equiv {\sqrt {\left({\partial p \over \partial \rho }\right)_{s}}},} and by the definition of the isentropic compressibility: {\displaystyle K_{s}(\rho ,p)\equiv \rho \left({\partial p \over \partial \rho }\right)_{s},} the sound speed always results as the square root of the ratio between the isentropic compressibility and the density: {\displaystyle a_{s}\equiv {\sqrt {\frac {K_{s}}{\rho }}}.} ==== Ideal gas ==== The sound speed in an ideal gas depends only on its temperature: {\displaystyle a_{s}(T)={\sqrt {\gamma {\frac {T}{m}}}}.} Since the specific enthalpy in an ideal gas is proportional to its temperature: {\displaystyle h=c_{p}T={\frac {\gamma }{\gamma -1}}{\frac {T}{m}},} the sound speed in an ideal gas can also be made dependent only on its specific enthalpy: {\displaystyle a_{s}(h)={\sqrt {(\gamma -1)h}}.} == Bernoulli's theorem for steady inviscid flow == Bernoulli's theorem is a direct consequence of the Euler equations.
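Before moving on, the ideal-gas sound-speed relations above can be checked numerically. A minimal sketch, assuming SI units (so that the article's a = √(γT/m) becomes a = √(γ R_specific T), with R_specific the specific gas constant) and illustrative values for dry air:

```python
import math

# Sound speed of an ideal gas from temperature and from specific enthalpy.
gamma = 1.4            # heat capacity ratio of dry air (assumed)
R_specific = 287.05    # J/(kg K), specific gas constant of dry air (assumed)
T = 293.15             # K, about 20 degrees Celsius

# a = sqrt(gamma * R_specific * T)
a_from_T = math.sqrt(gamma * R_specific * T)

# Equivalently via the specific enthalpy h = cp*T, cp = gamma*R/(gamma-1),
# using a = sqrt((gamma - 1) * h).
cp = gamma * R_specific / (gamma - 1.0)
h = cp * T
a_from_h = math.sqrt((gamma - 1.0) * h)

print(round(a_from_T, 1))  # roughly 343 m/s for air at room temperature
assert abs(a_from_T - a_from_h) < 1e-9
```

Both routes agree because h = cp T makes (γ−1)h = γ R_specific T identically.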
=== Incompressible case and Lamb's form === The vector calculus identity of the cross product of a curl holds: {\displaystyle \mathbf {v\ \times } \left(\mathbf {\nabla \times F} \right)=\nabla _{F}\left(\mathbf {v\cdot F} \right)-\mathbf {v\cdot \nabla } \mathbf {F} \ ,} where the Feynman subscript notation {\displaystyle \nabla _{F}} is used, which means the subscripted gradient operates only on the factor {\displaystyle \mathbf {F} } . Lamb, in his famous classical book Hydrodynamics (1895), still in print, used this identity to change the convective term of the flow velocity into rotational form: {\displaystyle \mathbf {u} \cdot \nabla \mathbf {u} ={\frac {1}{2}}\nabla \left(u^{2}\right)+(\nabla \times \mathbf {u} )\times \mathbf {u} ,} so the Euler momentum equation in Lamb's form becomes: {\displaystyle {\frac {\partial \mathbf {u} }{\partial t}}+{\frac {1}{2}}\nabla \left(u^{2}\right)+(\nabla \times \mathbf {u} )\times \mathbf {u} +{\frac {\nabla p}{\rho }}=\mathbf {g} ={\frac {\partial \mathbf {u} }{\partial t}}+{\frac {1}{2}}\nabla \left(u^{2}\right)-\mathbf {u} \times (\nabla \times \mathbf {u} )+{\frac {\nabla p}{\rho }}.} Now, based on the other identity: {\displaystyle \nabla \left({\frac {p}{\rho }}\right)={\frac {\nabla p}{\rho }}-{\frac {p}{\rho ^{2}}}\nabla \rho ,} the Euler momentum equation assumes a form that is optimal to demonstrate Bernoulli's theorem for steady flows:
{\displaystyle \nabla \left({\frac {1}{2}}u^{2}+{\frac {p}{\rho }}\right)-\mathbf {g} =-{\frac {p}{\rho ^{2}}}\nabla \rho +\mathbf {u} \times (\nabla \times \mathbf {u} )-{\frac {\partial \mathbf {u} }{\partial t}}.} In fact, in the case of an external conservative field, by defining its potential φ: {\displaystyle \nabla \left({\frac {1}{2}}u^{2}+\phi +{\frac {p}{\rho }}\right)=-{\frac {p}{\rho ^{2}}}\nabla \rho +\mathbf {u} \times (\nabla \times \mathbf {u} )-{\frac {\partial \mathbf {u} }{\partial t}}.} In the case of a steady flow the time derivative of the flow velocity disappears, so the momentum equation becomes: {\displaystyle \nabla \left({\frac {1}{2}}u^{2}+\phi +{\frac {p}{\rho }}\right)=-{\frac {p}{\rho ^{2}}}\nabla \rho +\mathbf {u} \times (\nabla \times \mathbf {u} ).} And by projecting the momentum equation on the flow direction, i.e. along a streamline, the cross product disappears because its result is always perpendicular to the velocity: {\displaystyle \mathbf {u} \cdot \nabla \left({\frac {1}{2}}u^{2}+\phi +{\frac {p}{\rho }}\right)=-{\frac {p}{\rho ^{2}}}\mathbf {u} \cdot \nabla \rho .} In the steady incompressible case the mass equation is simply: {\displaystyle \mathbf {u} \cdot \nabla \rho =0,} that is, mass conservation for a steady incompressible flow states that the density along a streamline is constant. Then the Euler momentum equation in the steady incompressible case becomes: {\displaystyle \mathbf {u} \cdot \nabla \left({\frac {1}{2}}u^{2}+\phi +{\frac {p}{\rho }}\right)=0.} The convenience of defining the total head for an inviscid liquid flow is now apparent: {\displaystyle b_{l}\equiv {\frac {1}{2}}u^{2}+\phi +{\frac {p}{\rho }},} which may be simply written as:
{\displaystyle \mathbf {u} \cdot \nabla b_{l}=0.} That is, the momentum balance for a steady inviscid and incompressible flow in an external conservative field states that the total head along a streamline is constant. === Compressible case === In the most general steady (compressible) case the mass equation in conservation form is: {\displaystyle \nabla \cdot \mathbf {j} =\rho \nabla \cdot \mathbf {u} +\mathbf {u} \cdot \nabla \rho =0.} Therefore, the previous expression is rather {\displaystyle \mathbf {u} \cdot \nabla \left({\frac {1}{2}}u^{2}+\phi +{\frac {p}{\rho }}\right)={\frac {p}{\rho }}\nabla \cdot \mathbf {u} .} The right-hand side appears in the energy equation in convective form, which in the steady state reads: {\displaystyle \mathbf {u} \cdot \nabla e=-{\frac {p}{\rho }}\nabla \cdot \mathbf {u} .} The energy equation therefore becomes: {\displaystyle \mathbf {u} \cdot \nabla \left(e+{\frac {p}{\rho }}+{\frac {1}{2}}u^{2}+\phi \right)=0,} so that the specific internal energy now features in the head. Since the external field potential is usually small compared to the other terms, it is convenient to group the latter ones in the total enthalpy: {\displaystyle h^{t}\equiv e+{\frac {p}{\rho }}+{\frac {1}{2}}u^{2},} and the Bernoulli invariant for an inviscid gas flow is: {\displaystyle b_{g}\equiv h^{t}+\phi =b_{l}+e,} which can be written as: {\displaystyle \mathbf {u} \cdot \nabla b_{g}=0.} That is, the energy balance for a steady inviscid flow in an external conservative field states that the sum of the total enthalpy and the external potential is constant along a streamline. In the usual case of a small potential field, simply:
{\displaystyle \mathbf {u} \cdot \nabla h^{t}\sim 0.} === Friedmann form and Crocco form === By substituting the pressure gradient with the entropy and enthalpy gradient, according to the first law of thermodynamics in the enthalpy form: {\displaystyle v\nabla p=-T\nabla s+\nabla h,} in the convective form of the Euler momentum equation, one arrives at: {\displaystyle {\frac {D\mathbf {u} }{Dt}}=T\nabla \,s-\nabla \,h.} Friedmann deduced this equation for the particular case of a perfect gas and published it in 1922. However, this equation is general for an inviscid nonconductive fluid and no equation of state is implicit in it. On the other hand, by substituting the enthalpy form of the first law of thermodynamics in the rotational form of the Euler momentum equation, one obtains: {\displaystyle {\frac {\partial \mathbf {u} }{\partial t}}+{\frac {1}{2}}\nabla \left(u^{2}\right)+(\nabla \times \mathbf {u} )\times \mathbf {u} +{\frac {\nabla p}{\rho }}=\mathbf {g} ,} and by defining the specific total enthalpy: {\displaystyle h^{t}=h+{\frac {1}{2}}u^{2},} one arrives at the Crocco–Vazsonyi form (Crocco, 1937) of the Euler momentum equation: {\displaystyle {\frac {\partial \mathbf {u} }{\partial t}}+(\nabla \times \mathbf {u} )\times \mathbf {u} -T\nabla s+\nabla h^{t}=\mathbf {g} .} In the steady case the two variables entropy and total enthalpy are particularly useful, since the Euler equations can be recast into Crocco's form:
{\displaystyle {\begin{aligned}\mathbf {u} \times \nabla \times \mathbf {u} +T\nabla s-\nabla h^{t}&=\mathbf {g} ,\\\mathbf {u} \cdot \nabla s&=0,\\\mathbf {u} \cdot \nabla h^{t}&=0.\end{aligned}}} Finally, if the flow is also isothermal: {\displaystyle T\nabla s=\nabla (Ts),} by defining the specific total Gibbs free energy: {\displaystyle g^{t}\equiv h^{t}-Ts,} Crocco's form can be reduced to: {\displaystyle {\begin{aligned}\mathbf {u} \times \nabla \times \mathbf {u} -\nabla g^{t}&=\mathbf {g} ,\\\mathbf {u} \cdot \nabla g^{t}&=0.\end{aligned}}} From these relationships one deduces that the specific total free energy is uniform in a steady, irrotational, isothermal, isentropic, inviscid flow. == Discontinuities == The Euler equations are quasilinear hyperbolic equations and their general solutions are waves. Under certain assumptions they can be simplified, leading to the Burgers equation. Much like the familiar oceanic waves, waves described by the Euler equations 'break' and so-called shock waves are formed; this is a nonlinear effect and represents the solution becoming multi-valued. Physically this represents a breakdown of the assumptions that led to the formulation of the differential equations, and to extract further information from the equations we must go back to the more fundamental integral form. Then, weak solutions are formulated by building 'jumps' (discontinuities) into the flow quantities – density, velocity, pressure, entropy – using the Rankine–Hugoniot equations. Physical quantities are rarely discontinuous; in real flows, these discontinuities are smoothed out by viscosity and by heat transfer. (See Navier–Stokes equations) Shock propagation is studied – among many other fields – in aerodynamics and rocket propulsion, where sufficiently fast flows occur.
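The wave breaking mentioned above is easiest to see in the inviscid Burgers equation u_t + u u_x = 0, the simplest nonlinear model of this effect: the characteristic through x0 is x = x0 + u0(x0)t, and neighbouring characteristics first cross (i.e. a shock forms) at t* = −1/min(u0′). A minimal numerical sketch, with an assumed sinusoidal initial profile:

```python
import numpy as np

# Shock-formation time for the inviscid Burgers equation u_t + u*u_x = 0.
# Characteristics x = x0 + u0(x0)*t first cross at t* = -1/min(u0').
x = np.linspace(0.0, 2.0 * np.pi, 20001)
u0 = np.sin(x)                 # assumed smooth initial profile

du0 = np.gradient(u0, x)       # numerical derivative of the initial data
t_star = -1.0 / du0.min()      # breaking (shock-formation) time

# For u0 = sin(x), min(u0') = cos(pi) = -1, so the wave breaks at t* = 1.
print(t_star)
```

Past t*, the classical solution becomes multi-valued and only the weak (jump) formulation below remains meaningful.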
To properly compute the continuum quantities in discontinuous zones (for example shock waves or boundary layers) from the local forms (all the above forms are local forms, since the variables being described are those of one point in space, i.e. they are local variables) of the Euler equations through finite difference methods, generally too many space points and time steps would be necessary for the memory of computers now and in the near future. In these cases it is mandatory to avoid the local forms of the conservation equations, passing to some weak forms, like the finite volume one. === Rankine–Hugoniot equations === Starting from the simplest case, one considers a steady free conservation equation in conservation form in the space domain: {\displaystyle \nabla \cdot \mathbf {F} =\mathbf {0} ,} where in general F is the flux matrix. By integrating this local equation over a fixed volume Vm, it becomes: {\displaystyle \int _{V_{m}}\nabla \cdot \mathbf {F} \,dV=\mathbf {0} .} Then, based on the divergence theorem, we can transform this integral into a boundary integral of the flux: {\displaystyle \oint _{\partial V_{m}}\mathbf {F} \,ds=\mathbf {0} .} This global form simply states that there is no net flux of a conserved quantity passing through a region in the steady case without sources. In 1D the volume reduces to an interval, its boundary being its extrema; then the divergence theorem reduces to the fundamental theorem of calculus: {\displaystyle \int _{x_{m}}^{x_{m+1}}{\frac {d\mathbf {F} }{dx}}(x')\,dx'=\mathbf {0} ,} that is the simple finite difference equation, known as the jump relation: {\displaystyle \Delta \mathbf {F} =\mathbf {0} .} That can be made explicit as: {\displaystyle \mathbf {F} _{m+1}-\mathbf {F} _{m}=\mathbf {0} ,} where the notation employed is:
{\displaystyle \mathbf {F} _{m}=\mathbf {F} (x_{m}).} Or, if one performs an indefinite integral: {\displaystyle \mathbf {F} -\mathbf {F} _{0}=\mathbf {0} .} On the other hand, a transient conservation equation: {\displaystyle {\partial y \over \partial t}+\nabla \cdot \mathbf {F} =\mathbf {0} ,} leads to the jump relation: {\displaystyle {\frac {dx}{dt}}\,\Delta y=\Delta \mathbf {F} .} For the one-dimensional Euler equations the conservation variables and the flux are the vectors: {\displaystyle \mathbf {y} ={\begin{pmatrix}{\frac {1}{v}}\\j\\E^{t}\end{pmatrix}},} {\displaystyle \mathbf {F} ={\begin{pmatrix}j\\vj^{2}+p\\vj\left(E^{t}+p\right)\end{pmatrix}},} where: {\displaystyle v} is the specific volume, {\displaystyle j} is the mass flux. In the one-dimensional case the corresponding jump relations, called the Rankine–Hugoniot equations, are: {\displaystyle {\begin{aligned}{\frac {dx}{dt}}\Delta \left({\frac {1}{v}}\right)&=\Delta j,\\[1.2ex]{\frac {dx}{dt}}\Delta j&=\Delta (vj^{2}+p),\\[1.2ex]{\frac {dx}{dt}}\Delta E^{t}&=\Delta (jv(E^{t}+p)).\end{aligned}}} In the steady one-dimensional case these become simply: {\displaystyle {\begin{aligned}\Delta j&=0,\\[1.2ex]\Delta \left(vj^{2}+p\right)&=0,\\[1.2ex]\Delta \left(j\left({\frac {E^{t}}{\rho }}+{\frac {p}{\rho }}\right)\right)&=0.\end{aligned}}} Thanks to the mass difference equation, the energy difference equation can be simplified without any restriction: {\displaystyle {\begin{aligned}\Delta j&=0,\\[1.2ex]\Delta \left(vj^{2}+p\right)&=0,\\[1.2ex]\Delta h^{t}&=0,\end{aligned}}} where {\displaystyle h^{t}} is the specific total enthalpy.
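The steady jump relations Δj = 0, Δ(vj² + p) = 0, Δh^t = 0 can be verified numerically across a concrete shock. A minimal sketch, assuming an ideal gas (γ = 1.4), illustrative upstream values, and the standard normal-shock relations for the downstream state:

```python
import math

# Verify the steady 1-D Rankine-Hugoniot relations across an assumed
# ideal-gas normal shock, using the standard normal-shock relations.
gamma = 1.4
p1, rho1, M1 = 101325.0, 1.225, 2.0            # assumed upstream state, Mach 2

a1 = math.sqrt(gamma * p1 / rho1)
u1 = M1 * a1

# Standard normal-shock jumps for an ideal gas.
p2 = p1 * (1.0 + 2.0 * gamma / (gamma + 1.0) * (M1**2 - 1.0))
rho2 = rho1 * (gamma + 1.0) * M1**2 / ((gamma - 1.0) * M1**2 + 2.0)
u2 = u1 * rho1 / rho2                           # from mass conservation

def h_total(p, rho, u):
    # total specific enthalpy h + u^2/2, with h = gamma/(gamma-1) * p/rho
    return gamma / (gamma - 1.0) * p / rho + 0.5 * u**2

j1, j2 = rho1 * u1, rho2 * u2                   # mass flux j = rho*u
assert math.isclose(j1, j2)                                        # Delta j = 0
assert math.isclose(rho1 * u1**2 + p1, rho2 * u2**2 + p2)          # Delta(v j^2 + p) = 0
assert math.isclose(h_total(p1, rho1, u1), h_total(p2, rho2, u2))  # Delta h_t = 0
```

All three assertions pass to machine precision, since the normal-shock relations are exact algebraic solutions of these jump conditions.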
These are usually expressed in the convective variables: {\displaystyle {\begin{aligned}\Delta j&=0,\\[1.2ex]\Delta \left({\frac {u^{2}}{v}}+p\right)&=0,\\[1.2ex]\Delta \left(e+{\frac {1}{2}}u^{2}+pv\right)&=0,\end{aligned}}} where: {\displaystyle u} is the flow speed and {\displaystyle e} is the specific internal energy. The energy equation is an integral form of the Bernoulli equation in the compressible case. The former mass and momentum equations by substitution lead to the Rayleigh equation: {\displaystyle {\frac {\Delta p}{\Delta v}}=-{\frac {u_{0}^{2}}{v_{0}^{2}}}.} Since the right-hand side is a constant, the Rayleigh equation always describes a straight line in the pressure–volume plane, independent of any equation of state, i.e. the Rayleigh line. By substitution, the Rankine–Hugoniot equations can also be made explicit as: {\displaystyle {\begin{aligned}\rho u&=\rho _{0}u_{0},\\[1.2ex]\rho u^{2}+p&=\rho _{0}u_{0}^{2}+p_{0},\\[1.2ex]e+{\frac {1}{2}}u^{2}+{\frac {p}{\rho }}&=e_{0}+{\frac {1}{2}}u_{0}^{2}+{\frac {p_{0}}{\rho _{0}}}.\end{aligned}}} One can also obtain the kinetic equation and the Hugoniot equation; the analytical passages are not shown here for brevity. These are respectively: {\displaystyle {\begin{aligned}u^{2}(v,p)&=u_{0}^{2}-(p-p_{0})(v_{0}+v),\\[1.2ex]e(v,p)&=e_{0}+{\tfrac {1}{2}}(p+p_{0})(v_{0}-v).\end{aligned}}} The Hugoniot equation, coupled with the fundamental equation of state of the material: {\displaystyle e=e(v,p),} describes in general, in the pressure–volume plane, a curve passing through the conditions (v0, p0), i.e. the Hugoniot curve, whose shape strongly depends on the type of material considered.
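For a concrete Hugoniot curve, one can assume the ideal-gas mechanical equation of state e = pv/(γ−1) (given later in the article); substituting it into the Hugoniot equation and solving for p gives a closed form for the curve through (v0, p0). A minimal sketch with illustrative reference values:

```python
# Hugoniot curve for an assumed ideal-gas equation of state e = p*v/(gamma-1):
# substituting into e - e0 = (p + p0)*(v0 - v)/2 and solving for p gives
# p(v) = p0 * ((gamma+1)*v0 - (gamma-1)*v) / ((gamma+1)*v - (gamma-1)*v0).
gamma = 1.4
p0, v0 = 1.0, 1.0      # illustrative reference state

def p_hugoniot(v):
    return p0 * ((gamma + 1.0) * v0 - (gamma - 1.0) * v) \
              / ((gamma + 1.0) * v - (gamma - 1.0) * v0)

def e(v, p):
    return p * v / (gamma - 1.0)

# The closed form satisfies the Hugoniot equation at any sample volume.
for v in (0.55, 0.7, 0.9):
    p = p_hugoniot(v)
    assert abs((e(v, p) - e(v0, p0)) - 0.5 * (p + p0) * (v0 - v)) < 1e-12

# Pressure diverges as v -> v0*(gamma-1)/(gamma+1): the maximum compression
# ratio across a single ideal-gas shock is (gamma+1)/(gamma-1), i.e. 6 for
# gamma = 1.4.
print((gamma + 1.0) / (gamma - 1.0))
```

The vertical asymptote of this curve is what limits shock compression of an ideal gas, however strong the shock.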
It is also customary to define a Hugoniot function: {\displaystyle {\mathfrak {h}}(v,s)\equiv e(v,s)-e_{0}+{\tfrac {1}{2}}(p(v,s)+p_{0})(v-v_{0}),} which allows one to quantify deviations from the Hugoniot equation, similarly to the previous definition of the hydraulic head, useful for quantifying deviations from the Bernoulli equation. === Finite volume form === On the other hand, by integrating a generic conservation equation: {\displaystyle {\frac {\partial \mathbf {y} }{\partial t}}+\nabla \cdot \mathbf {F} =\mathbf {s} ,} over a fixed volume Vm, and then using the divergence theorem, it becomes: {\displaystyle {\frac {d}{dt}}\int _{V_{m}}\mathbf {y} dV+\oint _{\partial V_{m}}\mathbf {F} \cdot {\hat {n}}ds=\mathbf {S} .} By integrating this equation also over a time interval (in the source-free case): {\displaystyle \int _{V_{m}}\mathbf {y} (\mathbf {r} ,t_{n+1})\,dV-\int _{V_{m}}\mathbf {y} (\mathbf {r} ,t_{n})\,dV+\int _{t_{n}}^{t_{n+1}}\oint _{\partial V_{m}}\mathbf {F} \cdot {\hat {n}}\,ds\,dt=\mathbf {0} .} Now by defining the node conserved quantity: {\displaystyle \mathbf {y} _{m,n}\equiv {\frac {1}{V_{m}}}\int _{V_{m}}\mathbf {y} (\mathbf {r} ,t_{n})\,dV,} we deduce the finite volume form: {\displaystyle \mathbf {y} _{m,n+1}=\mathbf {y} _{m,n}-{\frac {1}{V_{m}}}\int _{t_{n}}^{t_{n+1}}\oint _{\partial V_{m}}\mathbf {F} \cdot {\hat {n}}\,ds\,dt.} In particular, for the Euler equations, once the conserved quantities have been determined, the convective variables are deduced by back substitution:
{\displaystyle {\begin{aligned}\displaystyle \mathbf {u} _{m,n}&={\frac {\mathbf {j} _{m,n}}{\rho _{m,n}}},\\[1.2ex]\displaystyle e_{m,n}&={\frac {E_{m,n}^{t}}{\rho _{m,n}}}-{\frac {1}{2}}u_{m,n}^{2}.\end{aligned}}} Then the explicit finite volume expressions of the original convective variables are: == Constraints == It has been shown that the Euler equations are not a complete set of equations: they require some additional constraints to admit a unique solution. These are the equations of state of the material considered. To be consistent with thermodynamics these equations of state should satisfy the two laws of thermodynamics. On the other hand, by definition non-equilibrium systems are described by laws lying outside these laws. In the following we list some very simple equations of state and the corresponding influence on the Euler equations. === Ideal polytropic gas === For an ideal polytropic gas the fundamental equation of state is: {\displaystyle e(v,s)=e_{0}e^{(\gamma -1)m\left(s-s_{0}\right)}\left({v_{0} \over v}\right)^{\gamma -1},} where {\displaystyle e} is the specific energy, {\displaystyle v} is the specific volume, {\displaystyle s} is the specific entropy, {\displaystyle m} is the molecular mass, and {\displaystyle \gamma } here is considered a constant (polytropic process) that can be shown to correspond to the heat capacity ratio. This equation can be shown to be consistent with the usual equations of state employed by thermodynamics. From this equation one can derive the equation for pressure by its thermodynamic definition: {\displaystyle p(v,e)\equiv -{\partial e \over \partial v}=(\gamma -1){\frac {e}{v}}.} By inverting it one arrives at the mechanical equation of state:
{\displaystyle e(v,p)={\frac {pv}{\gamma -1}}.} Then for an ideal gas the compressible Euler equations can be simply expressed in the mechanical or primitive variables specific volume, flow velocity and pressure, by taking the set of the equations for a thermodynamic system and modifying the energy equation into a pressure equation through this mechanical equation of state. Finally, in convective form they result: D v D t = v ∇ ⋅ u , D u D t = − v ∇ p , D p D t = − γ p ∇ ⋅ u , {\displaystyle {\begin{aligned}{\frac {Dv}{Dt}}&=v\,\nabla \cdot \mathbf {u} ,\\{\frac {D\mathbf {u} }{Dt}}&=-v\,\nabla p,\\{\frac {Dp}{Dt}}&=-\gamma p\,\nabla \cdot \mathbf {u} ,\end{aligned}}} and in one-dimensional quasilinear form they result: ∂ y ∂ t + A ∂ y ∂ x = 0 . {\displaystyle {\frac {\partial \mathbf {y} }{\partial t}}+\mathbf {A} {\frac {\partial \mathbf {y} }{\partial x}}={\mathbf {0} }.} where the vector of primitive variables is: y = ( v u p ) , {\displaystyle {\mathbf {y} }={\begin{pmatrix}v\\u\\p\end{pmatrix}},} and the corresponding Jacobian matrix is: A = ( u − v 0 0 u v 0 γ p u ) . {\displaystyle {\mathbf {A} }={\begin{pmatrix}u&-v&0\\0&u&v\\0&\gamma p&u\end{pmatrix}}.} === Steady flow in material coordinates === In the case of steady flow, it is convenient to choose the Frenet–Serret frame along a streamline as the coordinate system for describing the steady momentum Euler equation: u ⋅ ∇ u = − 1 ρ ∇ p , {\displaystyle {\boldsymbol {u}}\cdot \nabla {\boldsymbol {u}}=-{\frac {1}{\rho }}\nabla p,} where u {\displaystyle \mathbf {u} } , p {\displaystyle p} and ρ {\displaystyle \rho } denote the flow velocity, the pressure and the density, respectively. Let { e s , e n , e b } {\displaystyle \left\{\mathbf {e} _{s},\mathbf {e} _{n},\mathbf {e} _{b}\right\}} be a Frenet–Serret orthonormal basis which consists of a tangential unit vector, a normal unit vector, and a binormal unit vector to the streamline, respectively. 
Since a streamline is a curve that is tangent to the velocity vector of the flow, the left-hand side of the above equation, the convective derivative of velocity, can be described as follows: u ⋅ ∇ u = u ∂ ∂ s ( u e s ) = u ∂ u ∂ s e s + u 2 R e n , {\displaystyle {\boldsymbol {u}}\cdot \nabla {\boldsymbol {u}}=u{\frac {\partial }{\partial s}}(u{\boldsymbol {e}}_{s})=u{\frac {\partial u}{\partial s}}{\boldsymbol {e}}_{s}+{\frac {u^{2}}{R}}{\boldsymbol {e}}_{n},} where u = u e s , ∂ ∂ s ≡ e s ⋅ ∇ , ∂ e s ∂ s = 1 R e n , {\displaystyle {\begin{aligned}{\boldsymbol {u}}&=u{\boldsymbol {e}}_{s},\\{\frac {\partial }{\partial s}}&\equiv {\boldsymbol {e}}_{s}\cdot \nabla ,\\{\frac {\partial {\boldsymbol {e}}_{s}}{\partial s}}&={\frac {1}{R}}{\boldsymbol {e}}_{n},\end{aligned}}} and R {\displaystyle R} is the radius of curvature of the streamline. Therefore, the momentum part of the Euler equations for a steady flow is found to have a simple form: u ∂ u ∂ s = − 1 ρ ∂ p ∂ s , u 2 R = − 1 ρ ∂ p ∂ n ( ∂ / ∂ n ≡ e n ⋅ ∇ ) , 0 = − 1 ρ ∂ p ∂ b ( ∂ / ∂ b ≡ e b ⋅ ∇ ) . {\displaystyle {\begin{aligned}\displaystyle u{\frac {\partial u}{\partial s}}&=-{\frac {1}{\rho }}{\frac {\partial p}{\partial s}},\\\displaystyle {u^{2} \over R}&=-{\frac {1}{\rho }}{\frac {\partial p}{\partial n}}&({\partial /\partial n}\equiv {\boldsymbol {e}}_{n}\cdot \nabla ),\\\displaystyle 0&=-{\frac {1}{\rho }}{\frac {\partial p}{\partial b}}&({\partial /\partial b}\equiv {\boldsymbol {e}}_{b}\cdot \nabla ).\end{aligned}}} For barotropic flow ( ρ = ρ ( p ) ) {\displaystyle (\rho =\rho (p))} , Bernoulli's equation is derived from the first equation: ∂ ∂ s ( u 2 2 + ∫ d p ρ ) = 0. 
{\displaystyle {\frac {\partial }{\partial s}}\left({\frac {u^{2}}{2}}+\int {\frac {\mathrm {d} p}{\rho }}\right)=0.} The second equation expresses that, when the streamline is curved, there must exist a pressure gradient normal to the streamline, because the centripetal acceleration of the fluid parcel is generated only by the normal pressure gradient. The third equation expresses that pressure is constant along the binormal axis. ==== Streamline curvature theorem ==== Let r {\displaystyle r} be the distance from the center of curvature of the streamline, then the second equation is written as follows: ∂ p ∂ r = ρ u 2 r ( > 0 ) , {\displaystyle {\frac {\partial p}{\partial r}}=\rho {\frac {u^{2}}{r}}~(>0),} where ∂ / ∂ r = − ∂ / ∂ n . {\displaystyle {\partial /\partial r}=-{\partial /\partial n}.} This equation states: In a steady flow of an inviscid fluid without external forces, the center of curvature of the streamline lies in the direction of decreasing radial pressure. Although this relationship between the pressure field and flow curvature is very useful, it does not have a name in the English-language scientific literature. Japanese fluid-dynamicists call the relationship the "Streamline curvature theorem". This "theorem" explains clearly why there are such low pressures in the centre of vortices, which consist of concentric circles of streamlines. It is also a way to explain intuitively why airfoils generate lift forces. == Exact solutions == All potential flow solutions are also solutions of the Euler equations, and in particular the incompressible Euler equations when the potential is harmonic. Solutions to the Euler equations with vorticity are: parallel shear flows – where the flow is unidirectional, and the flow velocity only varies in the cross-flow directions, e.g. 
in a Cartesian coordinate system ( x , y , z ) {\displaystyle (x,y,z)} the flow is for instance in the x {\displaystyle x} -direction – with the only non-zero velocity component being u x ( y , z ) {\displaystyle u_{x}(y,z)} , dependent only on y {\displaystyle y} and z {\displaystyle z} and not on x . {\displaystyle x.} Arnold–Beltrami–Childress flow – an exact solution of the incompressible Euler equations. Two solutions of the three-dimensional Euler equations with cylindrical symmetry have been presented by Gibbon, Moore and Stuart in 2003. These two solutions have infinite energy; they blow up everywhere in space in finite time. == See also == Bernoulli's theorem Kelvin's circulation theorem Cauchy equations Froude number Madelung equations Navier–Stokes equations Burgers equation Jeans equations Perfect fluid D'Alembert's paradox
Wikipedia/Euler_equations_(fluid_dynamics)
Euler–Bernoulli beam theory (also known as engineer's beam theory or classical beam theory) is a simplification of the linear theory of elasticity which provides a means of calculating the load-carrying and deflection characteristics of beams. It covers the case corresponding to small deflections of a beam that is subjected to lateral loads only. By ignoring the effects of shear deformation and rotatory inertia, it is thus a special case of Timoshenko–Ehrenfest beam theory. It was first enunciated circa 1750, but was not applied on a large scale until the development of the Eiffel Tower and the Ferris wheel in the late 19th century. Following these successful demonstrations, it quickly became a cornerstone of engineering and an enabler of the Second Industrial Revolution. Additional mathematical models have been developed, such as plate theory, but the simplicity of beam theory makes it an important tool in the sciences, especially structural and mechanical engineering. == History == Prevailing consensus is that Galileo Galilei made the first attempts at developing a theory of beams, but recent studies argue that Leonardo da Vinci was the first to make the crucial observations. Da Vinci lacked Hooke's law and calculus to complete the theory, whereas Galileo was held back by an incorrect assumption he made. The Bernoulli beam is named after Jacob Bernoulli, who made the significant discoveries. Leonhard Euler and Daniel Bernoulli were the first to put together a useful theory circa 1750. == Static beam equation == The Euler–Bernoulli equation describes the relationship between the beam's deflection and the applied load: d 2 d x 2 ( E I d 2 w d x 2 ) = q ( x ) . {\displaystyle {\frac {\mathrm {d} ^{2}}{\mathrm {d} x^{2}}}\left(EI{\frac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}\right)=q(x).} The curve w ( x ) {\displaystyle w(x)} describes the deflection of the beam in the z {\displaystyle z} direction at some position x {\displaystyle x} (recall that the beam is modeled as a one-dimensional object). 
q {\displaystyle q} is a distributed load, in other words a force per unit length (analogous to pressure being a force per area); it may be a function of x {\displaystyle x} , w {\displaystyle w} , or other variables. E {\displaystyle E} is the elastic modulus and I {\displaystyle I} is the second moment of area of the beam's cross section. I {\displaystyle I} must be calculated with respect to the axis which is perpendicular to the applied loading. Explicitly, for a beam whose axis is oriented along x {\displaystyle x} with a loading along z {\displaystyle z} , the beam's cross section is in the y z {\displaystyle yz} plane, and the relevant second moment of area is I = ∬ z 2 d y d z , {\displaystyle I=\iint z^{2}\;dy\;dz,} where it can be shown from equilibrium considerations that the centroid of the cross section must be at y = z = 0 {\displaystyle y=z=0} . Often, the product E I {\displaystyle EI} (known as the flexural rigidity) is a constant, so that E I d 4 w d x 4 = q ( x ) . {\displaystyle EI{\frac {\mathrm {d} ^{4}w}{\mathrm {d} x^{4}}}=q(x).\,} This equation, describing the deflection of a uniform, static beam, is used widely in engineering practice. Tabulated expressions for the deflection w {\displaystyle w} for common beam configurations can be found in engineering handbooks. For more complicated situations, the deflection can be determined by solving the Euler–Bernoulli equation using techniques such as "direct integration", "Macaulay's method", "moment area method", "conjugate beam method", "the principle of virtual work", "Castigliano's method", "flexibility method", "slope deflection method", "moment distribution method", or "direct stiffness method". Sign conventions are defined here since different conventions can be found in the literature. In this article, a right-handed coordinate system is used with the x {\displaystyle x} axis to the right, the z {\displaystyle z} axis pointing upwards, and the y {\displaystyle y} axis pointing into the figure. 
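The double integral defining the second moment of area can be checked numerically for the common case of a rectangular cross section, for which it reduces to I = bh³/12. The sketch below is illustrative only; the section dimensions b and h are assumed values, not taken from the article.

```python
def rect_second_moment(b, h):
    # Closed form for a rectangle of width b and height h centred on z = 0:
    # I = double-integral of z^2 over the section = b * h^3 / 12
    return b * h**3 / 12.0

def rect_second_moment_numeric(b, h, n=2000):
    # Brute-force evaluation of I = double-integral z^2 dy dz; for a
    # constant-width section the y-integration just contributes the width b.
    dz = h / n
    total = 0.0
    for i in range(n):
        z = -h / 2.0 + (i + 0.5) * dz   # midpoint of each horizontal strip
        total += z * z * b * dz
    return total

I_exact = rect_second_moment(0.05, 0.10)        # 50 mm x 100 mm section
I_num = rect_second_moment_numeric(0.05, 0.10)  # agrees to high accuracy
```

The direct summation converges quickly because the integrand is a smooth polynomial in z.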
The sign of the bending moment M {\displaystyle M} is taken as positive when the torque vector associated with the bending moment on the right hand side of the section is in the positive y {\displaystyle y} direction, that is, a positive value of M {\displaystyle M} produces compressive stress at the bottom surface. With this choice of bending moment sign convention, in order to have d M = Q d x {\displaystyle dM=Qdx} , it is necessary that the shear force Q {\displaystyle Q} acting on the right side of the section be positive in the z {\displaystyle z} direction so as to achieve static equilibrium of moments. If the loading intensity q {\displaystyle q} is taken positive in the positive z {\displaystyle z} direction, then d Q = − q d x {\displaystyle dQ=-qdx} is necessary for force equilibrium. Successive derivatives of the deflection w {\displaystyle w} have important physical meanings: d w / d x {\displaystyle dw/dx} is the slope of the beam, which is the anti-clockwise angle of rotation about the y {\displaystyle y} -axis in the limit of small displacements; M = − E I d 2 w d x 2 {\displaystyle M=-EI{\frac {d^{2}w}{dx^{2}}}} is the bending moment in the beam; and Q = − d d x ( E I d 2 w d x 2 ) {\displaystyle Q=-{\frac {d}{dx}}\left(EI{\frac {d^{2}w}{dx^{2}}}\right)} is the shear force in the beam. The stresses in a beam can be calculated from the above expressions after the deflection due to a given load has been determined. === Derivation of the bending equation === Because of the fundamental importance of the bending moment equation in engineering, we will provide a short derivation. We change to polar coordinates. The length of the neutral axis in the figure is ρ d θ . {\displaystyle \rho d\theta .} The length of a fiber with a radial distance z {\displaystyle z} below the neutral axis is ( ρ + z ) d θ . {\displaystyle (\rho +z)d\theta .} Therefore, the strain of this fiber is ( ρ + z − ρ ) d θ ρ d θ = z ρ . 
{\displaystyle {\frac {\left(\rho +z-\rho \right)\ d\theta }{\rho \ d\theta }}={\frac {z}{\rho }}.} The stress of this fiber is E z ρ {\displaystyle E{\dfrac {z}{\rho }}} where E {\displaystyle E} is the elastic modulus in accordance with Hooke's law. The differential force vector, d F , {\displaystyle d\mathbf {F} ,} resulting from this stress, is given by d F = E z ρ d A e x . {\displaystyle d\mathbf {F} =E{\frac {z}{\rho }}dA\mathbf {e_{x}} .} This is the differential force vector exerted on the right hand side of the section shown in the figure. We know that it is in the e x {\displaystyle \mathbf {e_{x}} } direction since the figure clearly shows that the fibers in the lower half are in tension. d A {\displaystyle dA} is the differential element of area at the location of the fiber. The differential bending moment vector, d M {\displaystyle d\mathbf {M} } associated with d F {\displaystyle d\mathbf {F} } is given by d M = − z e z × d F = − e y E z 2 ρ d A . {\displaystyle d\mathbf {M} =-z\mathbf {e_{z}} \times d\mathbf {F} =-\mathbf {e_{y}} E{\frac {z^{2}}{\rho }}dA.} This expression is valid for the fibers in the lower half of the beam. The expression for the fibers in the upper half of the beam will be similar except that the moment arm vector will be in the positive z {\displaystyle z} direction and the force vector will be in the − x {\displaystyle -x} direction since the upper fibers are in compression. But the resulting bending moment vector will still be in the − y {\displaystyle -y} direction since e z × − e x = − e y . 
{\displaystyle \mathbf {e_{z}} \times -\mathbf {e_{x}} =-\mathbf {e_{y}} .} Therefore, we integrate over the entire cross section of the beam and get for M {\displaystyle \mathbf {M} } the bending moment vector exerted on the right cross section of the beam the expression M = ∫ d M = − e y E ρ ∫ z 2 d A = − e y E I ρ , {\displaystyle \mathbf {M} =\int d\mathbf {M} =-\mathbf {e_{y}} {\frac {E}{\rho }}\int {z^{2}}\ dA=-\mathbf {e_{y}} {\frac {EI}{\rho }},} where I {\displaystyle I} is the second moment of area. From calculus, we know that when d w d x {\displaystyle {\dfrac {dw}{dx}}} is small, as it is for an Euler–Bernoulli beam, we can make the approximation 1 ρ ≃ d 2 w d x 2 {\displaystyle {\dfrac {1}{\rho }}\simeq {\dfrac {d^{2}w}{dx^{2}}}} , where ρ {\displaystyle \rho } is the radius of curvature. Therefore, M = − e y E I d 2 w d x 2 . {\displaystyle \mathbf {M} =-\mathbf {e_{y}} EI{d^{2}w \over dx^{2}}.} This vector equation can be separated in the bending unit vector definition ( M {\displaystyle M} is oriented as e y {\displaystyle \mathbf {e_{y}} } ), and in the bending equation: M = − E I d 2 w d x 2 . {\displaystyle M=-EI{d^{2}w \over dx^{2}}.} == Dynamic beam equation == The dynamic beam equation is the Euler–Lagrange equation for the following action S = ∫ t 1 t 2 ∫ 0 L [ 1 2 μ ( ∂ w ∂ t ) 2 − 1 2 E I ( ∂ 2 w ∂ x 2 ) 2 + q ( x ) w ( x , t ) ] d x d t . {\displaystyle S=\int _{t_{1}}^{t_{2}}\int _{0}^{L}\left[{\frac {1}{2}}\mu \left({\frac {\partial w}{\partial t}}\right)^{2}-{\frac {1}{2}}EI\left({\frac {\partial ^{2}w}{\partial x^{2}}}\right)^{2}+q(x)w(x,t)\right]dxdt.} The first term represents the kinetic energy where μ {\displaystyle \mu } is the mass per unit length, the second term represents the potential energy due to internal forces (when considered with a negative sign), and the third term represents the potential energy due to the external load q ( x ) {\displaystyle q(x)} . 
The Euler–Lagrange equation is used to determine the function that minimizes the functional S {\displaystyle S} . For a dynamic Euler–Bernoulli beam, the Euler–Lagrange equation is ∂ 2 ∂ x 2 ( E I ∂ 2 w ∂ x 2 ) = − μ ∂ 2 w ∂ t 2 + q . {\displaystyle {\cfrac {\partial ^{2}}{\partial x^{2}}}\left(EI{\cfrac {\partial ^{2}w}{\partial x^{2}}}\right)=-\mu {\cfrac {\partial ^{2}w}{\partial t^{2}}}+q\,.} When the beam is homogeneous, E {\displaystyle E} and I {\displaystyle I} are independent of x {\displaystyle x} , and the beam equation is simpler: E I ∂ 4 w ∂ x 4 = − μ ∂ 2 w ∂ t 2 + q . {\displaystyle EI{\cfrac {\partial ^{4}w}{\partial x^{4}}}=-\mu {\cfrac {\partial ^{2}w}{\partial t^{2}}}+q\,.} === Free vibration === In the absence of a transverse load, q {\displaystyle q} , we have the free vibration equation. This equation can be solved using a Fourier decomposition of the displacement into the sum of harmonic vibrations of the form w ( x , t ) = Re [ w ^ ( x ) e − i ω t ] {\displaystyle w(x,t)={\text{Re}}[{\hat {w}}(x)~e^{-i\omega t}]} where ω {\displaystyle \omega } is the frequency of vibration. Then, for each value of frequency, we can solve an ordinary differential equation E I d 4 w ^ d x 4 − μ ω 2 w ^ = 0 . {\displaystyle EI~{\cfrac {\mathrm {d} ^{4}{\hat {w}}}{\mathrm {d} x^{4}}}-\mu \omega ^{2}{\hat {w}}=0\,.} The general solution of the above equation is w ^ = A 1 cosh ⁡ ( β x ) + A 2 sinh ⁡ ( β x ) + A 3 cos ⁡ ( β x ) + A 4 sin ⁡ ( β x ) with β := ( μ ω 2 E I ) 1 / 4 {\displaystyle {\hat {w}}=A_{1}\cosh(\beta x)+A_{2}\sinh(\beta x)+A_{3}\cos(\beta x)+A_{4}\sin(\beta x)\quad {\text{with}}\quad \beta :=\left({\frac {\mu \omega ^{2}}{EI}}\right)^{1/4}} where A 1 , A 2 , A 3 , A 4 {\displaystyle A_{1},A_{2},A_{3},A_{4}} are constants. These constants are unique for a given set of boundary conditions. However, the solution for the displacement is not unique and depends on the frequency. These solutions are typically written as w ^ n = A 1 cosh ⁡ ( β n x ) + A 2 sinh ⁡ ( β n x ) + A 3 cos ⁡ ( β n x ) + A 4 sin ⁡ ( β n x ) with β n := ( μ ω n 2 E I ) 1 / 4 . 
{\displaystyle {\hat {w}}_{n}=A_{1}\cosh(\beta _{n}x)+A_{2}\sinh(\beta _{n}x)+A_{3}\cos(\beta _{n}x)+A_{4}\sin(\beta _{n}x)\quad {\text{with}}\quad \beta _{n}:=\left({\frac {\mu \omega _{n}^{2}}{EI}}\right)^{1/4}\,.} The quantities ω n {\displaystyle \omega _{n}} are called the natural frequencies of the beam. Each of the displacement solutions is called a mode, and the shape of the displacement curve is called a mode shape. ==== Example: Cantilevered beam ==== The boundary conditions for a cantilevered beam of length L {\displaystyle L} (fixed at x = 0 {\displaystyle x=0} ) are w ^ n = 0 , d w ^ n d x = 0 at x = 0 d 2 w ^ n d x 2 = 0 , d 3 w ^ n d x 3 = 0 at x = L . {\displaystyle {\begin{aligned}&{\hat {w}}_{n}=0~,~~{\frac {d{\hat {w}}_{n}}{dx}}=0\quad {\text{at}}~~x=0\\&{\frac {d^{2}{\hat {w}}_{n}}{dx^{2}}}=0~,~~{\frac {d^{3}{\hat {w}}_{n}}{dx^{3}}}=0\quad {\text{at}}~~x=L\,.\end{aligned}}} If we apply these conditions, non-trivial solutions are found to exist only if cosh ⁡ ( β n L ) cos ⁡ ( β n L ) + 1 = 0 . {\displaystyle \cosh(\beta _{n}L)\,\cos(\beta _{n}L)+1=0\,.} This nonlinear equation can be solved numerically. The first four roots are β 1 L = 0.596864 π {\displaystyle \beta _{1}L=0.596864\pi } , β 2 L = 1.49418 π {\displaystyle \beta _{2}L=1.49418\pi } , β 3 L = 2.50025 π {\displaystyle \beta _{3}L=2.50025\pi } , and β 4 L = 3.49999 π {\displaystyle \beta _{4}L=3.49999\pi } . 
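The transcendental frequency equation above can be solved with a simple bisection search; a minimal sketch (function names are illustrative) that recovers the first root quoted in the text:

```python
import math

def cantilever_freq_eq(x):
    # Characteristic equation for the cantilever: cosh(x) * cos(x) + 1 = 0
    return math.cosh(x) * math.cos(x) + 1.0

def bisect(f, a, b, tol=1e-12):
    # Plain bisection; assumes f changes sign on [a, b]
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# First root: should match beta_1 L = 0.596864 pi from the text,
# so that beta_1^2 matches the 3.5160 coefficient below
beta1_L = bisect(cantilever_freq_eq, 1.0, 3.0)
```

Narrowing the bracket and repeating yields the higher roots in the same way.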
The corresponding natural frequencies of vibration are ω 1 = β 1 2 E I μ = 3.5160 L 2 E I μ , … {\displaystyle \omega _{1}=\beta _{1}^{2}{\sqrt {\frac {EI}{\mu }}}={\frac {3.5160}{L^{2}}}{\sqrt {\frac {EI}{\mu }}}~,~~\dots } The boundary conditions can also be used to determine the mode shapes from the solution for the displacement: w ^ n = A 1 [ ( cosh ⁡ β n x − cos ⁡ β n x ) + cos ⁡ β n L + cosh ⁡ β n L sin ⁡ β n L + sinh ⁡ β n L ( sin ⁡ β n x − sinh ⁡ β n x ) ] {\displaystyle {\hat {w}}_{n}=A_{1}\left[(\cosh \beta _{n}x-\cos \beta _{n}x)+{\frac {\cos \beta _{n}L+\cosh \beta _{n}L}{\sin \beta _{n}L+\sinh \beta _{n}L}}(\sin \beta _{n}x-\sinh \beta _{n}x)\right]} The unknown constant (actually constants as there is one for each n {\displaystyle n} ), A 1 {\displaystyle A_{1}} , which in general is complex, is determined by the initial conditions at t = 0 {\displaystyle t=0} on the velocity and displacements of the beam. Typically a value of A 1 = 1 {\displaystyle A_{1}=1} is used when plotting mode shapes. Solutions to the undamped forced problem have unbounded displacements when the driving frequency matches a natural frequency ω n {\displaystyle \omega _{n}} , i.e., the beam can resonate. The natural frequencies of a beam therefore correspond to the frequencies at which resonance can occur. ==== Example: free–free (unsupported) beam ==== A free–free beam is a beam without any supports. The boundary conditions for a free–free beam of length L {\displaystyle L} extending from x = 0 {\displaystyle x=0} to x = L {\displaystyle x=L} are given by: d 2 w ^ n d x 2 = 0 , d 3 w ^ n d x 3 = 0 at x = 0 and x = L . {\displaystyle {\frac {d^{2}{\hat {w}}_{n}}{dx^{2}}}=0~,~~{\frac {d^{3}{\hat {w}}_{n}}{dx^{3}}}=0\quad {\text{at}}~~x=0\,{\text{and}}\,x=L\,.} If we apply these conditions, non-trivial solutions are found to exist only if cosh ⁡ ( β n L ) cos ⁡ ( β n L ) − 1 = 0 . 
{\displaystyle \cosh(\beta _{n}L)\,\cos(\beta _{n}L)-1=0\,.} This nonlinear equation can be solved numerically. The first four roots are β 1 L = 1.50562 π {\displaystyle \beta _{1}L=1.50562\pi } , β 2 L = 2.49975 π {\displaystyle \beta _{2}L=2.49975\pi } , β 3 L = 3.50001 π {\displaystyle \beta _{3}L=3.50001\pi } , and β 4 L = 4.50000 π {\displaystyle \beta _{4}L=4.50000\pi } . The corresponding natural frequencies of vibration are: ω 1 = β 1 2 E I μ = 22.3733 L 2 E I μ , … {\displaystyle \omega _{1}=\beta _{1}^{2}{\sqrt {\frac {EI}{\mu }}}={\frac {22.3733}{L^{2}}}{\sqrt {\frac {EI}{\mu }}}~,~~\dots } The boundary conditions can also be used to determine the mode shapes from the solution for the displacement: w ^ n = A 1 [ ( cos ⁡ β n x + cosh ⁡ β n x ) − cos ⁡ β n L − cosh ⁡ β n L sin ⁡ β n L − sinh ⁡ β n L ( sin ⁡ β n x + sinh ⁡ β n x ) ] {\displaystyle {\hat {w}}_{n}=A_{1}{\Bigl [}(\cos \beta _{n}x+\cosh \beta _{n}x)-{\frac {\cos \beta _{n}L-\cosh \beta _{n}L}{\sin \beta _{n}L-\sinh \beta _{n}L}}(\sin \beta _{n}x+\sinh \beta _{n}x){\Bigr ]}} As with the cantilevered beam, the unknown constants are determined by the initial conditions at t = 0 {\displaystyle t=0} on the velocity and displacements of the beam. Also, solutions to the undamped forced problem have unbounded displacements when the driving frequency matches a natural frequency ω n {\displaystyle \omega _{n}} . ==== Example: hinged-hinged beam ==== The boundary conditions of a hinged-hinged beam of length L {\displaystyle L} (fixed at x = 0 {\displaystyle x=0} and x = L {\displaystyle x=L} ) are w ^ n = 0 , d 2 w ^ n d x 2 = 0 at x = 0 and x = L . {\displaystyle {\hat {w}}_{n}=0~,~~{\frac {d^{2}{\hat {w}}_{n}}{dx^{2}}}=0\quad {\text{at}}~~x=0\,{\text{and}}\,x=L\,.} This implies solutions exist for sin ⁡ ( β n L ) sinh ⁡ ( β n L ) = 0 . {\displaystyle \sin(\beta _{n}L)\,\sinh(\beta _{n}L)=0\,.} Setting β n = n π / L {\displaystyle \beta _{n}=n\pi /L} enforces this condition. 
Rearranging for natural frequency gives ω n = n 2 π 2 L 2 E I μ {\displaystyle \omega _{n}={\frac {n^{2}\pi ^{2}}{L^{2}}}{\sqrt {\frac {EI}{\mu }}}} == Stress == Besides deflection, the beam equation describes forces and moments and can thus be used to describe stresses. For this reason, the Euler–Bernoulli beam equation is widely used in engineering, especially civil and mechanical, to determine the strength (as well as deflection) of beams under bending. Both the bending moment and the shear force cause stresses in the beam. The stress due to shear force is maximum along the neutral axis of the beam (when the width of the beam, t, is constant along the cross section of the beam; otherwise an integral involving the first moment and the beam's width needs to be evaluated for the particular cross section), and the maximum tensile stress is at either the top or bottom surfaces. Thus the maximum principal stress in the beam may be neither at the surface nor at the center but in some general area. However, shear force stresses are negligible in comparison to bending moment stresses in all but the stockiest of beams; this, together with the fact that stress concentrations commonly occur at surfaces, means that the maximum stress in a beam is likely to be at the surface. === Simple or symmetrical bending === For beam cross-sections that are symmetrical about a plane perpendicular to the neutral plane, it can be shown that the tensile stress experienced by the beam may be expressed as: σ = M z I = − z E d 2 w d x 2 . {\displaystyle \sigma ={\frac {Mz}{I}}=-zE~{\frac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}.\,} Here, z {\displaystyle z} is the distance from the neutral axis to a point of interest; and M {\displaystyle M} is the bending moment. 
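As a numerical illustration of the flexure formula σ = Mz/I: the stress vanishes at the neutral axis and is largest at the extreme fibres. The section dimensions and bending moment below are assumed values chosen for the example.

```python
def bending_stress(M, z, I):
    # Flexure formula: sigma = M * z / I
    return M * z / I

# Illustrative rectangular section, 50 mm wide and 100 mm tall
b, h = 0.05, 0.10
I = b * h**3 / 12.0          # second moment of area of the rectangle
M = 1000.0                   # bending moment in N*m (assumed)

sigma_top = bending_stress(M, h / 2.0, I)   # extreme fibre, z = +h/2
sigma_mid = bending_stress(M, 0.0, I)       # neutral axis, z = 0
```

For these numbers the extreme-fibre stress works out to 12 MPa, while the neutral axis carries no bending stress at all.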
Note that this equation implies that pure bending (of positive sign) will cause zero stress at the neutral axis, positive (tensile) stress at the "top" of the beam, and negative (compressive) stress at the bottom of the beam; and also implies that the maximum stress will be at the top surface and the minimum at the bottom. This bending stress may be superimposed with axially applied stresses, which will cause a shift in the neutral (zero stress) axis. === Maximum stresses at a cross-section === The maximum tensile stress at a cross-section is at the location z = c 1 {\displaystyle z=c_{1}} and the maximum compressive stress is at the location z = − c 2 {\displaystyle z=-c_{2}} where the height of the cross-section is h = c 1 + c 2 {\displaystyle h=c_{1}+c_{2}} . These stresses are σ 1 = M c 1 I = M S 1 ; σ 2 = − M c 2 I = − M S 2 {\displaystyle \sigma _{1}={\cfrac {Mc_{1}}{I}}={\cfrac {M}{S_{1}}}~;~~\sigma _{2}=-{\cfrac {Mc_{2}}{I}}=-{\cfrac {M}{S_{2}}}} The quantities S 1 , S 2 {\displaystyle S_{1},S_{2}} are the section moduli and are defined as S 1 = I c 1 ; S 2 = I c 2 {\displaystyle S_{1}={\cfrac {I}{c_{1}}}~;~~S_{2}={\cfrac {I}{c_{2}}}} The section modulus combines all the important geometric information about a beam's section into one quantity. For the case where a beam is doubly symmetric, c 1 = c 2 {\displaystyle c_{1}=c_{2}} and we have one section modulus S = I / c {\displaystyle S=I/c} . === Strain in an Euler–Bernoulli beam === We need an expression for the strain in terms of the deflection of the neutral surface to relate the stresses in an Euler–Bernoulli beam to the deflection. To obtain that expression we use the assumption that normals to the neutral surface remain normal during the deformation and that deflections are small. These assumptions imply that the beam bends into an arc of a circle of radius ρ {\displaystyle \rho } (see Figure 1) and that the neutral surface does not change in length during the deformation. 
Let d x {\displaystyle \mathrm {d} x} be the length of an element of the neutral surface in the undeformed state. For small deflections, the element does not change its length after bending but deforms into an arc of a circle of radius ρ {\displaystyle \rho } . If d θ {\displaystyle \mathrm {d} \theta } is the angle subtended by this arc, then d x = ρ d θ {\displaystyle \mathrm {d} x=\rho ~\mathrm {d} \theta } . Let us now consider another segment of the element at a distance z {\displaystyle z} above the neutral surface. The initial length of this element is d x {\displaystyle \mathrm {d} x} . However, after bending, the length of the element becomes d x ′ = ( ρ − z ) d θ = d x − z d θ {\displaystyle \mathrm {d} x'=(\rho -z)~\mathrm {d} \theta =\mathrm {d} x-z~\mathrm {d} \theta } . The strain in that segment of the beam is given by ε x = d x ′ − d x d x = − z ρ = − κ z {\displaystyle \varepsilon _{x}={\cfrac {\mathrm {d} x'-\mathrm {d} x}{\mathrm {d} x}}=-{\cfrac {z}{\rho }}=-\kappa ~z} where κ {\displaystyle \kappa } is the curvature of the beam. This gives us the axial strain in the beam as a function of distance from the neutral surface. However, we still need to find a relation between the radius of curvature and the beam deflection w {\displaystyle w} . === Relation between curvature and beam deflection === Let P be a point on the neutral surface of the beam at a distance x {\displaystyle x} from the origin of the ( x , z ) {\displaystyle (x,z)} coordinate system. The slope of the beam is approximately equal to the angle made by the neutral surface with the x {\displaystyle x} -axis for the small angles encountered in beam theory. 
Therefore, with this approximation, θ ( x ) = d w d x {\displaystyle \theta (x)={\cfrac {\mathrm {d} w}{\mathrm {d} x}}} Therefore, for an infinitesimal element d x {\displaystyle \mathrm {d} x} , the relation d x = ρ d θ {\displaystyle \mathrm {d} x=\rho ~\mathrm {d} \theta } can be written as 1 ρ = d θ d x = d 2 w d x 2 = κ {\displaystyle {\cfrac {1}{\rho }}={\cfrac {\mathrm {d} \theta }{\mathrm {d} x}}={\cfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}=\kappa } Hence the strain in the beam may be expressed as ε x = − z κ {\displaystyle \varepsilon _{x}=-z\kappa } === Stress-strain relations === For a homogeneous isotropic linear elastic material, the stress is related to the strain by σ = E ε {\displaystyle \sigma =E\varepsilon } , where E {\displaystyle E} is the Young's modulus. Hence the stress in an Euler–Bernoulli beam is given by σ x = − z E d 2 w d x 2 {\displaystyle \sigma _{x}=-zE{\cfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}} Note that the above relation, when compared with the relation between the axial stress and the bending moment, leads to M = − E I d 2 w d x 2 {\displaystyle M=-EI{\cfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}} Since the shear force is given by Q = d M / d x {\displaystyle Q=\mathrm {d} M/\mathrm {d} x} , we also have Q = − E I d 3 w d x 3 {\displaystyle Q=-EI{\cfrac {\mathrm {d} ^{3}w}{\mathrm {d} x^{3}}}} == Boundary considerations == The beam equation contains a fourth-order derivative in x {\displaystyle x} . To find a unique solution w ( x , t ) {\displaystyle w(x,t)} we need four boundary conditions. The boundary conditions usually model supports, but they can also model point loads, distributed loads and moments. The support or displacement boundary conditions are used to fix values of displacement ( w {\displaystyle w} ) and rotations ( d w / d x {\displaystyle \mathrm {d} w/\mathrm {d} x} ) on the boundary. Such boundary conditions are also called Dirichlet boundary conditions. 
Load and moment boundary conditions involve higher derivatives of w {\displaystyle w} and represent momentum flux. Flux boundary conditions are also called Neumann boundary conditions. As an example consider a cantilever beam that is built-in at one end and free at the other as shown in the adjacent figure. At the built-in end of the beam there cannot be any displacement or rotation of the beam. This means that at the left end both deflection and slope are zero. Since no external bending moment is applied at the free end of the beam, the bending moment at that location is zero. In addition, if there is no external force applied to the beam, the shear force at the free end is also zero. Taking the x {\displaystyle x} coordinate of the left end as 0 {\displaystyle 0} and the right end as L {\displaystyle L} (the length of the beam), these statements translate to the following set of boundary conditions (assume E I {\displaystyle EI} is a constant): w | x = 0 = 0 ; ∂ w ∂ x | x = 0 = 0 (fixed end) {\displaystyle w|_{x=0}=0\quad ;\quad {\frac {\partial w}{\partial x}}{\bigg |}_{x=0}=0\qquad {\mbox{(fixed end)}}\,} ∂ 2 w ∂ x 2 | x = L = 0 ; ∂ 3 w ∂ x 3 | x = L = 0 (free end) {\displaystyle {\frac {\partial ^{2}w}{\partial x^{2}}}{\bigg |}_{x=L}=0\quad ;\quad {\frac {\partial ^{3}w}{\partial x^{3}}}{\bigg |}_{x=L}=0\qquad {\mbox{(free end)}}\,} A simple support (pin or roller) is equivalent to a point force on the beam which is adjusted in such a way as to fix the position of the beam at that point. A fixed support or clamp, is equivalent to the combination of a point force and a point torque which is adjusted in such a way as to fix both the position and slope of the beam at that point. Point forces and torques, whether from supports or directly applied, will divide a beam into a set of segments, between which the beam equation will yield a continuous solution, given four boundary conditions, two at each end of the segment. 
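To illustrate how four boundary conditions pin down a solution, consider a cantilever with constant EI under a uniformly distributed load q₀: the general solution of EI w'''' = q₀ is a quartic, and the two fixed-end conditions plus the two free-end conditions determine its coefficients. The closed form used below is the standard uniformly-loaded-cantilever result rather than one derived in this article, and the numerical values of q₀, E, I and L are assumed.

```python
def cantilever_udl_deflection(x, q0, E, I, L):
    # Solution of E I w'''' = q0 with w(0) = w'(0) = 0 (fixed end)
    # and w''(L) = w'''(L) = 0 (free end):
    #   w(x) = q0 * x^2 * (6 L^2 - 4 L x + x^2) / (24 E I)
    return q0 * x**2 * (6.0 * L**2 - 4.0 * L * x + x**2) / (24.0 * E * I)

q0, E, I, L = 500.0, 200e9, 4e-6, 2.0   # assumed values (SI units)

# Tip deflection, which equals q0 L^4 / (8 E I) for this load case
w_tip = cantilever_udl_deflection(L, q0, E, I, L)
```

Substituting x = 0 and x = L confirms that all four boundary conditions are satisfied by this quartic.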
Assuming that the product EI is a constant, and defining λ = F / E I {\displaystyle \lambda =F/EI} where F is the magnitude of a point force, and τ = M / E I {\displaystyle \tau =M/EI} where M is the magnitude of a point torque, the boundary conditions appropriate for some common cases are given in the table below. The change in a particular derivative of w across the boundary as x increases is denoted by Δ {\displaystyle \Delta } followed by that derivative. For example, Δ w ″ = w ″ ( x + ) − w ″ ( x − ) {\displaystyle \Delta w''=w''(x+)-w''(x-)} where w ″ ( x + ) {\displaystyle w''(x+)} is the value of w ″ {\displaystyle w''} at the lower boundary of the upper segment, while w ″ ( x − ) {\displaystyle w''(x-)} is the value of w ″ {\displaystyle w''} at the upper boundary of the lower segment. When the values of the particular derivative are not only continuous across the boundary, but fixed as well, the boundary condition is written e.g., Δ w ″ = 0 ∗ {\displaystyle \Delta w''=0^{*}} which actually constitutes two separate equations (e.g., w ″ ( x − ) = w ″ ( x + ) {\displaystyle w''(x-)=w''(x+)} = fixed). Note that in the first cases, in which the point forces and torques are located between two segments, there are four boundary conditions, two for the lower segment, and two for the upper. When forces and torques are applied to one end of the beam, there are two boundary conditions given which apply at that end. The sign of the point forces and torques at an end will be positive for the lower end, negative for the upper end. == Loading considerations == Applied loads may be represented either through boundary conditions or through the function q ( x , t ) {\displaystyle q(x,t)} which represents an external distributed load. Using distributed loading is often favorable for simplicity. Boundary conditions are, however, often used to model loads depending on context; this practice being especially common in vibration analysis. 
By nature, the distributed load is very often represented in a piecewise manner, since in practice a load isn't typically a continuous function. Point loads can be modeled with the help of the Dirac delta function. For example, consider a static uniform cantilever beam of length L {\displaystyle L} with an upward point load F {\displaystyle F} applied at the free end. Using boundary conditions, this may be modeled in two ways. In the first approach, the applied point load is approximated by a shear force applied at the free end. In that case the governing equation and boundary conditions are: E I d 4 w d x 4 = 0 w | x = 0 = 0 ; d w d x | x = 0 = 0 ; d 2 w d x 2 | x = L = 0 ; − E I d 3 w d x 3 | x = L = F {\displaystyle {\begin{aligned}&EI{\frac {\mathrm {d} ^{4}w}{\mathrm {d} x^{4}}}=0\\&w|_{x=0}=0\quad ;\quad {\frac {\mathrm {d} w}{\mathrm {d} x}}{\bigg |}_{x=0}=0\quad ;\quad {\frac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}{\bigg |}_{x=L}=0\quad ;\quad -EI{\frac {\mathrm {d} ^{3}w}{\mathrm {d} x^{3}}}{\bigg |}_{x=L}=F\,\end{aligned}}} Alternatively we can represent the point load as a distribution using the Dirac function. In that case the equation and boundary conditions are E I d 4 w d x 4 = F δ ( x − L ) w | x = 0 = 0 ; d w d x | x = 0 = 0 ; d 2 w d x 2 | x = L = 0 {\displaystyle {\begin{aligned}&EI{\frac {\mathrm {d} ^{4}w}{\mathrm {d} x^{4}}}=F\delta (x-L)\\&w|_{x=0}=0\quad ;\quad {\frac {\mathrm {d} w}{\mathrm {d} x}}{\bigg |}_{x=0}=0\quad ;\quad {\frac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}{\bigg |}_{x=L}=0\,\end{aligned}}} Note that the shear force boundary condition (third derivative) is removed, since otherwise there would be a contradiction. These are equivalent boundary value problems, and both yield the solution w = F 6 E I ( 3 L x 2 − x 3 ) . {\displaystyle w={\frac {F}{6EI}}(3Lx^{2}-x^{3})\,~.} The application of several point loads at different locations will lead to w ( x ) {\displaystyle w(x)} being a piecewise function.
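The equivalence claimed above is easy to verify directly. A short sketch (SymPy assumed available) that checks the stated deflection against the governing equation and both sets of boundary conditions:

```python
import sympy as sp

x, L, EI, F = sp.symbols("x L EI F", positive=True)

# Stated solution for the end-loaded cantilever
w = F / (6 * EI) * (3 * L * x**2 - x**3)

# Governing equation away from the load: EI w'''' = 0
assert sp.simplify(EI * w.diff(x, 4)) == 0

# Fixed end: zero deflection and zero slope
assert w.subs(x, 0) == 0
assert w.diff(x).subs(x, 0) == 0

# Free end: zero bending moment; shear balances the applied load F
assert sp.simplify(w.diff(x, 2).subs(x, L)) == 0
assert sp.simplify(-EI * w.diff(x, 3).subs(x, L) - F) == 0

# Tip deflection equals the familiar F*L**3/(3*EI)
print(sp.simplify(w.subs(x, L)))
```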
Use of the Dirac function greatly simplifies such situations; otherwise the beam would have to be divided into sections, each with four boundary conditions solved separately. A well-organized family of functions called singularity functions is often used as a shorthand for the Dirac function, its derivative, and its antiderivatives. Dynamic phenomena can also be modeled using the static beam equation by choosing appropriate forms of the load distribution. As an example, the free vibration of a beam can be accounted for by using the load function: q ( x , t ) = μ ∂ 2 w ∂ t 2 {\displaystyle q(x,t)=\mu {\frac {\partial ^{2}w}{\partial t^{2}}}\,} where μ {\displaystyle \mu } is the linear mass density of the beam, not necessarily a constant. With this time-dependent loading, the beam equation will be a partial differential equation: ∂ 2 ∂ x 2 ( E I ∂ 2 w ∂ x 2 ) = − μ ∂ 2 w ∂ t 2 . {\displaystyle {\frac {\partial ^{2}}{\partial x^{2}}}\left(EI{\frac {\partial ^{2}w}{\partial x^{2}}}\right)=-\mu {\frac {\partial ^{2}w}{\partial t^{2}}}.} Another interesting example describes the deflection of a beam rotating with a constant angular frequency of ω {\displaystyle \omega } : q ( x ) = μ ω 2 w ( x ) {\displaystyle q(x)=\mu \omega ^{2}w(x)\,} This is a centripetal force distribution. Note that in this case, q {\displaystyle q} is a function of the displacement (the dependent variable), and the beam equation will be an autonomous ordinary differential equation. == Examples == === Three-point bending === The three-point bending test is a classical experiment in mechanics. It represents the case of a beam resting on two roller supports and subjected to a concentrated load applied in the middle of the beam. The shear is constant in absolute value: it is half the central load, P / 2. It changes sign in the middle of the beam.
The bending moment varies linearly from one end, where it is 0, to the center, where its absolute value is PL / 4; this is where the risk of rupture is greatest. The deformation of the beam is described by a third-degree polynomial over half of the beam (the other half being symmetrical). The bending moments ( M {\displaystyle M} ), shear forces ( Q {\displaystyle Q} ), and deflections ( w {\displaystyle w} ) for a beam subjected to a central point load and an asymmetric point load are given in the table below. === Cantilever beams === Another important class of problems involves cantilever beams. The bending moments ( M {\displaystyle M} ), shear forces ( Q {\displaystyle Q} ), and deflections ( w {\displaystyle w} ) for a cantilever beam subjected to a point load at the free end and a uniformly distributed load are given in the table below. Solutions for several other commonly encountered configurations are readily available in textbooks on mechanics of materials and engineering handbooks. === Statically indeterminate beams === The bending moments and shear forces in Euler–Bernoulli beams can often be determined directly using static balance of forces and moments. However, for certain boundary conditions, the number of reactions can exceed the number of independent equilibrium equations. Such beams are called statically indeterminate. The built-in beams shown in the figure below are statically indeterminate. To determine the stresses and deflections of such beams, the most direct method is to solve the Euler–Bernoulli beam equation with appropriate boundary conditions. But direct analytical solutions of the beam equation are possible only for the simplest cases. Therefore, additional techniques such as linear superposition are often used to solve statically indeterminate beam problems.
The superposition method involves adding the solutions of a number of statically determinate problems which are chosen such that the boundary conditions for the sum of the individual problems add up to those of the original problem. Another commonly encountered statically indeterminate beam problem is the cantilevered beam with the free end supported on a roller. The bending moments, shear forces, and deflections of such a beam are listed below: == Extensions == The kinematic assumptions upon which the Euler–Bernoulli beam theory is founded allow it to be extended to more advanced analysis. Simple superposition allows for three-dimensional transverse loading. Using alternative constitutive equations can allow for viscoelastic or plastic beam deformation. Euler–Bernoulli beam theory can also be extended to the analysis of curved beams, beam buckling, composite beams, and geometrically nonlinear beam deflection. Euler–Bernoulli beam theory does not account for the effects of transverse shear strain. As a result, it underpredicts deflections and overpredicts natural frequencies. For thin beams (beam length to thickness ratios of the order 20 or more) these effects are of minor importance. For thick beams, however, these effects can be significant. More advanced beam theories such as the Timoshenko beam theory (developed by the Russian-born scientist Stephen Timoshenko) have been developed to account for these effects. === Large deflections === The original Euler–Bernoulli theory is valid only for infinitesimal strains and small rotations. The theory can be extended in a straightforward manner to problems involving moderately large rotations provided that the strain remains small by using the von Kármán strains. 
The Euler–Bernoulli hypotheses that plane sections remain plane and normal to the axis of the beam lead to displacements of the form u 1 = u 0 ( x ) − z d w 0 d x ; u 2 = 0 ; u 3 = w 0 ( x ) {\displaystyle u_{1}=u_{0}(x)-z{\cfrac {\mathrm {d} w_{0}}{\mathrm {d} x}}~;~~u_{2}=0~;~~u_{3}=w_{0}(x)} Using the definition of the Lagrangian Green strain from finite strain theory, we can find the von Kármán strains for the beam that are valid for large rotations but small strains by discarding all the higher-order terms (which contain more than two fields) except ∂ w ∂ x i ∂ w ∂ x j . {\displaystyle {\frac {\partial {w}}{\partial {x^{i}}}}{\frac {\partial {w}}{\partial {x^{j}}}}.} The resulting strains take the form: ε 11 = d u 0 d x − z d 2 w 0 d x 2 + 1 2 [ ( d u 0 d x − z d 2 w 0 d x 2 ) 2 + ( d w 0 d x ) 2 ] ≈ d u 0 d x − z d 2 w 0 d x 2 + 1 2 ( d w 0 d x ) 2 ε 22 = 0 ε 33 = 1 2 ( d w 0 d x ) 2 ε 23 = 0 ε 31 = − 1 2 [ ( d u 0 d x − z d 2 w 0 d x 2 ) ( d w 0 d x ) ] ≈ 0 ε 12 = 0. {\displaystyle {\begin{aligned}\varepsilon _{11}&={\cfrac {\mathrm {d} {u_{0}}}{\mathrm {d} {x}}}-z{\cfrac {\mathrm {d} ^{2}{w_{0}}}{\mathrm {d} {x^{2}}}}+{\frac {1}{2}}\left[\left({\cfrac {\mathrm {d} u_{0}}{\mathrm {d} x}}-z{\cfrac {\mathrm {d} ^{2}w_{0}}{\mathrm {d} x^{2}}}\right)^{2}+\left({\cfrac {\mathrm {d} w_{0}}{\mathrm {d} x}}\right)^{2}\right]\approx {\cfrac {\mathrm {d} {u_{0}}}{\mathrm {d} {x}}}-z{\cfrac {\mathrm {d} ^{2}{w_{0}}}{\mathrm {d} {x^{2}}}}+{\frac {1}{2}}\left({\frac {\mathrm {d} {w_{0}}}{\mathrm {d} {x}}}\right)^{2}\\[0.25em]\varepsilon _{22}&=0\\[0.25em]\varepsilon _{33}&={\frac {1}{2}}\left({\frac {\mathrm {d} {w_{0}}}{\mathrm {d} {x}}}\right)^{2}\\[0.25em]\varepsilon _{23}&=0\\[0.25em]\varepsilon _{31}&=-{\frac {1}{2}}\left[\left({\cfrac {\mathrm {d} u_{0}}{\mathrm {d} x}}-z{\cfrac {\mathrm {d} ^{2}w_{0}}{\mathrm {d} x^{2}}}\right)\left({\cfrac {\mathrm {d} w_{0}}{\mathrm {d} x}}\right)\right]\approx 0\\[0.25em]\varepsilon _{12}&=0.\end{aligned}}} From the principle 
of virtual work, the balance of forces and moments in the beams gives us the equilibrium equations d N x x d x + f ( x ) = 0 d 2 M x x d x 2 + q ( x ) + d d x ( N x x d w 0 d x ) = 0 {\displaystyle {\begin{aligned}{\cfrac {\mathrm {d} N_{xx}}{\mathrm {d} x}}+f(x)&=0\\{\cfrac {\mathrm {d} ^{2}M_{xx}}{\mathrm {d} x^{2}}}+q(x)+{\cfrac {\mathrm {d} }{\mathrm {d} x}}\left(N_{xx}{\cfrac {\mathrm {d} w_{0}}{\mathrm {d} x}}\right)&=0\end{aligned}}} where f ( x ) {\displaystyle f(x)} is the axial load, q ( x ) {\displaystyle q(x)} is the transverse load, and N x x = ∫ A σ x x d A ; M x x = ∫ A z σ x x d A {\displaystyle N_{xx}=\int _{A}\sigma _{xx}~\mathrm {d} A~;~~M_{xx}=\int _{A}z\sigma _{xx}~\mathrm {d} A} To close the system of equations we need the constitutive equations that relate stresses to strains (and hence stresses to displacements). For large rotations and small strains these relations are N x x = A x x [ d u 0 d x + 1 2 ( d w 0 d x ) 2 ] − B x x d 2 w 0 d x 2 M x x = B x x [ d u 0 d x + 1 2 ( d w 0 d x ) 2 ] − D x x d 2 w 0 d x 2 {\displaystyle {\begin{aligned}N_{xx}&=A_{xx}\left[{\cfrac {\mathrm {d} u_{0}}{\mathrm {d} x}}+{\frac {1}{2}}\left({\cfrac {\mathrm {d} w_{0}}{\mathrm {d} x}}\right)^{2}\right]-B_{xx}{\cfrac {\mathrm {d} ^{2}w_{0}}{\mathrm {d} x^{2}}}\\M_{xx}&=B_{xx}\left[{\cfrac {\mathrm {d} u_{0}}{\mathrm {d} x}}+{\frac {1}{2}}\left({\cfrac {\mathrm {d} w_{0}}{\mathrm {d} x}}\right)^{2}\right]-D_{xx}{\cfrac {\mathrm {d} ^{2}w_{0}}{\mathrm {d} x^{2}}}\end{aligned}}} where A x x = ∫ A E d A ; B x x = ∫ A z E d A ; D x x = ∫ A z 2 E d A . {\displaystyle A_{xx}=\int _{A}E~\mathrm {d} A~;~~B_{xx}=\int _{A}zE~\mathrm {d} A~;~~D_{xx}=\int _{A}z^{2}E~\mathrm {d} A~.} The quantity A x x {\displaystyle A_{xx}} is the extensional stiffness, B x x {\displaystyle B_{xx}} is the coupled extensional-bending stiffness, and D x x {\displaystyle D_{xx}} is the bending stiffness. 
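For a homogeneous section these stiffness integrals reduce to familiar expressions. A small sketch (SymPy assumed; the rectangular section of width b and height h is an illustrative choice, with z measured from the centroid):

```python
import sympy as sp

z, E, b, h = sp.symbols("z E b h", positive=True)

# Rectangular section of width b and height h; z measured from the
# centroidal axis, so dA = b dz over -h/2 <= z <= h/2, E uniform.
Axx = sp.integrate(E * b, (z, -h / 2, h / 2))         # extensional stiffness
Bxx = sp.integrate(z * E * b, (z, -h / 2, h / 2))     # coupling stiffness
Dxx = sp.integrate(z**2 * E * b, (z, -h / 2, h / 2))  # bending stiffness

assert sp.simplify(Axx - E * b * h) == 0
assert Bxx == 0  # symmetric homogeneous section: no extension-bending coupling
assert sp.simplify(Dxx - E * b * h**3 / 12) == 0  # the usual E*I
```

The vanishing of B_xx for a symmetric homogeneous section is why extension and bending decouple in the classical theory; B_xx is nonzero for asymmetric or layered (composite) sections.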
For the situation where the beam has a uniform cross-section and no axial load, the governing equation for a large-rotation Euler–Bernoulli beam is E I d 4 w d x 4 − 3 2 E A ( d w d x ) 2 ( d 2 w d x 2 ) = q ( x ) {\displaystyle EI~{\cfrac {\mathrm {d} ^{4}w}{\mathrm {d} x^{4}}}-{\frac {3}{2}}~EA~\left({\cfrac {\mathrm {d} w}{\mathrm {d} x}}\right)^{2}\left({\cfrac {\mathrm {d} ^{2}w}{\mathrm {d} x^{2}}}\right)=q(x)} == See also == Applied mechanics Bending Bending moment Buckling Flexural rigidity Generalised beam theory Plate theory Sandwich theory Shear and moment diagram Singularity function Strain (materials science) Timoshenko beam theory Theorem of three moments (Clapeyron's theorem) Three-point flexural test == References == === Notes === === Citations === === Further reading === == External links == Beam stress & deflection, beam deflection tables
Wikipedia/Euler–Bernoulli_beam_theory
The Newmark-beta method is a method of numerical integration used to solve certain differential equations. It is widely used in numerical evaluation of the dynamic response of structures and solids, such as in finite element analysis to model dynamic systems. The method is named after Nathan M. Newmark, former Professor of Civil Engineering at the University of Illinois at Urbana–Champaign, who developed it in 1959 for use in structural dynamics. The semi-discretized structural equation is a second-order ordinary differential equation system, M u ¨ + C u ˙ + f int ( u ) = f ext {\displaystyle M{\ddot {u}}+C{\dot {u}}+f^{\textrm {int}}(u)=f^{\textrm {ext}}\,} where M {\displaystyle M} is the mass matrix, C {\displaystyle C} is the damping matrix, and f int {\displaystyle f^{\textrm {int}}} and f ext {\displaystyle f^{\textrm {ext}}} are the internal and external force vectors, respectively. Using the extended mean value theorem, the Newmark- β {\displaystyle \beta } method states that the first time derivative (velocity in the equation of motion) can be solved as u ˙ n + 1 = u ˙ n + Δ t u ¨ γ {\displaystyle {\dot {u}}_{n+1}={\dot {u}}_{n}+\Delta t~{\ddot {u}}_{\gamma }\,} where u ¨ γ = ( 1 − γ ) u ¨ n + γ u ¨ n + 1 0 ≤ γ ≤ 1 {\displaystyle {\ddot {u}}_{\gamma }=(1-\gamma ){\ddot {u}}_{n}+\gamma {\ddot {u}}_{n+1}~~~~0\leq \gamma \leq 1} and therefore u ˙ n + 1 = u ˙ n + ( 1 − γ ) Δ t u ¨ n + γ Δ t u ¨ n + 1 . {\displaystyle {\dot {u}}_{n+1}={\dot {u}}_{n}+(1-\gamma )\Delta t~{\ddot {u}}_{n}+\gamma \Delta t~{\ddot {u}}_{n+1}.} Because acceleration also varies with time, however, the extended mean value theorem must also be extended to the second time derivative to obtain the correct displacement.
Thus, u n + 1 = u n + Δ t u ˙ n + 1 2 Δ t 2 u ¨ β {\displaystyle u_{n+1}=u_{n}+\Delta t~{\dot {u}}_{n}+{\begin{matrix}{\frac {1}{2}}\end{matrix}}\Delta t^{2}~{\ddot {u}}_{\beta }} where again u ¨ β = ( 1 − 2 β ) u ¨ n + 2 β u ¨ n + 1 0 ≤ 2 β ≤ 1 {\displaystyle {\ddot {u}}_{\beta }=(1-2\beta ){\ddot {u}}_{n}+2\beta {\ddot {u}}_{n+1}~~~~0\leq 2\beta \leq 1} The discretized structural equation becomes u ˙ n + 1 = u ˙ n + ( 1 − γ ) Δ t u ¨ n + γ Δ t u ¨ n + 1 u n + 1 = u n + Δ t u ˙ n + Δ t 2 2 ( ( 1 − 2 β ) u ¨ n + 2 β u ¨ n + 1 ) M u ¨ n + 1 + C u ˙ n + 1 + f int ( u n + 1 ) = f n + 1 ext {\displaystyle {\begin{aligned}&{\dot {u}}_{n+1}={\dot {u}}_{n}+(1-\gamma )\Delta t~{\ddot {u}}_{n}+\gamma \Delta t~{\ddot {u}}_{n+1}\\&u_{n+1}=u_{n}+\Delta t~{\dot {u}}_{n}+{\frac {\Delta t^{2}}{2}}\left((1-2\beta ){\ddot {u}}_{n}+2\beta {\ddot {u}}_{n+1}\right)\\&M{\ddot {u}}_{n+1}+C{\dot {u}}_{n+1}+f^{\textrm {int}}(u_{n+1})=f_{n+1}^{\textrm {ext}}\,\end{aligned}}} The explicit central difference scheme is obtained by setting γ = 0.5 {\displaystyle \gamma =0.5} and β = 0 {\displaystyle \beta =0}. The average constant acceleration scheme (middle point rule) is obtained by setting γ = 0.5 {\displaystyle \gamma =0.5} and β = 0.25 {\displaystyle \beta =0.25}. == Stability Analysis == A time-integration scheme is said to be stable if there exists an integration time-step Δ t 0 > 0 {\displaystyle \Delta t_{0}>0} so that for any Δ t ∈ ( 0 , Δ t 0 ] {\displaystyle \Delta t\in (0,\Delta t_{0}]} , a finite variation of the state vector q n {\displaystyle q_{n}} at time t n {\displaystyle t_{n}} induces only a non-increasing variation of the state-vector q n + 1 {\displaystyle q_{n+1}} calculated at a subsequent time t n + 1 {\displaystyle t_{n+1}} .
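The discretized equations above can be turned directly into a time-stepping loop. A minimal sketch for an undamped linear single-degree-of-freedom system m ü + k u = 0 (all parameter values are illustrative; the implicit solve for the new acceleration is rearranged by hand using f^int = k u and C = 0):

```python
import math

def newmark_sdof(m, k, u0, v0, dt, n_steps, gamma=0.5, beta=0.25):
    """Newmark-beta stepping for the undamped linear SDOF system
    m*u'' + k*u = 0 (default: average constant acceleration)."""
    u, v = u0, v0
    a = -k * u / m  # initial acceleration from the equation of motion
    for _ in range(n_steps):
        # Implicit solve of m*a1 + k*u1 = 0 combined with the Newmark
        # displacement update, rearranged for the new acceleration a1
        rhs = -k * (u + dt * v + dt**2 * (0.5 - beta) * a)
        a1 = rhs / (m + k * beta * dt**2)
        u = u + dt * v + dt**2 * ((0.5 - beta) * a + beta * a1)
        v = v + dt * ((1 - gamma) * a + gamma * a1)
        a = a1
    return u, v

# Free vibration with omega = sqrt(k/m) = 1: exact solution u(t) = cos(t)
u, v = newmark_sdof(m=1.0, k=1.0, u0=1.0, v0=0.0, dt=0.01, n_steps=1000)
print(u - math.cos(10.0))  # small discretization error
```

With γ = 1/2 the scheme is second-order accurate, so the numerical displacement at t = 10 tracks cos(10) closely for this step size.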
Assume the time-integration scheme is q n + 1 = A ( Δ t ) q n + g n + 1 ( Δ t ) {\displaystyle q_{n+1}=A(\Delta t)q_{n}+g_{n+1}(\Delta t)} The linear stability is equivalent to ρ ( A ( Δ t ) ) ≤ 1 {\displaystyle \rho (A(\Delta t))\leq 1} , here ρ ( A ( Δ t ) ) {\displaystyle \rho (A(\Delta t))} is the spectral radius of the update matrix A ( Δ t ) {\displaystyle A(\Delta t)} . For the linear structural equation M u ¨ + C u ˙ + K u = f ext {\displaystyle M{\ddot {u}}+C{\dot {u}}+Ku=f^{\textrm {ext}}\,} here K {\displaystyle K} is the stiffness matrix. Let q n = [ u ˙ n , u n ] {\displaystyle q_{n}=[{\dot {u}}_{n},u_{n}]} , the update matrix is A = H 1 − 1 H 0 {\displaystyle A=H_{1}^{-1}H_{0}} , and H 1 = [ M + γ Δ t C γ Δ t K β Δ t 2 C M + β Δ t 2 K ] H 0 = [ M − ( 1 − γ ) Δ t C − ( 1 − γ ) Δ t K − ( 1 2 − β ) Δ t 2 C + Δ t M M − ( 1 2 − β ) Δ t 2 K ] {\displaystyle {\begin{aligned}H_{1}={\begin{bmatrix}M+\gamma \Delta tC&\gamma \Delta tK\\\beta \Delta t^{2}C&M+\beta \Delta t^{2}K\end{bmatrix}}\qquad H_{0}={\begin{bmatrix}M-(1-\gamma )\Delta tC&-(1-\gamma )\Delta tK\\-({\frac {1}{2}}-\beta )\Delta t^{2}C+\Delta tM&M-({\frac {1}{2}}-\beta )\Delta t^{2}K\end{bmatrix}}\end{aligned}}} For undamped case ( C = 0 {\displaystyle C=0} ), the update matrix can be decoupled by introducing the eigenmodes u = e i ω i t x i {\displaystyle u=e^{i\omega _{i}t}x_{i}} of the structural system, which are solved by the generalized eigenvalue problem ω 2 M x = K x {\displaystyle \omega ^{2}Mx=Kx\,} For each eigenmode, the update matrix becomes H 1 = [ 1 γ Δ t ω i 2 0 1 + β Δ t 2 ω i 2 ] H 0 = [ 1 − ( 1 − γ ) Δ t ω i 2 Δ t 1 − ( 1 2 − β ) Δ t 2 ω i 2 ] {\displaystyle {\begin{aligned}H_{1}={\begin{bmatrix}1&\gamma \Delta t\omega _{i}^{2}\\0&1+\beta \Delta t^{2}\omega _{i}^{2}\end{bmatrix}}\qquad H_{0}={\begin{bmatrix}1&-(1-\gamma )\Delta t\omega _{i}^{2}\\\Delta t&1-({\frac {1}{2}}-\beta )\Delta t^{2}\omega _{i}^{2}\end{bmatrix}}\end{aligned}}} The characteristic equation of the update 
matrix is λ 2 − ( 2 − ( γ + 1 2 ) η i 2 ) λ + 1 − ( γ − 1 2 ) η i 2 = 0 η i 2 = ω i 2 Δ t 2 1 + β ω i 2 Δ t 2 {\displaystyle \lambda ^{2}-\left(2-(\gamma +{\frac {1}{2}})\eta _{i}^{2}\right)\lambda +1-(\gamma -{\frac {1}{2}})\eta _{i}^{2}=0\,\qquad \eta _{i}^{2}={\frac {\omega _{i}^{2}\Delta t^{2}}{1+\beta \omega _{i}^{2}\Delta t^{2}}}} As for stability: the explicit central difference scheme ( γ = 0.5 {\displaystyle \gamma =0.5} and β = 0 {\displaystyle \beta =0} ) is stable when ω Δ t ≤ 2 {\displaystyle \omega \Delta t\leq 2} , while the average constant acceleration scheme (middle point rule, γ = 0.5 {\displaystyle \gamma =0.5} and β = 0.25 {\displaystyle \beta =0.25} ) is unconditionally stable. == References ==
Wikipedia/Newmark-beta_method
In mathematics, variation of parameters, also known as variation of constants, is a general method to solve inhomogeneous linear ordinary differential equations. For first-order inhomogeneous linear differential equations it is usually possible to find solutions via integrating factors or undetermined coefficients with considerably less effort, although those methods leverage heuristics that involve guessing and do not work for all inhomogeneous linear differential equations. Variation of parameters extends to linear partial differential equations as well, specifically to inhomogeneous problems for linear evolution equations like the heat equation, wave equation, and vibrating plate equation. In this setting, the method is more often known as Duhamel's principle, named after Jean-Marie Duhamel (1797–1872) who first applied the method to solve the inhomogeneous heat equation. Sometimes variation of parameters itself is called Duhamel's principle and vice versa. == History == The method of variation of parameters was first sketched by the Swiss mathematician Leonhard Euler (1707–1783), and later completed by the Italian-French mathematician Joseph-Louis Lagrange (1736–1813). A forerunner of the method of variation of a celestial body's orbital elements appeared in Euler's work in 1748, while he was studying the mutual perturbations of Jupiter and Saturn. In his 1749 study of the motions of the earth, Euler obtained differential equations for the orbital elements. In 1753, he applied the method to his study of the motions of the moon. Lagrange first used the method in 1766. Between 1778 and 1783, he further developed the method in two series of memoirs: one on variations in the motions of the planets and another on determining the orbit of a comet from three observations. During 1808–1810, Lagrange gave the method of variation of parameters its final form in a third series of papers. 
== Description of method == Given an ordinary non-homogeneous linear differential equation of order n y ( n ) ( x ) + ∑ i = 0 n − 1 a i ( x ) y ( i ) ( x ) = b ( x ) (i) {\displaystyle y^{(n)}(x)+\sum _{i=0}^{n-1}a_{i}(x)y^{(i)}(x)=b(x)\qquad {\text{(i)}}} Let y 1 ( x ) , … , y n ( x ) {\displaystyle y_{1}(x),\ldots ,y_{n}(x)} be a basis of the vector space of solutions of the corresponding homogeneous equation y ( n ) ( x ) + ∑ i = 0 n − 1 a i ( x ) y ( i ) ( x ) = 0 (ii) {\displaystyle y^{(n)}(x)+\sum _{i=0}^{n-1}a_{i}(x)y^{(i)}(x)=0\qquad {\text{(ii)}}} Then a particular solution to the non-homogeneous equation is given by y p ( x ) = ∑ i = 1 n c i ( x ) y i ( x ) (iii) {\displaystyle y_{p}(x)=\sum _{i=1}^{n}c_{i}(x)y_{i}(x)\qquad {\text{(iii)}}} where the c i ( x ) {\displaystyle c_{i}(x)} are differentiable functions which are assumed to satisfy the conditions ∑ i = 1 n c i ′ ( x ) y i ( j ) ( x ) = 0 , j = 0 , … , n − 2 (iv) {\displaystyle \sum _{i=1}^{n}c_{i}'(x)y_{i}^{(j)}(x)=0,\quad j=0,\ldots ,n-2\qquad {\text{(iv)}}} Starting with (iii), repeated differentiation combined with repeated use of (iv) gives y p ( j ) ( x ) = ∑ i = 1 n c i ( x ) y i ( j ) ( x ) , j = 0 , … , n − 1 (v) {\displaystyle y_{p}^{(j)}(x)=\sum _{i=1}^{n}c_{i}(x)y_{i}^{(j)}(x),\quad j=0,\ldots ,n-1\qquad {\text{(v)}}} One last differentiation gives y p ( n ) ( x ) = ∑ i = 1 n c i ′ ( x ) y i ( n − 1 ) ( x ) + ∑ i = 1 n c i ( x ) y i ( n ) ( x ) (vi) {\displaystyle y_{p}^{(n)}(x)=\sum _{i=1}^{n}c_{i}'(x)y_{i}^{(n-1)}(x)+\sum _{i=1}^{n}c_{i}(x)y_{i}^{(n)}(x)\qquad {\text{(vi)}}} By substituting (iii) into (i) and applying (v) and (vi) it follows that ∑ i = 1 n c i ′ ( x ) y i ( n − 1 ) ( x ) = b ( x ) (vii) {\displaystyle \sum _{i=1}^{n}c_{i}'(x)y_{i}^{(n-1)}(x)=b(x)\qquad {\text{(vii)}}} The linear system (iv and vii) of n equations can then be solved using Cramer's rule yielding c i ′ ( x ) = W i ( x ) W ( x ) , i = 1 , … , n {\displaystyle c_{i}'(x)={\frac {W_{i}(x)}{W(x)}},\,\quad i=1,\ldots ,n} where W ( x ) {\displaystyle W(x)} is the Wronskian determinant of the basis y 1 ( x ) , … , y n ( x ) {\displaystyle y_{1}(x),\ldots ,y_{n}(x)} and W i ( x ) {\displaystyle W_{i}(x)} is the Wronskian determinant of the basis with the i-th column replaced by ( 0 , 0 , … , b ( x ) ) . {\displaystyle (0,0,\ldots ,b(x)).} The particular solution to the non-homogeneous equation can then be written as ∑ i = 1 n y i ( x ) ∫ W i ( x ) W ( x ) d x . {\displaystyle \sum _{i=1}^{n}y_{i}(x)\,\int {\frac {W_{i}(x)}{W(x)}}\,\mathrm {d} x.} == Intuitive explanation == Consider the equation of the forced dispersionless spring, in suitable units: x ″ ( t ) + x ( t ) = F ( t ) . {\displaystyle x''(t)+x(t)=F(t).} Here x is the displacement of the spring from the equilibrium x = 0, and F(t) is an external applied force that depends on time. When the external force is zero, this is the homogeneous equation (whose solutions are linear combinations of sines and cosines, corresponding to the spring oscillating with constant total energy). We can construct the solution physically, as follows.
Between times t = s {\displaystyle t=s} and t = s + d s {\displaystyle t=s+ds} , the momentum corresponding to the solution has a net change F ( s ) d s {\displaystyle F(s)\,ds} (see: Impulse (physics)). A solution to the inhomogeneous equation, at the present time t > 0, is obtained by linearly superposing the solutions obtained in this manner, for s going between 0 and t. The homogeneous initial-value problem, representing a small impulse F ( s ) d s {\displaystyle F(s)\,ds} being added to the solution at time t = s {\displaystyle t=s} , is x ″ ( t ) + x ( t ) = 0 , x ( s ) = 0 , x ′ ( s ) = F ( s ) d s . {\displaystyle x''(t)+x(t)=0,\quad x(s)=0,\ x'(s)=F(s)\,ds.} The unique solution to this problem is easily seen to be x ( t ) = F ( s ) sin ⁡ ( t − s ) d s {\displaystyle x(t)=F(s)\sin(t-s)\,ds} . The linear superposition of all of these solutions is given by the integral: x ( t ) = ∫ 0 t F ( s ) sin ⁡ ( t − s ) d s . {\displaystyle x(t)=\int _{0}^{t}F(s)\sin(t-s)\,ds.} To verify that this satisfies the required equation: x ′ ( t ) = ∫ 0 t F ( s ) cos ⁡ ( t − s ) d s {\displaystyle x'(t)=\int _{0}^{t}F(s)\cos(t-s)\,ds} x ″ ( t ) = F ( t ) − ∫ 0 t F ( s ) sin ⁡ ( t − s ) d s = F ( t ) − x ( t ) , {\displaystyle x''(t)=F(t)-\int _{0}^{t}F(s)\sin(t-s)\,ds=F(t)-x(t),} as required (see: Leibniz integral rule). The general method of variation of parameters allows for solving an inhomogeneous linear equation L x ( t ) = F ( t ) {\displaystyle Lx(t)=F(t)} by means of considering the second-order linear differential operator L to be the net force, thus the total impulse imparted to a solution between time s and s+ds is F(s)ds. Denote by x s {\displaystyle x_{s}} the solution of the homogeneous initial value problem L x ( t ) = 0 , x ( s ) = 0 , x ′ ( s ) = F ( s ) d s . 
{\displaystyle Lx(t)=0,\quad x(s)=0,\ x'(s)=F(s)\,ds.} Then a particular solution of the inhomogeneous equation is x ( t ) = ∫ 0 t x s ( t ) d s , {\displaystyle x(t)=\int _{0}^{t}x_{s}(t)\,ds,} the result of linearly superposing the infinitesimal homogeneous solutions. There are generalizations to higher order linear differential operators. In practice, variation of parameters usually involves the fundamental solution of the homogeneous problem, the infinitesimal solutions x s {\displaystyle x_{s}} then being given in terms of explicit linear combinations of linearly independent fundamental solutions. In the case of the forced dispersionless spring, the kernel sin ⁡ ( t − s ) = sin ⁡ t cos ⁡ s − sin ⁡ s cos ⁡ t {\displaystyle \sin(t-s)=\sin t\cos s-\sin s\cos t} is the associated decomposition into fundamental solutions. == Examples == === First-order equation === y ′ + p ( x ) y = q ( x ) {\displaystyle y'+p(x)y=q(x)} The complementary solution to our original (inhomogeneous) equation is the general solution of the corresponding homogeneous equation (written below): y ′ + p ( x ) y = 0 {\displaystyle y'+p(x)y=0} This homogeneous differential equation can be solved by different methods, for example separation of variables: d d x y + p ( x ) y = 0 {\displaystyle {\frac {d}{dx}}y+p(x)y=0} d y d x = − p ( x ) y {\displaystyle {\frac {dy}{dx}}=-p(x)y} d y y = − p ( x ) d x , {\displaystyle {dy \over y}=-{p(x)\,dx},} ∫ 1 y d y = − ∫ p ( x ) d x {\displaystyle \int {\frac {1}{y}}\,dy=-\int p(x)\,dx} ln ⁡ | y | = − ∫ p ( x ) d x + C {\displaystyle \ln |y|=-\int p(x)\,dx+C} y = ± e − ∫ p ( x ) d x + C = C 0 e − ∫ p ( x ) d x {\displaystyle y=\pm e^{-\int p(x)\,dx+C}=C_{0}e^{-\int p(x)\,dx}} The complementary solution to our original equation is therefore: y c = C 0 e − ∫ p ( x ) d x {\displaystyle y_{c}=C_{0}e^{-\int p(x)\,dx}} Now we return to solving the non-homogeneous equation: y ′ + p ( x ) y = q ( x ) {\displaystyle y'+p(x)y=q(x)} Using the method variation of 
parameters, the particular solution is formed by multiplying the complementary solution by an unknown function C(x): y p = C ( x ) e − ∫ p ( x ) d x {\displaystyle y_{p}=C(x)e^{-\int p(x)\,dx}} By substituting the particular solution into the non-homogeneous equation, we can find C(x): C ′ ( x ) e − ∫ p ( x ) d x − C ( x ) p ( x ) e − ∫ p ( x ) d x + p ( x ) C ( x ) e − ∫ p ( x ) d x = q ( x ) {\displaystyle C'(x)e^{-\int p(x)\,dx}-C(x)p(x)e^{-\int p(x)\,dx}+p(x)C(x)e^{-\int p(x)\,dx}=q(x)} C ′ ( x ) e − ∫ p ( x ) d x = q ( x ) {\displaystyle C'(x)e^{-\int p(x)\,dx}=q(x)} C ′ ( x ) = q ( x ) e ∫ p ( x ) d x {\displaystyle C'(x)=q(x)e^{\int p(x)\,dx}} C ( x ) = ∫ q ( x ) e ∫ p ( x ) d x d x + C 1 {\displaystyle C(x)=\int q(x)e^{\int p(x)\,dx}\,dx+C_{1}} We only need a single particular solution, so we arbitrarily select C 1 = 0 {\displaystyle C_{1}=0} for simplicity. Therefore the particular solution is: y p = e − ∫ p ( x ) d x ∫ q ( x ) e ∫ p ( x ) d x d x {\displaystyle y_{p}=e^{-\int p(x)\,dx}\int q(x)e^{\int p(x)\,dx}\,dx} The final solution of the differential equation is: y = y c + y p = C 0 e − ∫ p ( x ) d x + e − ∫ p ( x ) d x ∫ q ( x ) e ∫ p ( x ) d x d x {\displaystyle {\begin{aligned}y&=y_{c}+y_{p}\\&=C_{0}e^{-\int p(x)\,dx}+e^{-\int p(x)\,dx}\int q(x)e^{\int p(x)\,dx}\,dx\end{aligned}}} This recreates the method of integrating factors. === Specific second-order equation === Let us solve y ″ + 4 y ′ + 4 y = cosh ⁡ x {\displaystyle y''+4y'+4y=\cosh x} We want to find the general solution to the differential equation, that is, we want to find solutions to the homogeneous differential equation y ″ + 4 y ′ + 4 y = 0. 
{\displaystyle y''+4y'+4y=0.} The characteristic equation is: λ 2 + 4 λ + 4 = ( λ + 2 ) 2 = 0 {\displaystyle \lambda ^{2}+4\lambda +4=(\lambda +2)^{2}=0} Since λ = − 2 {\displaystyle \lambda =-2} is a repeated root, we have to introduce a factor of x for one solution to ensure linear independence: u 1 = e − 2 x {\displaystyle u_{1}=e^{-2x}} and u 2 = x e − 2 x {\displaystyle u_{2}=xe^{-2x}} . The Wronskian of these two functions is W = | e − 2 x x e − 2 x − 2 e − 2 x − e − 2 x ( 2 x − 1 ) | = − e − 2 x e − 2 x ( 2 x − 1 ) + 2 x e − 2 x e − 2 x = e − 4 x . {\displaystyle W={\begin{vmatrix}e^{-2x}&xe^{-2x}\\-2e^{-2x}&-e^{-2x}(2x-1)\\\end{vmatrix}}=-e^{-2x}e^{-2x}(2x-1)+2xe^{-2x}e^{-2x}=e^{-4x}.} Because the Wronskian is non-zero, the two functions are linearly independent, so this is in fact the general solution for the homogeneous differential equation (and not a mere subset of it). We seek functions A(x) and B(x) so A(x)u1 + B(x)u2 is a particular solution of the non-homogeneous equation. We need only calculate the integrals A ( x ) = − ∫ 1 W u 2 ( x ) b ( x ) d x , B ( x ) = ∫ 1 W u 1 ( x ) b ( x ) d x {\displaystyle A(x)=-\int {1 \over W}u_{2}(x)b(x)\,\mathrm {d} x,\;B(x)=\int {1 \over W}u_{1}(x)b(x)\,\mathrm {d} x} Recall that for this example b ( x ) = cosh ⁡ x {\displaystyle b(x)=\cosh x} That is, A ( x ) = − ∫ 1 e − 4 x x e − 2 x cosh ⁡ x d x = − ∫ x e 2 x cosh ⁡ x d x = − 1 18 e x ( 9 ( x − 1 ) + e 2 x ( 3 x − 1 ) ) + C 1 {\displaystyle A(x)=-\int {1 \over e^{-4x}}xe^{-2x}\cosh x\,\mathrm {d} x=-\int xe^{2x}\cosh x\,\mathrm {d} x=-{1 \over 18}e^{x}\left(9(x-1)+e^{2x}(3x-1)\right)+C_{1}} B ( x ) = ∫ 1 e − 4 x e − 2 x cosh ⁡ x d x = ∫ e 2 x cosh ⁡ x d x = 1 6 e x ( 3 + e 2 x ) + C 2 {\displaystyle B(x)=\int {1 \over e^{-4x}}e^{-2x}\cosh x\,\mathrm {d} x=\int e^{2x}\cosh x\,\mathrm {d} x={1 \over 6}e^{x}\left(3+e^{2x}\right)+C_{2}} where C 1 {\displaystyle C_{1}} and C 2 {\displaystyle C_{2}} are constants of integration. 
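These results can be checked symbolically. A short sketch (SymPy assumed; the integration constants C1 and C2 are taken as zero) that recomputes A(x) and B(x) and verifies the resulting particular solution:

```python
import sympy as sp

x = sp.symbols("x")

# Homogeneous solutions of y'' + 4y' + 4y = 0 and the forcing term
u1 = sp.exp(-2 * x)
u2 = x * sp.exp(-2 * x)
b = sp.cosh(x)

# Wronskian of the homogeneous solutions; should equal exp(-4x)
W = sp.simplify(u1 * u2.diff(x) - u2 * u1.diff(x))
assert sp.simplify(W - sp.exp(-4 * x)) == 0

# Variation-of-parameters integrals (constants of integration set to zero)
A = sp.integrate(-u2 * b / W, x)
B = sp.integrate(u1 * b / W, x)

# The particular solution must satisfy the full non-homogeneous equation
yp = A * u1 + B * u2
residual = yp.diff(x, 2) + 4 * yp.diff(x) + 4 * yp - b
assert sp.simplify(residual) == 0
```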
=== General second-order equation === We have a differential equation of the form u ″ + p ( x ) u ′ + q ( x ) u = f ( x ) {\displaystyle u''+p(x)u'+q(x)u=f(x)} and we define the linear operator L = D 2 + p ( x ) D + q ( x ) {\displaystyle L=D^{2}+p(x)D+q(x)} where D represents the differential operator. We therefore have to solve the equation L u ( x ) = f ( x ) {\displaystyle Lu(x)=f(x)} for u ( x ) {\displaystyle u(x)} , where L {\displaystyle L} and f ( x ) {\displaystyle f(x)} are known. We must solve first the corresponding homogeneous equation: u ″ + p ( x ) u ′ + q ( x ) u = 0 {\displaystyle u''+p(x)u'+q(x)u=0} by the technique of our choice. Once we've obtained two linearly independent solutions to this homogeneous differential equation (because this ODE is second-order) — call them u1 and u2 — we can proceed with variation of parameters. Now, we seek the general solution to the differential equation u G ( x ) {\displaystyle u_{G}(x)} which we assume to be of the form u G ( x ) = A ( x ) u 1 ( x ) + B ( x ) u 2 ( x ) . {\displaystyle u_{G}(x)=A(x)u_{1}(x)+B(x)u_{2}(x).} Here, A ( x ) {\displaystyle A(x)} and B ( x ) {\displaystyle B(x)} are unknown and u 1 ( x ) {\displaystyle u_{1}(x)} and u 2 ( x ) {\displaystyle u_{2}(x)} are the solutions to the homogeneous equation. (Observe that if A ( x ) {\displaystyle A(x)} and B ( x ) {\displaystyle B(x)} are constants, then L u G ( x ) = 0 {\displaystyle Lu_{G}(x)=0} .) Since the above is only one equation and we have two unknown functions, it is reasonable to impose a second condition. We choose the following: A ′ ( x ) u 1 ( x ) + B ′ ( x ) u 2 ( x ) = 0. 
{\displaystyle A'(x)u_{1}(x)+B'(x)u_{2}(x)=0.} Now, u G ′ ( x ) = ( A ( x ) u 1 ( x ) + B ( x ) u 2 ( x ) ) ′ = ( A ( x ) u 1 ( x ) ) ′ + ( B ( x ) u 2 ( x ) ) ′ = A ′ ( x ) u 1 ( x ) + A ( x ) u 1 ′ ( x ) + B ′ ( x ) u 2 ( x ) + B ( x ) u 2 ′ ( x ) = A ′ ( x ) u 1 ( x ) + B ′ ( x ) u 2 ( x ) + A ( x ) u 1 ′ ( x ) + B ( x ) u 2 ′ ( x ) = A ( x ) u 1 ′ ( x ) + B ( x ) u 2 ′ ( x ) {\displaystyle {\begin{aligned}u_{G}'(x)&=\left(A(x)u_{1}(x)+B(x)u_{2}(x)\right)'\\&=\left(A(x)u_{1}(x)\right)'+\left(B(x)u_{2}(x)\right)'\\&=A'(x)u_{1}(x)+A(x)u_{1}'(x)+B'(x)u_{2}(x)+B(x)u_{2}'(x)\\&=A'(x)u_{1}(x)+B'(x)u_{2}(x)+A(x)u_{1}'(x)+B(x)u_{2}'(x)\\&=A(x)u_{1}'(x)+B(x)u_{2}'(x)\end{aligned}}} Differentiating again (omitting intermediary steps) u G ″ ( x ) = A ( x ) u 1 ″ ( x ) + B ( x ) u 2 ″ ( x ) + A ′ ( x ) u 1 ′ ( x ) + B ′ ( x ) u 2 ′ ( x ) . {\displaystyle u_{G}''(x)=A(x)u_{1}''(x)+B(x)u_{2}''(x)+A'(x)u_{1}'(x)+B'(x)u_{2}'(x).} Now we can write the action of L upon uG as L u G = A ( x ) L u 1 ( x ) + B ( x ) L u 2 ( x ) + A ′ ( x ) u 1 ′ ( x ) + B ′ ( x ) u 2 ′ ( x ) . {\displaystyle Lu_{G}=A(x)Lu_{1}(x)+B(x)Lu_{2}(x)+A'(x)u_{1}'(x)+B'(x)u_{2}'(x).} Since u1 and u2 are solutions, then L u G = A ′ ( x ) u 1 ′ ( x ) + B ′ ( x ) u 2 ′ ( x ) . {\displaystyle Lu_{G}=A'(x)u_{1}'(x)+B'(x)u_{2}'(x).} We have the system of equations [ u 1 ( x ) u 2 ( x ) u 1 ′ ( x ) u 2 ′ ( x ) ] [ A ′ ( x ) B ′ ( x ) ] = [ 0 f ] . {\displaystyle {\begin{bmatrix}u_{1}(x)&u_{2}(x)\\u_{1}'(x)&u_{2}'(x)\end{bmatrix}}{\begin{bmatrix}A'(x)\\B'(x)\end{bmatrix}}={\begin{bmatrix}0\\f\end{bmatrix}}.} Expanding, [ A ′ ( x ) u 1 ( x ) + B ′ ( x ) u 2 ( x ) A ′ ( x ) u 1 ′ ( x ) + B ′ ( x ) u 2 ′ ( x ) ] = [ 0 f ] . {\displaystyle {\begin{bmatrix}A'(x)u_{1}(x)+B'(x)u_{2}(x)\\A'(x)u_{1}'(x)+B'(x)u_{2}'(x)\end{bmatrix}}={\begin{bmatrix}0\\f\end{bmatrix}}.} So the above system determines precisely the conditions A ′ ( x ) u 1 ( x ) + B ′ ( x ) u 2 ( x ) = 0. 
{\displaystyle A'(x)u_{1}(x)+B'(x)u_{2}(x)=0.} A ′ ( x ) u 1 ′ ( x ) + B ′ ( x ) u 2 ′ ( x ) = L u G = f . {\displaystyle A'(x)u_{1}'(x)+B'(x)u_{2}'(x)=Lu_{G}=f.} We seek A(x) and B(x) from these conditions, so, given [ u 1 ( x ) u 2 ( x ) u 1 ′ ( x ) u 2 ′ ( x ) ] [ A ′ ( x ) B ′ ( x ) ] = [ 0 f ] {\displaystyle {\begin{bmatrix}u_{1}(x)&u_{2}(x)\\u_{1}'(x)&u_{2}'(x)\end{bmatrix}}{\begin{bmatrix}A'(x)\\B'(x)\end{bmatrix}}={\begin{bmatrix}0\\f\end{bmatrix}}} we can solve for (A′(x), B′(x))T, so [ A ′ ( x ) B ′ ( x ) ] = [ u 1 ( x ) u 2 ( x ) u 1 ′ ( x ) u 2 ′ ( x ) ] − 1 [ 0 f ] = 1 W [ u 2 ′ ( x ) − u 2 ( x ) − u 1 ′ ( x ) u 1 ( x ) ] [ 0 f ] , {\displaystyle {\begin{bmatrix}A'(x)\\B'(x)\end{bmatrix}}={\begin{bmatrix}u_{1}(x)&u_{2}(x)\\u_{1}'(x)&u_{2}'(x)\end{bmatrix}}^{-1}{\begin{bmatrix}0\\f\end{bmatrix}}={\frac {1}{W}}{\begin{bmatrix}u_{2}'(x)&-u_{2}(x)\\-u_{1}'(x)&u_{1}(x)\end{bmatrix}}{\begin{bmatrix}0\\f\end{bmatrix}},} where W denotes the Wronskian of u1 and u2. (We know that W is nonzero, from the assumption that u1 and u2 are linearly independent.) So, A ′ ( x ) = − 1 W u 2 ( x ) f ( x ) , B ′ ( x ) = 1 W u 1 ( x ) f ( x ) A ( x ) = − ∫ 1 W u 2 ( x ) f ( x ) d x , B ( x ) = ∫ 1 W u 1 ( x ) f ( x ) d x {\displaystyle {\begin{aligned}A'(x)&=-{1 \over W}u_{2}(x)f(x),&B'(x)&={1 \over W}u_{1}(x)f(x)\\A(x)&=-\int {1 \over W}u_{2}(x)f(x)\,\mathrm {d} x,&B(x)&=\int {1 \over W}u_{1}(x)f(x)\,\mathrm {d} x\end{aligned}}} While homogeneous equations are relatively easy to solve, this method allows the calculation of the coefficients of the general solution of the inhomogeneous equation, and thus the complete general solution of the inhomogeneous equation can be determined. Note that A ( x ) {\displaystyle A(x)} and B ( x ) {\displaystyle B(x)} are each determined only up to an arbitrary additive constant (the constant of integration). 
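The formulas above can be exercised on a concrete equation. The sketch below uses SymPy on the classic illustration u'' + u = sec x (this particular example is an assumption for demonstration, not taken from the text): it forms the Wronskian of u1 = cos x and u2 = sin x, integrates A' = -u2 f/W and B' = u1 f/W as derived above, and checks that u_p = A u1 + B u2 solves the inhomogeneous equation.

```python
import sympy as sp

x = sp.symbols('x')

# Illustrative inhomogeneous equation: u'' + u = sec(x)
f = sp.sec(x)
u1, u2 = sp.cos(x), sp.sin(x)        # linearly independent homogeneous solutions

# Wronskian W = u1 u2' - u2 u1'
W = sp.simplify(u1*sp.diff(u2, x) - u2*sp.diff(u1, x))   # equals 1 here

# A'(x) = -u2 f / W and B'(x) = u1 f / W, integrated as in the text
A = sp.integrate(sp.simplify(-u2*f/W), x)   # -> log(cos(x)), up to a constant
B = sp.integrate(sp.simplify(u1*f/W), x)    # -> x, up to a constant

u_p = A*u1 + B*u2                    # particular solution u_p = A u1 + B u2

# The residual u_p'' + u_p - f should vanish; spot-check it numerically
residual = sp.diff(u_p, x, 2) + u_p - f
assert abs(sp.N(residual.subs(x, sp.Rational(3, 10)))) < 1e-12
```

Here `integrate` omits the constants of integration; as the text notes, A(x) and B(x) are determined only up to additive constants, which merely add a homogeneous solution to u_p.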
Adding a constant to A ( x ) {\displaystyle A(x)} or B ( x ) {\displaystyle B(x)} does not change the value of L u G ( x ) {\displaystyle Lu_{G}(x)} because the extra term is just a linear combination of u1 and u2, which is a solution of L {\displaystyle L} by definition. == See also == Alekseev–Gröbner formula, a generalization of the variation of constants formula. Reduction of order == Notes == == References == Coddington, Earl A.; Levinson, Norman (1955). Theory of Ordinary Differential Equations. McGraw-Hill. Boyce, William E.; DiPrima, Richard C. (2005). Elementary Differential Equations and Boundary Value Problems (8th ed.). Wiley. pp. 186–192, 237–241. Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems. American Mathematical Society. == External links == Online Notes / Proof by Paul Dawkins, Lamar University. PlanetMath page. A NOTE ON LAGRANGE’S METHOD OF VARIATION OF PARAMETERS
Wikipedia/Method_of_variation_of_parameters
In mathematics, Bäcklund transforms or Bäcklund transformations (named after the Swedish mathematician Albert Victor Bäcklund) relate partial differential equations and their solutions. They are an important tool in soliton theory and integrable systems. A Bäcklund transform is typically a system of first order partial differential equations relating two functions, and often depending on an additional parameter. It implies that the two functions separately satisfy partial differential equations, and each of the two functions is then said to be a Bäcklund transformation of the other. A Bäcklund transform which relates solutions of the same equation is called an invariant Bäcklund transform or auto-Bäcklund transform. If such a transform can be found, much can be deduced about the solutions of the equation especially if the Bäcklund transform contains a parameter. However, no systematic way of finding Bäcklund transforms is known. == History == Bäcklund transforms have their origins in differential geometry: the first nontrivial example is the transformation of pseudospherical surfaces introduced by L. Bianchi and A.V. Bäcklund in the 1880s. This is a geometrical construction of a new pseudospherical surface from an initial such surface using a solution of a linear differential equation. Pseudospherical surfaces can be described as solutions of the sine-Gordon equation, and hence the Bäcklund transformation of surfaces can be viewed as a transformation of solutions of the sine-Gordon equation. == The Cauchy–Riemann equations == The prototypical example of a Bäcklund transform is the Cauchy–Riemann system u x = v y , u y = − v x , {\displaystyle u_{x}=v_{y},\quad u_{y}=-v_{x},\,} which relates the real and imaginary parts u {\displaystyle u} and v {\displaystyle v} of a holomorphic function. This first order system of partial differential equations has the following properties. 
If u {\displaystyle u} and v {\displaystyle v} are solutions of the Cauchy–Riemann equations, then u {\displaystyle u} is a solution of the Laplace equation u x x + u y y = 0 {\displaystyle u_{xx}+u_{yy}=0} (i.e., a harmonic function), and so is v {\displaystyle v} . This follows straightforwardly by differentiating the equations with respect to x {\displaystyle x} and y {\displaystyle y} and using the fact that u x y = u y x , v x y = v y x . {\displaystyle u_{xy}=u_{yx},\quad v_{xy}=v_{yx}.\,} Conversely if u {\displaystyle u} is a solution of Laplace's equation, then there exist functions v {\displaystyle v} which solve the Cauchy–Riemann equations together with u {\displaystyle u} . Thus, in this case, a Bäcklund transformation of a harmonic function is just a conjugate harmonic function. The above properties mean, more precisely, that Laplace's equation for u {\displaystyle u} and Laplace's equation for v {\displaystyle v} are the integrability conditions for solving the Cauchy–Riemann equations. These are the characteristic features of a Bäcklund transform. If we have a partial differential equation in u {\displaystyle u} , and a Bäcklund transform from u {\displaystyle u} to v {\displaystyle v} , we can deduce a partial differential equation satisfied by v {\displaystyle v} . This example is rather trivial, because all three equations (the equation for u {\displaystyle u} , the equation for v {\displaystyle v} and the Bäcklund transform relating them) are linear. Bäcklund transforms are most interesting when just one of the three equations is linear. == The sine-Gordon equation == Suppose that u is a solution of the sine-Gordon equation u x y = sin ⁡ u . 
{\displaystyle u_{xy}=\sin u.\,} Then the system v x = u x + 2 a sin ⁡ ( v + u 2 ) v y = − u y + 2 a sin ⁡ ( v − u 2 ) {\displaystyle {\begin{aligned}v_{x}&=u_{x}+2a\sin {\Bigl (}{\frac {v+u}{2}}{\Bigr )}\\v_{y}&=-u_{y}+{\frac {2}{a}}\sin {\Bigl (}{\frac {v-u}{2}}{\Bigr )}\end{aligned}}\,\!} where a is an arbitrary parameter, is solvable for a function v which will also satisfy the sine-Gordon equation. This is an example of an auto-Bäcklund transform. By using a matrix system, it is also possible to find a linear Bäcklund transform for solutions of sine-Gordon equation. == The Liouville equation == A Bäcklund transform can turn a non-linear partial differential equation into a simpler, linear, partial differential equation. For example, if u and v are related via the Bäcklund transform v x = u x + 2 a exp ⁡ ( u + v 2 ) v y = − u y − 1 a exp ⁡ ( u − v 2 ) {\displaystyle {\begin{aligned}v_{x}&=u_{x}+2a\exp {\Bigl (}{\frac {u+v}{2}}{\Bigr )}\\v_{y}&=-u_{y}-{\frac {1}{a}}\exp {\Bigl (}{\frac {u-v}{2}}{\Bigr )}\end{aligned}}\,\!} where a is an arbitrary parameter, and if u is a solution of the Liouville equation u x y = exp ⁡ u {\displaystyle u_{xy}=\exp u\,\!} then v is a solution of the much simpler equation, v x y = 0 {\displaystyle v_{xy}=0} , and vice versa. We can then solve the (non-linear) Liouville equation by working with a much simpler linear equation. == See also == Integrable system Korteweg–de Vries equation Darboux transformation == References == == External links == "Bäcklund transformation", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Weisstein, Eric W. "Bäcklund Transformation". MathWorld.
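The auto-Bäcklund transform above can be exercised starting from the trivial seed solution u = 0: the first transform equation reduces to v_x = 2a sin(v/2), which integrates to the well-known kink v = 4 arctan(exp(a x + y/a)) (a standard solution quoted here for illustration, not derived in the text). A SymPy sketch checking numerically that this v satisfies both transform equations and the sine-Gordon equation itself:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
a = sp.Rational(13, 10)                  # arbitrary nonzero parameter a

u = sp.Integer(0)                        # trivial seed solution of u_xy = sin(u)
v = 4*sp.atan(sp.exp(a*x + y/a))         # kink produced by the transform

# Residuals of the two Baecklund equations and of sine-Gordon for v
r1 = sp.diff(v, x) - (sp.diff(u, x) + 2*a*sp.sin((v + u)/2))
r2 = sp.diff(v, y) - (-sp.diff(u, y) + (2/a)*sp.sin((v - u)/2))
r3 = sp.diff(v, x, y) - sp.sin(v)

# numeric spot checks at a sample point; all three should vanish
pt = {x: sp.Rational(1, 3), y: sp.Rational(-1, 2)}
for r in (r1, r2, r3):
    assert abs(sp.N(r.subs(pt))) < 1e-12
```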
Wikipedia/Bäcklund_transform
In mathematics, a Riccati equation in the narrowest sense is any first-order ordinary differential equation that is quadratic in the unknown function. In other words, it is an equation of the form y ′ ( x ) = q 0 ( x ) + q 1 ( x ) y ( x ) + q 2 ( x ) y 2 ( x ) {\displaystyle y'(x)=q_{0}(x)+q_{1}(x)\,y(x)+q_{2}(x)\,y^{2}(x)} where q 0 ( x ) ≠ 0 {\displaystyle q_{0}(x)\neq 0} and q 2 ( x ) ≠ 0 {\displaystyle q_{2}(x)\neq 0} . If q 0 ( x ) = 0 {\displaystyle q_{0}(x)=0} the equation reduces to a Bernoulli equation, while if q 2 ( x ) = 0 {\displaystyle q_{2}(x)=0} the equation becomes a first order linear ordinary differential equation. The equation is named after Jacopo Riccati (1676–1754). More generally, the term Riccati equation is used to refer to matrix equations with an analogous quadratic term, which occur in both continuous-time and discrete-time linear-quadratic-Gaussian control. The steady-state (non-dynamic) version of these is referred to as the algebraic Riccati equation. == Conversion to a second order linear equation == The non-linear Riccati equation can always be converted to a second order linear ordinary differential equation (ODE): If y ′ = q 0 ( x ) + q 1 ( x ) y + q 2 ( x ) y 2 {\displaystyle y'=q_{0}(x)+q_{1}(x)y+q_{2}(x)y^{2}} then, wherever q2 is non-zero and differentiable, v = y q 2 {\displaystyle v=yq_{2}} satisfies a Riccati equation of the form v ′ = v 2 + R ( x ) v + S ( x ) , {\displaystyle v'=v^{2}+R(x)v+S(x),} where S = q 2 q 0 {\displaystyle S=q_{2}q_{0}} and R = q 1 + q 2 ′ q 2 , {\displaystyle R=q_{1}+{\tfrac {q_{2}'}{q_{2}}},} because v ′ = ( y q 2 ) ′ = y ′ q 2 + y q 2 ′ = ( q 0 + q 1 y + q 2 y 2 ) q 2 + v q 2 ′ q 2 = q 0 q 2 + ( q 1 + q 2 ′ q 2 ) v + v 2 {\displaystyle {\begin{aligned}v'&=(yq_{2})'\\[4pt]&=y'q_{2}+yq_{2}'\\&=(q_{0}+q_{1}y+q_{2}y^{2})q_{2}+v{\frac {q_{2}'}{q_{2}}}\\&=q_{0}q_{2}+\left(q_{1}+{\frac {q_{2}'}{q_{2}}}\right)v+v^{2}\end{aligned}}} Substituting v = − u ′ u , {\displaystyle v=-{\tfrac {u'}{u}},} it 
follows that u satisfies the linear second-order ODE u ″ − R ( x ) u ′ + S ( x ) u = 0 {\displaystyle u''-R(x)u'+S(x)u=0} since v ′ = − ( u ′ u ) ′ = − ( u ″ u ) + ( u ′ u ) 2 = − ( u ″ u ) + v 2 {\displaystyle {\begin{aligned}v'&=-\left({\frac {u'}{u}}\right)'\\[2pt]&=-\left({\frac {u''}{u}}\right)+\left({\frac {u'}{u}}\right)^{2}\\[2pt]&=-\left({\frac {u''}{u}}\right)+v^{2}\end{aligned}}} so that u ″ u = v 2 − v ′ = − S − R v = − S + R u ′ u {\displaystyle {\begin{aligned}{\frac {u''}{u}}&=v^{2}-v'\\&=-S-Rv\\&=-S+R{\frac {u'}{u}}\end{aligned}}} and hence u ″ − R u ′ + S u = 0. {\displaystyle u''-Ru'+Su=0.} Then substituting the two solutions of this linear second order equation into the transformation y = − u ′ q 2 u = − q 2 − 1 ( log ⁡ ( u ) ) ′ {\displaystyle y=-{\frac {u'}{q_{2}u}}=-q_{2}^{-1}{\bigl (}\log(u){\bigr )}'} suffices to have global knowledge of the general solution of the Riccati equation by the formula: y = − q 2 − 1 ( log ⁡ ( c 1 u 1 + c 2 u 2 ) ) ′ . {\displaystyle y=-q_{2}^{-1}{\bigl (}\log(c_{1}u_{1}+c_{2}u_{2}){\bigr )}'.} == Complex analysis == In complex analysis, the Riccati equation occurs as the first-order nonlinear ODE in the complex plane of the form d w d z = F ( w , z ) = P ( w , z ) Q ( w , z ) , {\displaystyle {\frac {dw}{dz}}=F(w,z)={\frac {P(w,z)}{Q(w,z)}},} where P {\displaystyle P} and Q {\displaystyle Q} are polynomials in w {\displaystyle w} and locally analytic functions of z ∈ C {\displaystyle z\in \mathbb {C} } , i.e., F {\displaystyle F} is a complex rational function. The only equation of this form that is of Painlevé type, is the Riccati equation d w ( z ) d z = A 0 ( z ) + A 1 ( z ) w + A 2 ( z ) w 2 , {\displaystyle {\frac {dw(z)}{dz}}=A_{0}(z)+A_{1}(z)w+A_{2}(z)w^{2},} where A i ( z ) {\displaystyle A_{i}(z)} are (possibly matrix) functions of z {\displaystyle z} . 
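The conversion above can be checked on a small concrete case (the choice q0 = 1, q1 = 0, q2 = 1 is an illustration, not from the text): for y' = 1 + y^2 we get R = q1 + q2'/q2 = 0 and S = q0 q2 = 1, so the linear ODE is u'' + u = 0 with general solution c1 cos x + c2 sin x, and y = -(log u)'/q2 should then satisfy the Riccati equation identically. A SymPy sketch:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
q0, q1, q2 = sp.Integer(1), sp.Integer(0), sp.Integer(1)   # y' = 1 + y^2

# Linear ODE u'' - R u' + S u = 0 with R = 0, S = 1, i.e. u'' + u = 0
u = c1*sp.cos(x) + c2*sp.sin(x)          # its general solution
assert sp.simplify(sp.diff(u, x, 2) + u) == 0

# Recover y via the formula y = -q2^(-1) (log u)'
yg = -sp.diff(sp.log(u), x)/q2

# Residual of the Riccati equation; should vanish identically
residual = sp.diff(yg, x) - (q0 + q1*yg + q2*yg**2)
pt = {x: sp.Rational(2, 5), c1: 1, c2: 3}
assert abs(sp.N(residual.subs(pt))) < 1e-12
```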
=== Application to the Schwarzian equation === An important application of the Riccati equation is to the 3rd order Schwarzian differential equation S ( w ) := ( w ″ w ′ ) ′ − 1 2 ( w ″ w ′ ) 2 = f {\displaystyle S(w):=\left({\frac {w''}{w'}}\right)'-{\frac {1}{2}}\left({\frac {w''}{w'}}\right)^{2}=f} which occurs in the theory of conformal mapping and univalent functions. In this case the ODEs are in the complex domain and differentiation is with respect to a complex variable. (The Schwarzian derivative S(w) has the remarkable property that it is invariant under Möbius transformations, i.e. S ( a w + b c w + d ) = S ( w ) {\displaystyle S{\bigl (}{\tfrac {aw+b}{cw+d}}{\bigr )}=S(w)} whenever a d − b c {\displaystyle ad-bc} is non-zero.) The function y = w ″ w ′ {\displaystyle y={\tfrac {w''}{w'}}} satisfies the Riccati equation y ′ = 1 2 y 2 + f . {\displaystyle y'={\frac {1}{2}}y^{2}+f.} By the above y = − 2 u ′ u {\displaystyle y=-2{\tfrac {u'}{u}}} where u is a solution of the linear ODE u ″ + 1 2 f u = 0. {\displaystyle u''+{\frac {1}{2}}fu=0.} Since w ″ w ′ = − 2 u ′ u , {\displaystyle {\tfrac {w''}{w'}}=-2{\tfrac {u'}{u}},} integration gives w ′ = C u 2 {\displaystyle w'={\tfrac {C}{u^{2}}}} for some constant C. On the other hand any other independent solution U of the linear ODE has constant non-zero Wronskian U ′ u − U u ′ {\displaystyle U'u-Uu'} which can be taken to be C after scaling. Thus w ′ = U ′ u − U u ′ u 2 = ( U u ) ′ {\displaystyle w'={\frac {U'u-Uu'}{u^{2}}}=\left({\frac {U}{u}}\right)'} so that the Schwarzian equation has solution w = U u . {\displaystyle w={\tfrac {U}{u}}.} == Obtaining solutions by quadrature == The correspondence between Riccati equations and second-order linear ODEs has other consequences. For example, if one solution of a 2nd order ODE is known, then it is known that another solution can be obtained by quadrature, i.e., a simple integration. The same holds true for the Riccati equation. 
In fact, if one particular solution y1 can be found, the general solution is obtained as y = y 1 + u {\displaystyle y=y_{1}+u} Substituting y 1 + u {\displaystyle y_{1}+u} in the Riccati equation yields y 1 ′ + u ′ = q 0 + q 1 ⋅ ( y 1 + u ) + q 2 ⋅ ( y 1 + u ) 2 , {\displaystyle y_{1}'+u'=q_{0}+q_{1}\cdot (y_{1}+u)+q_{2}\cdot (y_{1}+u)^{2},} and since y 1 ′ = q 0 + q 1 y 1 + q 2 y 1 2 , {\displaystyle y_{1}'=q_{0}+q_{1}\,y_{1}+q_{2}\,y_{1}^{2},} it follows that u ′ = q 1 u + 2 q 2 y 1 u + q 2 u 2 {\displaystyle u'=q_{1}\,u+2\,q_{2}\,y_{1}\,u+q_{2}\,u^{2}} or u ′ − ( q 1 + 2 q 2 y 1 ) u = q 2 u 2 , {\displaystyle u'-(q_{1}+2\,q_{2}\,y_{1})\,u=q_{2}\,u^{2},} which is a Bernoulli equation. The substitution that is needed to solve this Bernoulli equation is z = 1 u {\displaystyle z={\frac {1}{u}}} Substituting y = y 1 + 1 z {\displaystyle y=y_{1}+{\frac {1}{z}}} directly into the Riccati equation yields the linear equation z ′ + ( q 1 + 2 q 2 y 1 ) z = − q 2 {\displaystyle z'+(q_{1}+2\,q_{2}\,y_{1})\,z=-q_{2}} A set of solutions to the Riccati equation is then given by y = y 1 + 1 z {\displaystyle y=y_{1}+{\frac {1}{z}}} where z is the general solution to the aforementioned linear equation. == See also == Linear-quadratic regulator Algebraic Riccati equation Linear-quadratic-Gaussian control == References == == Further reading == Hille, Einar (1997) [1976], Ordinary Differential Equations in the Complex Domain, New York: Dover Publications, ISBN 0-486-69620-0 Nehari, Zeev (1975) [1952], Conformal Mapping, New York: Dover Publications, ISBN 0-486-61137-X Polyanin, Andrei D.; Zaitsev, Valentin F. (2003), Handbook of Exact Solutions for Ordinary Differential Equations (2nd ed.), Boca Raton, Fla.: Chapman & Hall/CRC, ISBN 1-58488-297-2 Zelikin, Mikhail I. (2000), Homogeneous Spaces and the Riccati Equation in the Calculus of Variations, Berlin: Springer-Verlag Reid, William T. 
(1972), Riccati Differential Equations, London: Academic Press == External links == "Riccati equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Riccati Equation at EqWorld: The World of Mathematical Equations. Riccati Differential Equation at Mathworld MATLAB function for solving continuous-time algebraic Riccati equation. SciPy has functions for solving the continuous algebraic Riccati equation and the discrete algebraic Riccati equation.
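The quadrature procedure of the last section can be sketched end-to-end with SymPy. The equation y' = 2/x^2 - y^2 and its particular solution y1 = 2/x are illustrative choices, not taken from the text: the substitution y = y1 + 1/z yields the linear equation z' - (4/x) z = 1, which `dsolve` handles, and the resulting general solution is verified against the original Riccati equation.

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Riccati y' = q0 + q1 y + q2 y^2 with q0 = 2/x^2, q1 = 0, q2 = -1
q0, q1, q2 = 2/x**2, sp.Integer(0), sp.Integer(-1)
y1 = 2/x                                 # a particular solution, found by inspection
assert sp.simplify(sp.diff(y1, x) - (q0 + q1*y1 + q2*y1**2)) == 0

# Linear equation from the text: z' + (q1 + 2 q2 y1) z = -q2, here z' - (4/x) z = 1
z = sp.Function('z')
zsol = sp.dsolve(sp.Eq(z(x).diff(x) + (q1 + 2*q2*y1)*z(x), -q2), z(x)).rhs

# General solution y = y1 + 1/z; its Riccati residual should simplify to zero
yg = y1 + 1/zsol
residual = sp.simplify(sp.diff(yg, x) - (q0 + q1*yg + q2*yg**2))
assert residual == 0
```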
Wikipedia/Riccati_equation
In mathematics, the method of Frobenius, named after Ferdinand Georg Frobenius, is a way to find an infinite series solution for a linear second-order ordinary differential equation of the form z 2 u ″ + p ( z ) z u ′ + q ( z ) u = 0 {\displaystyle z^{2}u''+p(z)zu'+q(z)u=0} with u ′ ≡ d u d z {\textstyle u'\equiv {\frac {du}{dz}}} and u ″ ≡ d 2 u d z 2 {\textstyle u''\equiv {\frac {d^{2}u}{dz^{2}}}} , in the vicinity of the regular singular point z = 0 {\displaystyle z=0} . One can divide by z 2 {\displaystyle z^{2}} to obtain a differential equation of the form u ″ + p ( z ) z u ′ + q ( z ) z 2 u = 0 {\displaystyle u''+{\frac {p(z)}{z}}u'+{\frac {q(z)}{z^{2}}}u=0} which will not be solvable with regular power series methods if either p(z)/z or q(z)/z2 is not analytic at z = 0. The Frobenius method enables one to create a power series solution to such a differential equation, provided that p(z) and q(z) are themselves analytic at 0 or, being analytic elsewhere, both their limits at 0 exist (and are finite). == History == Frobenius' contribution lay not so much in the possible forms of the series solutions involved (see below); these had all been established earlier by Lazarus Fuchs. The indicial polynomial (see below) and its role had also been established by Fuchs. Frobenius' first contribution to the theory was to show that, as regards a first, linearly independent solution, which has the form of an analytic power series multiplied by an arbitrary power r of the independent variable (see below), the coefficients of the generalized power series obey a recurrence relation, so that they can always be straightforwardly calculated. His second contribution was to show that, in cases in which the roots of the indicial equation differ by an integer, the general form of the second linearly independent solution (see below) can be obtained by a procedure based on differentiation with respect to the parameter r, mentioned above.
A large part of Frobenius' 1873 publication was devoted to proofs of convergence of all the series involved in the solutions, as well as establishing the radii of convergence of these series. == Explanation == The method of Frobenius is to seek a power series solution of the form u ( z ) = z r ∑ k = 0 ∞ A k z k , ( A 0 ≠ 0 ) {\displaystyle u(z)=z^{r}\sum _{k=0}^{\infty }A_{k}z^{k},\qquad (A_{0}\neq 0)} Differentiating: u ′ ( z ) = ∑ k = 0 ∞ ( k + r ) A k z k + r − 1 {\displaystyle u'(z)=\sum _{k=0}^{\infty }(k+r)A_{k}z^{k+r-1}} u ″ ( z ) = ∑ k = 0 ∞ ( k + r − 1 ) ( k + r ) A k z k + r − 2 {\displaystyle u''(z)=\sum _{k=0}^{\infty }(k+r-1)(k+r)A_{k}z^{k+r-2}} Substituting the above differentiation into our original ODE: z 2 ∑ k = 0 ∞ ( k + r − 1 ) ( k + r ) A k z k + r − 2 + z p ( z ) ∑ k = 0 ∞ ( k + r ) A k z k + r − 1 + q ( z ) ∑ k = 0 ∞ A k z k + r = ∑ k = 0 ∞ ( k + r − 1 ) ( k + r ) A k z k + r + p ( z ) ∑ k = 0 ∞ ( k + r ) A k z k + r + q ( z ) ∑ k = 0 ∞ A k z k + r = ∑ k = 0 ∞ [ ( k + r − 1 ) ( k + r ) A k z k + r + p ( z ) ( k + r ) A k z k + r + q ( z ) A k z k + r ] = ∑ k = 0 ∞ [ ( k + r − 1 ) ( k + r ) + p ( z ) ( k + r ) + q ( z ) ] A k z k + r = [ r ( r − 1 ) + p ( z ) r + q ( z ) ] A 0 z r + ∑ k = 1 ∞ [ ( k + r − 1 ) ( k + r ) + p ( z ) ( k + r ) + q ( z ) ] A k z k + r = 0 {\displaystyle {\begin{aligned}&z^{2}\sum _{k=0}^{\infty }(k+r-1)(k+r)A_{k}z^{k+r-2}+zp(z)\sum _{k=0}^{\infty }(k+r)A_{k}z^{k+r-1}+q(z)\sum _{k=0}^{\infty }A_{k}z^{k+r}\\={}&\sum _{k=0}^{\infty }(k+r-1)(k+r)A_{k}z^{k+r}+p(z)\sum _{k=0}^{\infty }(k+r)A_{k}z^{k+r}+q(z)\sum _{k=0}^{\infty }A_{k}z^{k+r}\\={}&\sum _{k=0}^{\infty }[(k+r-1)(k+r)A_{k}z^{k+r}+p(z)(k+r)A_{k}z^{k+r}+q(z)A_{k}z^{k+r}]\\={}&\sum _{k=0}^{\infty }\left[(k+r-1)(k+r)+p(z)(k+r)+q(z)\right]A_{k}z^{k+r}\\={}&\left[r(r-1)+p(z)r+q(z)\right]A_{0}z^{r}+\sum _{k=1}^{\infty }\left[(k+r-1)(k+r)+p(z)(k+r)+q(z)\right]A_{k}z^{k+r}=0\end{aligned}}} The expression r ( r − 1 ) + p ( 0 ) r + q ( 0 ) = I ( r ) {\displaystyle 
r\left(r-1\right)+p\left(0\right)r+q\left(0\right)=I(r)} is known as the indicial polynomial, which is quadratic in r. The general definition of the indicial polynomial is the coefficient of the lowest power of z in the infinite series. In this case it happens to be that this is the rth coefficient but, it is possible for the lowest possible exponent to be r − 2, r − 1 or, something else depending on the given differential equation. This detail is important to keep in mind. In the process of synchronizing all the series of the differential equation to start at the same index value (which in the above expression is k = 1), one can end up with complicated expressions. However, in solving for the indicial roots attention is focused only on the coefficient of the lowest power of z. Using this, the general expression of the coefficient of zk + r is I ( k + r ) A k + ∑ j = 0 k − 1 ( j + r ) p ( k − j ) ( 0 ) + q ( k − j ) ( 0 ) ( k − j ) ! A j , {\displaystyle I(k+r)A_{k}+\sum _{j=0}^{k-1}{(j+r)p^{(k-j)}(0)+q^{(k-j)}(0) \over (k-j)!}A_{j},} These coefficients must be zero, since they should be solutions of the differential equation, so I ( k + r ) A k + ∑ j = 0 k − 1 ( j + r ) p ( k − j ) ( 0 ) + q ( k − j ) ( 0 ) ( k − j ) ! A j = 0 ∑ j = 0 k − 1 ( j + r ) p ( k − j ) ( 0 ) + q ( k − j ) ( 0 ) ( k − j ) ! A j = − I ( k + r ) A k 1 − I ( k + r ) ∑ j = 0 k − 1 ( j + r ) p ( k − j ) ( 0 ) + q ( k − j ) ( 0 ) ( k − j ) ! 
A j = A k {\displaystyle {\begin{aligned}I(k+r)A_{k}+\sum _{j=0}^{k-1}{(j+r)p^{(k-j)}(0)+q^{(k-j)}(0) \over (k-j)!}A_{j}&=0\\[4pt]\sum _{j=0}^{k-1}{(j+r)p^{(k-j)}(0)+q^{(k-j)}(0) \over (k-j)!}A_{j}&=-I(k+r)A_{k}\\[4pt]{1 \over -I(k+r)}\sum _{j=0}^{k-1}{(j+r)p^{(k-j)}(0)+q^{(k-j)}(0) \over (k-j)!}A_{j}&=A_{k}\end{aligned}}} The series solution with Ak above, U r ( z ) = ∑ k = 0 ∞ A k z k + r {\displaystyle U_{r}(z)=\sum _{k=0}^{\infty }A_{k}z^{k+r}} satisfies z 2 U r ( z ) ″ + p ( z ) z U r ( z ) ′ + q ( z ) U r ( z ) = I ( r ) z r {\displaystyle z^{2}U_{r}(z)''+p(z)zU_{r}(z)'+q(z)U_{r}(z)=I(r)z^{r}} If we choose one of the roots to the indicial polynomial for r in Ur(z), we gain a solution to the differential equation. If the difference between the roots is not an integer, we get another, linearly independent solution in the other root. == Example == Let us solve z 2 f ″ − z f ′ + ( 1 − z ) f = 0 {\displaystyle z^{2}f''-zf'+(1-z)f=0} Divide throughout by z2 to give f ″ − 1 z f ′ + 1 − z z 2 f = f ″ − 1 z f ′ + ( 1 z 2 − 1 z ) f = 0 {\displaystyle f''-{1 \over z}f'+{1-z \over z^{2}}f=f''-{1 \over z}f'+\left({1 \over z^{2}}-{1 \over z}\right)f=0} which has the requisite singularity at z = 0. 
Use the series solution f = ∑ k = 0 ∞ A k z k + r f ′ = ∑ k = 0 ∞ ( k + r ) A k z k + r − 1 f ″ = ∑ k = 0 ∞ ( k + r ) ( k + r − 1 ) A k z k + r − 2 {\displaystyle {\begin{aligned}f&=\sum _{k=0}^{\infty }A_{k}z^{k+r}\\f'&=\sum _{k=0}^{\infty }(k+r)A_{k}z^{k+r-1}\\f''&=\sum _{k=0}^{\infty }(k+r)(k+r-1)A_{k}z^{k+r-2}\end{aligned}}} Now, substituting ∑ k = 0 ∞ ( k + r ) ( k + r − 1 ) A k z k + r − 2 − 1 z ∑ k = 0 ∞ ( k + r ) A k z k + r − 1 + ( 1 z 2 − 1 z ) ∑ k = 0 ∞ A k z k + r = ∑ k = 0 ∞ ( k + r ) ( k + r − 1 ) A k z k + r − 2 − 1 z ∑ k = 0 ∞ ( k + r ) A k z k + r − 1 + 1 z 2 ∑ k = 0 ∞ A k z k + r − 1 z ∑ k = 0 ∞ A k z k + r = ∑ k = 0 ∞ ( k + r ) ( k + r − 1 ) A k z k + r − 2 − ∑ k = 0 ∞ ( k + r ) A k z k + r − 2 + ∑ k = 0 ∞ A k z k + r − 2 − ∑ k = 0 ∞ A k z k + r − 1 = ∑ k = 0 ∞ ( k + r ) ( k + r − 1 ) A k z k + r − 2 − ∑ k = 0 ∞ ( k + r ) A k z k + r − 2 + ∑ k = 0 ∞ A k z k + r − 2 − ∑ k − 1 = 0 ∞ A k − 1 z k − 1 + r − 1 = ∑ k = 0 ∞ ( k + r ) ( k + r − 1 ) A k z k + r − 2 − ∑ k = 0 ∞ ( k + r ) A k z k + r − 2 + ∑ k = 0 ∞ A k z k + r − 2 − ∑ k = 1 ∞ A k − 1 z k + r − 2 = { ∑ k = 0 ∞ ( ( k + r ) ( k + r − 1 ) − ( k + r ) + 1 ) A k z k + r − 2 } − ∑ k = 1 ∞ A k − 1 z k + r − 2 = { ( r ( r − 1 ) − r + 1 ) A 0 z r − 2 + ∑ k = 1 ∞ ( ( k + r ) ( k + r − 1 ) − ( k + r ) + 1 ) A k z k + r − 2 } − ∑ k = 1 ∞ A k − 1 z k + r − 2 = ( r − 1 ) 2 A 0 z r − 2 + { ∑ k = 1 ∞ ( k + r − 1 ) 2 A k z k + r − 2 − ∑ k = 1 ∞ A k − 1 z k + r − 2 } = ( r − 1 ) 2 A 0 z r − 2 + ∑ k = 1 ∞ ( ( k + r − 1 ) 2 A k − A k − 1 ) z k + r − 2 {\displaystyle {\begin{aligned}\sum _{k=0}^{\infty }&(k+r)(k+r-1)A_{k}z^{k+r-2}-{\frac {1}{z}}\sum _{k=0}^{\infty }(k+r)A_{k}z^{k+r-1}+\left({\frac {1}{z^{2}}}-{\frac {1}{z}}\right)\sum _{k=0}^{\infty }A_{k}z^{k+r}\\&=\sum _{k=0}^{\infty }(k+r)(k+r-1)A_{k}z^{k+r-2}-{\frac {1}{z}}\sum _{k=0}^{\infty }(k+r)A_{k}z^{k+r-1}+{\frac {1}{z^{2}}}\sum _{k=0}^{\infty }A_{k}z^{k+r}-{\frac {1}{z}}\sum _{k=0}^{\infty }A_{k}z^{k+r}\\&=\sum _{k=0}^{\infty 
}(k+r)(k+r-1)A_{k}z^{k+r-2}-\sum _{k=0}^{\infty }(k+r)A_{k}z^{k+r-2}+\sum _{k=0}^{\infty }A_{k}z^{k+r-2}-\sum _{k=0}^{\infty }A_{k}z^{k+r-1}\\&=\sum _{k=0}^{\infty }(k+r)(k+r-1)A_{k}z^{k+r-2}-\sum _{k=0}^{\infty }(k+r)A_{k}z^{k+r-2}+\sum _{k=0}^{\infty }A_{k}z^{k+r-2}-\sum _{k-1=0}^{\infty }A_{k-1}z^{k-1+r-1}\\&=\sum _{k=0}^{\infty }(k+r)(k+r-1)A_{k}z^{k+r-2}-\sum _{k=0}^{\infty }(k+r)A_{k}z^{k+r-2}+\sum _{k=0}^{\infty }A_{k}z^{k+r-2}-\sum _{k=1}^{\infty }A_{k-1}z^{k+r-2}\\&=\left\{\sum _{k=0}^{\infty }\left((k+r)(k+r-1)-(k+r)+1\right)A_{k}z^{k+r-2}\right\}-\sum _{k=1}^{\infty }A_{k-1}z^{k+r-2}\\&=\left\{\left(r(r-1)-r+1\right)A_{0}z^{r-2}+\sum _{k=1}^{\infty }\left((k+r)(k+r-1)-(k+r)+1\right)A_{k}z^{k+r-2}\right\}-\sum _{k=1}^{\infty }A_{k-1}z^{k+r-2}\\&=(r-1)^{2}A_{0}z^{r-2}+\left\{\sum _{k=1}^{\infty }(k+r-1)^{2}A_{k}z^{k+r-2}-\sum _{k=1}^{\infty }A_{k-1}z^{k+r-2}\right\}\\&=(r-1)^{2}A_{0}z^{r-2}+\sum _{k=1}^{\infty }\left((k+r-1)^{2}A_{k}-A_{k-1}\right)z^{k+r-2}\end{aligned}}} From (r − 1)2 = 0 we get a double root of 1. Using this root, we set the coefficient of zk + r − 2 to be zero (for it to be a solution), which gives us: ( k + 1 − 1 ) 2 A k − A k − 1 = k 2 A k − A k − 1 = 0 {\displaystyle (k+1-1)^{2}A_{k}-A_{k-1}=k^{2}A_{k}-A_{k-1}=0} hence we have the recurrence relation: A k = A k − 1 k 2 {\displaystyle A_{k}={\frac {A_{k-1}}{k^{2}}}} Given some initial conditions, we can either solve the recurrence entirely or obtain a solution in power series form. Since the ratio of coefficients A k / A k − 1 {\displaystyle A_{k}/A_{k-1}} is a rational function, the power series can be written as a generalized hypergeometric series. == Exceptional cases: roots separated by an integer == The previous example involved an indicial polynomial with a repeated root, which gives only one solution to the given differential equation. 
In general, the Frobenius method gives two independent solutions provided that the indicial equation's roots are not separated by an integer (including zero). If the root is repeated or the roots differ by an integer, then the second solution can be found using: y 2 = C y 1 ln ⁡ x + ∑ k = 0 ∞ B k x k + r 2 {\displaystyle y_{2}=Cy_{1}\ln x+\sum _{k=0}^{\infty }B_{k}x^{k+r_{2}}} where y 1 ( x ) {\displaystyle y_{1}(x)} is the first solution (based on the larger root in the case of unequal roots), r 2 {\displaystyle r_{2}} is the smaller root, and the constant C and the coefficients B k {\displaystyle B_{k}} are to be determined. Once B 0 {\displaystyle B_{0}} is chosen (for example by setting it to 1) then C and the B k {\displaystyle B_{k}} are determined up to but not including B r 1 − r 2 {\displaystyle B_{r_{1}-r_{2}}} , which can be set arbitrarily. This then determines the rest of the B k . {\displaystyle B_{k}.} In some cases the constant C must be zero. Example: consider the following differential equation (Kummer's equation with a = 1 and b = 2): z u ″ + ( 2 − z ) u ′ − u = 0 {\displaystyle zu''+(2-z)u'-u=0} The roots of the indicial equation are −1 and 0. Two independent solutions are 1 / z {\displaystyle 1/z} and e z / z , {\displaystyle e^{z}/z,} so we see that the logarithm does not appear in any solution. The solution ( e z − 1 ) / z {\displaystyle (e^{z}-1)/z} has a power series starting with the power zero. In a power series starting with z − 1 {\displaystyle z^{-1}} the recurrence relation places no restriction on the coefficient for the term z 0 , {\displaystyle z^{0},} which can be set arbitrarily. If it is set to zero then with this differential equation all the other coefficients will be zero and we obtain the solution 1/z. 
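The recurrence A_k = A_{k-1}/k^2 from the worked example above (z^2 f'' - z f' + (1 - z) f = 0, with indicial root r = 1) can be checked directly: substituting the series truncated at order N into the equation should leave only the single truncation term -A_N z^(N+2), since the recurrence cancels every other coefficient. A SymPy sketch (N = 8 is an arbitrary truncation depth):

```python
import sympy as sp

z = sp.symbols('z')
N = 8                                    # truncation order (arbitrary)

# Coefficients from the recurrence A_k = A_{k-1} / k^2 with A_0 = 1
A = [sp.Integer(1)]
for k in range(1, N + 1):
    A.append(A[-1] / k**2)               # closed form: A_k = 1/(k!)^2

# Truncated Frobenius series with indicial root r = 1
f = sum(A[k]*z**(k + 1) for k in range(N + 1))

# Substitute into the ODE z^2 f'' - z f' + (1 - z) f
residual = sp.expand(z**2*sp.diff(f, z, 2) - z*sp.diff(f, z) + (1 - z)*f)

# Only the truncation tail -A_N z^(N+2) survives
assert sp.expand(residual + A[N]*z**(N + 2)) == 0
```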
=== Tandem recurrence relations for series coefficients in the exceptional cases === In cases in which roots of the indicial polynomial differ by an integer (including zero), the coefficients of all series involved in second linearly independent solutions can be calculated straightforwardly from tandem recurrence relations. These tandem relations can be constructed by further developing Frobenius' original invention of differentiating with respect to the parameter r, and using this approach to actually calculate the series coefficients in all cases. == See also == Fuchs' theorem Regular singular point Laurent series == External links == Weisstein, Eric W. "Frobenius Method". MathWorld. Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society. ISBN 978-0-8218-8328-0. (Draft version available online at https://www.mat.univie.ac.at/~gerald/ftp/book-ode/). Chapter 4 contains the full method including proofs. == References ==
Wikipedia/Frobenius_method
In mathematics, a differential equation is an equation that relates one or more unknown functions and their derivatives. In applications, the functions generally represent physical quantities, the derivatives represent their rates of change, and the differential equation defines a relationship between the two. Such relations are common in mathematical models and scientific laws; therefore, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology. The study of differential equations consists mainly of the study of their solutions (the set of functions that satisfy each equation), and of the properties of their solutions. Only the simplest differential equations are solvable by explicit formulas; however, many properties of solutions of a given differential equation may be determined without computing them exactly. Often when a closed-form expression for the solutions is not available, solutions may be approximated numerically using computers, and many numerical methods have been developed to determine solutions with a given degree of accuracy. The theory of dynamical systems analyzes the qualitative aspects of solutions, such as their average behavior over a long time interval. == History == Differential equations came into existence with the invention of calculus by Isaac Newton and Gottfried Leibniz. In Chapter 2 of his 1671 work Methodus fluxionum et Serierum Infinitarum, Newton listed three kinds of differential equations: d y d x = f ( x ) d y d x = f ( x , y ) x 1 ∂ y ∂ x 1 + x 2 ∂ y ∂ x 2 = y {\displaystyle {\begin{aligned}{\frac {dy}{dx}}&=f(x)\\[4pt]{\frac {dy}{dx}}&=f(x,y)\\[4pt]x_{1}{\frac {\partial y}{\partial x_{1}}}&+x_{2}{\frac {\partial y}{\partial x_{2}}}=y\end{aligned}}} In all these cases, y is an unknown function of x (or of x1 and x2), and f is a given function. He solves these examples and others using infinite series and discusses the non-uniqueness of solutions. 
Jacob Bernoulli proposed the Bernoulli differential equation in 1695. This is an ordinary differential equation of the form y ′ + P ( x ) y = Q ( x ) y n {\displaystyle y'+P(x)y=Q(x)y^{n}\,} for which the following year Leibniz obtained solutions by simplifying it. Historically, the problem of a vibrating string such as that of a musical instrument was studied by Jean le Rond d'Alembert, Leonhard Euler, Daniel Bernoulli, and Joseph-Louis Lagrange. In 1746, d’Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation. The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point. Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. In 1822, Fourier published his work on heat flow in Théorie analytique de la chaleur (The Analytic Theory of Heat), in which he based his reasoning on Newton's law of cooling, namely, that the flow of heat between two adjacent molecules is proportional to the extremely small difference of their temperatures. Contained in this book was Fourier's proposal of his heat equation for conductive diffusion of heat. This partial differential equation is now a common part of mathematical physics curriculum. == Example == In classical mechanics, the motion of a body is described by its position and velocity as the time value varies. Newton's laws allow these variables to be expressed dynamically (given the position, velocity, acceleration and various forces acting on the body) as a differential equation for the unknown position of the body as a function of time. 
In some cases, this differential equation (called an equation of motion) may be solved explicitly. An example of modeling a real-world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance. The ball's acceleration towards the ground is the acceleration due to gravity minus the deceleration due to air resistance. Gravity is considered constant, and air resistance may be modeled as proportional to the ball's velocity. This means that the ball's acceleration, which is a derivative of its velocity, depends on the velocity (and the velocity depends on time). Finding the velocity as a function of time involves solving a differential equation and verifying its validity. == Types == Differential equations can be classified several different ways. Besides describing the properties of the equation itself, these classes of differential equations can help inform the choice of approach to a solution. Commonly used distinctions include whether the equation is ordinary or partial, linear or non-linear, and homogeneous or heterogeneous. This list is far from exhaustive; there are many other properties and subclasses of differential equations which can be very useful in specific contexts. === Ordinary differential equations === An ordinary differential equation (ODE) is an equation containing an unknown function of one real or complex variable x, its derivatives, and some given functions of x. The unknown function is generally represented by a variable (often denoted y), which, therefore, depends on x. Thus x is often called the independent variable of the equation. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable. Linear differential equations are the differential equations that are linear in the unknown function and its derivatives. 
Their theory is well developed, and in many cases one may express their solutions in terms of integrals. Most ODEs that are encountered in physics are linear. Therefore, most special functions may be defined as solutions of linear differential equations (see Holonomic function). As, in general, the solutions of a differential equation cannot be expressed by a closed-form expression, numerical methods are commonly used for solving differential equations on a computer. === Partial differential equations === A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved in closed form, or used to create a relevant computer model. PDEs can be used to describe a wide variety of phenomena in nature such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalized similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. Stochastic partial differential equations generalize partial differential equations for modeling randomness. === Non-linear differential equations === A non-linear differential equation is a differential equation that is not a linear equation in the unknown function and its derivatives (the linearity or non-linearity in the arguments of the function are not considered here). There are very few methods of solving nonlinear differential equations exactly; those that are known typically depend on the equation having particular symmetries. 
Nonlinear differential equations can exhibit very complicated behaviour over extended time intervals, characteristic of chaos. Even the fundamental questions of existence, uniqueness, and extendability of solutions for nonlinear differential equations, and well-posedness of initial and boundary value problems for nonlinear PDEs are hard problems and their resolution in special cases is considered to be a significant advance in the mathematical theory (cf. Navier–Stokes existence and smoothness). However, if the differential equation is a correctly formulated representation of a meaningful physical process, then one expects it to have a solution. Linear differential equations frequently appear as approximations to nonlinear equations. These approximations are only valid under restricted conditions. For example, the harmonic oscillator equation is an approximation to the nonlinear pendulum equation that is valid for small amplitude oscillations. === Equation order and degree === The order of the differential equation is the highest order of derivative of the unknown function that appears in the differential equation. For example, an equation containing only first-order derivatives is a first-order differential equation, an equation containing the second-order derivative is a second-order differential equation, and so on. When it is written as a polynomial equation in the unknown function and its derivatives, the degree of the differential equation is, depending on the context, the polynomial degree in the highest derivative of the unknown function, or its total degree in the unknown function and its derivatives. In particular, a linear differential equation has degree one for both meanings, but the non-linear differential equation y ′ + y 2 = 0 {\displaystyle y'+y^{2}=0} is of degree one for the first meaning but not for the second one. 
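The small-amplitude claim above can be checked numerically: integrating the pendulum equation u'' = −(g/L) sin u and its harmonic linearization u'' = −(g/L) u from the same small initial angle yields nearly identical motion. A sketch using a semi-implicit Euler step (constants and step sizes are illustrative):

```python
import math

def simulate_angle(accel, theta0, dt=1e-4, t_end=2.0):
    """Semi-implicit Euler for theta'' = accel(theta), starting at rest."""
    theta, omega = theta0, 0.0
    for _ in range(int(t_end / dt)):
        omega += accel(theta) * dt   # update angular velocity first,
        theta += omega * dt          # then the angle (symplectic update)
    return theta

g_over_L = 9.81  # g / L for a pendulum of length L = 1 m
pendulum = simulate_angle(lambda th: -g_over_L * math.sin(th), theta0=0.05)
harmonic = simulate_angle(lambda th: -g_over_L * th, theta0=0.05)
diff = abs(pendulum - harmonic)
print(diff < 1e-3)  # nearly identical for a 0.05 rad amplitude
```

For larger initial angles the two trajectories drift apart, which is exactly the failure of the small-amplitude approximation.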
Differential equations that describe natural phenomena almost always have only first and second order derivatives in them, but there are some exceptions, such as the thin-film equation, which is a fourth order partial differential equation. === Examples === In the first group of examples u is an unknown function of x, and c and ω are constants that are supposed to be known. Two broad classifications of both ordinary and partial differential equations consist of distinguishing between linear and nonlinear differential equations, and between homogeneous differential equations and heterogeneous ones. Heterogeneous first-order linear constant coefficient ordinary differential equation: d u d x = c u + x 2 . {\displaystyle {\frac {du}{dx}}=cu+x^{2}.} Homogeneous second-order linear ordinary differential equation: d 2 u d x 2 − x d u d x + u = 0. {\displaystyle {\frac {d^{2}u}{dx^{2}}}-x{\frac {du}{dx}}+u=0.} Homogeneous second-order linear constant coefficient ordinary differential equation describing the harmonic oscillator: d 2 u d x 2 + ω 2 u = 0. {\displaystyle {\frac {d^{2}u}{dx^{2}}}+\omega ^{2}u=0.} Heterogeneous first-order nonlinear ordinary differential equation: d u d x = u 2 + 4. {\displaystyle {\frac {du}{dx}}=u^{2}+4.} Second-order nonlinear (due to sine function) ordinary differential equation describing the motion of a pendulum of length L: L d 2 u d x 2 + g sin ⁡ u = 0. {\displaystyle L{\frac {d^{2}u}{dx^{2}}}+g\sin u=0.} In the next group of examples, the unknown function u depends on two variables x and t or x and y. Homogeneous first-order linear partial differential equation: ∂ u ∂ t + t ∂ u ∂ x = 0. {\displaystyle {\frac {\partial u}{\partial t}}+t{\frac {\partial u}{\partial x}}=0.} Homogeneous second-order linear constant coefficient partial differential equation of elliptic type, the Laplace equation: ∂ 2 u ∂ x 2 + ∂ 2 u ∂ y 2 = 0. 
{\displaystyle {\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}=0.} Homogeneous third-order non-linear partial differential equation, the KdV equation: ∂ u ∂ t = 6 u ∂ u ∂ x − ∂ 3 u ∂ x 3 . {\displaystyle {\frac {\partial u}{\partial t}}=6u{\frac {\partial u}{\partial x}}-{\frac {\partial ^{3}u}{\partial x^{3}}}.} == Existence of solutions == Solving differential equations is not like solving algebraic equations. Not only are their solutions often unclear, but whether solutions are unique or exist at all are also notable subjects of interest. For first order initial value problems, the Peano existence theorem gives one set of circumstances in which a solution exists. Given any point ( a , b ) {\displaystyle (a,b)} in the xy-plane, define some rectangular region Z {\displaystyle Z} , such that Z = [ l , m ] × [ n , p ] {\displaystyle Z=[l,m]\times [n,p]} and ( a , b ) {\displaystyle (a,b)} is in the interior of Z {\displaystyle Z} . If we are given a differential equation d y d x = g ( x , y ) {\textstyle {\frac {dy}{dx}}=g(x,y)} and the condition that y = b {\displaystyle y=b} when x = a {\displaystyle x=a} , then there is locally a solution to this problem if g ( x , y ) {\displaystyle g(x,y)} is continuous on Z {\displaystyle Z} . This solution exists on some interval with its center at a {\displaystyle a} . The solution may not be unique. (See Ordinary differential equation for other results.) However, this only helps us with first order initial value problems. 
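A constructive way to see how such a local solution arises is Picard iteration, the successive-approximation scheme behind the Picard–Lindelöf theorem: starting from the constant function y0(x) = b, each iterate integrates g along the previous one. A numerical sketch (grid size and iteration count are illustrative):

```python
import math

def picard(g, a, b, x_end, n_points=1001, n_iter=25):
    """Picard iteration y_{k+1}(x) = b + integral of g(t, y_k(t)) from a to x,
    carried out on a grid with the cumulative trapezoidal rule."""
    h = (x_end - a) / (n_points - 1)
    xs = [a + i * h for i in range(n_points)]
    ys = [b] * n_points               # y_0(x) = b, the constant initial guess
    for _ in range(n_iter):
        new, acc = [b], 0.0
        for i in range(1, n_points):
            acc += 0.5 * h * (g(xs[i - 1], ys[i - 1]) + g(xs[i], ys[i]))
            new.append(b + acc)
        ys = new
    return xs, ys

# y' = y, y(0) = 1: the iterates are the Taylor partial sums of e^x.
xs, ys = picard(lambda x, y: y, 0.0, 1.0, 1.0)
print(abs(ys[-1] - math.e) < 1e-5)
```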
Suppose we had a linear initial value problem of the nth order: f n ( x ) d n y d x n + ⋯ + f 1 ( x ) d y d x + f 0 ( x ) y = g ( x ) {\displaystyle f_{n}(x){\frac {d^{n}y}{dx^{n}}}+\cdots +f_{1}(x){\frac {dy}{dx}}+f_{0}(x)y=g(x)} such that y ( x 0 ) = y 0 , y ′ ( x 0 ) = y 0 ′ , y ″ ( x 0 ) = y 0 ″ , … {\displaystyle {\begin{aligned}y(x_{0})&=y_{0},&y'(x_{0})&=y'_{0},&y''(x_{0})&=y''_{0},&\ldots \end{aligned}}} For any nonzero f n ( x ) {\displaystyle f_{n}(x)} , if { f 0 , f 1 , … } {\displaystyle \{f_{0},f_{1},\ldots \}} and g {\displaystyle g} are continuous on some interval containing x 0 {\displaystyle x_{0}} , y {\displaystyle y} exists and is unique. == Related concepts == A delay differential equation (DDE) is an equation for a function of a single variable, usually called time, in which the derivative of the function at a certain time is given in terms of the values of the function at earlier times. Integral equations may be viewed as the analog to differential equations where instead of the equation involving derivatives, the equation contains integrals. An integro-differential equation (IDE) is an equation that combines aspects of a differential equation and an integral equation. A stochastic differential equation (SDE) is an equation in which the unknown quantity is a stochastic process and the equation involves some known stochastic processes, for example, the Wiener process in the case of diffusion equations. A stochastic partial differential equation (SPDE) is an equation that generalizes SDEs to include space-time noise processes, with applications in quantum field theory and statistical mechanics. An ultrametric pseudo-differential equation is an equation which contains p-adic numbers in an ultrametric space. Mathematical models that involve ultrametric pseudo-differential equations use pseudo-differential operators instead of differential operators. 
A differential algebraic equation (DAE) is a differential equation comprising differential and algebraic terms, given in implicit form. == Connection to difference equations == The theory of differential equations is closely related to the theory of difference equations, in which the coordinates assume only discrete values, and the relationship involves values of the unknown function or functions and values at nearby coordinates. Many methods to compute numerical solutions of differential equations or study the properties of differential equations involve the approximation of the solution of a differential equation by the solution of a corresponding difference equation. == Applications == The study of differential equations is a wide field in pure and applied mathematics, physics, and engineering. All of these disciplines are concerned with the properties of differential equations of various types. Pure mathematics focuses on the existence and uniqueness of solutions, while applied mathematics emphasizes the rigorous justification of the methods for approximating solutions. Differential equations play an important role in modeling virtually every physical, technical, or biological process, from celestial motion, to bridge design, to interactions between neurons. Differential equations such as those used to solve real-life problems may not necessarily be directly solvable, i.e. do not have closed form solutions. Instead, solutions can be approximated using numerical methods. Many fundamental laws of physics and chemistry can be formulated as differential equations. In biology and economics, differential equations are used to model the behavior of complex systems. The mathematical theory of differential equations first developed together with the sciences where the equations had originated and where the results found application. However, diverse problems, sometimes originating in quite distinct scientific fields, may give rise to identical differential equations. 
Whenever this happens, the mathematical theory behind the equations can be viewed as a unifying principle behind diverse phenomena. As an example, consider the propagation of light and sound in the atmosphere, and of waves on the surface of a pond. All of them may be described by the same second-order partial differential equation, the wave equation, which allows us to think of light and sound as forms of waves, much like familiar waves in the water. Conduction of heat, the theory of which was developed by Joseph Fourier, is governed by another second-order partial differential equation, the heat equation. It turns out that many diffusion processes, while seemingly different, are described by the same equation; the Black–Scholes equation in finance is, for instance, related to the heat equation. The number of differential equations that have received a name in various scientific areas is a testament to the importance of the topic. See List of named differential equations. == Software == Some CAS software can solve differential equations. These are the commands used in the leading programs: Maple: dsolve Mathematica: DSolve[] Maxima: ode2(equation, y, x) SageMath: desolve() SymPy: sympy.solvers.ode.dsolve(equation) Xcas: desolve(y'=k*y,y) == See also == == References == == Further reading == Abbott, P.; Neill, H. (2003). Teach Yourself Calculus. pp. 266–277. Blanchard, P.; Devaney, R. L.; Hall, G. R. (2006). Differential Equations. Thompson. Boyce, W.; DiPrima, R.; Meade, D. (2017). Elementary Differential Equations and Boundary Value Problems. Wiley. Coddington, E. A.; Levinson, N. (1955). Theory of Ordinary Differential Equations. McGraw-Hill. Ince, E. L. (1956). Ordinary Differential Equations. Dover. Johnson, W. (1913). A Treatise on Ordinary and Partial Differential Equations. John Wiley and Sons. In University of Michigan Historical Math Collection Polyanin, A. D.; Zaitsev, V. F. (2003). Handbook of Exact Solutions for Ordinary Differential Equations (2nd ed.). 
Boca Raton: Chapman & Hall/CRC Press. ISBN 1-58488-297-2. Porter, R. I. (1978). "XIX Differential Equations". Further Elementary Analysis. Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society. ISBN 978-0-8218-8328-0. Daniel Zwillinger (12 May 2014). Handbook of Differential Equations. Elsevier Science. ISBN 978-1-4832-6396-0. == External links == Media related to Differential equations at Wikimedia Commons Lectures on Differential Equations MIT Open CourseWare Videos Online Notes / Differential Equations Paul Dawkins, Lamar University Differential Equations, S.O.S. Mathematics Introduction to modeling via differential equations Introduction to modeling by means of differential equations, with critical remarks. Mathematical Assistant on Web Symbolic ODE tool, using Maxima Exact Solutions of Ordinary Differential Equations Collection of ODE and DAE models of physical systems Archived 2008-12-19 at the Wayback Machine MATLAB models Notes on Diffy Qs: Differential Equations for Engineers An introductory textbook on differential equations by Jiri Lebl of UIUC Khan Academy Video playlist on differential equations Topics covered in a first year course in differential equations. MathDiscuss Video playlist on differential equations
Wikipedia/Examples_of_differential_equations
A singular solution ys(x) of an ordinary differential equation is a solution that is singular or one for which the initial value problem (also called the Cauchy problem by some authors) fails to have a unique solution at some point on the solution. The set on which a solution is singular may be as small as a single point or as large as the full real line. Solutions which are singular in the sense that the initial value problem fails to have a unique solution need not be singular functions. In some cases, the term singular solution is used to mean a solution at which there is a failure of uniqueness to the initial value problem at every point on the curve. A singular solution in this stronger sense is often given as tangent to every solution from a family of solutions. By tangent we mean that there is a point x where ys(x) = yc(x) and y's(x) = y'c(x) where yc is a solution in a family of solutions parameterized by c. This means that the singular solution is the envelope of the family of solutions. Usually, singular solutions appear in differential equations when there is a need to divide by a term that might be equal to zero. Therefore, when one is solving a differential equation and using division one must check what happens if the term is equal to zero, and whether it leads to a singular solution. The Picard–Lindelöf theorem, which gives sufficient conditions for unique solutions to exist, can be used to rule out the existence of singular solutions. Other theorems, such as the Peano existence theorem, give sufficient conditions for solutions to exist without necessarily being unique, which can allow for the existence of singular solutions. == A divergent solution == Consider the homogeneous linear ordinary differential equation x y ′ ( x ) + 2 y ( x ) = 0 , {\displaystyle xy'(x)+2y(x)=0,\,\!} where primes denote derivatives with respect to x. The general solution to this equation is y ( x ) = C x − 2 . 
{\displaystyle y(x)=Cx^{-2}.\,\!} For a given C {\displaystyle C} , this solution is smooth except at x = 0 {\displaystyle x=0} where the solution is divergent. Furthermore, for a given x ≠ 0 {\displaystyle x\not =0} , this is the unique solution going through ( x , y ( x ) ) {\displaystyle (x,y(x))} . == Failure of uniqueness == Consider the differential equation y ′ ( x ) 2 = 4 y ( x ) . {\displaystyle y'(x)^{2}=4y(x).\,\!} A one-parameter family of solutions to this equation is given by y c ( x ) = ( x − c ) 2 . {\displaystyle y_{c}(x)=(x-c)^{2}.\,\!} Another solution is given by y s ( x ) = 0. {\displaystyle y_{s}(x)=0.\,\!} Since the equation being studied is a first-order equation, the initial conditions are the initial x and y values. By considering the two sets of solutions above, one can see that the solution fails to be unique when y = 0 {\displaystyle y=0} . (It can be shown that for y > 0 {\displaystyle y>0} if a single branch of the square root is chosen, then there is a local solution which is unique using the Picard–Lindelöf theorem.) Thus, the solutions above are all singular solutions, in the sense that solution fails to be unique in a neighbourhood of one or more points. (Commonly, we say "uniqueness fails" at these points.) For the first set of solutions, uniqueness fails at one point, x = c {\displaystyle x=c} , and for the second solution, uniqueness fails at every value of x {\displaystyle x} . Thus, the solution y s {\displaystyle y_{s}} is a singular solution in the stronger sense that uniqueness fails at every value of x. However, it is not a singular function since it and all its derivatives are continuous. In this example, the solution y s ( x ) = 0 {\displaystyle y_{s}(x)=0} is the envelope of the family of solutions y c ( x ) = ( x − c ) 2 {\displaystyle y_{c}(x)=(x-c)^{2}} . The solution y s {\displaystyle y_{s}} is tangent to every curve y c ( x ) {\displaystyle y_{c}(x)} at the point ( c , 0 ) {\displaystyle (c,0)} . 
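Both families of solutions can be checked directly against the equation, and the check makes the failure of uniqueness concrete, since y_c and y_s agree at (c, 0). A small Python sketch (sample points are illustrative):

```python
def residual(y, dy):
    """Left side minus right side of the equation y'^2 = 4y."""
    return dy ** 2 - 4 * y

pts = [x / 10 for x in range(-20, 21)]

# The one-parameter family y_c(x) = (x - c)^2 has derivative y_c'(x) = 2(x - c) ...
family_ok = all(residual((x - c) ** 2, 2 * (x - c)) == 0.0
                for x in pts for c in (-1.0, 0.0, 2.0))
# ... and the singular solution y_s(x) = 0 has derivative y_s'(x) = 0:
singular_ok = all(residual(0.0, 0.0) == 0.0 for _ in pts)

print(family_ok and singular_ok)
# Through each point (c, 0) pass at least two solutions, y_c and y_s,
# so uniqueness of the initial value problem fails along y = 0.
```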
The failure of uniqueness can be used to construct more solutions. These can be found by taking two constants c 1 < c 2 {\displaystyle c_{1}<c_{2}} and defining a solution y ( x ) {\displaystyle y(x)} to be ( x − c 1 ) 2 {\displaystyle (x-c_{1})^{2}} when x < c 1 {\displaystyle x<c_{1}} , to be 0 {\displaystyle 0} when c 1 ≤ x ≤ c 2 {\displaystyle c_{1}\leq x\leq c_{2}} , and to be ( x − c 2 ) 2 {\displaystyle (x-c_{2})^{2}} when x > c 2 {\displaystyle x>c_{2}} . Direct calculation shows that this is a solution of the differential equation at every point, including x = c 1 {\displaystyle x=c_{1}} and x = c 2 {\displaystyle x=c_{2}} . Uniqueness fails for these solutions on the interval c 1 ≤ x ≤ c 2 {\displaystyle c_{1}\leq x\leq c_{2}} , and the solutions are singular, in the sense that the second derivative fails to exist, at x = c 1 {\displaystyle x=c_{1}} and x = c 2 {\displaystyle x=c_{2}} . == Further example of failure of uniqueness == The previous example might give the erroneous impression that failure of uniqueness is directly related to y ( x ) = 0 {\displaystyle y(x)=0} . Failure of uniqueness can also be seen in the following example of Clairaut's equation: y ( x ) = x ⋅ y ′ + ( y ′ ) 2 {\displaystyle y(x)=x\cdot y'+(y')^{2}\,\!} We write y' = p and then y ( x ) = x ⋅ p + ( p ) 2 . {\displaystyle y(x)=x\cdot p+(p)^{2}.} Now, differentiating with respect to x: p = y ′ = p + x p ′ + 2 p p ′ {\displaystyle p=y'=p+xp'+2pp'} which by simple algebra yields 0 = ( 2 p + x ) p ′ . {\displaystyle 0=(2p+x)p'.} This condition is solved if 2p+x=0 or if p′=0. If p' = 0 it means that y' = p = c = constant, and the general solution of this new equation is: y c ( x ) = c ⋅ x + c 2 {\displaystyle y_{c}(x)=c\cdot x+c^{2}} where c is determined by the initial value. If x + 2p = 0 then we get that p = −½x and substituting in the ODE gives y s ( x ) = − 1 2 x 2 + ( − 1 2 x ) 2 = − 1 4 x 2 . 
{\displaystyle y_{s}(x)=-{\tfrac {1}{2}}x^{2}+(-{\tfrac {1}{2}}x)^{2}=-{\tfrac {1}{4}}x^{2}.} Now we shall check when these solutions are singular solutions. If two solutions intersect each other, that is, they both go through the same point (x,y), then there is a failure of uniqueness for a first-order ordinary differential equation. Thus, there will be a failure of uniqueness if a solution of the first form intersects the second solution. The condition of intersection is ys(x) = yc(x). We solve c ⋅ x + c 2 = y c ( x ) = y s ( x ) = − 1 4 x 2 {\displaystyle c\cdot x+c^{2}=y_{c}(x)=y_{s}(x)=-{\tfrac {1}{4}}x^{2}} to find the intersection point, which is ( − 2 c , − c 2 ) {\displaystyle (-2c,-c^{2})} . We can verify that the curves are tangent at this point, that is, y's(x) = y'c(x). We calculate the derivatives: y c ′ ( − 2 c ) = c {\displaystyle y_{c}'(-2c)=c\,\!} y s ′ ( − 2 c ) = − 1 2 x | x = − 2 c = c . {\displaystyle y_{s}'(-2c)=-{\tfrac {1}{2}}x|_{x=-2c}=c.\,\!} Hence, y s ( x ) = − 1 4 ⋅ x 2 {\displaystyle y_{s}(x)=-{\tfrac {1}{4}}\cdot x^{2}\,\!} is tangent to every member of the one-parameter family of solutions y c ( x ) = c ⋅ x + c 2 {\displaystyle y_{c}(x)=c\cdot x+c^{2}\,\!} of this Clairaut equation: y ( x ) = x ⋅ y ′ + ( y ′ ) 2 . {\displaystyle y(x)=x\cdot y'+(y')^{2}.\,\!} == See also == Chandrasekhar equation Chrystal's equation Caustic (mathematics) Envelope (mathematics) Initial value problem Picard–Lindelöf theorem == Bibliography == Rozov, N.Kh. (2001) [1994], "Singular solution", Encyclopedia of Mathematics, EMS Press
Wikipedia/Singular_solution
A population model is a type of mathematical model that is applied to the study of population dynamics. == Rationale == Models allow a better understanding of how complex interactions and processes work. Modeling of dynamic interactions in nature can provide a manageable way of understanding how numbers change over time or in relation to each other. Many patterns can be noticed by using population modeling as a tool. Ecological population modeling is concerned with the changes in parameters such as population size and age distribution within a population. This might be due to interactions with the environment, individuals of their own species, or other species. Population models are used to determine maximum harvest for agriculturists, to understand the dynamics of biological invasions, and for environmental conservation. Population models are also used to understand the spread of parasites, viruses, and disease. Population models are also useful when species become endangered, as they can track fragile populations and inform efforts to curb their decline. == History == Late 18th-century biologists began to develop techniques in population modeling in order to understand the dynamics of growing and shrinking of all populations of living organisms. Thomas Malthus was one of the first to note that populations grew with a geometric pattern while contemplating the fate of humankind. One of the most basic milestone models of population growth was the logistic model formulated by Pierre François Verhulst in 1838. The logistic model takes the shape of a sigmoid curve and describes the growth of a population as exponential, followed by a decrease in growth, and bound by a carrying capacity due to environmental pressures. 
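Verhulst's logistic model described above can be sketched with a simple forward-Euler integration (parameter values are illustrative):

```python
def logistic(N0, r, K, dt=0.01, t_end=50.0):
    """Forward-Euler integration of Verhulst's logistic equation
    dN/dt = r * N * (1 - N / K)."""
    N, traj = N0, [N0]
    for _ in range(int(t_end / dt)):
        N += r * N * (1 - N / K) * dt
        traj.append(N)
    return traj

traj = logistic(N0=2.0, r=0.5, K=100.0)
mid = traj[len(traj) // 2]
print(traj[0] < mid < traj[-1] and abs(traj[-1] - 100.0) < 1.0)
# sigmoid growth that saturates at the carrying capacity K
```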
Population modeling became of particular interest to biologists in the 20th century as pressure on limited means of sustenance due to increasing human populations in parts of Europe was noticed by biologists like Raymond Pearl. In 1921 Pearl invited physicist Alfred J. Lotka to assist him in his lab. Lotka developed paired differential equations that showed the effect of a parasite on its prey. Mathematician Vito Volterra formulated the relationship between two species independently of Lotka. Together, Lotka and Volterra formed the Lotka–Volterra model for competition that applies the logistic equation to two species illustrating competition, predation, and parasitism interactions between species. In 1939 contributions to population modeling were given by Patrick Leslie as he began work in biomathematics. Leslie emphasized the importance of constructing a life table in order to understand the effect that key life history strategies played in the dynamics of whole populations. Matrix algebra was used by Leslie in conjunction with life tables to extend the work of Lotka. Matrix models of populations calculate the growth of a population with life history variables. Later, Robert MacArthur and E. O. Wilson characterized island biogeography. The equilibrium model of island biogeography describes the number of species on an island as an equilibrium of immigration and extinction. The logistic population model, the Lotka–Volterra model of community ecology, life table matrix modeling, the equilibrium model of island biogeography and variations thereof are the basis for ecological population modeling today. 
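The Lotka–Volterra interaction described above, in its classic predation form, can likewise be sketched numerically (coefficients are illustrative, not fitted to any data):

```python
def lotka_volterra(prey0, pred0, a=1.0, b=0.1, c=1.5, d=0.075,
                   dt=0.001, t_end=20.0):
    """Forward-Euler sketch of the predator-prey Lotka-Volterra system
    dx/dt = a*x - b*x*y,  dy/dt = -c*y + d*x*y."""
    x, y = prey0, pred0
    xs, ys = [x], [y]
    for _ in range(int(t_end / dt)):
        x, y = (x + (a * x - b * x * y) * dt,
                y + (-c * y + d * x * y) * dt)
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = lotka_volterra(prey0=10.0, pred0=5.0)
print(min(xs) > 0 and min(ys) > 0 and max(xs) > xs[0])
# both populations oscillate and remain positive
```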
== Equations == Logistic growth equation: d N d t = r N ( 1 − N K ) {\displaystyle {\frac {dN}{dt}}=rN\left(1-{\frac {N}{K}}\right)\,} Competitive Lotka–Volterra equations: d N 1 d t = r 1 N 1 K 1 − N 1 − α N 2 K 1 {\displaystyle {\frac {dN_{1}}{dt}}=r_{1}N_{1}{\frac {K_{1}-N_{1}-\alpha N_{2}}{K_{1}}}\,} Island biogeography: S = I P I + E {\displaystyle S={\frac {IP}{I+E}}} Species–area relationship: log ⁡ ( S ) = log ⁡ ( c ) + z log ⁡ ( A ) {\displaystyle \log(S)=\log(c)+z\log(A)\,} == Examples of individual-based models == == See also == Population dynamics Population dynamics of fisheries Population ecology Moment closure == References == == External links == GreenBoxes code sharing network. Greenboxes (Beta) is a repository for open-source population modeling code. Greenboxes allows users an easy way to share their code and to search for others shared code.
Wikipedia/Population_modeling
A stochastic differential equation (SDE) is a differential equation in which one or more of the terms is a stochastic process, resulting in a solution which is also a stochastic process. SDEs have many applications throughout pure mathematics and are used to model various behaviours of stochastic models such as stock prices, random growth models or physical systems that are subjected to thermal fluctuations. SDEs have a random differential that is in the most basic case random white noise calculated as the distributional derivative of a Brownian motion or more generally a semimartingale. However, other types of random behaviour are possible, such as jump processes like Lévy processes or semimartingales with jumps. Stochastic differential equations are in general neither differential equations nor random differential equations. Random differential equations are conjugate to stochastic differential equations. Stochastic differential equations can also be extended to differential manifolds. == Background == Stochastic differential equations originated in the theory of Brownian motion, in the work of Albert Einstein and Marian Smoluchowski in 1905, although Louis Bachelier was the first person credited with modeling Brownian motion in 1900, giving a very early example of a stochastic differential equation now known as the Bachelier model. Some of these early examples were linear stochastic differential equations, also called Langevin equations after French physicist Paul Langevin, describing the motion of a harmonic oscillator subject to a random force. The mathematical theory of stochastic differential equations was developed in the 1940s through the groundbreaking work of Japanese mathematician Kiyosi Itô, who introduced the concept of stochastic integral and initiated the study of nonlinear stochastic differential equations. Another approach was later proposed by Russian physicist Ruslan Stratonovich, leading to a calculus similar to ordinary calculus. 
=== Terminology === The most common form of SDEs in the literature is an ordinary differential equation with the right hand side perturbed by a term dependent on a white noise variable. In most cases, SDEs are understood as continuous time limit of the corresponding stochastic difference equations. This understanding of SDEs is ambiguous and must be complemented by a proper mathematical definition of the corresponding integral. Such a mathematical definition was first proposed by Kiyosi Itô in the 1940s, leading to what is known today as the Itô calculus. Another construction was later proposed by Russian physicist Stratonovich, leading to what is known as the Stratonovich integral. The Itô integral and Stratonovich integral are related, but different, objects and the choice between them depends on the application considered. The Itô calculus is based on the concept of non-anticipativeness or causality, which is natural in applications where the variable is time. The Stratonovich calculus, on the other hand, has rules which resemble ordinary calculus and has intrinsic geometric properties which render it more natural when dealing with geometric problems such as random motion on manifolds, although it is possible and in some cases preferable to model random motion on manifolds through Itô SDEs, for example when trying to optimally approximate SDEs on submanifolds. An alternative view on SDEs is the stochastic flow of diffeomorphisms. This understanding is unambiguous and corresponds to the Stratonovich version of the continuous time limit of stochastic difference equations. Associated with SDEs is the Smoluchowski equation or the Fokker–Planck equation, an equation describing the time evolution of probability distribution functions. The generalization of the Fokker–Planck evolution to temporal evolution of differential forms is provided by the concept of stochastic evolution operator. 
In physical science, there is an ambiguity in the usage of the term "Langevin SDEs". While Langevin SDEs can be of a more general form, this term typically refers to a narrow class of SDEs with gradient flow vector fields. This class of SDEs is particularly popular because it is a starting point of the Parisi–Sourlas stochastic quantization procedure, leading to an N = 2 supersymmetric model closely related to supersymmetric quantum mechanics. From the physical point of view, however, this class of SDEs is not very interesting because it never exhibits spontaneous breakdown of topological supersymmetry, i.e., (overdamped) Langevin SDEs are never chaotic. === Stochastic calculus === Brownian motion, or the Wiener process, was discovered to be exceptionally complex mathematically. The Wiener process is almost surely nowhere differentiable; thus, it requires its own rules of calculus. There are two dominating versions of stochastic calculus, the Itô stochastic calculus and the Stratonovich stochastic calculus. Each of the two has advantages and disadvantages, and newcomers are often confused about whether one is more appropriate than the other in a given situation. Guidelines exist (e.g. Øksendal, 2003), and conveniently one can readily convert an Itô SDE to an equivalent Stratonovich SDE and back again. Still, one must be careful which calculus to use when the SDE is initially written down. === Numerical solutions === Numerical methods for solving stochastic differential equations include the Euler–Maruyama method, the Milstein method, stochastic Runge–Kutta methods, the Rosenbrock method, and methods based on different representations of iterated stochastic integrals. == Use in physics == In physics, SDEs have wide applicability, ranging from molecular dynamics to neurodynamics and to the dynamics of astrophysical objects. More specifically, SDEs describe all dynamical systems in which quantum effects are either unimportant or can be taken into account as perturbations. 
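The simplest of these schemes, the Euler–Maruyama method, discretizes dX = μ(X, t)dt + σ(X, t)dB by replacing the Brownian differential with Gaussian increments of variance Δt. A minimal sketch (the drift and diffusion functions below are illustrative choices, not fixed by the text):

```python
import math
import random

def euler_maruyama(mu, sigma, x0, t_end, n_steps, seed=0):
    """Simulate one path of dX = mu(X, t) dt + sigma(X, t) dB."""
    rng = random.Random(seed)
    dt = t_end / n_steps
    sqrt_dt = math.sqrt(dt)
    x, t = x0, 0.0
    path = [x0]
    for _ in range(n_steps):
        dB = rng.gauss(0.0, 1.0) * sqrt_dt  # Brownian increment ~ N(0, dt)
        x = x + mu(x, t) * dt + sigma(x, t) * dB
        t += dt
        path.append(x)
    return path

# Illustrative example: mean-reverting drift with constant (additive) noise.
path = euler_maruyama(lambda x, t: -x, lambda x, t: 0.3,
                      x0=1.0, t_end=1.0, n_steps=1000)
```

Because the noise here is additive, the Itô and Stratonovich readings of this equation coincide, so the scheme needs no interpretation-dependent correction.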
SDEs can be viewed as a generalization of dynamical systems theory to models with noise. This is an important generalization because real systems cannot be completely isolated from their environments and for this reason always experience external stochastic influence. There are standard techniques for transforming higher-order equations into several coupled first-order equations by introducing new unknowns. Therefore, the following is the most general class of SDEs: d x ( t ) d t = F ( x ( t ) ) + ∑ α = 1 n g α ( x ( t ) ) ξ α ( t ) , {\displaystyle {\frac {\mathrm {d} x(t)}{\mathrm {d} t}}=F(x(t))+\sum _{\alpha =1}^{n}g_{\alpha }(x(t))\xi ^{\alpha }(t),\,} where x ∈ X {\displaystyle x\in X} is the position of the system in its phase (or state) space, X {\displaystyle X} , assumed to be a differentiable manifold, F ∈ T X {\displaystyle F\in TX} is a flow vector field representing the deterministic law of evolution, and g α ∈ T X {\displaystyle g_{\alpha }\in TX} is a set of vector fields that define the coupling of the system to Gaussian white noise, ξ α {\displaystyle \xi ^{\alpha }} . If X {\displaystyle X} is a linear space and g {\displaystyle g} are constants, the system is said to be subject to additive noise; otherwise it is said to be subject to multiplicative noise. For additive noise, the Itô and Stratonovich forms of the SDE generate the same solution, and it is not important which definition is used to solve the SDE. For multiplicative-noise SDEs the Itô and Stratonovich forms of the SDE are different, and care should be used in mapping between them. For a fixed configuration of noise, an SDE has a unique solution differentiable with respect to the initial condition. The nontriviality of the stochastic case shows up when one tries to average various objects of interest over noise configurations. 
In this sense, an SDE is not a uniquely defined entity when noise is multiplicative and when the SDE is understood as the continuous-time limit of a stochastic difference equation. In this case, the SDE must be complemented by what is known as an "interpretation of the SDE", such as the Itô or the Stratonovich interpretation. Nevertheless, when an SDE is viewed as a continuous-time stochastic flow of diffeomorphisms, it is a uniquely defined mathematical object that corresponds to the Stratonovich approach to the continuous-time limit of a stochastic difference equation. In physics, the main method of solution is to find the probability distribution function as a function of time using the equivalent Fokker–Planck equation (FPE). The Fokker–Planck equation is a deterministic partial differential equation. It tells how the probability distribution function evolves in time, similarly to how the Schrödinger equation gives the time evolution of the quantum wave function or the diffusion equation gives the time evolution of chemical concentration. Alternatively, numerical solutions can be obtained by Monte Carlo simulation. Other techniques include path integration, which draws on the analogy between statistical physics and quantum mechanics (for example, the Fokker–Planck equation can be transformed into the Schrödinger equation by rescaling a few variables), or writing down ordinary differential equations for the statistical moments of the probability distribution function. == Use in probability and mathematical finance == The notation used in probability theory (and in many applications of probability theory, for instance in signal processing with the filtering problem and in mathematical finance) is slightly different. It is also the notation used in publications on numerical methods for solving stochastic differential equations. This notation makes the exotic nature of the random function of time ξ α {\displaystyle \xi ^{\alpha }} in the physics formulation more explicit. 
In strict mathematical terms, ξ α {\displaystyle \xi ^{\alpha }} cannot be chosen as an ordinary function, but only as a generalized function. The mathematical formulation treats this complication with less ambiguity than the physics formulation. A typical equation is of the form d X t = μ ( X t , t ) d t + σ ( X t , t ) d B t , {\displaystyle \mathrm {d} X_{t}=\mu (X_{t},t)\,\mathrm {d} t+\sigma (X_{t},t)\,\mathrm {d} B_{t},} where B {\displaystyle B} denotes a Wiener process (standard Brownian motion). This equation should be interpreted as an informal way of expressing the corresponding integral equation X t + s − X t = ∫ t t + s μ ( X u , u ) d u + ∫ t t + s σ ( X u , u ) d B u . {\displaystyle X_{t+s}-X_{t}=\int _{t}^{t+s}\mu (X_{u},u)\mathrm {d} u+\int _{t}^{t+s}\sigma (X_{u},u)\,\mathrm {d} B_{u}.} The equation above characterizes the behavior of the continuous time stochastic process Xt as the sum of an ordinary Lebesgue integral and an Itô integral. A heuristic (but very helpful) interpretation of the stochastic differential equation is that in a small time interval of length δ the stochastic process Xt changes its value by an amount that is normally distributed with expectation μ(Xt, t) δ and variance σ(Xt, t)2 δ and is independent of the past behavior of the process. This is so because the increments of a Wiener process are independent and normally distributed. The function μ is referred to as the drift coefficient, while σ is called the diffusion coefficient. The stochastic process Xt is called a diffusion process, and satisfies the Markov property. The formal interpretation of an SDE is given in terms of what constitutes a solution to the SDE. There are two main definitions of a solution to an SDE, a strong solution and a weak solution. Both require the existence of a process Xt that solves the integral equation version of the SDE. 
The difference between the two lies in the underlying probability space ( Ω , F , P {\displaystyle \Omega ,\,{\mathcal {F}},\,P} ). A weak solution consists of a probability space and a process that satisfies the integral equation, while a strong solution is a process that satisfies the equation and is defined on a given probability space. The Yamada–Watanabe theorem makes a connection between the two. An important example is the equation for geometric Brownian motion d X t = μ X t d t + σ X t d B t . {\displaystyle \mathrm {d} X_{t}=\mu X_{t}\,\mathrm {d} t+\sigma X_{t}\,\mathrm {d} B_{t}.} This is the equation for the dynamics of the price of a stock in the Black–Scholes options pricing model of financial mathematics. Generalizing the geometric Brownian motion, it is also possible to define SDEs admitting strong solutions and whose distribution is a convex combination of densities coming from different geometric Brownian motions or Black–Scholes models, obtaining a single SDE whose solution is distributed as a mixture dynamics of lognormal distributions of different Black–Scholes models. This leads to models that can deal with the volatility smile in financial mathematics. The simpler SDE called arithmetic Brownian motion d X t = μ d t + σ d B t {\displaystyle \mathrm {d} X_{t}=\mu \,\mathrm {d} t+\sigma \,\mathrm {d} B_{t}} was used by Louis Bachelier as the first model for stock prices in 1900, known today as the Bachelier model. There are also more general stochastic differential equations where the coefficients μ and σ depend not only on the present value of the process Xt, but also on previous values of the process and possibly on present or previous values of other processes too. In that case the solution process, X, is not a Markov process, and it is called an Itô process and not a diffusion process. When the coefficients depend only on present and past values of X, the defining equation is called a stochastic delay differential equation. 
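Geometric Brownian motion admits the closed-form strong solution X_t = X_0 exp((μ − σ²/2)t + σB_t), which implies E[X_t] = X_0 e^{μt}. A minimal Monte Carlo sketch of that moment identity (all parameter values below are illustrative):

```python
import math
import random

rng = random.Random(7)
x0, mu, sigma, t = 1.0, 0.05, 0.2, 1.0

# Sample X_t from the closed-form solution of dX = mu X dt + sigma X dB:
# X_t = x0 * exp((mu - sigma^2/2) t + sigma B_t), with B_t ~ N(0, t).
samples = [
    x0 * math.exp((mu - 0.5 * sigma**2) * t
                  + sigma * math.sqrt(t) * rng.gauss(0.0, 1.0))
    for _ in range(100_000)
]

mc_mean = sum(samples) / len(samples)
exact_mean = x0 * math.exp(mu * t)  # E[X_t] = X_0 e^{mu t}
```

The sample mean should agree with the exact mean up to Monte Carlo error of order one over the square root of the sample size.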
A generalization of stochastic differential equations with the Fisk–Stratonovich integral to semimartingales with jumps is given by the SDEs of Marcus type. The Marcus integral is an extension of McShane's stochastic calculus. An innovative application in stochastic finance derives from the usage of the equation for the Ornstein–Uhlenbeck process d R t = μ R t d t + σ t d B t . {\displaystyle \mathrm {d} R_{t}=\mu R_{t}\,\mathrm {d} t+\sigma _{t}\,\mathrm {d} B_{t}.} This is the equation for the dynamics of the return of the price of a stock under the hypothesis that returns display a log-normal distribution. Under this hypothesis, the methodology developed by Marcello Minenna determines a prediction interval able to identify abnormal returns that could hide market abuse phenomena. === SDEs on manifolds === More generally one can extend the theory of stochastic calculus onto differential manifolds and for this purpose one uses the Fisk–Stratonovich integral. Consider a manifold M {\displaystyle M} , some finite-dimensional vector space E {\displaystyle E} , a filtered probability space ( Ω , F , ( F t ) t ∈ R + , P ) {\displaystyle (\Omega ,{\mathcal {F}},({\mathcal {F}}_{t})_{t\in \mathbb {R} _{+}},P)} with ( F t ) t ∈ R + {\displaystyle ({\mathcal {F}}_{t})_{t\in \mathbb {R} _{+}}} satisfying the usual conditions and let M ^ = M ∪ { ∞ } {\displaystyle {\widehat {M}}=M\cup \{\infty \}} be the one-point compactification and x 0 {\displaystyle x_{0}} be F 0 {\displaystyle {\mathcal {F}}_{0}} -measurable. A stochastic differential equation on M {\displaystyle M} written d X = A ( X ) ∘ d Z {\displaystyle \mathrm {d} X=A(X)\circ dZ} is a pair ( A , Z ) {\displaystyle (A,Z)} , such that Z {\displaystyle Z} is a continuous E {\displaystyle E} -valued semimartingale and A : M × E → T M , ( x , e ) ↦ A ( x ) e {\displaystyle A:M\times E\to TM,(x,e)\mapsto A(x)e} is a homomorphism of vector bundles over M {\displaystyle M} . 
For each x ∈ M {\displaystyle x\in M} the map A ( x ) : E → T x M {\displaystyle A(x):E\to T_{x}M} is linear and A ( ⋅ ) e ∈ Γ ( T M ) {\displaystyle A(\cdot )e\in \Gamma (TM)} for each e ∈ E {\displaystyle e\in E} . A solution to the SDE on M {\displaystyle M} with initial condition X 0 = x 0 {\displaystyle X_{0}=x_{0}} is a continuous { F t } {\displaystyle \{{\mathcal {F}}_{t}\}} -adapted M {\displaystyle M} -valued process ( X t ) t < ζ {\displaystyle (X_{t})_{t<\zeta }} up to life time ζ {\displaystyle \zeta } , s.t. for each test function f ∈ C c ∞ ( M ) {\displaystyle f\in C_{c}^{\infty }(M)} the process f ( X ) {\displaystyle f(X)} is a real-valued semimartingale and for each stopping time τ {\displaystyle \tau } with 0 ≤ τ < ζ {\displaystyle 0\leq \tau <\zeta } the equation f ( X τ ) = f ( x 0 ) + ∫ 0 τ ( d f ) X A ( X ) ∘ d Z {\displaystyle f(X_{\tau })=f(x_{0})+\int _{0}^{\tau }(\mathrm {d} f)_{X}A(X)\circ \mathrm {d} Z} holds P {\displaystyle P} -almost surely, where ( d f ) X : T x M → T f ( x ) M {\displaystyle (df)_{X}:T_{x}M\to T_{f(x)}M} is the differential at X {\displaystyle X} . It is a maximal solution if the life time is maximal, i.e., { ζ < ∞ } ⊂ { lim t ↗ ζ X t = ∞ in M ^ } {\displaystyle \{\zeta <\infty \}\subset \left\{\lim \limits _{t\nearrow \zeta }X_{t}=\infty {\text{ in }}{\widehat {M}}\right\}} P {\displaystyle P} -almost surely. It follows from the fact that f ( X ) {\displaystyle f(X)} for each test function f ∈ C c ∞ ( M ) {\displaystyle f\in C_{c}^{\infty }(M)} is a semimartingale, that X {\displaystyle X} is a semimartingale on M {\displaystyle M} . 
Given a maximal solution we can extend the time of X {\displaystyle X} onto full R + {\displaystyle \mathbb {R} _{+}} and after a continuation of f {\displaystyle f} on M ^ {\displaystyle {\widehat {M}}} we get f ( X t ) = f ( X 0 ) + ∫ 0 t ( d f ) X A ( X ) ∘ d Z , t ≥ 0 {\displaystyle f(X_{t})=f(X_{0})+\int _{0}^{t}(\mathrm {d} f)_{X}A(X)\circ \mathrm {d} Z,\quad t\geq 0} up to indistinguishable processes. Although Stratonovich SDEs are the natural choice for SDEs on manifolds, given that they satisfy the chain rule and that their drift and diffusion coefficients behave as vector fields under changes of coordinates, there are cases where Itô calculus on manifolds is preferable. A theory of Itô calculus on manifolds was first developed by Laurent Schwartz through the concept of the Schwartz morphism; see also the related 2-jet interpretation of Itô SDEs on manifolds based on the jet bundle. This interpretation is helpful when trying to optimally approximate the solution of an SDE given on a large space with the solutions of an SDE given on a submanifold of that space, in that a Stratonovich-based projection does not turn out to be optimal. This has been applied to the filtering problem, leading to optimal projection filters. == As rough paths == Usually the solution of an SDE requires a probabilistic setting, as the integral implicit in the solution is a stochastic integral. If it were possible to deal with the differential equation path by path, one would not need to define a stochastic integral and one could develop a theory independently of probability theory. 
This points to considering the SDE d X t ( ω ) = μ ( X t ( ω ) , t ) d t + σ ( X t ( ω ) , t ) d B t ( ω ) {\displaystyle \mathrm {d} X_{t}(\omega )=\mu (X_{t}(\omega ),t)\,\mathrm {d} t+\sigma (X_{t}(\omega ),t)\,\mathrm {d} B_{t}(\omega )} as a single deterministic differential equation for every ω ∈ Ω {\displaystyle \omega \in \Omega } , where Ω {\displaystyle \Omega } is the sample space in the given probability space ( Ω , F , P {\displaystyle \Omega ,\,{\mathcal {F}},\,P} ). However, a direct path-wise interpretation of the SDE is not possible, as the Brownian motion paths have unbounded variation and are nowhere differentiable with probability one, so that there is no naive way to give meaning to terms like d B t ( ω ) {\displaystyle \mathrm {d} B_{t}(\omega )} , precluding also a naive path-wise definition of the stochastic integral as an integral against every single d B t ( ω ) {\displaystyle \mathrm {d} B_{t}(\omega )} . However, motivated by the Wong-Zakai result for limits of solutions of SDEs with regular noise and using rough paths theory, while adding a chosen definition of iterated integrals of Brownian motion, it is possible to define a deterministic rough integral for every single ω ∈ Ω {\displaystyle \omega \in \Omega } that coincides for example with the Ito integral with probability one for a particular choice of the iterated Brownian integral. Other definitions of the iterated integral lead to deterministic pathwise equivalents of different stochastic integrals, like the Stratonovich integral. This has been used for example in financial mathematics to price options without probability. == Existence and uniqueness of solutions == As with deterministic ordinary and partial differential equations, it is important to know whether a given SDE has a solution, and whether or not it is unique. 
The following is a typical existence and uniqueness theorem for Itô SDEs taking values in n-dimensional Euclidean space Rn and driven by an m-dimensional Brownian motion B; the proof may be found in Øksendal (2003, §5.2). Let T > 0, and let μ : R n × [ 0 , T ] → R n ; {\displaystyle \mu :\mathbb {R} ^{n}\times [0,T]\to \mathbb {R} ^{n};} σ : R n × [ 0 , T ] → R n × m ; {\displaystyle \sigma :\mathbb {R} ^{n}\times [0,T]\to \mathbb {R} ^{n\times m};} be measurable functions for which there exist constants C and D such that | μ ( x , t ) | + | σ ( x , t ) | ≤ C ( 1 + | x | ) ; {\displaystyle {\big |}\mu (x,t){\big |}+{\big |}\sigma (x,t){\big |}\leq C{\big (}1+|x|{\big )};} | μ ( x , t ) − μ ( y , t ) | + | σ ( x , t ) − σ ( y , t ) | ≤ D | x − y | ; {\displaystyle {\big |}\mu (x,t)-\mu (y,t){\big |}+{\big |}\sigma (x,t)-\sigma (y,t){\big |}\leq D|x-y|;} for all t ∈ [0, T] and all x and y ∈ Rn, where | σ | 2 = ∑ i , j = 1 n | σ i j | 2 . {\displaystyle |\sigma |^{2}=\sum _{i,j=1}^{n}|\sigma _{ij}|^{2}.} Let Z be a random variable that is independent of the σ-algebra generated by Bs, s ≥ 0, and with finite second moment: E [ | Z | 2 ] < + ∞ . {\displaystyle \mathbb {E} {\big [}|Z|^{2}{\big ]}<+\infty .} Then the stochastic differential equation/initial value problem d X t = μ ( X t , t ) d t + σ ( X t , t ) d B t for t ∈ [ 0 , T ] ; {\displaystyle \mathrm {d} X_{t}=\mu (X_{t},t)\,\mathrm {d} t+\sigma (X_{t},t)\,\mathrm {d} B_{t}{\mbox{ for }}t\in [0,T];} X 0 = Z ; {\displaystyle X_{0}=Z;} has a P-almost surely unique t-continuous solution (t, ω) ↦ Xt(ω) such that X is adapted to the filtration FtZ generated by Z and Bs, s ≤ t, and E [ ∫ 0 T | X t | 2 d t ] < + ∞ . 
{\displaystyle \mathbb {E} \left[\int _{0}^{T}|X_{t}|^{2}\,\mathrm {d} t\right]<+\infty .} === General case: local Lipschitz condition and maximal solutions === The stochastic differential equation above is only a special case of a more general form d Y t = α ( t , Y t ) d X t {\displaystyle \mathrm {d} Y_{t}=\alpha (t,Y_{t})\mathrm {d} X_{t}} where X {\displaystyle X} is a continuous semimartingale in R n {\displaystyle \mathbb {R} ^{n}} , Y {\displaystyle Y} is a continuous semimartingale in R d {\displaystyle \mathbb {R} ^{d}} , and α : R + × U → Lin ⁡ ( R n ; R d ) {\displaystyle \alpha :\mathbb {R} _{+}\times U\to \operatorname {Lin} (\mathbb {R} ^{n};\mathbb {R} ^{d})} is a map for some open nonempty set U ⊂ R d {\displaystyle U\subset \mathbb {R} ^{d}} , where Lin ⁡ ( R n ; R d ) {\displaystyle \operatorname {Lin} (\mathbb {R} ^{n};\mathbb {R} ^{d})} is the space of all linear maps from R n {\displaystyle \mathbb {R} ^{n}} to R d {\displaystyle \mathbb {R} ^{d}} . More generally one can also look at stochastic differential equations on manifolds. Whether the solution of this equation explodes depends on the choice of α {\displaystyle \alpha } . Suppose α {\displaystyle \alpha } satisfies some local Lipschitz condition, i.e., for t ≥ 0 {\displaystyle t\geq 0} and some compact set K ⊂ U {\displaystyle K\subset U} and some constant L ( t , K ) {\displaystyle L(t,K)} the condition | α ( s , y ) − α ( s , x ) | ≤ L ( t , K ) | y − x | , x , y ∈ K , 0 ≤ s ≤ t , {\displaystyle |\alpha (s,y)-\alpha (s,x)|\leq L(t,K)|y-x|,\quad x,y\in K,\;0\leq s\leq t,} where | ⋅ | {\displaystyle |\cdot |} is the Euclidean norm. This condition guarantees the existence and uniqueness of a so-called maximal solution. Suppose α {\displaystyle \alpha } is continuous and satisfies the above local Lipschitz condition, and let F : Ω → U {\displaystyle F:\Omega \to U} be some initial condition, meaning it is a measurable function with respect to the initial σ-algebra. 
Let ζ : Ω → R ¯ + {\displaystyle \zeta :\Omega \to {\overline {\mathbb {R} }}_{+}} be a predictable stopping time with ζ > 0 {\displaystyle \zeta >0} almost surely. A U {\displaystyle U} -valued semimartingale ( Y t ) t < ζ {\displaystyle (Y_{t})_{t<\zeta }} is called a maximal solution of d Y t = α ( t , Y t ) d X t , Y 0 = F {\displaystyle dY_{t}=\alpha (t,Y_{t})dX_{t},\quad Y_{0}=F} with life time ζ {\displaystyle \zeta } if for one (and hence every) announcing sequence ζ n ↗ ζ {\displaystyle \zeta _{n}\nearrow \zeta } the stopped process Y ζ n {\displaystyle Y^{\zeta _{n}}} is a solution to the stopped stochastic differential equation d Y = α ( t , Y ) d X ζ n {\displaystyle \mathrm {d} Y=\alpha (t,Y)\mathrm {d} X^{\zeta _{n}}} and if on the set { ζ < ∞ } {\displaystyle \{\zeta <\infty \}} we have almost surely that Y t → ∂ U {\displaystyle Y_{t}\to \partial U} as t → ζ {\displaystyle t\to \zeta } . ζ {\displaystyle \zeta } is then also called the explosion time. == Some explicitly solvable examples == Explicitly solvable SDEs include: === Linear SDE: General case === d X t = ( a ( t ) X t + c ( t ) ) d t + ( b ( t ) X t + d ( t ) ) d W t {\displaystyle \mathrm {d} X_{t}=(a(t)X_{t}+c(t))\mathrm {d} t+(b(t)X_{t}+d(t))\mathrm {d} W_{t}} X t = Φ t , t 0 ( X t 0 + ∫ t 0 t Φ s , t 0 − 1 ( c ( s ) − b ( s ) d ( s ) ) d s + ∫ t 0 t Φ s , t 0 − 1 d ( s ) d W s ) {\displaystyle X_{t}=\Phi _{t,t_{0}}\left(X_{t_{0}}+\int _{t_{0}}^{t}\Phi _{s,t_{0}}^{-1}(c(s)-b(s)d(s))\mathrm {d} s+\int _{t_{0}}^{t}\Phi _{s,t_{0}}^{-1}d(s)\mathrm {d} W_{s}\right)} where Φ t , t 0 = exp ⁡ ( ∫ t 0 t ( a ( s ) − b 2 ( s ) 2 ) d s + ∫ t 0 t b ( s ) d W s ) {\displaystyle \Phi _{t,t_{0}}=\exp \left(\int _{t_{0}}^{t}\left(a(s)-{\frac {b^{2}(s)}{2}}\right)\mathrm {d} s+\int _{t_{0}}^{t}b(s)\mathrm {d} W_{s}\right)} === Reducible SDEs: Case 1 === d X t = 1 2 f ( X t ) f ′ ( X t ) d t + f ( X t ) d W t {\displaystyle \mathrm {d} X_{t}={\frac {1}{2}}f(X_{t})f'(X_{t})\mathrm {d} t+f(X_{t})\mathrm {d} W_{t}} for a given 
differentiable function f {\displaystyle f} is equivalent to the Stratonovich SDE d X t = f ( X t ) ∘ d W t {\displaystyle \mathrm {d} X_{t}=f(X_{t})\circ \mathrm {d} W_{t}} which has a general solution X t = h − 1 ( W t + h ( X 0 ) ) {\displaystyle X_{t}=h^{-1}(W_{t}+h(X_{0}))} where h ( x ) = ∫ x d s f ( s ) {\displaystyle h(x)=\int ^{x}{\frac {\mathrm {d} s}{f(s)}}} === Reducible SDEs: Case 2 === d X t = ( α f ( X t ) + 1 2 f ( X t ) f ′ ( X t ) ) d t + f ( X t ) d W t {\displaystyle \mathrm {d} X_{t}=\left(\alpha f(X_{t})+{\frac {1}{2}}f(X_{t})f'(X_{t})\right)\mathrm {d} t+f(X_{t})\mathrm {d} W_{t}} for a given differentiable function f {\displaystyle f} is equivalent to the Stratonovich SDE d X t = α f ( X t ) d t + f ( X t ) ∘ d W t {\displaystyle \mathrm {d} X_{t}=\alpha f(X_{t})\mathrm {d} t+f(X_{t})\circ \mathrm {d} W_{t}} which is reducible to d Y t = α d t + d W t {\displaystyle \mathrm {d} Y_{t}=\alpha \mathrm {d} t+\mathrm {d} W_{t}} where Y t = h ( X t ) {\displaystyle Y_{t}=h(X_{t})} and h {\displaystyle h} is defined as before. Its general solution is X t = h − 1 ( α t + W t + h ( X 0 ) ) {\displaystyle X_{t}=h^{-1}(\alpha t+W_{t}+h(X_{0}))} == SDEs and supersymmetry == In supersymmetric theory of SDEs, stochastic dynamics is defined via a stochastic evolution operator acting on the differential forms on the phase space of the model. In this exact formulation of stochastic dynamics, all SDEs possess topological supersymmetry which represents the preservation of the continuity of the phase space by continuous time flow. The spontaneous breakdown of this supersymmetry is the mathematical essence of the ubiquitous dynamical phenomenon known across disciplines as chaos, turbulence, self-organized criticality, etc., and the Goldstone theorem explains the associated long-range dynamical behavior, i.e., the butterfly effect, 1/f and crackling noises, and scale-free statistics of earthquakes, neuroavalanches, solar flares, etc. 
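The Case 1 reduction above can be sanity-checked numerically: for the illustrative choice f(x) = x one has h(x) = ln x, so the claimed solution is X_t = X_0 e^{W_t}. An Euler–Maruyama path of the Itô form, driven by the same Brownian increments, should stay close to this closed form (a sketch, not a proof):

```python
import math
import random

rng = random.Random(1)
x0, t_end, n = 1.0, 1.0, 100_000
dt = t_end / n

x, w = x0, 0.0
for _ in range(n):
    dW = rng.gauss(0.0, 1.0) * math.sqrt(dt)
    # Ito form of Case 1 with f(x) = x, f'(x) = 1:
    # dX = (1/2) X dt + X dW
    x += 0.5 * x * dt + x * dW
    w += dW  # accumulate the same Brownian path W_t

# Claimed solution h^{-1}(W_t + h(X_0)) with h = ln: X_t = X_0 e^{W_t}
closed_form = x0 * math.exp(w)
```

With this step size the strong discretization error is small, so the simulated endpoint should agree with the closed form to within a few percent.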
== See also == Backward stochastic differential equation Langevin dynamics Local volatility Stochastic process Stochastic volatility Stochastic partial differential equations Diffusion process Stochastic difference equation == References == == Further reading == Evans, Lawrence C. (2013). An Introduction to Stochastic Differential Equations. American Mathematical Society. Adomian, George (1983). Stochastic systems. Mathematics in Science and Engineering (169). Orlando, FL: Academic Press Inc. Adomian, George (1986). Nonlinear stochastic operator equations. Orlando, FL: Academic Press Inc. ISBN 978-0-12-044375-8. Adomian, George (1989). Nonlinear stochastic systems theory and applications to physics. Mathematics and its Applications (46). Dordrecht: Kluwer Academic Publishers Group. Calin, Ovidiu (2015). An Informal Introduction to Stochastic Calculus with Applications. Singapore: World Scientific Publishing. p. 315. ISBN 978-981-4678-93-3. Teugels, J.; Sund, B., eds. (2004). Encyclopedia of Actuarial Science. Chichester: Wiley. pp. 523–527. Gardiner, C. W. (2004). Handbook of Stochastic Methods: for Physics, Chemistry and the Natural Sciences. Springer. p. 415. Mikosch, Thomas (1998). Elementary Stochastic Calculus: with Finance in View. Singapore: World Scientific Publishing. p. 212. ISBN 981-02-3543-7. Kadry, Seifedine (2007). "A Solution of Linear Stochastic Differential Equation". Wseas Transactions on Mathematics: 618. ISSN 1109-2769. Higham, Desmond J. (January 2001). "An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations". SIAM Review. 43 (3): 525–546. Bibcode:2001SIAMR..43..525H. CiteSeerX 10.1.1.137.6375. doi:10.1137/S0036144500378302. Higham, Desmond; Kloeden, Peter (2021). An Introduction to the Numerical Simulation of Stochastic Differential Equations. SIAM. ISBN 978-1-611976-42-7.
Wikipedia/Stochastic_differential_equations
A differential equation is a mathematical equation for an unknown function of one or several variables that relates the values of the function itself and its derivatives of various orders. A matrix differential equation contains more than one function stacked into vector form with a matrix relating the functions to their derivatives. For example, a first-order matrix ordinary differential equation is x ˙ ( t ) = A ( t ) x ( t ) {\displaystyle \mathbf {\dot {x}} (t)=\mathbf {A} (t)\mathbf {x} (t)} where x ( t ) {\displaystyle \mathbf {x} (t)} is an n × 1 {\displaystyle n\times 1} vector of functions of an underlying variable t {\displaystyle t} , x ˙ ( t ) {\displaystyle \mathbf {\dot {x}} (t)} is the vector of first derivatives of these functions, and A ( t ) {\displaystyle \mathbf {A} (t)} is an n × n {\displaystyle n\times n} matrix of coefficients. In the case where A {\displaystyle \mathbf {A} } is constant and has n linearly independent eigenvectors, this differential equation has the following general solution, x ( t ) = c 1 e λ 1 t u 1 + c 2 e λ 2 t u 2 + ⋯ + c n e λ n t u n , {\displaystyle \mathbf {x} (t)=c_{1}e^{\lambda _{1}t}\mathbf {u} _{1}+c_{2}e^{\lambda _{2}t}\mathbf {u} _{2}+\cdots +c_{n}e^{\lambda _{n}t}\mathbf {u} _{n}~,} where λ1, λ2, …, λn are the eigenvalues of A; u1, u2, …, un are the respective eigenvectors of A; and c1, c2, …, cn are constants. More generally, if A ( t ) {\displaystyle \mathbf {A} (t)} commutes with its integral ∫ a t A ( s ) d s {\displaystyle \int _{a}^{t}\mathbf {A} (s)ds} then the Magnus expansion reduces to leading order, and the general solution to the differential equation is x ( t ) = e ∫ a t A ( s ) d s c , {\displaystyle \mathbf {x} (t)=e^{\int _{a}^{t}\mathbf {A} (s)ds}\mathbf {c} ~,} where c {\displaystyle \mathbf {c} } is an n × 1 {\displaystyle n\times 1} constant vector. By use of the Cayley–Hamilton theorem and Vandermonde-type matrices, this formal matrix exponential solution may be reduced to a simple form. 
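As a concrete sketch of the eigenvector form of the general solution, consider the illustrative 2 × 2 system x′ = Ax with A = [[0, 1], [−2, −3]], whose eigenpairs (λ = −1 with u = (1, −1), and λ = −2 with u = (1, −2)) and constants for x(0) = (1, 0) work out by hand to c1 = 2, c2 = −1. A finite-difference check confirms the expansion solves the system:

```python
import math

# Illustrative system x' = A x with A = [[0, 1], [-2, -3]]:
# eigenvalues -1 and -2, eigenvectors (1, -1) and (1, -2).
# Matching x(0) = (1, 0) gives c1 = 2, c2 = -1.
def x(t):
    c1, c2 = 2.0, -1.0
    e1, e2 = math.exp(-1.0 * t), math.exp(-2.0 * t)
    return (c1 * e1 * 1.0 + c2 * e2 * 1.0,      # first component
            c1 * e1 * (-1.0) + c2 * e2 * (-2.0))  # second component

# Central-difference check that x'(t) = A x(t) at t = 0.3.
h, t = 1e-6, 0.3
x1, x2 = x(t)
d1 = (x(t + h)[0] - x(t - h)[0]) / (2 * h)  # should equal (A x)_1 = x2
d2 = (x(t + h)[1] - x(t - h)[1]) / (2 * h)  # should equal (A x)_2 = -2 x1 - 3 x2
```

The constants c1, c2 are fixed by the initial condition exactly as described above: write x(0) as a linear combination of the eigenvectors.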
Below, this solution is displayed in terms of Putzer's algorithm. == Stability and steady state of the matrix system == The matrix equation x ˙ ( t ) = A x ( t ) + b {\displaystyle \mathbf {\dot {x}} (t)=\mathbf {Ax} (t)+\mathbf {b} } with n×1 parameter constant vector b is stable if and only if all eigenvalues of the constant matrix A have a negative real part. The steady state x* to which it converges if stable is found by setting x ˙ ∗ ( t ) = 0 , {\displaystyle \mathbf {\dot {x}} ^{*}(t)=\mathbf {0} ~,} thus yielding x ∗ = − A − 1 b , {\displaystyle \mathbf {x} ^{*}=-\mathbf {A} ^{-1}\mathbf {b} ~,} assuming A is invertible. Thus, the original equation can be written in the homogeneous form in terms of deviations from the steady state, x ˙ ( t ) = A [ x ( t ) − x ∗ ] . {\displaystyle \mathbf {\dot {x}} (t)=\mathbf {A} [\mathbf {x} (t)-\mathbf {x} ^{*}]~.} An equivalent way of expressing this is that x* is a particular solution to the inhomogeneous equation, while all solutions are in the form x h + x ∗ , {\displaystyle \mathbf {x} _{h}+\mathbf {x} ^{*}~,} with x h {\displaystyle \mathbf {x} _{h}} a solution to the homogeneous equation (b=0). === Stability of the two-state-variable case === In the n = 2 case (with two state variables), the stability conditions that the two eigenvalues of the transition matrix A each have a negative real part are equivalent to the conditions that the trace of A be negative and its determinant be positive. == Solution in matrix form == The formal solution of x ˙ ( t ) = A [ x ( t ) − x ∗ ] {\displaystyle \mathbf {\dot {x}} (t)=\mathbf {A} [\mathbf {x} (t)-\mathbf {x} ^{*}]} has the matrix exponential form x ( t ) = x ∗ + e A t [ x ( 0 ) − x ∗ ] , {\displaystyle \mathbf {x} (t)=\mathbf {x} ^{*}+e^{\mathbf {A} t}[\mathbf {x} (0)-\mathbf {x} ^{*}]~,} evaluated using any of a multitude of techniques. 
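For n = 2, the trace/determinant stability criterion and the steady state x* = −A⁻¹b can be computed directly; the values of A and b below are illustrative choices, not taken from the text:

```python
# Stability and steady state for x' = A x + b with a 2x2 matrix A.
a11, a12, a21, a22 = -2.0, 1.0, 0.0, -3.0  # illustrative stable matrix
b1, b2 = 4.0, 6.0

trace = a11 + a22
det = a11 * a22 - a12 * a21
# Both eigenvalues have negative real part iff trace < 0 and det > 0.
stable = trace < 0 and det > 0

# Steady state x* = -A^{-1} b, using the 2x2 inverse formula
# A^{-1} = (1/det) [[a22, -a12], [-a21, a11]].
xs1 = -(a22 * b1 - a12 * b2) / det
xs2 = -(-a21 * b1 + a11 * b2) / det
```

Substituting x* back into Ax + b should give the zero vector, confirming it is the equilibrium of the system.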
=== Putzer Algorithm for computing eAt === Given a matrix A with eigenvalues λ 1 , λ 2 , … , λ n {\displaystyle \lambda _{1},\lambda _{2},\dots ,\lambda _{n}} , e A t = ∑ j = 0 n − 1 r j + 1 ( t ) P j {\displaystyle e^{\mathbf {A} t}=\sum _{j=0}^{n-1}r_{j+1}{\left(t\right)}\mathbf {P} _{j}} where P 0 = I {\displaystyle \mathbf {P} _{0}=\mathbf {I} } P j = ∏ k = 1 j ( A − λ k I ) = P j − 1 ( A − λ j I ) , j = 1 , 2 , … , n − 1 {\displaystyle \mathbf {P} _{j}=\prod _{k=1}^{j}\left(\mathbf {A} -\lambda _{k}\mathbf {I} \right)=\mathbf {P} _{j-1}\left(\mathbf {A} -\lambda _{j}\mathbf {I} \right),\qquad j=1,2,\dots ,n-1} r ˙ 1 = λ 1 r 1 {\displaystyle {\dot {r}}_{1}=\lambda _{1}r_{1}} r 1 ( 0 ) = 1 {\displaystyle r_{1}{\left(0\right)}=1} r ˙ j = λ j r j + r j − 1 , j = 2 , 3 , … , n {\displaystyle {\dot {r}}_{j}=\lambda _{j}r_{j}+r_{j-1},\qquad j=2,3,\dots ,n} r j ( 0 ) = 0 , j = 2 , 3 , … , n {\displaystyle r_{j}{\left(0\right)}=0,\qquad j=2,3,\dots ,n} The equations for r i ( t ) {\displaystyle r_{i}(t)} are simple first order inhomogeneous ODEs. Note the algorithm does not require that the matrix A be diagonalizable and bypasses complexities of the Jordan canonical forms normally utilized. == Deconstructed example of a matrix ordinary differential equation == A first-order homogeneous matrix ordinary differential equation in two functions x(t) and y(t), when taken out of matrix form, has the following form: d x d t = a 1 x + b 1 y , d y d t = a 2 x + b 2 y {\displaystyle {\frac {dx}{dt}}=a_{1}x+b_{1}y,\quad {\frac {dy}{dt}}=a_{2}x+b_{2}y} where a 1 {\displaystyle a_{1}} , a 2 {\displaystyle a_{2}} , b 1 {\displaystyle b_{1}} , and b 2 {\displaystyle b_{2}} may be any arbitrary scalars. Higher order matrix ODE's may possess a much more complicated form. == Solving deconstructed matrix ordinary differential equations == The process of solving the above equations and finding the required functions of this particular order and form consists of 3 main steps. 
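For a 2 × 2 matrix with distinct eigenvalues λ1 ≠ λ2, the ODEs for r1 and r2 in Putzer's algorithm solve in closed form: r1(t) = e^{λ1 t} and r2(t) = (e^{λ1 t} − e^{λ2 t})/(λ1 − λ2). This gives a short sketch of the method; the diagonal test matrix below is an illustrative choice whose exponential is known exactly:

```python
import math

def putzer_2x2(A, lam1, lam2, t):
    """e^{A t} via Putzer's algorithm for a 2x2 A with distinct eigenvalues."""
    r1 = math.exp(lam1 * t)  # solves r1' = lam1 r1, r1(0) = 1
    # solves r2' = lam2 r2 + r1, r2(0) = 0:
    r2 = (math.exp(lam1 * t) - math.exp(lam2 * t)) / (lam1 - lam2)
    (a, b), (c, d) = A
    # e^{A t} = r1 P0 + r2 P1 with P0 = I and P1 = A - lam1 I
    return [[r1 + r2 * (a - lam1), r2 * b],
            [r2 * c, r1 + r2 * (d - lam1)]]

# Sanity check on a diagonal matrix, where e^{A t} = diag(e^{t}, e^{-5t}).
E = putzer_2x2([[1.0, 0.0], [0.0, -5.0]], 1.0, -5.0, 0.5)
```

As the text notes, nothing here requires A to be diagonalizable; only the eigenvalues enter, through r1 and r2.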
Brief descriptions of each of these steps are listed below: Finding the eigenvalues Finding the eigenvectors Finding the needed functions The final, third, step in solving these sorts of ordinary differential equations is usually done by means of plugging in the values calculated in the two previous steps into a specialized general form equation, mentioned later in this article. == Solved example of a matrix ODE == To solve a matrix ODE according to the three steps detailed above, using simple matrices in the process, let us find, say, a function x and a function y both in terms of the single independent variable t, in the following homogeneous linear differential equation of the first order, d x d t = 3 x − 4 y , d y d t = 4 x − 7 y . {\displaystyle {\frac {dx}{dt}}=3x-4y,\quad {\frac {dy}{dt}}=4x-7y~.} To solve this particular ordinary differential equation system, at some point in the solution process, we shall need a set of two initial values (corresponding to the two state variables at the starting point). In this case, let us pick x(0) = y(0) = 1. === First step === The first step, already mentioned above, is finding the eigenvalues of A in [ x ′ y ′ ] = [ 3 − 4 4 − 7 ] [ x y ] . {\displaystyle {\begin{bmatrix}x'\\y'\end{bmatrix}}={\begin{bmatrix}3&-4\\4&-7\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}~.} The derivative notation x′ etc. seen in one of the vectors above is known as Lagrange's notation (first introduced by Joseph Louis Lagrange. It is equivalent to the derivative notation dx/dt used in the previous equation, known as Leibniz's notation, honoring the name of Gottfried Leibniz.) Once the coefficients of the two variables have been written in the matrix form A displayed above, one may evaluate the eigenvalues. 
To that end, one finds the determinant of the matrix that is formed when an identity matrix, I n {\displaystyle I_{n}} , multiplied by some constant λ, is subtracted from the above coefficient matrix to yield the characteristic polynomial of it, det ( [ 3 − 4 4 − 7 ] − λ [ 1 0 0 1 ] ) , {\displaystyle \det \left({\begin{bmatrix}3&-4\\4&-7\end{bmatrix}}-\lambda {\begin{bmatrix}1&0\\0&1\end{bmatrix}}\right)~,} and solve for its zeroes. Applying further simplification and basic rules of matrix addition yields det [ 3 − λ − 4 4 − 7 − λ ] . {\displaystyle \det {\begin{bmatrix}3-\lambda &-4\\4&-7-\lambda \end{bmatrix}}~.} Applying the rules of finding the determinant of a single 2×2 matrix, yields the following elementary quadratic equation, det [ 3 − λ − 4 4 − 7 − λ ] = 0 {\displaystyle \det {\begin{bmatrix}3-\lambda &-4\\4&-7-\lambda \end{bmatrix}}=0} − 21 − 3 λ + 7 λ + λ 2 + 16 = 0 {\displaystyle -21-3\lambda +7\lambda +\lambda ^{2}+16=0\,\!} which may be reduced further to get a simpler version of the above, λ 2 + 4 λ − 5 = 0 . {\displaystyle \lambda ^{2}+4\lambda -5=0~.} Now finding the two roots, λ 1 {\displaystyle \lambda _{1}} and λ 2 {\displaystyle \lambda _{2}} of the given quadratic equation by applying the factorization method yields λ 2 + 5 λ − λ − 5 = 0 {\displaystyle \lambda ^{2}+5\lambda -\lambda -5=0} λ ( λ + 5 ) − 1 ( λ + 5 ) = 0 {\displaystyle \lambda (\lambda +5)-1(\lambda +5)=0} ( λ − 1 ) ( λ + 5 ) = 0 {\displaystyle (\lambda -1)(\lambda +5)=0} λ = 1 , − 5 . {\displaystyle \lambda =1,-5~.} The values λ 1 = 1 {\displaystyle \lambda _{1}=1} and λ 2 = − 5 {\displaystyle \lambda _{2}=-5} , calculated above are the required eigenvalues of A. In some cases, say other matrix ODE's, the eigenvalues may be complex, in which case the following step of the solving process, as well as the final form and the solution, may dramatically change. 
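The hand computation of the characteristic polynomial λ² + 4λ − 5 and its roots can be cross-checked numerically; a NumPy sketch (np.poly returns the characteristic-polynomial coefficients of a square matrix, np.linalg.eigvals its roots):

```python
import numpy as np

A = np.array([[3.0, -4.0],
              [4.0, -7.0]])

coeffs = np.poly(A)                          # characteristic polynomial: [1, 4, -5]
eigenvalues = np.sort(np.linalg.eigvals(A))  # roots of that polynomial: [-5, 1]
```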
=== Second step === As mentioned above, this step involves finding the eigenvectors of A from the information originally provided. For each of the eigenvalues calculated, we have an individual eigenvector. For the first eigenvalue, which is λ 1 = 1 {\displaystyle \lambda _{1}=1} , we have [ 3 − 4 4 − 7 ] [ α β ] = 1 [ α β ] . {\displaystyle {\begin{bmatrix}3&-4\\4&-7\end{bmatrix}}{\begin{bmatrix}\alpha \\\beta \end{bmatrix}}=1{\begin{bmatrix}\alpha \\\beta \end{bmatrix}}.} Simplifying the above expression by applying basic matrix multiplication rules yields 3 α − 4 β = α {\displaystyle 3\alpha -4\beta =\alpha } α = 2 β . {\displaystyle \alpha =2\beta ~.} All of these calculations have been done only to obtain the last expression, which in our case is α = 2β. Now taking some arbitrary value, presumably, a small insignificant value, which is much easier to work with, for either α or β (in most cases, it does not really matter), we substitute it into α = 2β. Doing so produces a simple vector, which is the required eigenvector for this particular eigenvalue. In our case, we pick α = 2, which, in turn determines that β = 1 and, using the standard vector notation, our vector looks like v ^ 1 = [ 2 1 ] . {\displaystyle \mathbf {\hat {v}} _{1}={\begin{bmatrix}2\\1\end{bmatrix}}.} Performing the same operation using the second eigenvalue we calculated, which is λ = − 5 {\displaystyle \lambda =-5} , we obtain our second eigenvector. The process of working out this vector is not shown, but the final result is v ^ 2 = [ 1 2 ] . {\displaystyle \mathbf {\hat {v}} _{2}={\begin{bmatrix}1\\2\end{bmatrix}}.} === Third step === This final step finds the required functions that are 'hidden' behind the derivatives given to us originally. There are two functions, because our differential equations deal with two variables. The equation which involves all the pieces of information that we have previously found, has the following form: [ x y ] = A e λ 1 t v ^ 1 + B e λ 2 t v ^ 2 . 
{\displaystyle {\begin{bmatrix}x\\y\end{bmatrix}}=Ae^{\lambda _{1}t}\mathbf {\hat {v}} _{1}+Be^{\lambda _{2}t}\mathbf {\hat {v}} _{2}.} Substituting the values of eigenvalues and eigenvectors yields [ x y ] = A e t [ 2 1 ] + B e − 5 t [ 1 2 ] . {\displaystyle {\begin{bmatrix}x\\y\end{bmatrix}}=Ae^{t}{\begin{bmatrix}2\\1\end{bmatrix}}+Be^{-5t}{\begin{bmatrix}1\\2\end{bmatrix}}.} Applying further simplification, [ x y ] = [ 2 1 1 2 ] [ A e t B e − 5 t ] . {\displaystyle {\begin{bmatrix}x\\y\end{bmatrix}}={\begin{bmatrix}2&1\\1&2\end{bmatrix}}{\begin{bmatrix}Ae^{t}\\Be^{-5t}\end{bmatrix}}.} Simplifying further and writing the equations for functions x and y separately, x = 2 A e t + B e − 5 t {\displaystyle x=2Ae^{t}+Be^{-5t}} y = A e t + 2 B e − 5 t . {\displaystyle y=Ae^{t}+2Be^{-5t}.} The above equations are, in fact, the general functions sought, but they are in their general form (with unspecified values of A and B), whilst we want to actually find their exact forms and solutions. So now we consider the problem’s given initial conditions (the problem including given initial conditions is the so-called initial value problem). Suppose we are given x ( 0 ) = y ( 0 ) = 1 {\displaystyle x(0)=y(0)=1} , which plays the role of starting point for our ordinary differential equation; application of these conditions specifies the constants, A and B. As we see from the x ( 0 ) = y ( 0 ) = 1 {\displaystyle x(0)=y(0)=1} conditions, when t = 0, the left sides of the above equations equal 1. Thus we may construct the following system of linear equations, 1 = 2 A + B {\displaystyle 1=2A+B} 1 = A + 2 B . {\displaystyle 1=A+2B~.} Solving these equations, we find that both constants A and B equal 1/3. 
Therefore, substituting these values into the general form of these two functions specifies their exact forms, x = 2 3 e t + 1 3 e − 5 t {\displaystyle x={\tfrac {2}{3}}e^{t}+{\tfrac {1}{3}}e^{-5t}} y = 1 3 e t + 2 3 e − 5 t , {\displaystyle y={\tfrac {1}{3}}e^{t}+{\tfrac {2}{3}}e^{-5t}~,} the two functions sought. === Using matrix exponentiation === The above problem could have been solved with a direct application of the matrix exponential. That is, we can say that [ x ( t ) y ( t ) ] = exp ⁡ ( [ 3 − 4 4 − 7 ] t ) [ x ( 0 ) y ( 0 ) ] {\displaystyle {\begin{bmatrix}x(t)\\y(t)\end{bmatrix}}=\exp \left({\begin{bmatrix}3&-4\\4&-7\end{bmatrix}}t\right){\begin{bmatrix}x(0)\\y(0)\end{bmatrix}}} Given that the matrix exponential (which can be computed using any suitable tool, such as MATLAB's expm function, or by performing matrix diagonalisation and leveraging the property that the matrix exponential of a diagonal matrix is the same as element-wise exponentiation of its elements) is exp ⁡ ( [ 3 − 4 4 − 7 ] t ) = [ 4 e t / 3 − e − 5 t / 3 2 e − 5 t / 3 − 2 e t / 3 2 e t / 3 − 2 e − 5 t / 3 4 e − 5 t / 3 − e t / 3 ] , {\displaystyle \exp \left({\begin{bmatrix}3&-4\\4&-7\end{bmatrix}}t\right)={\begin{bmatrix}4e^{t}/3-e^{-5t}/3&2e^{-5t}/3-2e^{t}/3\\2e^{t}/3-2e^{-5t}/3&4e^{-5t}/3-e^{t}/3\end{bmatrix}}~,} the final result is [ x ( t ) y ( t ) ] = [ 4 e t / 3 − e − 5 t / 3 2 e − 5 t / 3 − 2 e t / 3 2 e t / 3 − 2 e − 5 t / 3 4 e − 5 t / 3 − e t / 3 ] [ 1 1 ] {\displaystyle {\begin{bmatrix}x(t)\\y(t)\end{bmatrix}}={\begin{bmatrix}4e^{t}/3-e^{-5t}/3&2e^{-5t}/3-2e^{t}/3\\2e^{t}/3-2e^{-5t}/3&4e^{-5t}/3-e^{t}/3\end{bmatrix}}{\begin{bmatrix}1\\1\end{bmatrix}}} [ x ( t ) y ( t ) ] = [ e − 5 t / 3 + 2 e t / 3 e t / 3 + 2 e − 5 t / 3 ] {\displaystyle {\begin{bmatrix}x(t)\\y(t)\end{bmatrix}}={\begin{bmatrix}e^{-5t}/3+2e^{t}/3\\e^{t}/3+2e^{-5t}/3\end{bmatrix}}} This is the same as the eigenvector approach shown before.
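The matrix exponential used here can also be reproduced with Putzer's algorithm from earlier. For n = 2 with distinct eigenvalues, the ODEs for r₁ and r₂ integrate in closed form, r₁(t) = e^{λ₁t} and r₂(t) = (e^{λ₂t} − e^{λ₁t})/(λ₂ − λ₁), which gives the sketch below. This minimal implementation assumes distinct eigenvalues and is illustrative, not a general-purpose expm replacement:

```python
import numpy as np

def putzer_expm_2x2(A, t):
    """e^{At} for a 2x2 matrix A with distinct eigenvalues, via Putzer's formula.

    r1' = l1*r1, r1(0) = 1       ->  r1(t) = exp(l1*t)
    r2' = l2*r2 + r1, r2(0) = 0  ->  r2(t) = (exp(l2*t) - exp(l1*t)) / (l2 - l1)
    e^{At} = r1(t)*P0 + r2(t)*P1  with  P0 = I,  P1 = A - l1*I
    """
    l1, l2 = np.linalg.eigvals(A)      # assumed distinct
    r1 = np.exp(l1 * t)
    r2 = (np.exp(l2 * t) - np.exp(l1 * t)) / (l2 - l1)
    P0 = np.eye(2)
    P1 = A - l1 * P0
    return r1 * P0 + r2 * P1

A = np.array([[3.0, -4.0],
              [4.0, -7.0]])            # eigenvalues 1 and -5
E = putzer_expm_2x2(A, 1.0)            # matches the closed-form e^{At} above at t = 1
```

The formula is symmetric in λ₁ and λ₂, so the (unspecified) ordering returned by `eigvals` does not matter.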
== See also == Nonhomogeneous equations Matrix difference equation Newton's law of cooling Fibonacci sequence Difference equation Wave equation Autonomous system (mathematics) == References ==
Wikipedia/Matrix_differential_equation
In mathematics, a differential equation is an equation that relates one or more unknown functions and their derivatives. In applications, the functions generally represent physical quantities, the derivatives represent their rates of change, and the differential equation defines a relationship between the two. Such relations are common in mathematical models and scientific laws; therefore, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology. The study of differential equations consists mainly of the study of their solutions (the set of functions that satisfy each equation), and of the properties of their solutions. Only the simplest differential equations are solvable by explicit formulas; however, many properties of solutions of a given differential equation may be determined without computing them exactly. Often when a closed-form expression for the solutions is not available, solutions may be approximated numerically using computers, and many numerical methods have been developed to determine solutions with a given degree of accuracy. The theory of dynamical systems analyzes the qualitative aspects of solutions, such as their average behavior over a long time interval. == History == Differential equations came into existence with the invention of calculus by Isaac Newton and Gottfried Leibniz. In Chapter 2 of his 1671 work Methodus fluxionum et Serierum Infinitarum, Newton listed three kinds of differential equations: d y d x = f ( x ) d y d x = f ( x , y ) x 1 ∂ y ∂ x 1 + x 2 ∂ y ∂ x 2 = y {\displaystyle {\begin{aligned}{\frac {dy}{dx}}&=f(x)\\[4pt]{\frac {dy}{dx}}&=f(x,y)\\[4pt]x_{1}{\frac {\partial y}{\partial x_{1}}}&+x_{2}{\frac {\partial y}{\partial x_{2}}}=y\end{aligned}}} In all these cases, y is an unknown function of x (or of x1 and x2), and f is a given function. He solves these examples and others using infinite series and discusses the non-uniqueness of solutions. 
Jacob Bernoulli proposed the Bernoulli differential equation in 1695. This is an ordinary differential equation of the form y ′ + P ( x ) y = Q ( x ) y n {\displaystyle y'+P(x)y=Q(x)y^{n}\,} for which the following year Leibniz obtained solutions by simplifying it. Historically, the problem of a vibrating string such as that of a musical instrument was studied by Jean le Rond d'Alembert, Leonhard Euler, Daniel Bernoulli, and Joseph-Louis Lagrange. In 1746, d’Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation. The Euler–Lagrange equation was developed in the 1750s by Euler and Lagrange in connection with their studies of the tautochrone problem. This is the problem of determining a curve on which a weighted particle will fall to a fixed point in a fixed amount of time, independent of the starting point. Lagrange solved this problem in 1755 and sent the solution to Euler. Both further developed Lagrange's method and applied it to mechanics, which led to the formulation of Lagrangian mechanics. In 1822, Fourier published his work on heat flow in Théorie analytique de la chaleur (The Analytic Theory of Heat), in which he based his reasoning on Newton's law of cooling, namely, that the flow of heat between two adjacent molecules is proportional to the extremely small difference of their temperatures. Contained in this book was Fourier's proposal of his heat equation for conductive diffusion of heat. This partial differential equation is now a common part of mathematical physics curriculum. == Example == In classical mechanics, the motion of a body is described by its position and velocity as the time value varies. Newton's laws allow these variables to be expressed dynamically (given the position, velocity, acceleration and various forces acting on the body) as a differential equation for the unknown position of the body as a function of time. 
In some cases, this differential equation (called an equation of motion) may be solved explicitly. An example of modeling a real-world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance. The ball's acceleration towards the ground is the acceleration due to gravity minus the deceleration due to air resistance. Gravity is considered constant, and air resistance may be modeled as proportional to the ball's velocity. This means that the ball's acceleration, which is a derivative of its velocity, depends on the velocity (and the velocity depends on time). Finding the velocity as a function of time involves solving a differential equation and verifying its validity. == Types == Differential equations can be classified several different ways. Besides describing the properties of the equation itself, these classes of differential equations can help inform the choice of approach to a solution. Commonly used distinctions include whether the equation is ordinary or partial, linear or non-linear, and homogeneous or heterogeneous. This list is far from exhaustive; there are many other properties and subclasses of differential equations which can be very useful in specific contexts. === Ordinary differential equations === An ordinary differential equation (ODE) is an equation containing an unknown function of one real or complex variable x, its derivatives, and some given functions of x. The unknown function is generally represented by a variable (often denoted y), which, therefore, depends on x. Thus x is often called the independent variable of the equation. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable. Linear differential equations are the differential equations that are linear in the unknown function and its derivatives. 
Their theory is well developed, and in many cases one may express their solutions in terms of integrals. Most ODEs that are encountered in physics are linear. Therefore, most special functions may be defined as solutions of linear differential equations (see Holonomic function). As, in general, the solutions of a differential equation cannot be expressed by a closed-form expression, numerical methods are commonly used for solving differential equations on a computer. === Partial differential equations === A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved in closed form, or used to create a relevant computer model. PDEs can be used to describe a wide variety of phenomena in nature such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalized similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. Stochastic partial differential equations generalize partial differential equations for modeling randomness. === Non-linear differential equations === A non-linear differential equation is a differential equation that is not a linear equation in the unknown function and its derivatives (the linearity or non-linearity in the arguments of the function are not considered here). There are very few methods of solving nonlinear differential equations exactly; those that are known typically depend on the equation having particular symmetries. 
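By contrast with the nonlinear case, many linear equations admit closed-form solutions. A SymPy sketch for the linear first-order equation y′ + y = x (an illustrative equation, not one from the text), with the solution substituted back to confirm the residual vanishes:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# A linear first-order ODE: y' + y = x
ode = sp.Eq(y(x).diff(x) + y(x), x)
solution = sp.dsolve(ode, y(x))          # y(x) = C1*exp(-x) + x - 1

# Substitute back: the residual of the ODE must simplify to zero identically.
rhs = solution.rhs
residual = sp.simplify(rhs.diff(x) + rhs - x)
```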
Nonlinear differential equations can exhibit very complicated behaviour over extended time intervals, characteristic of chaos. Even the fundamental questions of existence, uniqueness, and extendability of solutions for nonlinear differential equations, and well-posedness of initial and boundary value problems for nonlinear PDEs, are hard problems, and their resolution in special cases is considered to be a significant advance in the mathematical theory (cf. Navier–Stokes existence and smoothness). However, if the differential equation is a correctly formulated representation of a meaningful physical process, then one expects it to have a solution. Linear differential equations frequently appear as approximations to nonlinear equations. These approximations are only valid under restricted conditions. For example, the harmonic oscillator equation is an approximation to the nonlinear pendulum equation that is valid for small amplitude oscillations. === Equation order and degree === The order of a differential equation is the highest order of derivative of the unknown function that appears in the differential equation. For example, an equation containing only first-order derivatives is a first-order differential equation, an equation containing the second-order derivative is a second-order differential equation, and so on. When a differential equation is written as a polynomial equation in the unknown function and its derivatives, its degree is, depending on the context, the polynomial degree in the highest derivative of the unknown function, or its total degree in the unknown function and its derivatives. In particular, a linear differential equation has degree one for both meanings, but the non-linear differential equation y ′ + y 2 = 0 {\displaystyle y'+y^{2}=0} is of degree one for the first meaning but not for the second one.
Differential equations that describe natural phenomena almost always have only first and second order derivatives in them, but there are some exceptions, such as the thin-film equation, which is a fourth order partial differential equation. === Examples === In the first group of examples u is an unknown function of x, and c and ω are constants that are supposed to be known. Two broad classifications of both ordinary and partial differential equations consist of distinguishing between linear and nonlinear differential equations, and between homogeneous differential equations and heterogeneous ones. Heterogeneous first-order linear constant coefficient ordinary differential equation: d u d x = c u + x 2 . {\displaystyle {\frac {du}{dx}}=cu+x^{2}.} Homogeneous second-order linear ordinary differential equation: d 2 u d x 2 − x d u d x + u = 0. {\displaystyle {\frac {d^{2}u}{dx^{2}}}-x{\frac {du}{dx}}+u=0.} Homogeneous second-order linear constant coefficient ordinary differential equation describing the harmonic oscillator: d 2 u d x 2 + ω 2 u = 0. {\displaystyle {\frac {d^{2}u}{dx^{2}}}+\omega ^{2}u=0.} Heterogeneous first-order nonlinear ordinary differential equation: d u d x = u 2 + 4. {\displaystyle {\frac {du}{dx}}=u^{2}+4.} Second-order nonlinear (due to sine function) ordinary differential equation describing the motion of a pendulum of length L: L d 2 u d x 2 + g sin ⁡ u = 0. {\displaystyle L{\frac {d^{2}u}{dx^{2}}}+g\sin u=0.} In the next group of examples, the unknown function u depends on two variables x and t or x and y. Homogeneous first-order linear partial differential equation: ∂ u ∂ t + t ∂ u ∂ x = 0. {\displaystyle {\frac {\partial u}{\partial t}}+t{\frac {\partial u}{\partial x}}=0.} Homogeneous second-order linear constant coefficient partial differential equation of elliptic type, the Laplace equation: ∂ 2 u ∂ x 2 + ∂ 2 u ∂ y 2 = 0. 
{\displaystyle {\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}=0.} Homogeneous third-order non-linear partial differential equation, the KdV equation: ∂ u ∂ t = 6 u ∂ u ∂ x − ∂ 3 u ∂ x 3 . {\displaystyle {\frac {\partial u}{\partial t}}=6u{\frac {\partial u}{\partial x}}-{\frac {\partial ^{3}u}{\partial x^{3}}}.} == Existence of solutions == Solving differential equations is not like solving algebraic equations. Not only are their solutions often unclear, but whether solutions are unique or exist at all are also notable subjects of interest. For first order initial value problems, the Peano existence theorem gives one set of circumstances in which a solution exists. Given any point ( a , b ) {\displaystyle (a,b)} in the xy-plane, define some rectangular region Z {\displaystyle Z} , such that Z = [ l , m ] × [ n , p ] {\displaystyle Z=[l,m]\times [n,p]} and ( a , b ) {\displaystyle (a,b)} is in the interior of Z {\displaystyle Z} . If we are given a differential equation d y d x = g ( x , y ) {\textstyle {\frac {dy}{dx}}=g(x,y)} and the condition that y = b {\displaystyle y=b} when x = a {\displaystyle x=a} , then there is locally a solution to this problem if g ( x , y ) {\displaystyle g(x,y)} and ∂ g ∂ x {\textstyle {\frac {\partial g}{\partial x}}} are both continuous on Z {\displaystyle Z} . This solution exists on some interval with its center at a {\displaystyle a} . The solution may not be unique. (See Ordinary differential equation for other results.) However, this only helps us with first order initial value problems. 
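The remark above that the solution of a first-order initial value problem may not be unique can be made concrete with a classic textbook example (an illustration added here, not taken from the article): the problem y′ = 3y^(2/3), y(0) = 0 is satisfied both by y = 0 and by y = x³. A SymPy check, restricted to x > 0 so the fractional power is unambiguous:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

y1 = sp.Integer(0)   # the trivial solution y = 0
y2 = x**3            # a second, distinct solution through the same initial point

# Both residuals of y' - 3*y**(2/3) should simplify to zero.
res1 = sp.simplify(sp.diff(y1, x) - 3 * y1**sp.Rational(2, 3))
res2 = sp.simplify(sp.diff(y2, x) - 3 * y2**sp.Rational(2, 3))
```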
Suppose we had a linear initial value problem of the nth order: f n ( x ) d n y d x n + ⋯ + f 1 ( x ) d y d x + f 0 ( x ) y = g ( x ) {\displaystyle f_{n}(x){\frac {d^{n}y}{dx^{n}}}+\cdots +f_{1}(x){\frac {dy}{dx}}+f_{0}(x)y=g(x)} such that y ( x 0 ) = y 0 , y ′ ( x 0 ) = y 0 ′ , y ″ ( x 0 ) = y 0 ″ , … {\displaystyle {\begin{aligned}y(x_{0})&=y_{0},&y'(x_{0})&=y'_{0},&y''(x_{0})&=y''_{0},&\ldots \end{aligned}}} For any nonzero f n ( x ) {\displaystyle f_{n}(x)} , if { f 0 , f 1 , … } {\displaystyle \{f_{0},f_{1},\ldots \}} and g {\displaystyle g} are continuous on some interval containing x 0 {\displaystyle x_{0}} , y {\displaystyle y} exists and is unique. == Related concepts == A delay differential equation (DDE) is an equation for a function of a single variable, usually called time, in which the derivative of the function at a certain time is given in terms of the values of the function at earlier times. Integral equations may be viewed as the analog to differential equations where instead of the equation involving derivatives, the equation contains integrals. An integro-differential equation (IDE) is an equation that combines aspects of a differential equation and an integral equation. A stochastic differential equation (SDE) is an equation in which the unknown quantity is a stochastic process and the equation involves some known stochastic processes, for example, the Wiener process in the case of diffusion equations. A stochastic partial differential equation (SPDE) is an equation that generalizes SDEs to include space-time noise processes, with applications in quantum field theory and statistical mechanics. An ultrametric pseudo-differential equation is an equation which contains p-adic numbers in an ultrametric space. Mathematical models that involve ultrametric pseudo-differential equations use pseudo-differential operators instead of differential operators. 
A differential algebraic equation (DAE) is a differential equation comprising differential and algebraic terms, given in implicit form. == Connection to difference equations == The theory of differential equations is closely related to the theory of difference equations, in which the coordinates assume only discrete values, and the relationship involves values of the unknown function or functions and values at nearby coordinates. Many methods to compute numerical solutions of differential equations or study the properties of differential equations involve the approximation of the solution of a differential equation by the solution of a corresponding difference equation. == Applications == The study of differential equations is a wide field in pure and applied mathematics, physics, and engineering. All of these disciplines are concerned with the properties of differential equations of various types. Pure mathematics focuses on the existence and uniqueness of solutions, while applied mathematics emphasizes the rigorous justification of the methods for approximating solutions. Differential equations play an important role in modeling virtually every physical, technical, or biological process, from celestial motion, to bridge design, to interactions between neurons. Differential equations such as those used to solve real-life problems may not necessarily be directly solvable, i.e. do not have closed form solutions. Instead, solutions can be approximated using numerical methods. Many fundamental laws of physics and chemistry can be formulated as differential equations. In biology and economics, differential equations are used to model the behavior of complex systems. The mathematical theory of differential equations first developed together with the sciences where the equations had originated and where the results found application. However, diverse problems, sometimes originating in quite distinct scientific fields, may give rise to identical differential equations. 
Whenever this happens, mathematical theory behind the equations can be viewed as a unifying principle behind diverse phenomena. As an example, consider the propagation of light and sound in the atmosphere, and of waves on the surface of a pond. All of them may be described by the same second-order partial differential equation, the wave equation, which allows us to think of light and sound as forms of waves, much like familiar waves in the water. Conduction of heat, the theory of which was developed by Joseph Fourier, is governed by another second-order partial differential equation, the heat equation. It turns out that many diffusion processes, while seemingly different, are described by the same equation; the Black–Scholes equation in finance is, for instance, related to the heat equation. The number of differential equations that have received a name, in various scientific areas is a witness of the importance of the topic. See List of named differential equations. == Software == Some CAS software can solve differential equations. These are the commands used in the leading programs: Maple: dsolve Mathematica: DSolve[] Maxima: ode2(equation, y, x) SageMath: desolve() SymPy: sympy.solvers.ode.dsolve(equation) Xcas: desolve(y'=k*y,y) == See also == == References == == Further reading == Abbott, P.; Neill, H. (2003). Teach Yourself Calculus. pp. 266–277. Blanchard, P.; Devaney, R. L.; Hall, G. R. (2006). Differential Equations. Thompson. Boyce, W.; DiPrima, R.; Meade, D. (2017). Elementary Differential Equations and Boundary Value Problems. Wiley. Coddington, E. A.; Levinson, N. (1955). Theory of Ordinary Differential Equations. McGraw-Hill. Ince, E. L. (1956). Ordinary Differential Equations. Dover. Johnson, W. (1913). A Treatise on Ordinary and Partial Differential Equations. John Wiley and Sons. In University of Michigan Historical Math Collection Polyanin, A. D.; Zaitsev, V. F. (2003). Handbook of Exact Solutions for Ordinary Differential Equations (2nd ed.). 
Boca Raton: Chapman & Hall/CRC Press. ISBN 1-58488-297-2. Porter, R. I. (1978). "XIX Differential Equations". Further Elementary Analysis. Teschl, Gerald (2012). Ordinary Differential Equations and Dynamical Systems. Providence: American Mathematical Society. ISBN 978-0-8218-8328-0. Daniel Zwillinger (12 May 2014). Handbook of Differential Equations. Elsevier Science. ISBN 978-1-4832-6396-0. == External links == Media related to Differential equations at Wikimedia Commons Lectures on Differential Equations MIT Open CourseWare Videos Online Notes / Differential Equations Paul Dawkins, Lamar University Differential Equations, S.O.S. Mathematics Introduction to modeling via differential equations Introduction to modeling by means of differential equations, with critical remarks. Mathematical Assistant on Web Symbolic ODE tool, using Maxima Exact Solutions of Ordinary Differential Equations Collection of ODE and DAE models of physical systems Archived 2008-12-19 at the Wayback Machine MATLAB models Notes on Diffy Qs: Differential Equations for Engineers An introductory textbook on differential equations by Jiri Lebl of UIUC Khan Academy Video playlist on differential equations Topics covered in a first year course in differential equations. MathDiscuss Video playlist on differential equations
Wikipedia/Order_of_differential_equation
In mathematics, an implicit equation is a relation of the form R ( x 1 , … , x n ) = 0 , {\displaystyle R(x_{1},\dots ,x_{n})=0,} where R is a function of several variables (often a polynomial). For example, the implicit equation of the unit circle is x 2 + y 2 − 1 = 0. {\displaystyle x^{2}+y^{2}-1=0.} An implicit function is a function that is defined by an implicit equation, that relates one of the variables, considered as the value of the function, with the others considered as the arguments.: 204–206  For example, the equation x 2 + y 2 − 1 = 0 {\displaystyle x^{2}+y^{2}-1=0} of the unit circle defines y as an implicit function of x if −1 ≤ x ≤ 1, and y is restricted to nonnegative values. The implicit function theorem provides conditions under which some kinds of implicit equations define implicit functions, namely those that are obtained by equating to zero multivariable functions that are continuously differentiable. == Examples == === Inverse functions === A common type of implicit function is an inverse function. Not all functions have a unique inverse function. If g is a function of x that has a unique inverse, then the inverse function of g, called g−1, is the unique function giving a solution of the equation y = g ( x ) {\displaystyle y=g(x)} for x in terms of y. This solution can then be written as x = g − 1 ( y ) . {\displaystyle x=g^{-1}(y)\,.} Defining g−1 as the inverse of g is an implicit definition. For some functions g, g−1(y) can be written out explicitly as a closed-form expression — for instance, if g(x) = 2x − 1, then g−1(y) = ⁠1/2⁠(y + 1). However, this is often not possible, or only by introducing a new notation (as in the product log example below). Intuitively, an inverse function is obtained from g by interchanging the roles of the dependent and independent variables. Example: The product log is an implicit function giving the solution for x of the equation y − xex = 0. 
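Even without a closed form, an implicit inverse like the product log can be evaluated numerically. A pure-Python sketch that solves y − x·eˣ = 0 for x by bisection (the function name and bracket [0, 10] are illustrative choices; x·eˣ is increasing there, so bisection applies for y ≥ 0):

```python
import math

def product_log(y, lo=0.0, hi=10.0, tol=1e-12):
    """Solve y - x*exp(x) = 0 for x in [lo, hi] by bisection (assumes y >= 0)."""
    f = lambda x: x * math.exp(x) - y   # increasing on [0, 10]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid       # root lies to the left
        else:
            lo = mid       # root lies to the right
    return 0.5 * (lo + hi)

w = product_log(1.0)       # the omega constant, approximately 0.5671
```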
=== Algebraic functions === An algebraic function is a function that satisfies a polynomial equation whose coefficients are themselves polynomials. For example, an algebraic function in one variable x gives a solution for y of an equation a n ( x ) y n + a n − 1 ( x ) y n − 1 + ⋯ + a 0 ( x ) = 0 , {\displaystyle a_{n}(x)y^{n}+a_{n-1}(x)y^{n-1}+\cdots +a_{0}(x)=0\,,} where the coefficients ai(x) are polynomial functions of x. This algebraic function can be written as the right side of the solution equation y = f(x). Written like this, f is a multi-valued implicit function. Algebraic functions play an important role in mathematical analysis and algebraic geometry. A simple example of an algebraic function is given by the left side of the unit circle equation: x 2 + y 2 − 1 = 0 . {\displaystyle x^{2}+y^{2}-1=0\,.} Solving for y gives an explicit solution: y = ± 1 − x 2 . {\displaystyle y=\pm {\sqrt {1-x^{2}}}\,.} But even without specifying this explicit solution, it is possible to refer to the implicit solution of the unit circle equation as y = f(x), where f is the multi-valued implicit function. While explicit solutions can be found for equations that are quadratic, cubic, and quartic in y, the same is not in general true for quintic and higher degree equations, such as y 5 + 2 y 4 − 7 y 3 + 3 y 2 − 6 y − x = 0 . {\displaystyle y^{5}+2y^{4}-7y^{3}+3y^{2}-6y-x=0\,.} Nevertheless, one can still refer to the implicit solution y = f(x) involving the multi-valued implicit function f. == Caveats == Not every equation R(x, y) = 0 implies a graph of a single-valued function, the circle equation being one prominent example. Another example is an implicit function given by x − C(y) = 0 where C is a cubic polynomial having a "hump" in its graph. Thus, for an implicit function to be a true (single-valued) function it might be necessary to use just part of the graph. 
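Pointwise, the branches of such a multi-valued algebraic function can still be computed: for a fixed x, the real roots in y of the polynomial are the values of the branches at x. A NumPy sketch for the quintic above (the helper name is illustrative):

```python
import numpy as np

def branches(x):
    """Real y-values satisfying y^5 + 2y^4 - 7y^3 + 3y^2 - 6y - x = 0."""
    coeffs = [1, 2, -7, 3, -6, -x]              # polynomial in y, highest degree first
    roots = np.roots(coeffs)
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

ys = branches(0.0)   # values of the implicit function's branches at x = 0
```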
An implicit function can sometimes be successfully defined as a true function only after "zooming in" on some part of the x-axis and "cutting away" some unwanted function branches. Then an equation expressing y as an implicit function of the other variables can be written. The defining equation R(x, y) = 0 can also have other pathologies. For example, the equation x = 0 does not imply a function f(x) giving solutions for y at all; it is a vertical line. In order to avoid a problem like this, various constraints are frequently imposed on the allowable sorts of equations or on the domain. The implicit function theorem provides a uniform way of handling these sorts of pathologies. == Implicit differentiation == In calculus, a method called implicit differentiation makes use of the chain rule to differentiate implicitly defined functions. To differentiate an implicit function y(x), defined by an equation R(x, y) = 0, it is not generally possible to solve it explicitly for y and then differentiate. Instead, one can totally differentiate R(x, y) = 0 with respect to x and y and then solve the resulting linear equation for ⁠dy/dx⁠ to explicitly get the derivative in terms of x and y. Even when it is possible to explicitly solve the original equation, the formula resulting from total differentiation is, in general, much simpler and easier to use. === Examples === ==== Example 1 ==== Consider y + x + 5 = 0 . {\displaystyle y+x+5=0\,.} This equation is easy to solve for y, giving y = − x − 5 , {\displaystyle y=-x-5\,,} where the right side is the explicit form of the function y(x). Differentiation then gives ⁠dy/dx⁠ = −1. Alternatively, one can totally differentiate the original equation: d y d x + d x d x + d d x ( 5 ) = 0 ; d y d x + 1 + 0 = 0 . 
{\displaystyle {\begin{aligned}{\frac {dy}{dx}}+{\frac {dx}{dx}}+{\frac {d}{dx}}(5)&=0\,;\\[6px]{\frac {dy}{dx}}+1+0&=0\,.\end{aligned}}} Solving for ⁠dy/dx⁠ gives d y d x = − 1 , {\displaystyle {\frac {dy}{dx}}=-1\,,} the same answer as obtained previously. ==== Example 2 ==== An example of an implicit function for which implicit differentiation is easier than using explicit differentiation is the function y(x) defined by the equation x 4 + 2 y 2 = 8 . {\displaystyle x^{4}+2y^{2}=8\,.} To differentiate this explicitly with respect to x, one has first to get y ( x ) = ± 8 − x 4 2 , {\displaystyle y(x)=\pm {\sqrt {\frac {8-x^{4}}{2}}}\,,} and then differentiate this function. This creates two derivatives: one for y ≥ 0 and another for y < 0. It is substantially easier to implicitly differentiate the original equation: 4 x 3 + 4 y d y d x = 0 , {\displaystyle 4x^{3}+4y{\frac {dy}{dx}}=0\,,} giving d y d x = − 4 x 3 4 y = − x 3 y . {\displaystyle {\frac {dy}{dx}}={\frac {-4x^{3}}{4y}}=-{\frac {x^{3}}{y}}\,.} ==== Example 3 ==== Often, it is difficult or impossible to solve explicitly for y, and implicit differentiation is the only feasible method of differentiation. An example is the equation y 5 − y = x . {\displaystyle y^{5}-y=x\,.} It is impossible to algebraically express y explicitly as a function of x, and therefore one cannot find ⁠dy/dx⁠ by explicit differentiation. Using the implicit method, ⁠dy/dx⁠ can be obtained by differentiating the equation to obtain 5 y 4 d y d x − d y d x = d x d x , {\displaystyle 5y^{4}{\frac {dy}{dx}}-{\frac {dy}{dx}}={\frac {dx}{dx}}\,,} where ⁠dx/dx⁠ = 1. Factoring out ⁠dy/dx⁠ shows that ( 5 y 4 − 1 ) d y d x = 1 , {\displaystyle \left(5y^{4}-1\right){\frac {dy}{dx}}=1\,,} which yields the result d y d x = 1 5 y 4 − 1 , {\displaystyle {\frac {dy}{dx}}={\frac {1}{5y^{4}-1}}\,,} which is defined for y ≠ ± 1 5 4 and y ≠ ± i 5 4 . 
{\displaystyle y\neq \pm {\frac {1}{\sqrt[{4}]{5}}}\quad {\text{and}}\quad y\neq \pm {\frac {i}{\sqrt[{4}]{5}}}\,.} === General formula for derivative of implicit function === If R(x, y) = 0, the derivative of the implicit function y(x) is given by: §11.5  d y d x = − ∂ R ∂ x ∂ R ∂ y = − R x R y , {\displaystyle {\frac {dy}{dx}}=-{\frac {\,{\frac {\partial R}{\partial x}}\,}{\frac {\partial R}{\partial y}}}=-{\frac {R_{x}}{R_{y}}}\,,} where Rx and Ry indicate the partial derivatives of R with respect to x and y. The above formula comes from using the generalized chain rule to obtain the total derivative — with respect to x — of both sides of R(x, y) = 0: ∂ R ∂ x d x d x + ∂ R ∂ y d y d x = 0 , {\displaystyle {\frac {\partial R}{\partial x}}{\frac {dx}{dx}}+{\frac {\partial R}{\partial y}}{\frac {dy}{dx}}=0\,,} hence ∂ R ∂ x + ∂ R ∂ y d y d x = 0 , {\displaystyle {\frac {\partial R}{\partial x}}+{\frac {\partial R}{\partial y}}{\frac {dy}{dx}}=0\,,} which, when solved for ⁠dy/dx⁠, gives the expression above. == Implicit function theorem == Let R(x, y) be a differentiable function of two variables, and (a, b) be a pair of real numbers such that R(a, b) = 0. If ⁠∂R/∂y⁠ ≠ 0, then R(x, y) = 0 defines an implicit function that is differentiable in some small enough neighbourhood of (a, b); in other words, there is a differentiable function f that is defined and differentiable in some neighbourhood of a, such that R(x, f(x)) = 0 for x in this neighbourhood. The condition ⁠∂R/∂y⁠ ≠ 0 means that (a, b) is a regular point of the implicit curve of implicit equation R(x, y) = 0 where the tangent is not vertical. In a less technical language, implicit functions exist and can be differentiated, if the curve has a non-vertical tangent.: §11.5  == In algebraic geometry == Consider a relation of the form R(x1, …, xn) = 0, where R is a multivariable polynomial. 
The set of the values of the variables that satisfy this relation is called an implicit curve if n = 2 and an implicit surface if n = 3. The implicit equations are the basis of algebraic geometry, whose basic subjects of study are the simultaneous solutions of several implicit equations whose left-hand sides are polynomials. These sets of simultaneous solutions are called affine algebraic sets. == In differential equations == The solutions of differential equations generally appear expressed by an implicit function. == Applications in economics == === Marginal rate of substitution === In economics, when the level set R(x, y) = 0 is an indifference curve for the quantities x and y consumed of two goods, the absolute value of the implicit derivative ⁠dy/dx⁠ is interpreted as the marginal rate of substitution of the two goods: how much more of y one must receive in order to be indifferent to a loss of one unit of x. === Marginal rate of technical substitution === Similarly, sometimes the level set R(L, K) is an isoquant showing various combinations of utilized quantities L of labor and K of physical capital each of which would result in the production of the same given quantity of output of some good. In this case the absolute value of the implicit derivative ⁠dK/dL⁠ is interpreted as the marginal rate of technical substitution between the two factors of production: how much more capital the firm must use to produce the same amount of output with one less unit of labor. === Optimization === Often in economic theory, some function such as a utility function or a profit function is to be maximized with respect to a choice vector x even though the objective function has not been restricted to any specific functional form. The implicit function theorem guarantees that the first-order conditions of the optimization define an implicit function for each element of the optimal vector x* of the choice vector x. 
When profit is being maximized, typically the resulting implicit functions are the labor demand function and the supply functions of various goods. When utility is being maximized, typically the resulting implicit functions are the labor supply function and the demand functions for various goods. Moreover, the influence of the problem's parameters on x* — the partial derivatives of the implicit function — can be expressed as total derivatives of the system of first-order conditions found using total differentiation. == See also == == References == == Further reading == Binmore, K. G. (1983). "Implicit Functions". Calculus. New York: Cambridge University Press. pp. 198–211. ISBN 0-521-28952-1. Rudin, Walter (1976). Principles of Mathematical Analysis. Boston: McGraw-Hill. pp. 223–228. ISBN 0-07-054235-X. Simon, Carl P.; Blume, Lawrence (1994). "Implicit Functions and Their Derivatives". Mathematics for Economists. New York: W. W. Norton. pp. 334–371. ISBN 0-393-95733-0. == External links == Archived at Ghostarchive and the Wayback Machine: "Implicit Differentiation, What's Going on Here?". 3Blue1Brown. Essence of Calculus. May 3, 2017 – via YouTube.
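The general formula dy/dx = −Rx/Ry given earlier lends itself to a direct numerical check. The sketch below (plain Python; the helper name `implicit_dydx` is my own) approximates the partial derivatives with central differences and verifies the formula on the unit circle and on Example 3:

```python
def implicit_dydx(R, x, y, h=1e-6):
    """dy/dx = -R_x / R_y for the curve R(x, y) = 0, with the partial
    derivatives approximated by central differences."""
    Rx = (R(x + h, y) - R(x - h, y)) / (2*h)
    Ry = (R(x, y + h) - R(x, y - h)) / (2*h)
    return -Rx / Ry

# Unit circle: R = x^2 + y^2 - 1, so dy/dx = -x/y.
circle = lambda x, y: x*x + y*y - 1.0
assert abs(implicit_dydx(circle, 0.6, 0.8) - (-0.75)) < 1e-6

# Example 3: R = y^5 - y - x, so dy/dx = 1/(5y^4 - 1).
# At (x, y) = (30, 2): 2^5 - 2 = 30, and 1/(5*16 - 1) = 1/79.
quintic = lambda x, y: y**5 - y - x
assert abs(implicit_dydx(quintic, 30.0, 2.0) - 1/79) < 1e-5
```

Note that the point (x, y) must lie on the curve R(x, y) = 0 for the result to be meaningful, and Ry must be nonzero there, exactly as the implicit function theorem requires.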
Wikipedia/Implicit_and_explicit_functions
In mathematics, the Laplace transform is a powerful integral transform used to convert a function from the time domain to the s-domain. The Laplace transform can be used in some cases to solve linear differential equations with given initial conditions. == Approach == First consider the following property of the Laplace transform: L { f ′ } = s L { f } − f ( 0 ) {\displaystyle {\mathcal {L}}\{f'\}=s{\mathcal {L}}\{f\}-f(0)} L { f ″ } = s 2 L { f } − s f ( 0 ) − f ′ ( 0 ) {\displaystyle {\mathcal {L}}\{f''\}=s^{2}{\mathcal {L}}\{f\}-sf(0)-f'(0)} One can prove by induction that L { f ( n ) } = s n L { f } − ∑ i = 1 n s n − i f ( i − 1 ) ( 0 ) {\displaystyle {\mathcal {L}}\{f^{(n)}\}=s^{n}{\mathcal {L}}\{f\}-\sum _{i=1}^{n}s^{n-i}f^{(i-1)}(0)} Now we consider the following differential equation: ∑ i = 0 n a i f ( i ) ( t ) = ϕ ( t ) {\displaystyle \sum _{i=0}^{n}a_{i}f^{(i)}(t)=\phi (t)} with given initial conditions f ( i ) ( 0 ) = c i {\displaystyle f^{(i)}(0)=c_{i}} Using the linearity of the Laplace transform, the equation can equivalently be rewritten as ∑ i = 0 n a i L { f ( i ) ( t ) } = L { ϕ ( t ) } {\displaystyle \sum _{i=0}^{n}a_{i}{\mathcal {L}}\{f^{(i)}(t)\}={\mathcal {L}}\{\phi (t)\}} obtaining L { f ( t ) } ∑ i = 0 n a i s i − ∑ i = 1 n ∑ j = 1 i a i s i − j f ( j − 1 ) ( 0 ) = L { ϕ ( t ) } {\displaystyle {\mathcal {L}}\{f(t)\}\sum _{i=0}^{n}a_{i}s^{i}-\sum _{i=1}^{n}\sum _{j=1}^{i}a_{i}s^{i-j}f^{(j-1)}(0)={\mathcal {L}}\{\phi (t)\}} Solving the equation for L { f ( t ) } {\displaystyle {\mathcal {L}}\{f(t)\}} and substituting f ( i ) ( 0 ) {\displaystyle f^{(i)}(0)} with c i {\displaystyle c_{i}} , one obtains L { f ( t ) } = L { ϕ ( t ) } + ∑ i = 1 n ∑ j = 1 i a i s i − j c j − 1 ∑ i = 0 n a i s i {\displaystyle {\mathcal {L}}\{f(t)\}={\frac {{\mathcal {L}}\{\phi (t)\}+\sum _{i=1}^{n}\sum _{j=1}^{i}a_{i}s^{i-j}c_{j-1}}{\sum _{i=0}^{n}a_{i}s^{i}}}} The solution for f(t) is obtained by applying the inverse Laplace transform to L { f ( t ) } . 
{\displaystyle {\mathcal {L}}\{f(t)\}.} Note that if the initial conditions are all zero, i.e. f ( i ) ( 0 ) = c i = 0 ∀ i ∈ { 0 , 1 , 2 , … , n − 1 } {\displaystyle f^{(i)}(0)=c_{i}=0\quad \forall i\in \{0,1,2,\ldots ,n-1\}} then the formula simplifies to f ( t ) = L − 1 { L { ϕ ( t ) } ∑ i = 0 n a i s i } {\displaystyle f(t)={\mathcal {L}}^{-1}\left\{{{\mathcal {L}}\{\phi (t)\} \over \sum _{i=0}^{n}a_{i}s^{i}}\right\}} == An example == We want to solve f ″ ( t ) + 4 f ( t ) = sin ⁡ ( 2 t ) {\displaystyle f''(t)+4f(t)=\sin(2t)} with initial conditions f(0) = 0 and f′(0) = 0. We note that ϕ ( t ) = sin ⁡ ( 2 t ) {\displaystyle \phi (t)=\sin(2t)} and we get L { ϕ ( t ) } = 2 s 2 + 4 {\displaystyle {\mathcal {L}}\{\phi (t)\}={\frac {2}{s^{2}+4}}} The equation is then equivalent to s 2 L { f ( t ) } − s f ( 0 ) − f ′ ( 0 ) + 4 L { f ( t ) } = L { ϕ ( t ) } {\displaystyle s^{2}{\mathcal {L}}\{f(t)\}-sf(0)-f'(0)+4{\mathcal {L}}\{f(t)\}={\mathcal {L}}\{\phi (t)\}} We deduce L { f ( t ) } = 2 ( s 2 + 4 ) 2 {\displaystyle {\mathcal {L}}\{f(t)\}={\frac {2}{(s^{2}+4)^{2}}}} Now we apply the inverse Laplace transform to get f ( t ) = 1 8 sin ⁡ ( 2 t ) − t 4 cos ⁡ ( 2 t ) {\displaystyle f(t)={\frac {1}{8}}\sin(2t)-{\frac {t}{4}}\cos(2t)} == Bibliography == A. D. Polyanin, Handbook of Linear Partial Differential Equations for Engineers and Scientists, Chapman & Hall/CRC Press, Boca Raton, 2002. ISBN 1-58488-299-9
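The worked example can be sanity-checked without any Laplace machinery. The sketch below (plain Python; helper names are my own) confirms that the recovered f(t) satisfies the original differential equation and the zero initial conditions, approximating f″ by central differences:

```python
import math

def f(t):
    # Candidate solution recovered by the inverse Laplace transform above.
    return math.sin(2*t)/8 - t*math.cos(2*t)/4

def f2(t, h=1e-4):
    # Central-difference approximation of the second derivative.
    return (f(t + h) - 2*f(t) + f(t - h)) / h**2

# The ODE f'' + 4f = sin(2t) holds at sample points...
for t in (0.3, 1.0, 2.7):
    assert abs(f2(t) + 4*f(t) - math.sin(2*t)) < 1e-6

# ...and so do the initial conditions f(0) = 0 and f'(0) = 0.
assert f(0.0) == 0.0
h = 1e-6
assert abs((f(h) - f(-h)) / (2*h)) < 1e-6
```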
Wikipedia/Laplace_transform_applied_to_differential_equations
An inexact differential equation is a differential equation of the form: M ( x , y ) d x + N ( x , y ) d y = 0 {\displaystyle M(x,y)\,dx+N(x,y)\,dy=0} satisfying the condition ∂ M ∂ y ≠ ∂ N ∂ x {\displaystyle {\frac {\partial M}{\partial y}}\neq {\frac {\partial N}{\partial x}}} Leonhard Euler invented the integrating factor in 1739 to solve these equations. == Solution method == To solve an inexact differential equation, it may be transformed into an exact differential equation by finding an integrating factor μ {\displaystyle \mu } . Multiplying the original equation by the integrating factor gives: μ M d x + μ N d y = 0 {\displaystyle \mu M\,dx+\mu N\,dy=0} . For this equation to be exact, μ {\displaystyle \mu } must satisfy the condition: ∂ μ M ∂ y = ∂ μ N ∂ x {\textstyle {\frac {\partial \mu M}{\partial y}}={\frac {\partial \mu N}{\partial x}}} . Expanding this condition gives: M μ y − N μ x + ( M y − N x ) μ = 0. {\displaystyle M\mu _{y}-N\mu _{x}+(M_{y}-N_{x})\mu =0.} Since this is a partial differential equation, it is generally difficult to solve. However, in some cases where μ {\displaystyle \mu } depends only on x {\displaystyle x} or y {\displaystyle y} , the problem reduces to a separable first-order linear differential equation. The solutions for such cases are: μ ( y ) = e ∫ N x − M y M d y {\displaystyle \mu (y)=e^{\int {{\frac {N_{x}-M_{y}}{M}}\,dy}}} or μ ( x ) = e ∫ M y − N x N d x . {\displaystyle \mu (x)=e^{\int {{\frac {M_{y}-N_{x}}{N}}\,dx}}.} == See also == Inexact differential Exact differential equation == References == == Further reading == Tenenbaum, Morris; Pollard, Harry (1963). "Recognizable Exact Differential Equations". Ordinary Differential Equations: An Elementary Textbook for Students of Mathematics, Engineering, and the Sciences. New York: Dover. pp. 80–91. ISBN 0-486-64940-7. 
== External links == A solution for an inexact differential equation from Stack Exchange a guide for non-partial inexact differential equations at SOS math
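The solution method can be illustrated concretely. In the sketch below (my own example equation y dx − x dy = 0, not taken from the article), a numerical check shows that the equation is inexact, and that multiplying by the integrating factor μ(x) = 1/x², obtained from the second formula since (My − Nx)/N = −2/x depends on x alone, makes it exact:

```python
def dy(F, x, y, h=1e-6):
    # Central-difference partial derivative with respect to y.
    return (F(x, y + h) - F(x, y - h)) / (2*h)

def dx(F, x, y, h=1e-6):
    # Central-difference partial derivative with respect to x.
    return (F(x + h, y) - F(x - h, y)) / (2*h)

# Inexact equation y dx - x dy = 0: M = y, N = -x.
M = lambda x, y: y
N = lambda x, y: -x
assert abs(dy(M, 2.0, 3.0) - dx(N, 2.0, 3.0)) > 1.0   # M_y = 1, N_x = -1

# (M_y - N_x)/N = 2/(-x) depends on x alone, so mu(x) = exp(-2 ln x) = 1/x^2.
mu = lambda x: 1.0 / x**2
muM = lambda x, y: mu(x) * M(x, y)
muN = lambda x, y: mu(x) * N(x, y)
# After multiplying through, the exactness condition (muM)_y = (muN)_x holds.
assert abs(dy(muM, 2.0, 3.0) - dx(muN, 2.0, 3.0)) < 1e-8
```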
Wikipedia/Inexact_differential_equation
In mathematics, the adjective trivial is often used to refer to a claim or a case which can be readily obtained from context, or a particularly simple object possessing a given structure (e.g., group, topological space). The noun triviality usually refers to a simple technical aspect of some proof or definition. The origin of the term in mathematical language comes from the medieval trivium curriculum, which was distinguished from the more difficult quadrivium curriculum. The opposite of trivial is nontrivial, which is commonly used to indicate that an example or a solution is not simple, or that a statement or a theorem is not easy to prove. Triviality does not have a rigorous definition in mathematics. It is subjective, and often determined in a given situation by the knowledge and experience of those considering the case. == Trivial and nontrivial solutions == In mathematics, the term "trivial" is often used to refer to objects (e.g., groups, topological spaces) with a very simple structure. These include, among others: Empty set: the set containing no members Trivial group: the mathematical group containing only the identity element Trivial ring: a ring defined on a singleton set "Trivial" can also be used to describe solutions to an equation that have a very simple structure, but for the sake of completeness cannot be omitted. These solutions are called the trivial solutions. For example, consider the differential equation y ′ = y {\displaystyle y'=y} where y = y ( x ) {\displaystyle y=y(x)} is a function whose derivative is y ′ {\displaystyle y'} . The trivial solution is the zero function y ( x ) = 0 {\displaystyle y(x)=0} while a nontrivial solution is the exponential function y ( x ) = e x . 
{\displaystyle y(x)=e^{x}.} The differential equation f ″ ( x ) = − λ f ( x ) {\displaystyle f''(x)=-\lambda f(x)} with boundary conditions f ( 0 ) = f ( L ) = 0 {\displaystyle f(0)=f(L)=0} is important in mathematics and physics, as it could be used to describe a particle in a box in quantum mechanics, or a standing wave on a string. It always includes the solution f ( x ) = 0 {\displaystyle f(x)=0} , which is considered obvious and hence is called the "trivial" solution. In some cases, there may be other solutions (sinusoids), which are called "nontrivial" solutions. Similarly, mathematicians often describe Fermat's last theorem as asserting that there are no nontrivial integer solutions to the equation a n + b n = c n {\displaystyle a^{n}+b^{n}=c^{n}} , where n is greater than 2. Clearly, there are some solutions to the equation. For example, a = b = c = 0 {\displaystyle a=b=c=0} is a solution for any n, but such solutions are obvious and obtainable with little effort, and hence "trivial". == In mathematical reasoning == Trivial may also refer to any easy case of a proof, which for the sake of completeness cannot be ignored. For instance, proofs by mathematical induction have two parts: the "base case" which shows that the theorem is true for a particular initial value (such as n = 0 or n = 1), and the inductive step which shows that if the theorem is true for a certain value of n, then it is also true for the value n + 1. The base case is often trivial and is identified as such, although there are situations where the base case is difficult but the inductive step is trivial. Similarly, one might want to prove that some property is possessed by all the members of a certain set. The main part of the proof will consider the case of a nonempty set, and examine the members in detail; in the case where the set is empty, the property is trivially possessed by all the members of the empty set, since there are none (see vacuous truth for more). 
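The boundary-value problem f″(x) = −λf(x), f(0) = f(L) = 0 discussed above can be explored numerically. In this sketch (L = 1 and the helper names are my own choices for illustration), the trivial zero solution passes for any λ, while sinusoids pass only for the eigenvalues λ = (nπ/L)²:

```python
import math

L = 1.0   # interval length (assumed for illustration)

def check_solution(f, lam, h=1e-4):
    """Return True if f''(x) = -lam*f(x) and f(0) = f(L) = 0,
    sampled at a few interior points with central differences."""
    if abs(f(0.0)) > 1e-9 or abs(f(L)) > 1e-9:
        return False
    for x in (0.2, 0.5, 0.8):
        f2 = (f(x + h) - 2*f(x) + f(x - h)) / h**2
        if abs(f2 + lam * f(x)) > 1e-4:
            return False
    return True

# The trivial solution f = 0 works for every lambda.
assert check_solution(lambda x: 0.0, 17.3)

# Nontrivial solutions exist only for special lambda = (n*pi/L)^2.
n = 2
lam = (n * math.pi / L) ** 2
assert check_solution(lambda x: math.sin(n * math.pi * x / L), lam)

# The same sinusoid with a non-eigenvalue lambda fails the equation.
assert not check_solution(lambda x: math.sin(n * math.pi * x / L), 5.0)
```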
The judgement of whether a situation under consideration is trivial or not depends on who considers it: the situation may be obviously true for someone who has sufficient knowledge or experience of it, while to someone who has never encountered it, it may even be hard to understand, and so not trivial at all. And there can be an argument about how quickly and easily a problem should be recognized for the problem to be treated as trivial. The following examples show the subjectivity and ambiguity of the triviality judgement. Triviality also depends on context. A proof in functional analysis would probably, given a number, trivially assume the existence of a larger number. However, when proving basic results about the natural numbers in elementary number theory, the proof may very well hinge on the remark that any natural number has a successor – a statement which must itself be proved or taken as an axiom, and so is not trivial (for more, see Peano's axioms). === Trivial proofs === In some texts, a trivial proof refers to a statement involving a material implication P→Q, where the consequent Q is always true. Here, the proof follows immediately by virtue of the definition of material implication: the implication is true regardless of the truth value of the antecedent P whenever the consequent is fixed as true. A related concept is a vacuous truth, where the antecedent P in a material implication P→Q is false. In this case, the implication is always true regardless of the truth value of the consequent Q – again by virtue of the definition of material implication. == Humor == A common joke in the mathematical community is to say that "trivial" is synonymous with "proved"—that is, any theorem can be considered "trivial" once it is known to be proved as true. Consider two mathematicians discussing a theorem: the first mathematician says that the theorem is "trivial". In response to the other's request for an explanation, he then proceeds with twenty minutes of exposition. 
At the end of the explanation, the second mathematician agrees that the theorem is trivial. But can we say that this theorem is trivial even if it takes a lot of time and effort to prove it? When a mathematician says that a theorem is trivial, but he is unable to prove it by himself at the moment that he pronounces it as trivial, is the theorem trivial? Often, as a joke, a problem is referred to as "intuitively obvious". For example, someone experienced in calculus would consider the following statement trivial: ∫ 0 1 x 2 d x = 1 3 . {\displaystyle \int _{0}^{1}x^{2}\,dx={\frac {1}{3}}.} However, to someone with no knowledge of integral calculus, this is not obvious, so it is not trivial. == Examples == In number theory, it is often important to find factors of an integer number N. Any number N has four obvious factors: ±1 and ±N. These are called "trivial factors". Any other factor, if it exists, would be called "nontrivial". The homogeneous matrix equation A x = 0 {\displaystyle A\mathbf {x} =\mathbf {0} } , where A {\displaystyle A} is a fixed matrix, x {\displaystyle \mathbf {x} } is an unknown vector, and 0 {\displaystyle \mathbf {0} } is the zero vector, has an obvious solution x = 0 {\displaystyle \mathbf {x} =\mathbf {0} } . This is called the "trivial solution". Any other solutions, with x ≠ 0 {\displaystyle \mathbf {x} \neq \mathbf {0} } , are called "nontrivial". In group theory, there is a very simple group with just one element in it; this is often called the "trivial group". All other groups, which are more complicated, are called "nontrivial". In graph theory, the trivial graph is a graph which has only 1 vertex and no edge. Database theory has a concept called functional dependency, written X → Y {\displaystyle X\to Y} . The dependence X → Y {\displaystyle X\to Y} is true if Y is a subset of X, so this type of dependence is called "trivial". All other dependences, which are less obvious, are called "nontrivial". 
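The homogeneous system Ax = 0 described above is easy to demonstrate concretely. A minimal sketch (plain Python; the matrices are hand-picked for illustration):

```python
def matvec(A, x):
    # Multiply a matrix (list of rows) by a vector.
    return [sum(a*b for a, b in zip(row, x)) for row in A]

# A singular matrix: the second row is twice the first.
A = [[1.0, 2.0],
     [2.0, 4.0]]

# The trivial solution x = 0 always works.
assert matvec(A, [0.0, 0.0]) == [0.0, 0.0]

# Because A is singular, a nontrivial solution also exists: x = (2, -1).
assert matvec(A, [2.0, -1.0]) == [0.0, 0.0]

# For an invertible matrix, only the trivial solution remains; e.g. the
# identity matrix sends any nonzero x to itself, not to 0.
I = [[1.0, 0.0], [0.0, 1.0]]
assert matvec(I, [2.0, -1.0]) != [0.0, 0.0]
```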
It can be shown that Riemann's zeta function has zeros at the negative even numbers −2, −4, … Though the proof is comparatively easy, this result would still not normally be called trivial; however, it is in this case, because the function's other zeros are generally unknown and are the subject of important applications and open questions (such as the Riemann hypothesis). Accordingly, the negative even numbers are called the trivial zeros of the function, while any other zeros are considered to be non-trivial. == See also == Degeneracy Initial and terminal objects List of mathematical jargon Pathological Trivialism Trivial measure Trivial representation Trivial topology == References == == External links == Trivial entry at MathWorld
Wikipedia/Trivial_solution
In mathematics, contact geometry is the study of a geometric structure on smooth manifolds given by a hyperplane distribution in the tangent bundle satisfying a condition called 'complete non-integrability'. Equivalently, such a distribution may be given (at least locally) as the kernel of a differential one-form, and the non-integrability condition translates into a maximal non-degeneracy condition on the form. These conditions are opposite to two equivalent conditions for 'complete integrability' of a hyperplane distribution, i.e. that it be tangent to a codimension one foliation on the manifold, whose equivalence is the content of the Frobenius theorem. Contact geometry is in many ways an odd-dimensional counterpart of symplectic geometry, a structure on certain even-dimensional manifolds. Both contact and symplectic geometry are motivated by the mathematical formalism of classical mechanics, where one can consider either the even-dimensional phase space of a mechanical system or constant-energy hypersurface, which, being codimension one, has odd dimension. == Applications == Like symplectic geometry, contact geometry has broad applications in physics, e.g. geometrical optics, classical mechanics, thermodynamics, geometric quantization, integrable systems and to control theory. Contact geometry also has applications to low-dimensional topology; for example, it has been used by Kronheimer and Mrowka to prove the property P conjecture, by Michael Hutchings to define an invariant of smooth three-manifolds, and by Lenhard Ng to define invariants of knots. It was also used by Yakov Eliashberg to derive a topological characterization of Stein manifolds of dimension at least six. Contact geometry has been used to describe the visual cortex. == Contact forms and structures == A contact structure on an odd dimensional manifold is a smoothly varying family of codimension one subspaces of each tangent space of the manifold, satisfying a non-integrability condition. 
The family may be described as a section of a bundle as follows: Given an n-dimensional smooth manifold M, and a point p ∈ M, a contact element of M with contact point p is an (n − 1)-dimensional linear subspace of the tangent space to M at p. A contact element can be given by the kernel of a linear function on the tangent space to M at p. However, if a subspace is given by the kernel of a linear function ω, then it will also be given by the zeros of λω where λ ≠ 0 is any nonzero real number. Thus, the kernels of { λω : λ ≠ 0 } all give the same contact element. It follows that the space of all contact elements of M can be identified with a quotient of the cotangent bundle T*M (with the zero section 0 M {\displaystyle 0_{M}} removed), namely: PT ∗ M = ( T ∗ M − 0 M ) / ∼ where, for ω i ∈ T p ∗ M , ω 1 ∼ ω 2 ⟺ ∃ λ ≠ 0 : ω 1 = λ ω 2 . {\displaystyle {\text{PT}}^{*}M=({\text{T}}^{*}M-{0_{M}})/{\sim }\ {\text{ where, for }}\omega _{i}\in {\text{T}}_{p}^{*}M,\ \ \omega _{1}\sim \omega _{2}\ \iff \ \exists \ \lambda \neq 0\ :\ \omega _{1}=\lambda \omega _{2}.} A contact structure on an odd dimensional manifold M, of dimension 2k + 1, is a smooth distribution of contact elements, denoted by ξ, which is generic at each point. The genericity condition is that ξ is non-integrable. Assume that we have a smooth distribution of contact elements, ξ, given locally by a differential 1-form α; i.e. a smooth section of the cotangent bundle. The non-integrability condition can be given explicitly as: α ∧ ( d α ) k ≠ 0 where ( d α ) k = d α ∧ … ∧ d α ⏟ k -times . {\displaystyle \alpha \wedge ({\text{d}}\alpha )^{k}\neq 0\ {\text{where}}\ ({\text{d}}\alpha )^{k}=\underbrace {{\text{d}}\alpha \wedge \ldots \wedge {\text{d}}\alpha } _{k{\text{-times}}}.} Notice that if ξ is given by the differential 1-form α, then the same distribution is given locally by β = ƒ⋅α, where ƒ is a non-zero smooth function. If ξ is co-orientable then α is defined globally. 
=== Properties === It follows from the Frobenius theorem on integrability that the contact field ξ is completely nonintegrable. This property of the contact field is roughly the opposite of being a field formed from the tangent planes of a family of nonoverlapping hypersurfaces in M. In particular, you cannot find a hypersurface in M whose tangent spaces agree with ξ, even locally. In fact, there is no submanifold of dimension greater than k whose tangent spaces lie in ξ. === Relation with symplectic structures === A consequence of the definition is that the restriction of the 2-form ω = dα to a hyperplane in ξ is a nondegenerate 2-form. This construction provides any contact manifold M with a natural symplectic bundle of rank one smaller than the dimension of M. Note that a symplectic vector space is always even-dimensional, while contact manifolds need to be odd-dimensional. The cotangent bundle T*N of any n-dimensional manifold N is itself a manifold (of dimension 2n) and supports naturally an exact symplectic structure ω = dλ. (This 1-form λ is sometimes called the Liouville form). There are several ways to construct an associated contact manifold, some of dimension 2n − 1, some of dimension 2n + 1. Projectivization Let M be the projectivization of the cotangent bundle of N: thus M is a fiber bundle over N whose fiber at a point x is the space of lines in T*N, or, equivalently, the space of hyperplanes in TN. The 1-form λ does not descend to a genuine 1-form on M. However, it is homogeneous of degree 1, and so it defines a 1-form with values in the line bundle O(1), which is the dual of the fibrewise tautological line bundle of M. The kernel of this 1-form defines a contact distribution. Energy surfaces Suppose that H is a smooth function on T*N, and that E is a regular value for H, so that the level set L = { ( q , p ) ∈ T ∗ N ∣ H ( q , p ) = E } {\displaystyle L=\{(q,p)\in T^{*}N\mid H(q,p)=E\}} is a smooth submanifold of codimension 1. 
A vector field Y is called an Euler (or Liouville) vector field if it is transverse to L and conformally symplectic, meaning that the Lie derivative of dλ with respect to Y is a multiple of dλ in a neighborhood of L. Then the restriction of i Y d λ {\displaystyle i_{Y}\,d\lambda } to L is a contact form on L. This construction originates in Hamiltonian mechanics, where H is a Hamiltonian of a mechanical system with the configuration space N and the phase space T*N, and E is the value of the energy. The unit cotangent bundle Choose a Riemannian metric on the manifold N and let H be the associated kinetic energy. Then the level set H = 1/2 is the unit cotangent bundle of N, a smooth manifold of dimension 2n − 1 fibering over N with fibers being spheres. Then the Liouville form restricted to the unit cotangent bundle is a contact structure. This corresponds to a special case of the second construction, where the flow of the Euler vector field Y corresponds to linear scaling of momenta ps, leaving the qs fixed. The vector field R, defined by the equalities λ(R) = 1 and dλ(R, A) = 0 for all vector fields A, is called the Reeb vector field, and it generates the geodesic flow of the Riemannian metric. More precisely, using the Riemannian metric, one can identify each point of the cotangent bundle of N with a point of the tangent bundle of N, and then the value of R at that point of the (unit) cotangent bundle is the corresponding (unit) vector parallel to N. First jet bundle On the other hand, one can build a contact manifold M of dimension 2n + 1 by considering the first jet bundle of the real valued functions on N. This bundle is isomorphic to T*N×R using the exterior derivative of a function. With coordinates (x, t), M has a contact structure α = dt + λ. Conversely, given any contact manifold M, the product M×R has a natural structure of a symplectic manifold. 
If α is a contact form on M, then ω = d(e^tα) is a symplectic form on M×R, where t denotes the variable in the R-direction. This new manifold is called the symplectization (sometimes symplectification in the literature) of the contact manifold M. === Examples === As a prime example, consider R3, endowed with coordinates (x,y,z) and the one-form dz − y dx. The contact plane ξ at a point (x,y,z) is spanned by the vectors X1 = ∂y and X2 = ∂x + y ∂z. By replacing the single variables x and y with the multivariables x1, ..., xn, y1, ..., yn, one can generalize this example to any R2n+1. By a theorem of Darboux, every contact structure on a manifold looks locally like this particular contact structure on the (2n + 1)-dimensional vector space. The Sasakian manifolds comprise an important class of contact manifolds. Every connected compact orientable three-dimensional manifold admits a contact structure. This result generalises to any compact almost-contact manifold. == Legendrian submanifolds and knots == The most interesting subspaces of a contact manifold are its Legendrian submanifolds. The non-integrability of the contact hyperplane field on a (2n + 1)-dimensional manifold means that no 2n-dimensional submanifold has it as its tangent bundle, even locally. However, it is in general possible to find n-dimensional (embedded or immersed) submanifolds whose tangent spaces lie inside the contact field: these are called Legendrian submanifolds. Legendrian submanifolds are analogous to Lagrangian submanifolds of symplectic manifolds. There is a precise relation: the lift of a Legendrian submanifold in a symplectization of a contact manifold is a Lagrangian submanifold. The simplest example of Legendrian submanifolds are Legendrian knots inside a contact three-manifold. Inequivalent Legendrian knots may be equivalent as smooth knots; that is, there are knots which are smoothly isotopic where the isotopy cannot be chosen to be a path of Legendrian knots. 
Legendrian submanifolds are very rigid objects; typically there are infinitely many Legendrian isotopy classes of embeddings which are all smoothly isotopic. Symplectic field theory provides invariants of Legendrian submanifolds called relative contact homology that can sometimes distinguish distinct Legendrian submanifolds that are topologically identical (i.e. smoothly isotopic). == Reeb vector field == If α is a contact form for a given contact structure, the Reeb vector field R can be defined as the unique element of the (one-dimensional) kernel of dα such that α(R) = 1. If a contact manifold arises as a constant-energy hypersurface inside a symplectic manifold, then the Reeb vector field is the restriction to the submanifold of the Hamiltonian vector field associated to the energy function. (The restriction yields a vector field on the contact hypersurface because the Hamiltonian vector field preserves energy levels.) The dynamics of the Reeb field can be used to study the structure of the contact manifold or even the underlying manifold using techniques of Floer homology such as symplectic field theory and, in three dimensions, embedded contact homology. Different contact forms whose kernels give the same contact structure will yield different Reeb vector fields, whose dynamics are in general very different. The various flavors of contact homology depend a priori on the choice of a contact form, and construct algebraic structures from the closed trajectories of their Reeb vector fields; however, these algebraic structures turn out to be independent of the contact form, i.e. they are invariants of the underlying contact structure, so that in the end, the contact form may be seen as an auxiliary choice. In the case of embedded contact homology, one obtains an invariant of the underlying three-manifold, i.e. the embedded contact homology is independent of contact structure; this allows one to obtain results that hold for any Reeb vector field on the manifold. 
The Reeb field is named after Georges Reeb. == Some historical remarks == The roots of contact geometry appear in work of Christiaan Huygens, Isaac Barrow, and Isaac Newton. The theory of contact transformations (i.e. transformations preserving a contact structure) was developed by Sophus Lie, with the dual aims of studying differential equations (e.g. the Legendre transformation or canonical transformation) and describing the 'change of space element', familiar from projective duality. The first known use of the term "contact manifold" appears in a paper of 1958. == See also == Floer homology, some flavors of which give invariants of contact manifolds and their Legendrian submanifolds Sub-Riemannian geometry == References == === Introductions to contact geometry === Etnyre, J. (2003). "Introductory lectures on contact geometry". Proc. Sympos. Pure Math. Proceedings of Symposia in Pure Mathematics. 71: 81–107. arXiv:math/0111118. doi:10.1090/pspum/071/2024631. ISBN 9780821835074. S2CID 6174175. Geiges, H. (2003). "Contact Geometry". arXiv:math/0307242. Geiges, Hansjörg (2008). An Introduction to Contact Topology. Cambridge University Press. ISBN 978-1-139-46795-7. Aebischer (1994). Symplectic geometry. Birkhäuser. ISBN 3-7643-5064-4. Arnold, V.I. (1989). Mathematical Methods of Classical Mechanics. Springer-Verlag. ISBN 0-387-96890-3. === Applications to differential equations === Arnold, V.I. (1988). Geometrical Methods In The Theory Of Ordinary Differential Equations. Springer-Verlag. ISBN 0-387-96649-8. === Contact three-manifolds and Legendrian knots === Thurston, William (1997). Three-Dimensional Geometry and Topology. Princeton University Press. ISBN 0-691-08304-5. === Information on the history of contact geometry === Lutz, R. (1988). "Quelques remarques historiques et prospectives sur la géométrie de contact". Conference on Differential Geometry and Topology (Sardinia, 1988). Rend. Fac. Sci. Univ. Cagliari. Vol. 58 suppl. pp. 361–393. MR 1122864. Geiges, H. 
(2001). "A Brief History of Contact Geometry and Topology". Expo. Math. 19: 25–53. doi:10.1016/S0723-0869(01)80014-1. Arnold, Vladimir I. (2012) [1990]. Huygens and Barrow, Newton and Hooke: Pioneers in mathematical analysis and catastrophe theory from evolvents to quasicrystals. Birkhäuser. ISBN 978-3-0348-9129-5. Contact geometry Theme on arxiv.org == External links == Contact manifold at the Manifold Atlas
Wikipedia/Contact_transformation
The approximation error in a given data value represents the significant discrepancy that arises when an exact, true value is compared against some approximation derived for it. This inherent error in approximation can be quantified and expressed in two principal ways: as an absolute error, which denotes the direct numerical magnitude of this discrepancy irrespective of the true value's scale, or as a relative error, which provides a scaled measure of the error by considering the absolute error in proportion to the exact data value, thus offering a context-dependent assessment of the error's significance. An approximation error can manifest due to a multitude of diverse reasons. Prominent among these are limitations related to computing machine precision, where digital systems cannot represent all real numbers with perfect accuracy, leading to unavoidable truncation or rounding. Another common source is inherent measurement error, stemming from the practical limitations of instruments, environmental factors, or observational processes (for instance, if the actual length of a piece of paper is precisely 4.53 cm, but the measuring ruler only permits an estimation to the nearest 0.1 cm, this constraint could lead to a recorded measurement of 4.5 cm, thereby introducing an error). In the mathematical field of numerical analysis, the crucial concept of numerical stability associated with an algorithm serves to indicate the extent to which initial errors or perturbations present in the input data of the algorithm are likely to propagate and potentially amplify into substantial errors in the final output. Algorithms that are characterized as numerically stable are robust in the sense that they do not yield a significantly magnified error in their output even when the input is slightly malformed or contains minor inaccuracies; conversely, numerically unstable algorithms may exhibit dramatic error growth from small input changes, rendering their results unreliable. 
== Formal definition == Given some true or exact value v, we formally state that an approximation vapprox estimates or represents v where the magnitude of the absolute error is bounded by a positive value ε (i.e., ε>0), if the following inequality holds: | v − v approx | ≤ ε {\displaystyle |v-v_{\text{approx}}|\leq \varepsilon } where the vertical bars, | |, unambiguously denote the absolute value of the difference between the true value v and its approximation vapprox. This mathematical operation signifies the magnitude of the error, irrespective of whether the approximation is an overestimate or an underestimate. Similarly, we state that vapprox approximates the value v where the magnitude of the relative error is bounded by a positive value η (i.e., η>0), provided v is not zero (v ≠ 0), if the subsequent inequality is satisfied: | v − v approx | ≤ η ⋅ | v | {\displaystyle |v-v_{\text{approx}}|\leq \eta \cdot |v|} . This definition ensures that η acts as an upper bound on the ratio of the absolute error to the magnitude of the true value. If v ≠ 0, then the actual relative error, often also denoted by η in context (representing the calculated value rather than a bound), is precisely calculated as: η = | v − v approx | | v | = | v − v approx v | = | 1 − v approx v | {\displaystyle \eta ={\frac {|v-v_{\text{approx}}|}{|v|}}=\left|{\frac {v-v_{\text{approx}}}{v}}\right|=\left|1-{\frac {v_{\text{approx}}}{v}}\right|} . Note that the numerator in the expression above is precisely the absolute error ε = |v − vapprox|, so the relative error may equivalently be written as η = ε/|v|. The percent error, often denoted as δ, is a common and intuitive way of expressing the relative error, effectively scaling the relative error value to a percentage for easier interpretation and comparison across different contexts: δ = 100 % × η = 100 % × | v − v approx v | . 
{\displaystyle \delta =100\%\times \eta =100\%\times \left|{\frac {v-v_{\text{approx}}}{v}}\right|.} An error bound rigorously defines an established upper limit on either the relative or the absolute magnitude of an approximation error. Such a bound thereby provides a formal guarantee on the maximum possible deviation of the approximation from the true value, which is critical in applications requiring known levels of precision. == Examples == To illustrate these concepts with a numerical example, consider an instance where the exact, accepted value is 50, and its corresponding approximation is determined to be 49.9. In this particular scenario, the absolute error is precisely 0.1 (calculated as |50 − 49.9|), and the relative error is calculated as the absolute error 0.1 divided by the true value 50, which equals 0.002. This relative error can also be expressed as 0.2%. In a more practical setting, such as when measuring the volume of liquid in a 6 mL beaker, if the instrument reading indicates 5 mL while the true volume is actually 6 mL, the percent error for this particular measurement situation is, when rounded to one decimal place, approximately 16.7% (calculated as |(6 mL − 5 mL) / 6 mL| × 100%). The utility of relative error becomes particularly evident when it is employed to compare the quality of approximations for numbers that possess widely differing magnitudes; for example, approximating the number 1,000 with an absolute error of 3 results in a relative error of 0.003 (or 0.3%). This is, within the context of most scientific or engineering applications, considered a significantly less accurate approximation than approximating the much larger number 1,000,000 with an identical absolute error of 3. In the latter case, the relative error is a mere 0.000003 (or 0.0003%). In the first case, the relative error is 0.003, whereas in the second, more favorable scenario, it is a substantially smaller value of only 0.000003. 
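The definitions and the worked numbers above can be sketched directly (the function names are illustrative, not a standard API):

```python
def absolute_error(v, v_approx):
    # |v - v_approx|: magnitude of the discrepancy
    return abs(v - v_approx)

def relative_error(v, v_approx):
    # absolute error scaled by |v|; undefined for v == 0
    return abs(v - v_approx) / abs(v)

def percent_error(v, v_approx):
    # the quantity delta from the definition above
    return 100.0 * relative_error(v, v_approx)
```

Running this on the examples in the text gives an absolute error of 0.1 and a relative error of 0.002 for the pair (50, 49.9), a percent error of about 16.7% for the beaker reading, and relative errors of 0.003 versus 0.000003 for the two approximations with identical absolute error 3.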
This comparison clearly highlights how relative error provides a more meaningful and contextually appropriate assessment of precision, especially when dealing with values across different orders of magnitude. There are two crucial features or caveats associated with the interpretation and application of relative error that should always be kept in mind. Firstly, relative error becomes mathematically undefined whenever the true value (v) is zero, because this true value appears in the denominator of its calculation (as detailed in the formal definition provided above), and division by zero is an undefined operation. Secondly, the concept of relative error is most truly meaningful and consistently interpretable only when the measurements under consideration are performed on a ratio scale. This type of scale is characterized by possessing a true, non-arbitrary zero point, which signifies the complete absence of the quantity being measured. If this condition of a ratio scale is not met (e.g., when using interval scales like Celsius temperature), the calculated relative error can become highly sensitive to the choice of measurement units, potentially leading to misleading interpretations. For example, when an absolute error in a temperature measurement given in the Celsius scale is 1 °C, and the true value is 2 °C, the relative error is 0.5 (or 50%, calculated as |1°C / 2°C|). However, if this exact same approximation, representing the same physical temperature difference, is made using the Kelvin scale (which is a ratio scale where 0 K represents absolute zero), a 1 K absolute error (equivalent in magnitude to a 1 °C error) with the same true value of 275.15 K (which is equivalent to 2 °C) gives a markedly different relative error of approximately 0.00363, or about 3.63×10−3 (calculated as |1 K / 275.15 K|). This disparity underscores the importance of the underlying measurement scale. 
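The Celsius/Kelvin example above, together with the constant-addition and constant-multiplication behavior discussed in the next section, can be verified numerically (a small sketch; `rel` is an illustrative helper):

```python
import math

rel = lambda v, v_approx: abs(v - v_approx) / abs(v)

# same 1-degree absolute error at the same physical temperature:
rel_celsius = rel(2.0, 1.0)       # true value 2 C      -> 0.5
rel_kelvin = rel(275.15, 274.15)  # true value 275.15 K -> ~0.00363

# relative error is invariant under multiplication by a constant...
assert math.isclose(rel(7 * 50.0, 7 * 49.9), rel(50.0, 49.9))
# ...but changes when a constant is added to both values
rel_shifted = rel(50.0 + 30.0, 49.9 + 30.0)
```

The two temperature figures differ by more than two orders of magnitude even though the physical error is identical, which is the scale sensitivity described above.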
== Comparison == When comparing the behavior and intrinsic characteristics of these two fundamental error types, it is important to recognize their differing sensitivities to common arithmetic operations. Specifically, statements and conclusions made about relative errors are notably sensitive to the addition of a non-zero constant to the underlying true and approximated values, as such an addition alters the base value against which the error is relativized, thereby changing the ratio. However, relative errors remain unaffected by the multiplication of both the true and approximated values by the same non-zero constant, because this constant would appear in both the numerator (of the absolute error) and the denominator (the true value) of the relative error calculation, and would consequently cancel out, leaving the relative error unchanged. Conversely, for absolute errors, the opposite relationship holds true: absolute errors are directly sensitive to the multiplication of the underlying values by a constant (as this scales the magnitude of the difference itself), but they are largely insensitive to the addition of a constant to these values (since adding the same constant to both the true value and its approximation does not change the difference between them: (v+c) − (vapprox+c) = v − vapprox).: 34  == Polynomial-time approximation of real numbers == In the realm of computational complexity theory, we define that a real value v is polynomially computable with absolute error from a given input if, for any specified rational number ε > 0 representing the desired maximum permissible absolute error, it is algorithmically possible to compute a rational number vapprox such that vapprox approximates v with an absolute error no greater than ε (formally, |v − vapprox| ≤ ε). 
Crucially, this computation must be achievable within a time duration that is polynomial in terms of the size of the input data and the encoding size of ε (the latter typically being of the order O(log(1/ε)) bits, reflecting the number of bits needed to represent the precision). Analogously, the value v is considered polynomially computable with relative error if, for any specified rational number η > 0 representing the desired maximum permissible relative error, it is possible to compute a rational number vapprox that approximates v with a relative error no greater than η (formally, |(v − vapprox)/v| ≤ η, assuming v ≠ 0). This computation, similar to the absolute error case, must likewise be achievable in an amount of time that is polynomial in the size of the input data and the encoding size of η (which is typically O(log(1/η)) bits). It can be demonstrated that if a value v is polynomially computable with relative error (utilizing an algorithm that we can designate as REL), then it is consequently also polynomially computable with absolute error. Proof sketch: Let ε > 0 be the target maximum absolute error that we wish to achieve. The procedure commences by invoking the REL algorithm with a chosen relative error bound of, for example, η = 1/2. This initial step aims to find a rational number approximation r1 such that the inequality |v − r1| ≤ |v|/2 holds true. From this relationship, by applying the reverse triangle inequality (|v| − |r1| ≤ |v − r1|), we can deduce that |v| ≤ 2|r1| (this holds assuming r1 ≠ 0; if r1 = 0, then the relative error condition implies v must also be 0, in which case the problem of achieving any absolute error ε > 0 is trivial, as vapprox = 0 works, and we are done). Given that the REL algorithm operates in polynomial time, the encoding length of the computed r1 will necessarily be polynomial with respect to the input size. 
Subsequently, the REL algorithm is invoked a second time, now with a new, typically much smaller, relative error target set to η' = ε / (2|r1|) (this step also assumes r1 is non-zero, which we can ensure or handle as a special case). This second application of REL yields another rational number approximation, r2, that satisfies the condition |v − r2| ≤ η'|v|. Substituting the expression for η' gives |v − r2| ≤ (ε / (2|r1|)) |v|. Now, using the previously derived inequality |v| ≤ 2|r1|, we can bound the term: |v − r2| ≤ (ε / (2|r1|)) × (2|r1|) = ε. Thus, the approximation r2 successfully approximates v with the desired absolute error ε, demonstrating that polynomial computability with relative error implies polynomial computability with absolute error.: 34  The reverse implication, namely that polynomial computability with absolute error implies polynomial computability with relative error, is generally not true without imposing additional conditions or assumptions. However, a significant special case exists: if one can assume that some positive lower bound b on the magnitude of v (i.e., |v| > b > 0) can itself be computed in polynomial time, and if v is also known to be polynomially computable with absolute error (perhaps via an algorithm designated as ABS), then v also becomes polynomially computable with relative error. This is because one can simply invoke the ABS algorithm with a carefully chosen target absolute error, specifically εtarget = ηb, where η is the desired relative error. The resulting approximation vapprox would satisfy |v − vapprox| ≤ ηb. To see the implication for relative error, we divide by |v| (which is non-zero): |(v − vapprox)/v| ≤ (ηb)/|v|. Since we have the condition |v| > b, it follows that b/|v| < 1. Therefore, the relative error is bounded by η × (b/|v|) < η × 1 = η, which is the desired outcome for polynomial computability with relative error. 
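The two-call reduction in the proof sketch can be illustrated with a toy oracle. Here `make_rel_oracle` is hypothetical: it stands in for the REL algorithm and simply returns an approximation whose relative error is at most the requested eta.

```python
import math

def make_rel_oracle(v):
    # hypothetical REL algorithm for the fixed value v: returns an
    # approximation with relative error at most eta (0.9*eta by construction)
    def rel_oracle(eta):
        return v * (1 - 0.9 * eta)
    return rel_oracle

def abs_from_rel(rel_oracle, eps):
    # step 1: relative error 1/2 gives r1 with |v| <= 2|r1|
    r1 = rel_oracle(0.5)
    if r1 == 0:
        return 0.0  # the relative-error condition then forces v = 0
    # step 2: relative error eps / (2|r1|) yields absolute error <= eps
    return rel_oracle(eps / (2 * abs(r1)))

v = math.pi
approx = abs_from_rel(make_rel_oracle(v), 1e-6)
```

Only two oracle calls are needed, matching the polynomial-time bound argued above.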
An algorithm that, for every given rational number η > 0, successfully computes a rational number vapprox that approximates v with a relative error no greater than η, and critically, does so in a time complexity that is polynomial in both the size of the input and in the reciprocal of the relative error, 1/η (rather than being polynomial merely in log(1/η), which typically allows for faster computation when η is extremely small), is known as a Fully Polynomial-Time Approximation Scheme (FPTAS). The dependence on 1/η rather than log(1/η) is a defining characteristic of FPTAS and distinguishes it from weaker approximation schemes. == Instruments == In the context of most indicating measurement instruments, such as analog or digital voltmeters, pressure gauges, and thermometers, the specified accuracy is frequently guaranteed by their manufacturers as a certain percentage of the instrument's full-scale reading capability, rather than as a percentage of the actual reading. The defined boundaries or limits of these permissible deviations from the true or specified values under operational conditions are commonly referred to as limiting errors or, alternatively, guarantee errors. This method of specifying accuracy implies that the maximum possible absolute error can be larger when measuring values towards the higher end of the instrument's scale, while the relative error with respect to the full-scale value itself remains constant across the range. Consequently, the relative error with respect to the actual measured value can become quite large for readings at the lower end of the instrument's scale. 
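The full-scale convention for instruments can be made concrete with a hypothetical example (a 0 to 100 V voltmeter guaranteed to within 1% of full scale; the names and numbers here are illustrative):

```python
def limiting_error(full_scale, accuracy_percent):
    # guarantee (limiting) error: a fixed fraction of full scale,
    # independent of the actual reading
    return full_scale * accuracy_percent / 100.0

def worst_relative_error(reading, full_scale, accuracy_percent):
    # relative error with respect to the actual reading
    return limiting_error(full_scale, accuracy_percent) / reading

err = limiting_error(100.0, 1.0)                  # 1 V anywhere on the scale
low = worst_relative_error(10.0, 100.0, 1.0)      # 10% near the bottom
high = worst_relative_error(90.0, 100.0, 1.0)     # about 1.1% near the top
```

The absolute guarantee is constant across the scale, so the relative error with respect to the reading grows sharply at the low end, as the text notes.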
== Generalizations == The fundamental definitions of absolute and relative error, as presented primarily for scalar (one-dimensional) values, can be naturally and rigorously extended to more complex scenarios where the quantity of interest v {\displaystyle v} and its corresponding approximation v approx {\displaystyle v_{\text{approx}}} are n-dimensional vectors, matrices, or, more generally, elements of a normed vector space. This important generalization is typically achieved by systematically replacing the absolute value function (which effectively measures magnitude or "size" for scalar numbers) with an appropriate vector n-norm or matrix norm. Common examples of such norms include the L1 norm (sum of absolute component values), the L2 norm (Euclidean norm, or square root of the sum of squared components), and the L∞ norm (maximum absolute component value). These norms provide a way to quantify the "distance" or "difference" between the true vector (or matrix) and its approximation in a multi-dimensional space, thereby allowing for analogous definitions of absolute and relative error in these higher-dimensional contexts. == See also == Accepted and experimental value Condition number Errors and residuals in statistics Experimental uncertainty analysis Machine epsilon Measurement error Measurement uncertainty Propagation of uncertainty Quantization error Relative difference Round-off error Uncertainty == References == == External links == Weisstein, Eric W. "Percentage error". MathWorld.
Wikipedia/Approximation_error
In mathematics, Spouge's approximation is a formula for computing an approximation of the gamma function. It was named after John L. Spouge, who defined the formula in a 1994 paper. The formula is a modification of Stirling's approximation, and has the form Γ ( z + 1 ) = ( z + a ) z + 1 2 e − z − a ( c 0 + ∑ k = 1 a − 1 c k z + k + ε a ( z ) ) {\displaystyle \Gamma (z+1)=(z+a)^{z+{\frac {1}{2}}}e^{-z-a}\left(c_{0}+\sum _{k=1}^{a-1}{\frac {c_{k}}{z+k}}+\varepsilon _{a}(z)\right)} where a is an arbitrary positive integer and the coefficients are given by c 0 = 2 π c k = ( − 1 ) k − 1 ( k − 1 ) ! ( − k + a ) k − 1 2 e − k + a k ∈ { 1 , 2 , … , a − 1 } . {\displaystyle {\begin{aligned}c_{0}&={\sqrt {2\pi }}\\c_{k}&={\frac {(-1)^{k-1}}{(k-1)!}}(-k+a)^{k-{\frac {1}{2}}}e^{-k+a}\qquad k\in \{1,2,\dots ,a-1\}.\end{aligned}}} Spouge has proved that, if Re(z) > 0 and a > 2, the relative error in discarding εa(z) is bounded by a − 1 2 ( 2 π ) − a − 1 2 . {\displaystyle a^{-{\frac {1}{2}}}(2\pi )^{-a-{\frac {1}{2}}}.} The formula is similar to the Lanczos approximation, but has some distinct features. Whereas the Lanczos formula exhibits faster convergence, Spouge's coefficients are much easier to calculate and the error can be set arbitrarily low. The formula is therefore feasible for arbitrary-precision evaluation of the gamma function. However, special care must be taken to use sufficient precision when computing the sum due to the large size of the coefficients ck, as well as their alternating sign. For example, for a = 49, one must compute the sum using about 65 decimal digits of precision in order to obtain the promised 40 decimal digits of accuracy. == See also == Stirling's approximation Lanczos approximation == References ==
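The formula above can be transcribed directly (here with a = 12, an arbitrary choice; larger a tightens the stated error bound, which for a = 12 is roughly 3×10−11):

```python
import math

def spouge_gamma(x, a=12):
    # Gamma(x) via Spouge's formula with z = x - 1; requires Re(z) > 0, a > 2
    z = x - 1.0
    s = math.sqrt(2 * math.pi)  # c_0
    fact = 1.0                  # running value of (k-1)!
    for k in range(1, a):
        c_k = ((-1) ** (k - 1) / fact) * (a - k) ** (k - 0.5) * math.exp(a - k)
        s += c_k / (z + k)
        fact *= k
    return (z + a) ** (z + 0.5) * math.exp(-z - a) * s
```

As the text warns, the coefficients grow large and alternate in sign, so for high-precision work (large a) the sum must be evaluated with extra working precision; in double precision, modest values of a such as the one used here are safe.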
Wikipedia/Spouge's_approximation
In real analysis, a branch of mathematics, a slowly varying function is a function of a real variable whose behaviour at infinity is in some sense similar to the behaviour of a function converging at infinity. Similarly, a regularly varying function is a function of a real variable whose behaviour at infinity is similar to the behaviour of a power law function (like a polynomial) near infinity. These classes of functions were both introduced by Jovan Karamata, and have found several important applications, for example in probability theory. == Basic definitions == Definition 1. A measurable function L : (0, +∞) → (0, +∞) is called slowly varying (at infinity) if for all a > 0, lim x → ∞ L ( a x ) L ( x ) = 1. {\displaystyle \lim _{x\to \infty }{\frac {L(ax)}{L(x)}}=1.} Definition 2. Let L : (0, +∞) → (0, +∞). Then L is a regularly varying function if and only if ∀ a > 0 , g L ( a ) = lim x → ∞ L ( a x ) L ( x ) ∈ R + {\displaystyle \forall a>0,g_{L}(a)=\lim _{x\to \infty }{\frac {L(ax)}{L(x)}}\in \mathbb {R} ^{+}} . In particular, the limit must be finite. These definitions are due to Jovan Karamata. == Basic properties == Regularly varying functions have some important properties: a partial list of them is reported below. More extensive analyses of the properties characterizing regular variation are presented in the monograph by Bingham, Goldie & Teugels (1987). === Uniformity of the limiting behaviour === Theorem 1. The limit in definitions 1 and 2 is uniform if a is restricted to a compact interval. === Karamata's characterization theorem === Theorem 2. Every regularly varying function f : (0, +∞) → (0, +∞) is of the form f ( x ) = x β L ( x ) {\displaystyle f(x)=x^{\beta }L(x)} where β is a real number, L is a slowly varying function. Note. This implies that the function g(a) in definition 2 has necessarily to be of the following form g ( a ) = a ρ {\displaystyle g(a)=a^{\rho }} where the real number ρ is called the index of regular variation. 
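Definitions 1 and 2 can be probed numerically: for large x, the ratio L(ax)/L(x) should approach 1 for a slowly varying L (such as log) and aρ for a regularly varying one. A sketch (finite x makes this only a heuristic check):

```python
import math

def karamata_ratio(L, a, x):
    # the ratio whose limit as x -> infinity appears in definitions 1 and 2
    return L(a * x) / L(x)

x = 1e100
# log is slowly varying: ratio tends to 1
r_log = karamata_ratio(math.log, 2.0, x)
# L(x) = sqrt(x) is regularly varying with index 1/2: ratio tends to 2**0.5
r_sqrt = karamata_ratio(math.sqrt, 2.0, x)
# L(x) = x is regularly varying with index 1: ratio tends to 2
r_id = karamata_ratio(lambda t: t, 2.0, x)
```

This matches Karamata's characterization theorem: the power part xβ contributes the factor aβ to the limit, while the slowly varying part contributes 1.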
=== Karamata representation theorem === Theorem 3. A function L is slowly varying if and only if there exists B > 0 such that for all x ≥ B the function can be written in the form L ( x ) = exp ⁡ ( η ( x ) + ∫ B x ε ( t ) t d t ) {\displaystyle L(x)=\exp \left(\eta (x)+\int _{B}^{x}{\frac {\varepsilon (t)}{t}}\,dt\right)} where η(x) is a bounded measurable function of a real variable converging to a finite number as x goes to infinity ε(x) is a bounded measurable function of a real variable converging to zero as x goes to infinity. == Examples == If L is a measurable function and has a limit lim x → ∞ L ( x ) = b ∈ ( 0 , ∞ ) , {\displaystyle \lim _{x\to \infty }L(x)=b\in (0,\infty ),} then L is a slowly varying function. For any β ∈ R, the function L(x) = log β x is slowly varying. The function L(x) = x is not slowly varying, nor is L(x) = x β for any real β ≠ 0. However, these functions are regularly varying. == See also == Analytic number theory Hardy–Littlewood tauberian theorem and its treatment by Karamata == Notes == == References == Bingham, N.H. (2001) [1994], "Karamata theory", Encyclopedia of Mathematics, EMS Press Bingham, N. H.; Goldie, C. M.; Teugels, J. L. (1987), Regular Variation, Encyclopedia of Mathematics and its Applications, vol. 27, Cambridge: Cambridge University Press, ISBN 0-521-30787-2, MR 0898871, Zbl 0617.26001 Galambos, J.; Seneta, E. (1973), "Regularly Varying Sequences", Proceedings of the American Mathematical Society, 41 (1): 110–116, doi:10.2307/2038824, ISSN 0002-9939, JSTOR 2038824.
Wikipedia/Slowly_varying_function
In mathematics, the Lanczos approximation is a method for computing the gamma function numerically, published by Cornelius Lanczos in 1964. It is a practical alternative to the more popular Stirling's approximation for calculating the gamma function with fixed precision. == Introduction == The Lanczos approximation consists of the formula Γ ( z + 1 ) = 2 π ( z + g + 1 2 ) z + 1 / 2 e − ( z + g + 1 / 2 ) A g ( z ) {\displaystyle \Gamma (z+1)={\sqrt {2\pi }}{\left(z+g+{\tfrac {1}{2}}\right)}^{z+1/2}e^{-(z+g+1/2)}A_{g}(z)} for the gamma function, with A g ( z ) = 1 2 p 0 ( g ) + p 1 ( g ) z z + 1 + p 2 ( g ) z ( z − 1 ) ( z + 1 ) ( z + 2 ) + ⋯ . {\displaystyle A_{g}(z)={\frac {1}{2}}p_{0}(g)+p_{1}(g){\frac {z}{z+1}}+p_{2}(g){\frac {z(z-1)}{(z+1)(z+2)}}+\cdots .} Here g is a real constant that may be chosen arbitrarily subject to the restriction that Re(z+g+⁠1/2⁠) > 0. The coefficients p, which depend on g, are slightly more difficult to calculate (see below). Although the formula as stated here is only valid for arguments in the right complex half-plane, it can be extended to the entire complex plane by the reflection formula, Γ ( 1 − z ) Γ ( z ) = π sin ⁡ π z . {\displaystyle \Gamma (1-z)\;\Gamma (z)={\pi \over \sin \pi z}.} The series A is convergent, and may be truncated to obtain an approximation with the desired precision. By choosing an appropriate g (typically a small integer), only some 5–10 terms of the series are needed to compute the gamma function with typical single or double floating-point precision. If a fixed g is chosen, the coefficients can be calculated in advance and, thanks to partial fraction decomposition, the sum is recast into the following form: A g ( z ) = c 0 + ∑ k = 1 N c k z + k {\displaystyle A_{g}(z)=c_{0}+\sum _{k=1}^{N}{\frac {c_{k}}{z+k}}} Thus computing the gamma function becomes a matter of evaluating only a small number of elementary functions and multiplying by stored constants. 
The Lanczos approximation was popularized by Numerical Recipes, according to which computing the gamma function becomes "not much more difficult than other built-in functions that we take for granted, such as sin x or ex." The method is also implemented in the GNU Scientific Library, Boost, CPython and musl. == Coefficients == The coefficients are given by p k ( g ) = 2 π ∑ ℓ = 0 k C 2 k + 1 , 2 ℓ + 1 ( ℓ − 1 2 ) ! ( ℓ + g + 1 2 ) − ( ℓ + 1 / 2 ) e ℓ + g + 1 / 2 {\displaystyle p_{k}(g)={\frac {\sqrt {2\,}}{\pi }}\sum _{\ell =0}^{k}C_{2k+1,\,2\ell +1}\left(\ell -{\tfrac {1}{2}}\right)!{\left(\ell +g+{\tfrac {1}{2}}\right)}^{-(\ell +1/2)}e^{\ell +g+1/2}} where C n , m {\displaystyle C_{n,m}} represents the (n, m)th element of the matrix of coefficients for the Chebyshev polynomials, which can be calculated recursively from these identities: C 1 , 1 = 1 C 2 , 2 = 1 C n + 1 , 1 = − C n − 1 , 1 for n = 2 , 3 , 4 … C n + 1 , n + 1 = 2 C n , n for n = 2 , 3 , 4 … C n + 1 , m + 1 = 2 C n , m − C n − 1 , m + 1 for n > m = 1 , 2 , 3 … {\displaystyle {\begin{aligned}C_{1,\,1}&=1\\[5px]C_{2,\,2}&=1\\[5px]C_{n+1,\,1}&=-\,C_{n-1,\,1}&{\text{ for }}n&=2,3,4\,\dots \\[5px]C_{n+1,\,n+1}&=2\,C_{n,\,n}&{\text{ for }}n&=2,3,4\,\dots \\[5px]C_{n+1,\,m+1}&=2\,C_{n,\,m}-C_{n-1,\,m+1}&{\text{ for }}n&>m=1,2,3\,\dots \end{aligned}}} Godfrey (2001) describes how to obtain the coefficients and also the value of the truncated series A as a matrix product. == Derivation == Lanczos derived the formula from Leonhard Euler's integral Γ ( z + 1 ) = ∫ 0 ∞ t z e − t d t , {\displaystyle \Gamma (z+1)=\int _{0}^{\infty }t^{z}\,e^{-t}\,dt,} performing a sequence of basic manipulations to obtain Γ ( z + 1 ) = ( z + g + 1 ) z + 1 e − ( z + g + 1 ) ∫ 0 e ( v ( 1 − log ⁡ v ) ) z v g d v , {\displaystyle \Gamma (z+1)=(z+g+1)^{z+1}e^{-(z+g+1)}\int _{0}^{e}{\Big (}v(1-\log v){\Big )}^{z}v^{g}\,dv,} and deriving a series for the integral. 
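A minimal version of the scheme can be sketched as follows, using the widely circulated g = 7 coefficient set with N = 8 (this particular coefficient table is one common published choice, assumed here) together with the reflection formula for Re(z) < 1/2:

```python
import cmath
import math

G = 7
COEF = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
        771.32342877765313, -176.61502916214059, 12.507343278686905,
        -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def lanczos_gamma(z):
    z = complex(z)
    if z.real < 0.5:
        # reflection formula: Gamma(1 - z) Gamma(z) = pi / sin(pi z)
        return math.pi / (cmath.sin(math.pi * z) * lanczos_gamma(1 - z))
    z -= 1
    x = COEF[0]
    for k in range(1, len(COEF)):
        x += COEF[k] / (z + k)   # partial-fraction form of A_g(z)
    t = z + G + 0.5
    return cmath.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x
```

With this coefficient set the result is typically accurate to about 13 decimal places in double precision for complex arguments.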
== Simple implementation == The following implementation in the Python programming language works for complex arguments and typically gives 13 correct decimal places. Note that omitting the smallest coefficients (in pursuit of speed, for example) gives totally inaccurate results; the coefficients must be recomputed from scratch for an expansion with fewer terms. == See also == Stirling's approximation Spouge's approximation == References == Godfrey, Paul (2001). "Lanczos Implementation of the Gamma Function". Lanczos, Cornelius (1964). "A Precision Approximation of the Gamma Function". Journal of the Society for Industrial and Applied Mathematics, Series B: Numerical Analysis. 1 (1): 86–96. Bibcode:1964SJNA....1...86L. doi:10.1137/0701008. ISSN 0887-459X. JSTOR 2949767. Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007), "Section 6.1. Gamma Function", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8 Pugh, Glendon (2004). An analysis of the Lanczos Gamma approximation (PDF) (PhD thesis). Toth, Viktor (2005). "Programmable Calculators: The Lanczos Approximation". Weisstein, Eric W. "Lanczos Approximation". MathWorld.
Wikipedia/Lanczos_approximation
In mathematics, the Riemann–Siegel theta function is defined in terms of the gamma function as θ ( t ) = arg ⁡ ( Γ ( 1 4 + i t 2 ) ) − log ⁡ π 2 t {\displaystyle \theta (t)=\arg \left(\Gamma \left({\frac {1}{4}}+{\frac {it}{2}}\right)\right)-{\frac {\log \pi }{2}}t} for real values of t. Here the argument is chosen in such a way that a continuous function is obtained and θ ( 0 ) = 0 {\displaystyle \theta (0)=0} holds, i.e., in the same way that the principal branch of the log-gamma function is defined. It has an asymptotic expansion θ ( t ) ∼ t 2 log ⁡ t 2 π − t 2 − π 8 + 1 48 t + 7 5760 t 3 + ⋯ {\displaystyle \theta (t)\sim {\frac {t}{2}}\log {\frac {t}{2\pi }}-{\frac {t}{2}}-{\frac {\pi }{8}}+{\frac {1}{48t}}+{\frac {7}{5760t^{3}}}+\cdots } which is not convergent, but whose first few terms give a good approximation for t ≫ 1 {\displaystyle t\gg 1} . Its Taylor-series at 0 which converges for | t | < 1 / 2 {\displaystyle |t|<1/2} is θ ( t ) = − t 2 log ⁡ π + ∑ k = 0 ∞ ( − 1 ) k ψ ( 2 k ) ( 1 4 ) ( 2 k + 1 ) ! ( t 2 ) 2 k + 1 {\displaystyle \theta (t)=-{\frac {t}{2}}\log \pi +\sum _{k=0}^{\infty }{\frac {(-1)^{k}\psi ^{(2k)}\left({\frac {1}{4}}\right)}{(2k+1)!}}\left({\frac {t}{2}}\right)^{2k+1}} where ψ ( 2 k ) {\displaystyle \psi ^{(2k)}} denotes the polygamma function of order 2 k {\displaystyle 2k} . The Riemann–Siegel theta function is of interest in studying the Riemann zeta function, since it can rotate the Riemann zeta function such that it becomes the totally real valued Z function on the critical line s = 1 / 2 + i t {\displaystyle s=1/2+it} . == Curve discussion == The Riemann–Siegel theta function is an odd real analytic function for real values of t {\displaystyle t} with three roots at 0 {\displaystyle 0} and ± 17.8455995405 … {\displaystyle \pm 17.8455995405\ldots } . 
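The first few terms of the asymptotic expansion already reproduce the quoted nonzero root numerically. A small sketch (the truncation is only accurate for moderately large t):

```python
import math

def theta_asymptotic(t):
    # leading terms of the asymptotic expansion of the
    # Riemann-Siegel theta function given above
    return (t / 2 * math.log(t / (2 * math.pi)) - t / 2 - math.pi / 8
            + 1 / (48 * t) + 7 / (5760 * t ** 3))

root = 17.8455995405
val_at_root = theta_asymptotic(root)  # close to 0
```

The same truncation also recovers the local extremum value near t = 6.29 to several digits, and exhibits the eventual monotone growth of theta.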
It is an increasing function for | t | > 6.29 {\displaystyle |t|>6.29} , and has local extrema at ± 6.289835988 … {\displaystyle \pm 6.289835988\ldots } , with value ∓ 3.530972829 … {\displaystyle \mp 3.530972829\ldots } . It has a single inflection point at t = 0 {\displaystyle t=0} with θ ′ ( 0 ) = − ln ⁡ π + γ + π / 2 + 3 ln ⁡ 2 2 = − 2.6860917 … {\displaystyle \theta ^{\prime }(0)=-{\frac {\ln \pi +\gamma +\pi /2+3\ln 2}{2}}=-2.6860917\ldots } , which is the minimum of its derivative. == Theta as a function of a complex variable == We have an infinite series expression for the log-gamma function log ⁡ Γ ( z ) = − γ z − log ⁡ z + ∑ n = 1 ∞ ( z n − log ⁡ ( 1 + z n ) ) , {\displaystyle \log \Gamma \left(z\right)=-\gamma z-\log z+\sum _{n=1}^{\infty }\left({\frac {z}{n}}-\log \left(1+{\frac {z}{n}}\right)\right),} where γ is Euler's constant. Substituting ( 2 i t + 1 ) / 4 {\displaystyle (2it+1)/4} for z and taking the imaginary part termwise gives the following series for θ(t) θ ( t ) = − γ + log ⁡ π 2 t − arctan ⁡ 2 t + ∑ n = 1 ∞ ( t 2 n − arctan ⁡ ( 2 t 4 n + 1 ) ) . {\displaystyle \theta (t)=-{\frac {\gamma +\log \pi }{2}}t-\arctan 2t+\sum _{n=1}^{\infty }\left({\frac {t}{2n}}-\arctan \left({\frac {2t}{4n+1}}\right)\right).} For values with imaginary part between −1 and 1, the arctangent function is holomorphic, and it is easily seen that the series converges uniformly on compact sets in the region with imaginary part between −1/2 and 1/2, leading to a holomorphic function on this domain. It follows that the Z function is also holomorphic in this region, which is the critical strip. 
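As a quick numerical cross-check (a sketch, not part of the original article), the arctangent series above can be compared with the first terms of the asymptotic expansion; the zero of θ near t = 17.8455995405 quoted in the curve discussion provides a convenient test value:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def theta_series(t, terms=200_000):
    # Arctangent series for theta(t) obtained from the log-gamma expansion.
    # The truncation error decays only like t/(8*terms), so many terms are needed.
    s = -0.5 * (EULER_GAMMA + math.log(math.pi)) * t - math.atan(2 * t)
    for n in range(1, terms + 1):
        s += t / (2 * n) - math.atan(2 * t / (4 * n + 1))
    return s

def theta_asymptotic(t):
    # First terms of the divergent asymptotic expansion, accurate for t >> 1.
    return (t / 2 * math.log(t / (2 * math.pi)) - t / 2 - math.pi / 8
            + 1 / (48 * t) + 7 / (5760 * t ** 3))

t0 = 17.8455995405  # first positive zero of theta, from the curve discussion
# Both evaluations should be close to zero at t0.
```

The slowly converging series and the rapidly truncating asymptotic expansion agree to a few decimal places for moderate t, illustrating why the latter is preferred in practice despite being non-convergent.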
We may use the identities arg ⁡ z = log ⁡ z − log ⁡ z ¯ 2 i and Γ ( z ) ¯ = Γ ( z ¯ ) {\displaystyle \arg z={\frac {\log z-\log {\bar {z}}}{2i}}\quad {\text{and}}\quad {\overline {\Gamma (z)}}=\Gamma ({\bar {z}})} to obtain the closed-form expression θ ( t ) = log ⁡ Γ ( 2 i t + 1 4 ) − log ⁡ Γ ( − 2 i t + 1 4 ) 2 i − log ⁡ π 2 t = − i 2 ( ln ⁡ Γ ( 1 4 + i t 2 ) − ln ⁡ Γ ( 1 4 − i t 2 ) ) − ln ⁡ ( π ) t 2 {\displaystyle \theta (t)={\frac {\log \Gamma \left({\frac {2it+1}{4}}\right)-\log \Gamma \left({\frac {-2it+1}{4}}\right)}{2i}}-{\frac {\log \pi }{2}}t=-{\frac {i}{2}}\left(\ln \Gamma \left({\frac {1}{4}}+{\frac {it}{2}}\right)-\ln \Gamma \left({\frac {1}{4}}-{\frac {it}{2}}\right)\right)-{\frac {\ln(\pi )t}{2}}} which extends our original definition to a holomorphic function of t. Since the principal branch of log Γ has a single branch cut along the negative real axis, θ(t) in this definition inherits branch cuts along the imaginary axis above i/2 and below −i/2. == Gram points == The Riemann zeta function on the critical line can be written ζ ( 1 2 + i t ) = e − i θ ( t ) Z ( t ) , {\displaystyle \zeta \left({\frac {1}{2}}+it\right)=e^{-i\theta (t)}Z(t),} Z ( t ) = e i θ ( t ) ζ ( 1 2 + i t ) . {\displaystyle Z(t)=e^{i\theta (t)}\zeta \left({\frac {1}{2}}+it\right).} If t {\displaystyle t} is a real number, then the Z function Z ( t ) {\displaystyle Z(t)} returns real values. Hence the zeta function on the critical line will be real either at a zero, corresponding to Z ( t ) = 0 {\displaystyle Z(t)=0} , or when sin ⁡ ( θ ( t ) ) = 0 {\displaystyle \sin \left(\,\theta (t)\,\right)=0} . Positive real values of t {\displaystyle t} where the latter case occurs are called Gram points, after J. P. Gram, and can of course also be described as the points where θ ( t ) π {\displaystyle {\frac {\theta (t)}{\pi }}} is an integer. A Gram point is a solution g n {\displaystyle g_{n}} of θ ( g n ) = n π . 
{\displaystyle \theta (g_{n})=n\pi .} These solutions are approximated by the sequence: g n ′ = 2 π ( n + 1 − 7 8 ) W ( 1 e ( n + 1 − 7 8 ) ) , {\displaystyle g'_{n}={\frac {2\pi \left(n+1-{\frac {7}{8}}\right)}{W\left({\frac {1}{e}}\left(n+1-{\frac {7}{8}}\right)\right)}},} where W {\displaystyle W} is the Lambert W function. (A table of the smallest non-negative Gram points appeared here in the original.) The choice of the index n is a bit crude. It is historically chosen in such a way that the index is 0 at the first value which is larger than the smallest positive zero (at imaginary part 14.13472515 ...) of the Riemann zeta function on the critical line. Note that this θ {\displaystyle \theta } -function oscillates for small real arguments and is therefore not uniquely invertible on the interval [−24, 24]; thus the odd theta-function has its symmetric Gram point with value 0 at index −3. Gram points are useful when computing the zeros of Z ( t ) {\displaystyle Z\left(t\right)} . At a Gram point g n , {\displaystyle g_{n},} ζ ( 1 2 + i g n ) = cos ⁡ ( θ ( g n ) ) Z ( g n ) = ( − 1 ) n Z ( g n ) , {\displaystyle \zeta \left({\frac {1}{2}}+ig_{n}\right)=\cos(\theta (g_{n}))Z(g_{n})=(-1)^{n}Z(g_{n}),} and if this is positive at two successive Gram points, Z ( t ) {\displaystyle Z\left(t\right)} must have a zero in the interval. According to Gram's law, the real part is usually positive, while the imaginary part alternates between positive and negative values at somewhat regular intervals from one Gram point to the next: ( − 1 ) n Z ( g n ) > 0 {\displaystyle (-1)^{n}Z(g_{n})>0} The number of roots, N ( T ) {\displaystyle N(T)} , in the strip from 0 to T, can be found by N ( T ) = θ ( T ) π + 1 + S ( T ) , {\displaystyle N(T)={\frac {\theta (T)}{\pi }}+1+S(T),} where S ( T ) {\displaystyle S(T)} is an error term which grows asymptotically like log ⁡ T {\displaystyle \log T} . 
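The Lambert-W-based approximation above is easy to evaluate numerically. The sketch below implements the principal branch of W on the non-negative real axis by Newton's method (an assumption for self-containedness — a library routine such as scipy.special.lambertw would normally be used):

```python
import math

def lambert_w(x, tol=1e-15):
    # Newton iteration for the principal branch of W with x >= 0:
    # solve w * exp(w) = x.
    w = math.log(x + 1.0)  # reasonable starting guess for x >= 0
    for _ in range(100):
        e = math.exp(w)
        step = (w * e - x) / (e * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def gram_point_approx(n):
    # g'_n = 2*pi*(n + 1 - 7/8) / W((1/e) * (n + 1 - 7/8))
    a = n + 1 - 7.0 / 8.0
    return 2 * math.pi * a / lambert_w(a / math.e)
```

For n = 0 this yields a value near 17.85, close to the first Gram point g_0 = 17.8455995405; substituting g'_n back into the asymptotic expansion of θ reproduces nπ to a few decimal places, confirming the approximation.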
If g n {\displaystyle g_{n}} obeyed Gram's law, then finding the number of roots in the strip would simply reduce to N ( g n ) = n + 1. {\displaystyle N(g_{n})=n+1.} Today we know that, in the long run, Gram's law fails: about 1/4 of all Gram intervals do not contain exactly one zero of the Riemann zeta function. Gram himself suspected that it might fail for larger indices (the first failure occurs at index 126, before the 127th zero) and therefore claimed it only for indices that are not too high. Later Hutchinson coined the phrase Gram's law for the (false) statement that all zeros on the critical line would be separated by Gram points. == See also == Z function == References == Edwards, H. M. (1974), Riemann's Zeta Function, New York: Dover Publications, ISBN 978-0-486-41740-0, MR 0466039 Gabcke, W. (1979), Neue Herleitung und explizite Restabschätzung der Riemann-Siegel-Formel. Thesis, University of Göttingen. Revised version (eDiss Göttingen 2015) Gram, J. P. (1903), "Note sur les zéros de la fonction ζ(s) de Riemann", Acta Mathematica, 27 (1): 289–304, doi:10.1007/BF02421310 == External links == Weisstein, Eric W. "Riemann-Siegel Functions". MathWorld. Wolfram Research – Riemann-Siegel Theta function (includes function plotting and evaluation)
Wikipedia/Riemann–Siegel_theta_function
In physics, time is defined by its measurement: time is what a clock reads. In classical, non-relativistic physics, it is a scalar quantity (often denoted by the symbol t {\displaystyle t} ) and, like length, mass, and charge, is usually described as a fundamental quantity. Time can be combined mathematically with other physical quantities to derive other concepts such as motion, kinetic energy and time-dependent fields. Timekeeping is a complex of technological and scientific issues, and part of the foundation of recordkeeping. == Markers of time == Before there were clocks, time was measured by those physical processes which were understandable to each epoch of civilization:
- the first appearance (see: heliacal rising) of Sirius to mark the flooding of the Nile each year
- the periodic succession of night and day, seemingly eternally
- the position on the horizon of the first appearance of the sun at dawn
- the position of the sun in the sky
- the marking of the moment of noontime during the day
- the length of the shadow cast by a gnomon
Eventually, it became possible to characterize the passage of time with instrumentation, using operational definitions. Simultaneously, our conception of time has evolved, as shown below. == Unit of measurement of time == In the International System of Units (SI), the unit of time is the second (symbol: s). It has been defined since 1967 as "the duration of 9192631770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom", and is an SI base unit. This definition is based on the operation of a caesium atomic clock. These clocks became practical for use as primary reference standards after about 1955, and have been in use ever since. === State of the art in timekeeping === The UTC timestamp in use worldwide is an atomic time standard. 
The relative accuracy of such a time standard is currently on the order of 10−15 (corresponding to 1 second in approximately 30 million years). The smallest time step considered theoretically observable is called the Planck time, which is approximately 5.391×10−44 seconds – many orders of magnitude below the resolution of current time standards. The caesium atomic clock became practical after 1950, when advances in electronics enabled reliable measurement of the microwave frequencies it generates. As further advances occurred, atomic clock research has progressed to ever-higher frequencies, which can provide higher accuracy and higher precision. Clocks based on these techniques have been developed, but are not yet in use as primary reference standards. == Conceptions of time == Galileo, Newton, and most people up until the 20th century thought that time was the same for everyone everywhere. This is the basis for timelines, where time is a parameter. The modern understanding of time is based on Einstein's theory of relativity, in which rates of time run differently depending on relative motion, and space and time are merged into spacetime, where we live on a world line rather than a timeline. In this view time is a coordinate. According to the prevailing cosmological model of the Big Bang theory, time itself began as part of the entire Universe about 13.8 billion years ago. === Regularities in nature === In order to measure time, one can record the number of occurrences (events) of some periodic phenomenon. The regular recurrences of the seasons, the motions of the sun, moon and stars were noted and tabulated for millennia, before the laws of physics were formulated. The sun was the arbiter of the flow of time, but time was known only to the hour for millennia, hence, the use of the gnomon was known across most of the world, especially Eurasia, and at least as far southward as the jungles of Southeast Asia. 
In particular, the astronomical observatories maintained for religious purposes became accurate enough to ascertain the regular motions of the stars, and even some of the planets. At first, timekeeping was done by hand by priests, and then for commerce, with watchmen to note time as part of their duties. The tabulation of the equinoxes, the sandglass, and the water clock became more and more accurate, and finally reliable. For ships at sea, marine sandglasses were used. These devices allowed sailors to call the hours, and to calculate sailing velocity. ==== Mechanical clocks ==== Richard of Wallingford (1292–1336), abbot of St. Albans Abbey, famously built a mechanical clock as an astronomical orrery about 1330. By the time of Richard of Wallingford, the use of ratchets and gears allowed the towns of Europe to create mechanisms to display the time on their respective town clocks; by the time of the scientific revolution, the clocks became miniaturized enough for families to share a personal clock, or perhaps a pocket watch. At first, only kings could afford them. Pendulum clocks were widely used in the 18th and 19th century. They have largely been replaced in general use by quartz and digital clocks. Atomic clocks can theoretically keep accurate time for millions of years. They are appropriate for standards and scientific use. === Galileo: the flow of time === In 1583, Galileo Galilei (1564–1642) discovered that a pendulum's harmonic motion has a constant period, which he learned by timing, with his pulse, the swing of a lamp during Mass at the cathedral of Pisa. 
In his Two New Sciences (1638), Galileo used a water clock to measure the time taken for a bronze ball to roll a known distance down an inclined plane; this clock was: ...a large vessel of water placed in an elevated position; to the bottom of this vessel was soldered a pipe of small diameter giving a thin jet of water, which we collected in a small glass during the time of each descent, whether for the whole length of the channel or for a part of its length; the water thus collected was weighed, after each descent, on a very accurate balance; the differences and ratios of these weights gave us the differences and ratios of the times, and this with such accuracy that although the operation was repeated many, many times, there was no appreciable discrepancy in the results. Galileo's experimental setup to measure the literal flow of time, in order to describe the motion of a ball, preceded Isaac Newton's statement in his Principia, "I do not define time, space, place and motion, as being well known to all." The Galilean transformations assume that time is the same for all reference frames. === Newtonian physics: linear time === In or around 1665, when Isaac Newton (1643–1727) derived the motion of objects falling under gravity, mathematical physics gained its first clear treatment of time: linear time, conceived as a universal clock. Absolute, true, and mathematical time, of itself, and from its own nature flows equably without regard to anything external, and by another name is called duration: relative, apparent, and common time, is some sensible and external (whether accurate or unequable) measure of duration by the means of motion, which is commonly used instead of true time; such as an hour, a day, a month, a year. 
The water clock mechanism described by Galileo was engineered to provide laminar flow of the water during the experiments, thus providing a constant flow of water for the duration of the experiments, and embodying what Newton called duration. In this section, the relationships listed below treat time as a parameter which serves as an index to the behavior of the physical system under consideration. Because Newton's fluents treat a linear flow of time (what he called mathematical time), time could be considered to be a linearly varying parameter, an abstraction of the march of the hours on the face of a clock. Calendars and ship's logs could then be mapped to the march of the hours, days, months, years and centuries. === Thermodynamics and the paradox of irreversibility === By 1798, Benjamin Thompson (1753–1814) had discovered that work could be transformed to heat without limit – a precursor of the conservation of energy, or 1st law of thermodynamics. In 1824, Sadi Carnot (1796–1832) scientifically analyzed the steam engine with his Carnot cycle, an abstract engine. Rudolf Clausius (1822–1888) noted a measure of disorder, or entropy, which governs the continually decreasing amount of free energy available to a Carnot engine: the 2nd law of thermodynamics. Thus the continual march of a thermodynamic system, from lesser to greater entropy, at any given temperature, defines an arrow of time. In particular, Stephen Hawking identifies three arrows of time:
- Psychological arrow of time – our perception of an inexorable flow.
- Thermodynamic arrow of time – distinguished by the growth of entropy.
- Cosmological arrow of time – distinguished by the expansion of the universe.
With time, entropy increases in an isolated thermodynamic system. In contrast, Erwin Schrödinger (1887–1961) pointed out that life depends on a "negative entropy flow". 
Ilya Prigogine (1917–2003) stated that other thermodynamic systems which, like life, are also far from equilibrium, can also exhibit stable spatio-temporal structures reminiscent of life. Soon afterward, the Belousov–Zhabotinsky reactions were reported, which demonstrate oscillating colors in a chemical solution. These nonequilibrium thermodynamic branches reach a bifurcation point, which is unstable, and another thermodynamic branch becomes stable in its stead. === Electromagnetism and the speed of light === In 1864, James Clerk Maxwell (1831–1879) presented a combined theory of electricity and magnetism. He combined all the laws then known relating to those two phenomena into four equations. These equations are known as Maxwell's equations for electromagnetism; they allow for solutions in the form of electromagnetic waves, which propagate at a fixed speed, c, regardless of the velocity of the electric charge that generated them. The fact that light is predicted to always travel at speed c would be incompatible with Galilean relativity if Maxwell's equations were assumed to hold in any inertial frame (reference frame with constant velocity), because the Galilean transformations predict the speed to decrease (or increase) in the reference frame of an observer traveling parallel (or antiparallel) to the light. It was expected that there was one absolute reference frame, that of the luminiferous aether, in which Maxwell's equations held unmodified in the known form. The Michelson–Morley experiment failed to detect any difference in the relative speed of light due to the motion of the Earth relative to the luminiferous aether, suggesting that Maxwell's equations did, in fact, hold in all frames. In 1875, Hendrik Lorentz (1853–1928) discovered Lorentz transformations, which left Maxwell's equations unchanged, allowing Michelson and Morley's negative result to be explained. Henri Poincaré (1854–1912) noted the importance of Lorentz's transformation and popularized it. 
In particular, the railroad car description can be found in Science and Hypothesis, which was published before Einstein's articles of 1905. The Lorentz transformation predicted space contraction and time dilation; until 1905, the former was interpreted as a physical contraction of objects moving with respect to the aether, due to the modification of the intermolecular forces (of electric nature), while the latter was thought to be just a mathematical stipulation. === Relativistic physics: spacetime === Albert Einstein's 1905 special relativity challenged the notion of absolute time, and could only formulate a definition of synchronization for clocks that mark a linear flow of time: If at the point A of space there is a clock, an observer at A can determine the time values of events in the immediate proximity of A by finding the positions of the hands which are simultaneous with these events. If there is at the point B of space another clock in all respects resembling the one at A, it is possible for an observer at B to determine the time values of events in the immediate neighbourhood of B. But it is not possible without further assumption to compare, in respect of time, an event at A with an event at B. We have so far defined only an "A time" and a "B time". We have not defined a common "time" for A and B, for the latter cannot be defined at all unless we establish by definition that the "time" required by light to travel from A to B equals the "time" it requires to travel from B to A. Let a ray of light start at the "A time" tA from A towards B, let it at the "B time" tB be reflected at B in the direction of A, and arrive again at A at the “A time” t′A. In accordance with definition the two clocks synchronize if t B − t A = t A ′ − t B . 
{\displaystyle t_{\text{B}}-t_{\text{A}}=t'_{\text{A}}-t_{\text{B}}{\text{.}}\,\!} We assume that this definition of synchronism is free from contradictions, and possible for any number of points; and that the following relations are universally valid:— If the clock at B synchronizes with the clock at A, the clock at A synchronizes with the clock at B. If the clock at A synchronizes with the clock at B and also with the clock at C, the clocks at B and C also synchronize with each other. Einstein showed that if the speed of light is not changing between reference frames, space and time must be so that the moving observer will measure the same speed of light as the stationary one because velocity is defined by space and time: v = d r d t , {\displaystyle \mathbf {v} ={d\mathbf {r} \over dt}{\text{,}}} where r is position and t is time. Indeed, the Lorentz transformation (for two reference frames in relative motion, whose x axis is directed in the direction of the relative velocity) { t ′ = γ ( t − v x / c 2 ) where γ = 1 / 1 − v 2 / c 2 x ′ = γ ( x − v t ) y ′ = y z ′ = z {\displaystyle {\begin{cases}t'&=\gamma (t-vx/c^{2}){\text{ where }}\gamma =1/{\sqrt {1-v^{2}/c^{2}}}\\x'&=\gamma (x-vt)\\y'&=y\\z'&=z\end{cases}}} can be said to "mix" space and time in a way similar to the way a Euclidean rotation around the z axis mixes x and y coordinates. Consequences of this include relativity of simultaneity. More specifically, the Lorentz transformation is a hyperbolic rotation ( c t ′ x ′ ) = ( cosh ⁡ ϕ − sinh ⁡ ϕ − sinh ⁡ ϕ cosh ⁡ ϕ ) ( c t x ) where ϕ = artanh v c , {\displaystyle {\begin{pmatrix}ct'\\x'\end{pmatrix}}={\begin{pmatrix}\cosh \phi &-\sinh \phi \\-\sinh \phi &\cosh \phi \end{pmatrix}}{\begin{pmatrix}ct\\x\end{pmatrix}}{\text{ where }}\phi =\operatorname {artanh} \,{\frac {v}{c}}{\text{,}}} which is a change of coordinates in the four-dimensional Minkowski space, a dimension of which is ct. 
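A short numerical check (a sketch in SI units, not part of the original article) that the hyperbolic-rotation form with φ = artanh(v/c) reproduces the γ-form of the boost, and that the spacetime interval (ct)² − x² is preserved:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def boost_standard(t, x, v):
    # Lorentz transformation in the usual gamma form.
    gamma = 1.0 / math.sqrt(1.0 - v ** 2 / C ** 2)
    return gamma * (t - v * x / C ** 2), gamma * (x - v * t)

def boost_hyperbolic(t, x, v):
    # The same transformation written as a hyperbolic rotation
    # of the (ct, x) coordinates by phi = artanh(v/c).
    phi = math.atanh(v / C)
    ct = C * t
    ct2 = math.cosh(phi) * ct - math.sinh(phi) * x
    x2 = -math.sinh(phi) * ct + math.cosh(phi) * x
    return ct2 / C, x2

# Example event and boost velocity (arbitrary illustrative values).
t, x, v = 1.0, 1.0e8, 0.6 * C
t1, x1 = boost_standard(t, x, v)
t2, x2 = boost_hyperbolic(t, x, v)
```

The equivalence follows from cosh φ = γ and sinh φ = γv/c, just as cos θ and sin θ relate an ordinary rotation to its angle.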
(In Euclidean space an ordinary rotation ( x ′ y ′ ) = ( cos ⁡ θ − sin ⁡ θ sin ⁡ θ cos ⁡ θ ) ( x y ) {\displaystyle {\begin{pmatrix}x'\\y'\end{pmatrix}}={\begin{pmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \end{pmatrix}}{\begin{pmatrix}x\\y\end{pmatrix}}} is the corresponding change of coordinates.) The speed of light c can be seen as just a conversion factor needed because we measure the dimensions of spacetime in different units; since the metre is currently defined in terms of the second, it has the exact value of 299 792 458 m/s. We would need a similar factor in Euclidean space if, for example, we measured width in nautical miles and depth in feet. In physics, sometimes units of measurement in which c = 1 are used to simplify equations. Time in a "moving" reference frame is shown to run more slowly than in a "stationary" one by the following relation (which can be derived by the Lorentz transformation by putting ∆x′ = 0, ∆τ = ∆t′): Δ t = Δ τ 1 − v 2 / c 2 {\displaystyle \Delta t={{\Delta \tau } \over {\sqrt {1-v^{2}/c^{2}}}}} where: Δ τ {\displaystyle \Delta \tau } is the time between two events as measured in the moving reference frame in which they occur at the same place (e.g. two ticks on a moving clock); it is called the proper time between the two events; Δ {\displaystyle \Delta } t is the time between these same two events, but as measured in the stationary reference frame; v is the speed of the moving reference frame relative to the stationary one; c is the speed of light. Moving objects therefore are said to show a slower passage of time. This is known as time dilation. These transformations are only valid for two frames at constant relative velocity. Naively applying them to other situations gives rise to such paradoxes as the twin paradox. That paradox can be resolved using for instance Einstein's General theory of relativity, which uses Riemannian geometry, geometry in accelerated, noninertial reference frames. 
Employing the metric tensor which describes Minkowski space: [ ( d x 1 ) 2 + ( d x 2 ) 2 + ( d x 3 ) 2 − c 2 ( d t ) 2 ] , {\displaystyle \left[(dx^{1})^{2}+(dx^{2})^{2}+(dx^{3})^{2}-c^{2}(dt)^{2}\right],} Einstein developed a geometric solution to Lorentz's transformation that preserves Maxwell's equations. His field equations give an exact relationship between the measurements of space and time in a given region of spacetime and the energy density of that region. Einstein's equations predict that time should be altered by the presence of gravitational fields (see the Schwarzschild metric): T = d t ( 1 − 2 G M r c 2 ) d t 2 − 1 c 2 ( 1 − 2 G M r c 2 ) − 1 d r 2 − r 2 c 2 d θ 2 − r 2 c 2 sin 2 ⁡ θ d ϕ 2 {\displaystyle T={\frac {dt}{\sqrt {\left(1-{\frac {2GM}{rc^{2}}}\right)dt^{2}-{\frac {1}{c^{2}}}\left(1-{\frac {2GM}{rc^{2}}}\right)^{-1}dr^{2}-{\frac {r^{2}}{c^{2}}}d\theta ^{2}-{\frac {r^{2}}{c^{2}}}\sin ^{2}\theta \;d\phi ^{2}}}}} where: T {\displaystyle T} is the gravitational time dilation of an object at a distance of r {\displaystyle r} . d t {\displaystyle dt} is the change in coordinate time, or the interval of coordinate time. G {\displaystyle G} is the gravitational constant M {\displaystyle M} is the mass generating the field ( 1 − 2 G M r c 2 ) d t 2 − 1 c 2 ( 1 − 2 G M r c 2 ) − 1 d r 2 − r 2 c 2 d θ 2 − r 2 c 2 sin 2 ⁡ θ d ϕ 2 {\displaystyle {\sqrt {\left(1-{\frac {2GM}{rc^{2}}}\right)dt^{2}-{\frac {1}{c^{2}}}\left(1-{\frac {2GM}{rc^{2}}}\right)^{-1}dr^{2}-{\frac {r^{2}}{c^{2}}}d\theta ^{2}-{\frac {r^{2}}{c^{2}}}\sin ^{2}\theta \;d\phi ^{2}}}} is the change in proper time d τ {\displaystyle d\tau } , or the interval of proper time. Or one could use the following simpler approximation: d t d τ = 1 1 − ( 2 G M r c 2 ) . {\displaystyle {\frac {dt}{d\tau }}={\frac {1}{\sqrt {1-\left({\frac {2GM}{rc^{2}}}\right)}}}.} That is, the stronger the gravitational field (and, thus, the larger the acceleration), the more slowly time runs. 
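Plugging representative values into the simpler approximation gives a feel for the size of the effect. The constants below are standard textbook figures, not taken from this article:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m
C = 299_792_458.0    # speed of light, m/s

def dilation_factor(r, M=M_EARTH):
    # dt/dtau = 1 / sqrt(1 - 2GM/(r c^2)) from the approximation above.
    return 1.0 / math.sqrt(1.0 - 2.0 * G * M / (r * C ** 2))

surface = dilation_factor(R_EARTH)
# A clock at the Earth's surface runs slow relative to one far away
# by roughly 7 parts in 10^10.
```

Small as this is, it is well within reach of atomic clocks, which is why the GPS corrections mentioned below are necessary.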
The predictions of time dilation are confirmed by particle acceleration experiments and cosmic ray evidence, where moving particles decay more slowly than their less energetic counterparts. Gravitational time dilation gives rise to the phenomenon of gravitational redshift and Shapiro signal travel time delays near massive objects such as the sun. The Global Positioning System must also adjust signals to account for this effect. According to Einstein's general theory of relativity, a freely moving particle traces a history in spacetime that maximises its proper time. This phenomenon is also referred to as the principle of maximal aging, and was described by Taylor and Wheeler as: "Principle of Extremal Aging: The path a free object takes between two events in spacetime is the path for which the time lapse between these events, recorded on the object's wristwatch, is an extremum." Einstein's theory was motivated by the assumption that every point in the universe can be treated as a 'center', and that correspondingly, physics must act the same in all reference frames. His simple and elegant theory shows that time is relative to an inertial frame. In an inertial frame, Newton's first law holds; it has its own local geometry, and therefore its own measurements of space and time; there is no 'universal clock'. An act of synchronization must be performed between two systems, at the least. === Time in quantum mechanics === There is a time parameter in the equations of quantum mechanics. The Schrödinger equation is H ( t ) | ψ ( t ) ⟩ = i ℏ ∂ ∂ t | ψ ( t ) ⟩ {\displaystyle H(t)\left|\psi (t)\right\rangle =i\hbar {\partial \over \partial t}\left|\psi (t)\right\rangle } One solution can be | ψ e ( t ) ⟩ = e − i H t / ℏ | ψ e ( 0 ) ⟩ {\displaystyle |\psi _{e}(t)\rangle =e^{-iHt/\hbar }|\psi _{e}(0)\rangle } . where e − i H t / ℏ {\displaystyle e^{-iHt/\hbar }} is called the time evolution operator, and H is the Hamiltonian. 
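For a Hamiltonian that is diagonal in the chosen basis, the time-evolution operator above reduces to a phase factor per energy eigenstate, which is easy to sketch (toy units and energies assumed, with ħ set to 1):

```python
import cmath

HBAR = 1.0  # natural units for this toy sketch

def evolve(amplitudes, energies, t):
    # For a Hamiltonian diagonal in this basis, exp(-iHt/hbar) multiplies
    # each eigenstate amplitude by the phase exp(-i E t / hbar).
    return [a * cmath.exp(-1j * E * t / HBAR)
            for a, E in zip(amplitudes, energies)]

# Equal superposition of two energy eigenstates (illustrative values).
psi0 = [1 / 2 ** 0.5, 1 / 2 ** 0.5]
energies = [0.0, 1.0]
psi_t = evolve(psi0, energies, t=3.0)
norm = sum(abs(a) ** 2 for a in psi_t)  # unitary evolution preserves the norm
```

The magnitudes of the amplitudes never change; only their relative phase does, which is what makes the evolution unitary.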
But the Schrödinger picture shown above is equivalent to the Heisenberg picture, which enjoys a similarity to the Poisson brackets of classical mechanics. The Poisson brackets are superseded by a nonzero commutator, say [H, A] for observable A, and Hamiltonian H: d d t A = ( i ℏ ) − 1 [ A , H ] + ( ∂ A ∂ t ) c l a s s i c a l . {\displaystyle {\frac {d}{dt}}A=(i\hbar )^{-1}[A,H]+\left({\frac {\partial A}{\partial t}}\right)_{\mathrm {classical} }.} A nonzero commutator of this kind gives rise to an uncertainty relation in quantum physics. For example, with time (the observable A), the energy E (from the Hamiltonian H) gives: Δ E Δ T ≥ ℏ 2 {\displaystyle \Delta E\Delta T\geq {\frac {\hbar }{2}}} where Δ E {\displaystyle \Delta E} is the uncertainty in energy Δ T {\displaystyle \Delta T} is the uncertainty in time ℏ {\displaystyle \hbar } is the reduced Planck constant The more precisely one measures the duration of a sequence of events, the less precisely one can measure the energy associated with that sequence, and vice versa. This relation is different from the standard uncertainty principle, because time is not an operator in quantum mechanics. Corresponding commutator relations also hold for momentum p and position q, which are conjugate variables of each other, along with a corresponding uncertainty principle in momentum and position, similar to the energy and time relation above. Quantum mechanics explains the properties of the periodic table of the elements. Starting with Otto Stern's and Walter Gerlach's experiment with molecular beams in a magnetic field, Isidor Rabi (1898–1988) was able to modulate the magnetic resonance of the beam. In 1945, Rabi suggested that this technique could be the basis of a clock using the resonant frequency of an atomic beam. 
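A worked instance of the energy–time relation above, with an assumed time resolution of one femtosecond (the numbers are illustrative, not from the original text):

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
EV = 1.602176634e-19    # joules per electronvolt

dt = 1e-15                # assumed time uncertainty: one femtosecond
dE_min = HBAR / (2 * dt)  # minimum energy uncertainty, in joules
dE_min_ev = dE_min / EV   # roughly a third of an electronvolt
```

So a femtosecond-scale process cannot have its energy pinned down more tightly than a few tenths of an electronvolt, which is comparable to chemical bond energies.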
In 2021, Jun Ye of JILA in Boulder, Colorado observed gravitational time dilation as a difference in the rate of optical lattice clock ticks between the top and the bottom of a cloud of strontium atoms, a column one millimeter tall, under the influence of gravity. == Dynamical systems == One could say that time is a parameterization of a dynamical system that allows the geometry of the system to be manifested and operated on. It has been asserted that time is an implicit consequence of chaos (i.e. nonlinearity/irreversibility): the characteristic time, or rate of information entropy production, of a system. Mandelbrot introduced intrinsic time in his book Multifractals and 1/f noise. === Time crystals === Khemani, Moessner, and Sondhi define a time crystal as a "stable, conservative, macroscopic clock". == Signalling == Signalling is one application of the electromagnetic waves described above. In general, a signal is part of communication between parties and places. One example might be a yellow ribbon tied to a tree, or the ringing of a church bell. A signal can be part of a conversation, which involves a protocol. Another signal might be the position of the hour hand on a town clock or at a railway station. An interested party might wish to view that clock, to learn the time. See: Time ball, an early form of Time signal. We as observers can still signal different parties and places as long as we live within their past light cone. But we cannot receive signals from those parties and places outside our past light cone. Along with the formulation of the equations for the electromagnetic wave, the field of telecommunication could be founded. In 19th century telegraphy, electrical circuits, some spanning continents and oceans, could transmit codes: simple dots, dashes and spaces. From this, a series of technical issues have emerged; see Category:Synchronization. 
But it is safe to say that our signalling systems can be only approximately synchronized, a plesiochronous condition, from which jitter need be eliminated. That said, systems can be synchronized (at an engineering approximation), using technologies like GPS. The GPS satellites must account for the effects of gravitation and other relativistic factors in their circuitry. See: Self-clocking signal. == Technology for timekeeping standards == The primary time standard in the U.S. is currently NIST-F1, a laser-cooled Cs fountain, the latest in a series of time and frequency standards, from the ammonia-based atomic clock (1949) to the caesium-based NBS-1 (1952) to NIST-7 (1993). The respective clock uncertainty declined from 10,000 nanoseconds per day to 0.5 nanoseconds per day in 5 decades. In 2001 the clock uncertainty for NIST-F1 was 0.1 nanoseconds/day. Development of increasingly accurate frequency standards is underway. In this time and frequency standard, a population of caesium atoms is laser-cooled to temperatures of one microkelvin. The atoms collect in a ball shaped by six lasers, two for each spatial dimension, vertical (up/down), horizontal (left/right), and back/forth. The vertical lasers push the caesium ball through a microwave cavity. As the ball is cooled, the caesium population cools to its ground state and emits light at its natural frequency, stated in the definition of second above. Eleven physical effects are accounted for in the emissions from the caesium population, which are then controlled for in the NIST-F1 clock. These results are reported to BIPM. Additionally, a reference hydrogen maser is also reported to BIPM as a frequency standard for TAI (international atomic time). The measurement of time is overseen by BIPM (Bureau International des Poids et Mesures), located in Sèvres, France, which ensures uniformity of measurements and their traceability to the International System of Units (SI) worldwide. 
BIPM operates under authority of the Metre Convention, a diplomatic treaty between fifty-one nations, the Member States of the Convention, through a series of Consultative Committees, whose members are the respective national metrology laboratories. == Time in cosmology == The equations of general relativity predict a non-static universe. However, Einstein accepted only a static universe, and modified the Einstein field equation to reflect this by adding the cosmological constant, which he later described as his "biggest blunder". But in 1927, Georges Lemaître (1894–1966) argued, on the basis of general relativity, that the universe originated in a primordial explosion. At the fifth Solvay conference, that year, Einstein brushed him off with "Vos calculs sont corrects, mais votre physique est abominable." (“Your math is correct, but your physics is abominable”). In 1929, Edwin Hubble (1889–1953) announced his discovery of the expanding universe. The current generally accepted cosmological model, the Lambda-CDM model, has a positive cosmological constant and thus not only an expanding universe but an accelerating expanding universe. If the universe were expanding, then it must have been much smaller and therefore hotter and denser in the past. George Gamow (1904–1968) hypothesized that the abundance of the elements in the Periodic Table of the Elements might be accounted for by nuclear reactions in a hot dense universe. He was disputed by Fred Hoyle (1915–2001), who invented the term 'Big Bang' to disparage it. Fermi and others noted that this process would have stopped after only the light elements were created, and thus did not account for the abundance of heavier elements. Gamow's prediction was a 5–10-kelvin black-body radiation temperature for the universe, after it cooled during the expansion. This was corroborated by Penzias and Wilson in 1965. 
Subsequent experiments arrived at a temperature of 2.7 kelvins, corresponding to an age of the universe of 13.8 billion years after the Big Bang. This dramatic result has raised issues: what happened between the singularity of the Big Bang and the Planck time, which, after all, is the smallest observable time? When might time have separated out from the spacetime foam? There are only hints based on broken symmetries (see Spontaneous symmetry breaking, Timeline of the Big Bang, and the articles in Category:Physical cosmology). General relativity gave us our modern notion of the expanding universe that started in the Big Bang. Using relativity and quantum theory we have been able to roughly reconstruct the history of the universe. In our epoch, during which electromagnetic waves can propagate without being disturbed by conductors or charges, we can see the stars, at great distances from us, in the night sky. (Before this epoch, there was a time, before the universe cooled enough for electrons and nuclei to combine into atoms about 377,000 years after the Big Bang, during which starlight would not have been visible over large distances.) == Reprise == Ilya Prigogine's reprise is "Time precedes existence". In contrast to the views of Newton, of Einstein, and of quantum physics, which offer a symmetric view of time (as discussed above), Prigogine points out that statistical and thermodynamic physics can explain irreversible phenomena, as well as the arrow of time and the Big Bang. == See also == Relativistic dynamics Category:Systems of units Time in astronomy == References == == Further reading == Boorstin, Daniel J., The Discoverers. Vintage. February 12, 1985. ISBN 0-394-72625-1 Dieter Zeh, H., The physical basis of the direction of time. Springer. ISBN 978-3-540-42081-1 Kuhn, Thomas S., The Structure of Scientific Revolutions. ISBN 0-226-45808-3 Mandelbrot, Benoît, Multifractals and 1/f noise. Springer Verlag. February 1999. 
ISBN 0-387-98539-5 Prigogine, Ilya (1984), Order out of Chaos. ISBN 0-394-54204-5 Serres, Michel, et al., "Conversations on Science, Culture, and Time (Studies in Literature and Science)". March, 1995. ISBN 0-472-06548-3 Stengers, Isabelle, and Ilya Prigogine, Theory Out of Bounds. University of Minnesota Press. November 1997. ISBN 0-8166-2517-4 == External links == Media related to Time in physics at Wikimedia Commons
Wikipedia/Time_in_physics
In mathematics, integrals of inverse functions can be computed by means of a formula that expresses the antiderivatives of the inverse f − 1 {\displaystyle f^{-1}} of a continuous and invertible function f {\displaystyle f} , in terms of f − 1 {\displaystyle f^{-1}} and an antiderivative of f {\displaystyle f} . This formula was published in 1905 by Charles-Ange Laisant. == Statement of the theorem == Let I 1 {\displaystyle I_{1}} and I 2 {\displaystyle I_{2}} be two intervals of R {\displaystyle \mathbb {R} } . Assume that f : I 1 → I 2 {\displaystyle f:I_{1}\to I_{2}} is a continuous and invertible function. It follows from the intermediate value theorem that f {\displaystyle f} is strictly monotone. Consequently, f {\displaystyle f} maps intervals to intervals, so is an open map and thus a homeomorphism. Since f {\displaystyle f} and the inverse function f − 1 : I 2 → I 1 {\displaystyle f^{-1}:I_{2}\to I_{1}} are continuous, they have antiderivatives by the fundamental theorem of calculus. Laisant proved that if F {\displaystyle F} is an antiderivative of f {\displaystyle f} , then the antiderivatives of f − 1 {\displaystyle f^{-1}} are: ∫ f − 1 ( y ) d y = y f − 1 ( y ) − F ∘ f − 1 ( y ) + C , {\displaystyle \int f^{-1}(y)\,dy=yf^{-1}(y)-F\circ f^{-1}(y)+C,} where C {\displaystyle C} is an arbitrary real number. Note that it is not assumed that f − 1 {\displaystyle f^{-1}} is differentiable. In his 1905 article, Laisant gave three proofs. === First proof === First, under the additional hypothesis that f − 1 {\displaystyle f^{-1}} is differentiable, one may differentiate the above formula, which completes the proof immediately. === Second proof === His second proof was geometric. If f ( a ) = c {\displaystyle f(a)=c} and f ( b ) = d {\displaystyle f(b)=d} , the theorem can be written: ∫ c d f − 1 ( y ) d y + ∫ a b f ( x ) d x = b d − a c . 
{\displaystyle \int _{c}^{d}f^{-1}(y)\,dy+\int _{a}^{b}f(x)\,dx=bd-ac.} The figure on the right is a proof without words of this formula. Laisant does not discuss the hypotheses necessary to make this proof rigorous, but this can be proved if f {\displaystyle f} is just assumed to be strictly monotone (but not necessarily continuous, let alone differentiable). In this case, both f {\displaystyle f} and f − 1 {\displaystyle f^{-1}} are Riemann integrable and the identity follows from a bijection between lower/upper Darboux sums of f {\displaystyle f} and upper/lower Darboux sums of f − 1 {\displaystyle f^{-1}} . The antiderivative version of the theorem then follows from the fundamental theorem of calculus in the case when f {\displaystyle f} is also assumed to be continuous. === Third proof === Laisant's third proof uses the additional hypothesis that f {\displaystyle f} is differentiable. Beginning with f − 1 ( f ( x ) ) = x {\displaystyle f^{-1}(f(x))=x} , one multiplies by f ′ ( x ) {\displaystyle f'(x)} and integrates both sides. The right-hand side is calculated using integration by parts to be x f ( x ) − ∫ f ( x ) d x {\textstyle xf(x)-\int f(x)\,dx} , and the formula follows. === Details === One may also think as follows when f {\displaystyle f} is differentiable. As f {\displaystyle f} is continuous at any x {\displaystyle x} , F := ∫ 0 x f {\displaystyle F:=\int _{0}^{x}f} is differentiable at all x {\displaystyle x} by the fundamental theorem of calculus. Since f {\displaystyle f} is invertible, its derivative would vanish in at most countably many points. Sort these points by . . . < t − 1 < t 0 < t 1 < . . . {\displaystyle ...<t_{-1}<t_{0}<t_{1}<...} . 
Since g ( y ) := y f − 1 ( y ) − F ∘ f − 1 ( y ) + C {\displaystyle g(y):=yf^{-1}(y)-F\circ f^{-1}(y)+C} is a composition of differentiable functions on each interval ( t i , t i + 1 ) {\displaystyle (t_{i},t_{i+1})} , the chain rule can be applied, g ′ ( y ) = f − 1 ( y ) + y / f ′ ( f − 1 ( y ) ) − f ∘ f − 1 ( y ) ⋅ 1 / f ′ ( f − 1 ( y ) ) + 0 = f − 1 ( y ) {\displaystyle g'(y)=f^{-1}(y)+y/f'(f^{-1}(y))-f\circ f^{-1}(y)\cdot 1/f'(f^{-1}(y))+0=f^{-1}(y)} to see that g | ( t i , t i + 1 ) {\displaystyle \left.g\right|_{(t_{i},t_{i+1})}} is an antiderivative of f − 1 | ( t i , t i + 1 ) {\displaystyle \left.f^{-1}\right|_{(t_{i},t_{i+1})}} . We claim that g {\displaystyle g} is also differentiable at each t i {\displaystyle t_{i}} and remains bounded if I 2 {\displaystyle I_{2}} is compact. In such a case f − 1 {\displaystyle f^{-1}} is continuous and bounded. By continuity and the fundamental theorem of calculus, G ( y ) := C + ∫ 0 y f − 1 {\displaystyle G(y):=C+\int _{0}^{y}f^{-1}} where C {\displaystyle C} is a constant, is a differentiable extension of g {\displaystyle g} . But g {\displaystyle g} is continuous as it is a composition of continuous functions. So is G {\displaystyle G} , by differentiability. Therefore, G = g {\displaystyle G=g} . One can now use the fundamental theorem of calculus to compute ∫ I 2 f − 1 {\displaystyle \int _{I_{2}}f^{-1}} . Nevertheless, it can be shown that this theorem holds even if f {\displaystyle f} or f − 1 {\displaystyle f^{-1}} is not differentiable: it suffices, for example, to use the Stieltjes integral in the previous argument. On the other hand, even though general monotonic functions are differentiable almost everywhere, the proof of the general formula does not follow, unless f − 1 {\displaystyle f^{-1}} is absolutely continuous. 
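The Riemann/Darboux-sum identity behind the geometric proof can be illustrated numerically. The sketch below (with the illustrative choice f(x) = x³ on [a, b] = [1, 2], hence [c, d] = [1, 8], and a midpoint rule; all choices are assumptions for the demonstration) checks ∫cd f⁻¹(y) dy + ∫ab f(x) dx = bd − ac:

```python
# Numerical check of the area identity from the geometric proof:
#   integral of f^{-1} over [c, d] + integral of f over [a, b] = b*d - a*c,
# for the illustrative choice f(x) = x^3 on [a, b] = [1, 2], so [c, d] = [1, 8].

def midpoint_integral(func, lo, hi, n=100_000):
    """Midpoint-rule approximation of the integral of func on [lo, hi]."""
    h = (hi - lo) / n
    return h * sum(func(lo + (i + 0.5) * h) for i in range(n))

f = lambda x: x ** 3
f_inv = lambda y: y ** (1.0 / 3.0)

a, b = 1.0, 2.0
c, d = f(a), f(b)  # c = 1, d = 8

lhs = midpoint_integral(f_inv, c, d) + midpoint_integral(f, a, b)
rhs = b * d - a * c  # = 15
assert abs(lhs - rhs) < 1e-6
```

The two integrals (11.25 and 3.75 here) tile the rectangle [0, b] × [0, d] minus the rectangle [0, a] × [0, c], which is the content of the proof without words.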
It is also possible to check that for every y {\displaystyle y} in I 2 {\displaystyle I_{2}} , the derivative of the function y ↦ y f − 1 ( y ) − F ( f − 1 ( y ) ) {\displaystyle y\mapsto yf^{-1}(y)-F(f^{-1}(y))} is equal to f − 1 ( y ) {\displaystyle f^{-1}(y)} . In other words: ∀ x ∈ I 1 lim h → 0 ( x + h ) f ( x + h ) − x f ( x ) − ( F ( x + h ) − F ( x ) ) f ( x + h ) − f ( x ) = x . {\displaystyle \forall x\in I_{1}\quad \lim _{h\to 0}{\frac {(x+h)f(x+h)-xf(x)-\left(F(x+h)-F(x)\right)}{f(x+h)-f(x)}}=x.} To this end, it suffices to apply the mean value theorem to F {\displaystyle F} between x {\displaystyle x} and x + h {\displaystyle x+h} , taking into account that f {\displaystyle f} is monotonic. == Examples == Assume that f ( x ) = exp ⁡ ( x ) {\displaystyle f(x)=\exp(x)} , hence f − 1 ( y ) = ln ⁡ ( y ) {\displaystyle f^{-1}(y)=\ln(y)} . The formula above gives immediately ∫ ln ⁡ ( y ) d y = y ln ⁡ ( y ) − exp ⁡ ( ln ⁡ ( y ) ) + C = y ln ⁡ ( y ) − y + C . {\displaystyle \int \ln(y)\,dy=y\ln(y)-\exp(\ln(y))+C=y\ln(y)-y+C.} Similarly, with f ( x ) = cos ⁡ ( x ) {\displaystyle f(x)=\cos(x)} and f − 1 ( y ) = arccos ⁡ ( y ) {\displaystyle f^{-1}(y)=\arccos(y)} , ∫ arccos ⁡ ( y ) d y = y arccos ⁡ ( y ) − sin ⁡ ( arccos ⁡ ( y ) ) + C . {\displaystyle \int \arccos(y)\,dy=y\arccos(y)-\sin(\arccos(y))+C.} With f ( x ) = tan ⁡ ( x ) {\displaystyle f(x)=\tan(x)} and f − 1 ( y ) = arctan ⁡ ( y ) {\displaystyle f^{-1}(y)=\arctan(y)} , ∫ arctan ⁡ ( y ) d y = y arctan ⁡ ( y ) + ln ⁡ | cos ⁡ ( arctan ⁡ ( y ) ) | + C . {\displaystyle \int \arctan(y)\,dy=y\arctan(y)+\ln \left|\cos(\arctan(y))\right|+C.} == History == Apparently, this theorem of integration was discovered for the first time in 1905 by Charles-Ange Laisant, who "could hardly believe that this theorem is new", and hoped its use would henceforth spread out among students and teachers. 
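The closed forms in these examples can be spot-checked by numerical differentiation: each antiderivative G should satisfy G′(y) ≈ f⁻¹(y). The evaluation points below are arbitrary illustrative choices.

```python
import math

def num_deriv(G, y, h=1e-6):
    """Central-difference approximation of G'(y)."""
    return (G(y + h) - G(y - h)) / (2 * h)

# ∫ ln(y) dy = y·ln(y) − y + C
G_ln = lambda y: y * math.log(y) - y
assert abs(num_deriv(G_ln, 2.0) - math.log(2.0)) < 1e-7

# ∫ arccos(y) dy = y·arccos(y) − sin(arccos(y)) + C
G_acos = lambda y: y * math.acos(y) - math.sin(math.acos(y))
assert abs(num_deriv(G_acos, 0.3) - math.acos(0.3)) < 1e-7

# ∫ arctan(y) dy = y·arctan(y) + ln|cos(arctan(y))| + C
G_atan = lambda y: y * math.atan(y) + math.log(abs(math.cos(math.atan(y))))
assert abs(num_deriv(G_atan, 1.5) - math.atan(1.5)) < 1e-7
```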
This result was published independently in 1912 by an Italian engineer, Alberto Caprilli, in an opuscule entitled "Nuove formole d'integrazione". It was rediscovered in 1955 by Parker, and by a number of mathematicians following him. Nevertheless, they all assume that f or f−1 is differentiable. The general version of the theorem, free from this additional assumption, was proposed by Michael Spivak in 1965, as an exercise in the Calculus, and a fairly complete proof following the same lines was published by Eric Key in 1994. This proof relies on the very definition of the Darboux integral, and consists in showing that the upper Darboux sums of the function f are in 1-1 correspondence with the lower Darboux sums of f−1. In 2013, Michael Bensimhoun, estimating that the general theorem was still insufficiently known, gave two other proofs: The second proof, based on the Stieltjes integral and on its formulae of integration by parts and of homeomorphic change of variables, is the most suitable to establish more complex formulae. == Generalization to holomorphic functions == The above theorem generalizes in the obvious way to holomorphic functions: Let U {\displaystyle U} and V {\displaystyle V} be two open and simply connected sets of C {\displaystyle \mathbb {C} } , and assume that f : U → V {\displaystyle f:U\to V} is a biholomorphism. Then f {\displaystyle f} and f − 1 {\displaystyle f^{-1}} have antiderivatives, and if F {\displaystyle F} is an antiderivative of f {\displaystyle f} , the general antiderivative of f − 1 {\displaystyle f^{-1}} is G ( z ) = z f − 1 ( z ) − F ∘ f − 1 ( z ) + C . {\displaystyle G(z)=zf^{-1}(z)-F\circ f^{-1}(z)+C.} Because all holomorphic functions are differentiable, the proof is immediate by complex differentiation. == See also == Integration by parts Legendre transformation Young's inequality for products == References ==
Wikipedia/Integration_of_inverse_functions
In mathematics, the Legendre transformation (or Legendre transform), first introduced by Adrien-Marie Legendre in 1787 when studying the minimal surface problem, is an involutive transformation on real-valued functions that are convex on a real variable. Specifically, if a real-valued multivariable function is convex on one of its independent real variables, then the Legendre transform with respect to this variable is applicable to the function. In physical problems, the Legendre transform is used to convert functions of one quantity (such as position, pressure, or temperature) into functions of the conjugate quantity (momentum, volume, and entropy, respectively). In this way, it is commonly used in classical mechanics to derive the Hamiltonian formalism out of the Lagrangian formalism (or vice versa) and in thermodynamics to derive the thermodynamic potentials, as well as in the solution of differential equations of several variables. For sufficiently smooth functions on the real line, the Legendre transform f ∗ {\displaystyle f^{*}} of a function f {\displaystyle f} can be specified, up to an additive constant, by the condition that the functions' first derivatives are inverse functions of each other. This can be expressed in Euler's derivative notation as D f ( ⋅ ) = ( D f ∗ ) − 1 ( ⋅ ) , {\displaystyle Df(\cdot )=\left(Df^{*}\right)^{-1}(\cdot )~,} where D {\displaystyle D} is an operator of differentiation, ⋅ {\displaystyle \cdot } represents an argument or input to the associated function, ( ϕ ) − 1 ( ⋅ ) {\displaystyle (\phi )^{-1}(\cdot )} is an inverse function such that ( ϕ ) − 1 ( ϕ ( x ) ) = x {\displaystyle (\phi )^{-1}(\phi (x))=x} , or equivalently, as f ′ ( f ∗ ′ ( x ∗ ) ) = x ∗ {\displaystyle f'(f^{*\prime }(x^{*}))=x^{*}} and f ∗ ′ ( f ′ ( x ) ) = x {\displaystyle f^{*\prime }(f'(x))=x} in Lagrange's notation. 
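As a small numerical illustration of this defining condition, one can take the conjugate pair f(x) = x² and f*(p) = p²/4 (this pair is worked out in Example 2 later in the article) and check that the first derivatives Df and Df* undo each other:

```python
# The pair f(x) = x^2, f*(p) = p^2/4 has derivatives
#   Df(x) = 2x  and  Df*(p) = p/2,
# which should be inverse functions of each other.

f_prime = lambda x: 2.0 * x        # Df
f_star_prime = lambda p: p / 2.0   # Df*

for p in (-3.0, -1.0, 0.5, 2.0, 10.0):
    # Df(Df*(p)) = p  and  Df*(Df(p)) = p
    assert abs(f_prime(f_star_prime(p)) - p) < 1e-12
    assert abs(f_star_prime(f_prime(p)) - p) < 1e-12
```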
The generalization of the Legendre transformation to affine spaces and non-convex functions is known as the convex conjugate (also called the Legendre–Fenchel transformation), which can be used to construct a function's convex hull. == Definition == === Definition in one-dimensional real space === Let I ⊂ R {\displaystyle I\subset \mathbb {R} } be an interval, and f : I → R {\displaystyle f:I\to \mathbb {R} } a convex function; then the Legendre transform of f {\displaystyle f} is the function f ∗ : I ∗ → R {\displaystyle f^{*}:I^{*}\to \mathbb {R} } defined by f ∗ ( x ∗ ) = sup x ∈ I ( x ∗ x − f ( x ) ) , I ∗ = { x ∗ ∈ R : sup x ∈ I ( x ∗ x − f ( x ) ) < ∞ } {\displaystyle f^{*}(x^{*})=\sup _{x\in I}(x^{*}x-f(x)),\ \ \ \ I^{*}=\left\{x^{*}\in \mathbb {R} :\sup _{x\in I}(x^{*}x-f(x))<\infty \right\}} where sup {\textstyle \sup } denotes the supremum over I {\displaystyle I} , e.g., x {\textstyle x} in I {\textstyle I} is chosen such that x ∗ x − f ( x ) {\textstyle x^{*}x-f(x)} is maximized at each x ∗ {\textstyle x^{*}} , or x ∗ {\textstyle x^{*}} is such that x ∗ x − f ( x ) {\displaystyle x^{*}x-f(x)} has a bounded value throughout I {\textstyle I} (e.g., when f ( x ) {\displaystyle f(x)} is a linear function). The function f ∗ {\displaystyle f^{*}} is called the convex conjugate function of f {\displaystyle f} . For historical reasons (rooted in analytic mechanics), the conjugate variable is often denoted p {\displaystyle p} , instead of x ∗ {\displaystyle x^{*}} . If the convex function f {\displaystyle f} is defined on the whole line and is everywhere differentiable, then f ∗ ( p ) = sup x ∈ I ( p x − f ( x ) ) = ( p x − f ( x ) ) | x = ( f ′ ) − 1 ( p ) {\displaystyle f^{*}(p)=\sup _{x\in I}(px-f(x))=\left(px-f(x)\right)|_{x=(f')^{-1}(p)}} can be interpreted as the negative of the y {\displaystyle y} -intercept of the tangent line to the graph of f {\displaystyle f} that has slope p {\displaystyle p} . 
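The supremum in this definition can also be approximated by brute force. The sketch below (grid bounds and spacing are illustrative assumptions) computes f*(p) for f(x) = eˣ on a fine grid and compares it with the closed form p(ln p − 1) derived in Example 1 later in the article:

```python
import math

def legendre_grid(f, p, lo=-10.0, hi=10.0, n=100_000):
    """Grid-search approximation of f*(p) = sup_x (p*x - f(x))."""
    step = (hi - lo) / n
    return max(p * x - f(x) for x in (lo + i * step for i in range(n + 1)))

for p in (0.5, 1.0, 2.0, 5.0):
    approx = legendre_grid(math.exp, p)
    exact = p * (math.log(p) - 1.0)  # closed form, valid for p > 0
    assert abs(approx - exact) < 1e-6
```

The grid value is always a lower bound on the true supremum; the error here is of order p·(step/2)², which is far below the tolerance used.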
=== Definition in n-dimensional real space === The generalization to convex functions f : X → R {\displaystyle f:X\to \mathbb {R} } on a convex set X ⊂ R n {\displaystyle X\subset \mathbb {R} ^{n}} is straightforward: f ∗ : X ∗ → R {\displaystyle f^{*}:X^{*}\to \mathbb {R} } has domain X ∗ = { x ∗ ∈ R n : sup x ∈ X ( ⟨ x ∗ , x ⟩ − f ( x ) ) < ∞ } {\displaystyle X^{*}=\left\{x^{*}\in \mathbb {R} ^{n}:\sup _{x\in X}(\langle x^{*},x\rangle -f(x))<\infty \right\}} and is defined by f ∗ ( x ∗ ) = sup x ∈ X ( ⟨ x ∗ , x ⟩ − f ( x ) ) , x ∗ ∈ X ∗ , {\displaystyle f^{*}(x^{*})=\sup _{x\in X}(\langle x^{*},x\rangle -f(x)),\quad x^{*}\in X^{*}~,} where ⟨ x ∗ , x ⟩ {\displaystyle \langle x^{*},x\rangle } denotes the dot product of x ∗ {\displaystyle x^{*}} and x {\displaystyle x} . The Legendre transformation is an application of the duality relationship between points and lines. The functional relationship specified by f {\displaystyle f} can be represented equally well as a set of ( x , y ) {\displaystyle (x,y)} points, or as a set of tangent lines specified by their slope and intercept values. === Understanding the Legendre transform in terms of derivatives === For a differentiable convex function f {\displaystyle f} on the real line with the first derivative f ′ {\displaystyle f'} and its inverse ( f ′ ) − 1 {\displaystyle (f')^{-1}} , the Legendre transform of f {\displaystyle f} , f ∗ {\displaystyle f^{*}} , can be specified, up to an additive constant, by the condition that the functions' first derivatives are inverse functions of each other, i.e., f ′ = ( ( f ∗ ) ′ ) − 1 {\displaystyle f'=((f^{*})')^{-1}} and ( f ∗ ) ′ = ( f ′ ) − 1 {\displaystyle (f^{*})'=(f')^{-1}} . 
To see this, first note that if f {\displaystyle f} as a convex function on the real line is differentiable and x ¯ {\displaystyle {\overline {x}}} is a critical point of the function of x ↦ p ⋅ x − f ( x ) {\displaystyle x\mapsto p\cdot x-f(x)} , then the supremum is achieved at x ¯ {\textstyle {\overline {x}}} (by convexity, see the first figure in this Wikipedia page). Therefore, the Legendre transform of f {\displaystyle f} is f ∗ ( p ) = p ⋅ x ¯ − f ( x ¯ ) {\displaystyle f^{*}(p)=p\cdot {\overline {x}}-f({\overline {x}})} . Then, suppose that the first derivative f ′ {\displaystyle f'} is invertible and let the inverse be g = ( f ′ ) − 1 {\displaystyle g=(f')^{-1}} . Then for each p {\textstyle p} , the point g ( p ) {\displaystyle g(p)} is the unique critical point x ¯ {\textstyle {\overline {x}}} of the function x ↦ p x − f ( x ) {\displaystyle x\mapsto px-f(x)} (i.e., x ¯ = g ( p ) {\displaystyle {\overline {x}}=g(p)} ) because f ′ ( g ( p ) ) = p {\displaystyle f'(g(p))=p} and the function's first derivative with respect to x {\displaystyle x} at g ( p ) {\displaystyle g(p)} is p − f ′ ( g ( p ) ) = 0 {\displaystyle p-f'(g(p))=0} . Hence we have f ∗ ( p ) = p ⋅ g ( p ) − f ( g ( p ) ) {\displaystyle f^{*}(p)=p\cdot g(p)-f(g(p))} for each p {\textstyle p} . By differentiating with respect to p {\textstyle p} , we find ( f ∗ ) ′ ( p ) = g ( p ) + p ⋅ g ′ ( p ) − f ′ ( g ( p ) ) ⋅ g ′ ( p ) . {\displaystyle (f^{*})'(p)=g(p)+p\cdot g'(p)-f'(g(p))\cdot g'(p).} Since f ′ ( g ( p ) ) = p {\displaystyle f'(g(p))=p} this simplifies to ( f ∗ ) ′ ( p ) = g ( p ) = ( f ′ ) − 1 ( p ) {\displaystyle (f^{*})'(p)=g(p)=(f')^{-1}(p)} . In other words, ( f ∗ ) ′ {\displaystyle (f^{*})'} and f ′ {\displaystyle f'} are inverses to each other. In general, if h ′ = ( f ′ ) − 1 {\displaystyle h'=(f')^{-1}} as the inverse of f ′ , {\displaystyle f',} then h ′ = ( f ∗ ) ′ {\displaystyle h'=(f^{*})'} so integration gives f ∗ = h + c . {\displaystyle f^{*}=h+c.} with a constant c . 
{\displaystyle c.} In practical terms, given f ( x ) , {\displaystyle f(x),} the parametric plot of x f ′ ( x ) − f ( x ) {\displaystyle xf'(x)-f(x)} versus f ′ ( x ) {\displaystyle f'(x)} amounts to the graph of f ∗ ( p ) {\displaystyle f^{*}(p)} versus p . {\displaystyle p.} In some cases (e.g. thermodynamic potentials, below), a non-standard requirement is used, amounting to an alternative definition of f * with a minus sign, f ( x ) − f ∗ ( p ) = x p . {\displaystyle f(x)-f^{*}(p)=xp.} === Formal definition in physics context === In analytical mechanics and thermodynamics, Legendre transformation is usually defined as follows: suppose f {\displaystyle f} is a function of x {\displaystyle x} ; then we have d f = d f d x d x . {\displaystyle \mathrm {d} f={\frac {\mathrm {d} f}{\mathrm {d} x}}\mathrm {d} x.} Performing the Legendre transformation on this function means that we take p = d f d x {\displaystyle p={\frac {\mathrm {d} f}{\mathrm {d} x}}} as the independent variable, so that the above expression can be written as d f = p d x , {\displaystyle \mathrm {d} f=p\mathrm {d} x,} and according to Leibniz's rule d ( u v ) = u d v + v d u , {\displaystyle \mathrm {d} (uv)=u\mathrm {d} v+v\mathrm {d} u,} we then have d ( x p − f ) = x d p + p d x − d f = x d p , {\displaystyle \mathrm {d} \left(xp-f\right)=x\mathrm {d} p+p\mathrm {d} x-\mathrm {d} f=x\mathrm {d} p,} and taking f ∗ = x p − f , {\displaystyle f^{*}=xp-f,} we have d f ∗ = x d p , {\displaystyle \mathrm {d} f^{*}=x\mathrm {d} p,} which means d f ∗ d p = x . 
{\displaystyle {\frac {\mathrm {d} f^{*}}{\mathrm {d} p}}=x.} When f {\displaystyle f} is a function of n {\displaystyle n} variables x 1 , x 2 , ⋯ , x n {\displaystyle x_{1},x_{2},\cdots ,x_{n}} , then we can perform the Legendre transformation on each one or several variables: we have d f = p 1 d x 1 + p 2 d x 2 + ⋯ + p n d x n , {\displaystyle \mathrm {d} f=p_{1}\mathrm {d} x_{1}+p_{2}\mathrm {d} x_{2}+\cdots +p_{n}\mathrm {d} x_{n},} where p i = ∂ f ∂ x i . {\displaystyle p_{i}={\frac {\partial f}{\partial x_{i}}}.} Then if we want to perform the Legendre transformation on, e.g. x 1 {\displaystyle x_{1}} , then we take p 1 {\displaystyle p_{1}} together with x 2 , ⋯ , x n {\displaystyle x_{2},\cdots ,x_{n}} as independent variables, and with Leibniz's rule we have d ( f − x 1 p 1 ) = − x 1 d p 1 + p 2 d x 2 + ⋯ + p n d x n . {\displaystyle \mathrm {d} (f-x_{1}p_{1})=-x_{1}\mathrm {d} p_{1}+p_{2}\mathrm {d} x_{2}+\cdots +p_{n}\mathrm {d} x_{n}.} So for the function φ ( p 1 , x 2 , ⋯ , x n ) = f ( x 1 , x 2 , ⋯ , x n ) − x 1 p 1 , {\displaystyle \varphi (p_{1},x_{2},\cdots ,x_{n})=f(x_{1},x_{2},\cdots ,x_{n})-x_{1}p_{1},} we have ∂ φ ∂ p 1 = − x 1 , ∂ φ ∂ x 2 = p 2 , ⋯ , ∂ φ ∂ x n = p n . {\displaystyle {\frac {\partial \varphi }{\partial p_{1}}}=-x_{1},\quad {\frac {\partial \varphi }{\partial x_{2}}}=p_{2},\quad \cdots ,\quad {\frac {\partial \varphi }{\partial x_{n}}}=p_{n}.} We can also do this transformation for variables x 2 , ⋯ , x n {\displaystyle x_{2},\cdots ,x_{n}} . If we do it to all the variables, then we have d φ = − x 1 d p 1 − x 2 d p 2 − ⋯ − x n d p n {\displaystyle \mathrm {d} \varphi =-x_{1}\mathrm {d} p_{1}-x_{2}\mathrm {d} p_{2}-\cdots -x_{n}\mathrm {d} p_{n}} where φ = f − x 1 p 1 − x 2 p 2 − ⋯ − x n p n . 
{\displaystyle \varphi =f-x_{1}p_{1}-x_{2}p_{2}-\cdots -x_{n}p_{n}.} In analytical mechanics, people perform this transformation on variables q ˙ 1 , q ˙ 2 , ⋯ , q ˙ n {\displaystyle {\dot {q}}_{1},{\dot {q}}_{2},\cdots ,{\dot {q}}_{n}} of the Lagrangian L ( q 1 , ⋯ , q n , q ˙ 1 , ⋯ , q ˙ n ) {\displaystyle L(q_{1},\cdots ,q_{n},{\dot {q}}_{1},\cdots ,{\dot {q}}_{n})} to get the Hamiltonian: H ( q 1 , ⋯ , q n , p 1 , ⋯ , p n ) = ∑ i = 1 n p i q ˙ i − L ( q 1 , ⋯ , q n , q ˙ 1 , ⋯ , q ˙ n ) . {\displaystyle H(q_{1},\cdots ,q_{n},p_{1},\cdots ,p_{n})=\sum _{i=1}^{n}p_{i}{\dot {q}}_{i}-L(q_{1},\cdots ,q_{n},{\dot {q}}_{1},\cdots ,{\dot {q}}_{n}).} In thermodynamics, people perform this transformation on variables according to the type of thermodynamic system they want; for example, starting from the cardinal function of state, the internal energy U ( S , V ) {\displaystyle U(S,V)} , we have d U = T d S − p d V , {\displaystyle \mathrm {d} U=T\mathrm {d} S-p\mathrm {d} V,} so we can perform the Legendre transformation on either or both of S , V {\displaystyle S,V} to yield d H = d ( U + p V ) = T d S + V d p {\displaystyle \mathrm {d} H=\mathrm {d} (U+pV)=T\mathrm {d} S+V\mathrm {d} p} d F = d ( U − T S ) = − S d T − p d V {\displaystyle \mathrm {d} F=\mathrm {d} (U-TS)=-S\mathrm {d} T-p\mathrm {d} V} d G = d ( U − T S + p V ) = − S d T + V d p , {\displaystyle \mathrm {d} G=\mathrm {d} (U-TS+pV)=-S\mathrm {d} T+V\mathrm {d} p,} and each of these three expressions has a physical meaning. This definition of the Legendre transformation is the one originally introduced by Legendre in his work in 1787, and is still applied by physicists nowadays. 
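A minimal numerical instance of the mechanics transformation, assuming the illustrative one-dimensional Lagrangian L(q, q̇) = ½mq̇² − V(q) (the specific m and V below are assumptions, not taken from the text): the conjugate momentum is p = ∂L/∂q̇ = mq̇, and the transform H = pq̇ − L should equal p²/(2m) + V(q).

```python
# Legendre transform of a 1-D Lagrangian L = (1/2) m qdot^2 - V(q)
# in the velocity variable, yielding H = p^2 / (2m) + V(q).

m = 2.0                            # illustrative mass
V = lambda q: 0.5 * q * q          # illustrative harmonic potential

L = lambda q, qdot: 0.5 * m * qdot ** 2 - V(q)

q, qdot = 1.5, 0.7
p = m * qdot                       # conjugate momentum p = dL/d(qdot)
H = p * qdot - L(q, qdot)          # Legendre transform of L in qdot
H_expected = p ** 2 / (2.0 * m) + V(q)
assert abs(H - H_expected) < 1e-12
```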
Indeed, this definition can be made mathematically rigorous if we treat all the variables and functions defined above: for example, f , x 1 , ⋯ , x n , p 1 , ⋯ , p n , {\displaystyle f,x_{1},\cdots ,x_{n},p_{1},\cdots ,p_{n},} as differentiable functions defined on an open set of R n {\displaystyle \mathbb {R} ^{n}} or on a differentiable manifold, and d f , d x i , d p i {\displaystyle \mathrm {d} f,\mathrm {d} x_{i},\mathrm {d} p_{i}} their differentials (which are treated as cotangent vector fields in the context of differentiable manifolds). This definition is equivalent to the modern mathematicians' definition as long as f {\displaystyle f} is differentiable and convex in the variables x 1 , x 2 , ⋯ , x n . {\displaystyle x_{1},x_{2},\cdots ,x_{n}.} == Properties == The Legendre transform of a convex function whose second derivative is everywhere positive is also a convex function whose second derivative is everywhere positive. Proof. Let us show this with a twice-differentiable function f ( x ) {\displaystyle f(x)} with positive second derivative everywhere and with a bijective (invertible) derivative. For a fixed p {\displaystyle p} , let x ¯ {\displaystyle {\bar {x}}} maximize or make the function p x − f ( x ) {\displaystyle px-f(x)} bounded over x {\displaystyle x} . Then the Legendre transformation of f {\displaystyle f} is f ∗ ( p ) = p x ¯ − f ( x ¯ ) {\displaystyle f^{*}(p)=p{\bar {x}}-f({\bar {x}})} ; thus, f ′ ( x ¯ ) = p {\displaystyle f'({\bar {x}})=p} by the maximizing or bounding condition d d x ( p x − f ( x ) ) = p − f ′ ( x ) = 0 {\displaystyle {\frac {d}{dx}}(px-f(x))=p-f'(x)=0} . Note that x ¯ {\displaystyle {\bar {x}}} depends on p {\displaystyle p} . (This can be seen visually in the first figure of this page, above.) 
Thus x ¯ = g ( p ) {\displaystyle {\bar {x}}=g(p)} where g ≡ ( f ′ ) − 1 {\displaystyle g\equiv (f')^{-1}} , meaning that g {\displaystyle g} is the inverse of the derivative f ′ {\displaystyle f'} of f {\displaystyle f} (so f ′ ( g ( p ) ) = p {\displaystyle f'(g(p))=p} ). Note that g {\displaystyle g} is also differentiable with the following derivative (Inverse function rule), d g ( p ) d p = 1 f ″ ( g ( p ) ) . {\displaystyle {\frac {dg(p)}{dp}}={\frac {1}{f''(g(p))}}~.} Thus, the Legendre transformation f ∗ ( p ) = p g ( p ) − f ( g ( p ) ) {\displaystyle f^{*}(p)=pg(p)-f(g(p))} is the composition of differentiable functions, hence it is differentiable. Applying the product rule and the chain rule with the found equality x ¯ = g ( p ) {\displaystyle {\bar {x}}=g(p)} yields d ( f ∗ ) d p = g ( p ) + ( p − f ′ ( g ( p ) ) ) ⋅ d g ( p ) d p = g ( p ) , {\displaystyle {\frac {d(f^{*})}{dp}}=g(p)+\left(p-f'(g(p))\right)\cdot {\frac {dg(p)}{dp}}=g(p),} giving d 2 ( f ∗ ) d p 2 = d g ( p ) d p = 1 f ″ ( g ( p ) ) > 0 , {\displaystyle {\frac {d^{2}(f^{*})}{dp^{2}}}={\frac {dg(p)}{dp}}={\frac {1}{f''(g(p))}}>0,} so f ∗ {\displaystyle f^{*}} is convex, with everywhere-positive second derivative. The Legendre transformation is an involution, i.e., f ∗ ∗ = f {\displaystyle f^{**}=f~} . Proof. By using the above identities f ′ ( x ¯ ) = p {\displaystyle f'({\bar {x}})=p} , x ¯ = g ( p ) {\displaystyle {\bar {x}}=g(p)} , f ∗ ( p ) = p x ¯ − f ( x ¯ ) {\displaystyle f^{*}(p)=p{\bar {x}}-f({\bar {x}})} and its derivative ( f ∗ ) ′ ( p ) = g ( p ) {\displaystyle (f^{*})'(p)=g(p)} , f ∗ ∗ ( y ) = ( y ⋅ p ¯ − f ∗ ( p ¯ ) ) | ( f ∗ ) ′ ( p ¯ ) = y = g ( p ¯ ) ⋅ p ¯ − f ∗ ( p ¯ ) = g ( p ¯ ) ⋅ p ¯ − ( p ¯ g ( p ¯ ) − f ( g ( p ¯ ) ) ) = f ( g ( p ¯ ) ) = f ( y ) . 
{\displaystyle {\begin{aligned}f^{**}(y)&{}=\left(y\cdot {\bar {p}}-f^{*}({\bar {p}})\right)|_{(f^{*})'({\bar {p}})=y}\\[5pt]&{}=g({\bar {p}})\cdot {\bar {p}}-f^{*}({\bar {p}})\\[5pt]&{}=g({\bar {p}})\cdot {\bar {p}}-({\bar {p}}g({\bar {p}})-f(g({\bar {p}})))\\[5pt]&{}=f(g({\bar {p}}))\\[5pt]&{}=f(y)~.\end{aligned}}} Note that this derivation does not require the second derivative of the original function f {\displaystyle f} to be positive everywhere. == Identities == As shown above, for a convex function f ( x ) {\displaystyle f(x)} , with x = x ¯ {\displaystyle x={\bar {x}}} maximizing or making p x − f ( x ) {\displaystyle px-f(x)} bounded at each p {\displaystyle p} to define the Legendre transform f ∗ ( p ) = p x ¯ − f ( x ¯ ) {\displaystyle f^{*}(p)=p{\bar {x}}-f({\bar {x}})} and with g ≡ ( f ′ ) − 1 {\displaystyle g\equiv (f')^{-1}} , the following identities hold. f ′ ( x ¯ ) = p {\displaystyle f'({\bar {x}})=p} , x ¯ = g ( p ) {\displaystyle {\bar {x}}=g(p)} , ( f ∗ ) ′ ( p ) = g ( p ) {\displaystyle (f^{*})'(p)=g(p)} . == Examples == === Example 1 === Consider the exponential function f ( x ) = e x , {\displaystyle f(x)=e^{x},} which has the domain I = R {\displaystyle I=\mathbb {R} } . From the definition, the Legendre transform is f ∗ ( x ∗ ) = sup x ∈ R ( x ∗ x − e x ) , x ∗ ∈ I ∗ {\displaystyle f^{*}(x^{*})=\sup _{x\in \mathbb {R} }(x^{*}x-e^{x}),\quad x^{*}\in I^{*}} where I ∗ {\displaystyle I^{*}} remains to be determined. To evaluate the supremum, compute the derivative of x ∗ x − e x {\displaystyle x^{*}x-e^{x}} with respect to x {\displaystyle x} and set it equal to zero: d d x ( x ∗ x − e x ) = x ∗ − e x = 0. {\displaystyle {\frac {d}{dx}}(x^{*}x-e^{x})=x^{*}-e^{x}=0.} The second derivative − e x {\displaystyle -e^{x}} is negative everywhere, so the maximal value is achieved at x = ln ⁡ ( x ∗ ) {\displaystyle x=\ln(x^{*})} . 
Thus, the Legendre transform is f ∗ ( x ∗ ) = x ∗ ln ⁡ ( x ∗ ) − e ln ⁡ ( x ∗ ) = x ∗ ( ln ⁡ ( x ∗ ) − 1 ) {\displaystyle f^{*}(x^{*})=x^{*}\ln(x^{*})-e^{\ln(x^{*})}=x^{*}(\ln(x^{*})-1)} and has domain I ∗ = ( 0 , ∞ ) . {\displaystyle I^{*}=(0,\infty ).} This illustrates that the domains of a function and its Legendre transform can be different. To find the Legendre transformation of the Legendre transformation of f {\displaystyle f} , f ∗ ∗ ( x ) = sup x ∗ ∈ R ( x x ∗ − x ∗ ( ln ⁡ ( x ∗ ) − 1 ) ) , x ∈ I , {\displaystyle f^{**}(x)=\sup _{x^{*}\in \mathbb {R} }(xx^{*}-x^{*}(\ln(x^{*})-1)),\quad x\in I,} where a variable x {\displaystyle x} is intentionally used as the argument of the function f ∗ ∗ {\displaystyle f^{**}} to show the involution property of the Legendre transform as f ∗ ∗ = f {\displaystyle f^{**}=f} . We compute 0 = d d x ∗ ( x x ∗ − x ∗ ( ln ⁡ ( x ∗ ) − 1 ) ) = x − ln ⁡ ( x ∗ ) {\displaystyle {\begin{aligned}0&={\frac {d}{dx^{*}}}{\big (}xx^{*}-x^{*}(\ln(x^{*})-1){\big )}=x-\ln(x^{*})\end{aligned}}} thus the maximum occurs at x ∗ = e x {\displaystyle x^{*}=e^{x}} because the second derivative d 2 d x ∗ 2 f ∗ ∗ ( x ) = − 1 x ∗ < 0 {\displaystyle {\frac {d^{2}}{{dx^{*}}^{2}}}f^{**}(x)=-{\frac {1}{x^{*}}}<0} is negative throughout the domain I ∗ = ( 0 , ∞ ) . {\displaystyle I^{*}=(0,\infty ).} As a result, f ∗ ∗ {\displaystyle f^{**}} is found as f ∗ ∗ ( x ) = x e x − e x ( ln ⁡ ( e x ) − 1 ) = e x , {\displaystyle {\begin{aligned}f^{**}(x)&=xe^{x}-e^{x}(\ln(e^{x})-1)=e^{x},\end{aligned}}} thereby confirming that f = f ∗ ∗ , {\displaystyle f=f^{**},} as expected. === Example 2 === Let f(x) = cx2 be defined on R, where c > 0 is a fixed constant. For x* fixed, the function of x, x*x − f(x) = x*x − cx2 has the first derivative x* − 2cx and second derivative −2c; there is one stationary point at x = x*/2c, which is always a maximum. Thus, I* = R and f ∗ ( x ∗ ) = x ∗ 2 4 c . 
{\displaystyle f^{*}(x^{*})={\frac {{x^{*}}^{2}}{4c}}~.} The first derivatives of f, 2cx, and of f *, x*/(2c), are inverse functions to each other. Clearly, furthermore, f ∗ ∗ ( x ) = 1 4 ( 1 / 4 c ) x 2 = c x 2 , {\displaystyle f^{**}(x)={\frac {1}{4(1/4c)}}x^{2}=cx^{2}~,} namely f ** = f. === Example 3 === Let f(x) = x2 for x ∈ (I = [2, 3]). For x* fixed, x*x − f(x) is continuous on the compact interval I, hence it always attains a finite maximum there; it follows that the domain of the Legendre transform of f {\displaystyle f} is I* = R. The stationary point at x = x*/2 (found by setting the first derivative of x*x − f(x) with respect to x {\displaystyle x} equal to zero) is in the domain [2, 3] if and only if 4 ≤ x* ≤ 6. Otherwise the maximum is taken either at x = 2 or x = 3, because the second derivative of x*x − f(x) with respect to x {\displaystyle x} is the negative constant − 2 {\displaystyle -2} ; for x ∗ < 4 {\displaystyle x^{*}<4} the maximum of x*x − f(x) over x ∈ [ 2 , 3 ] {\displaystyle x\in [2,3]} is attained at x = 2 {\displaystyle x=2} , while for x ∗ > 6 {\displaystyle x^{*}>6} it is attained at x = 3 {\displaystyle x=3} . Thus, it follows that f ∗ ( x ∗ ) = { 2 x ∗ − 4 , x ∗ < 4 x ∗ 2 4 , 4 ≤ x ∗ ≤ 6 , 3 x ∗ − 9 , x ∗ > 6. {\displaystyle f^{*}(x^{*})={\begin{cases}2x^{*}-4,&x^{*}<4\\{\frac {{x^{*}}^{2}}{4}},&4\leq x^{*}\leq 6,\\3x^{*}-9,&x^{*}>6.\end{cases}}} === Example 4 === The function f(x) = cx is convex for every x (strict convexity is not required for the Legendre transformation to be well defined). Clearly x*x − f(x) = (x* − c)x is never bounded from above as a function of x, unless x* − c = 0. Hence f* is defined on I* = {c} and f*(c) = 0. (The definition of the Legendre transform requires the existence of the supremum, which requires the function to be bounded above.) One may check involutivity: of course, x*x − f*(x*) is always bounded as a function of x*∈{c}, hence I** = R.
Then, for all x one has sup x ∗ ∈ { c } ( x x ∗ − f ∗ ( x ∗ ) ) = x c , {\displaystyle \sup _{x^{*}\in \{c\}}(xx^{*}-f^{*}(x^{*}))=xc,} and hence f **(x) = cx = f(x). === Example 5 === As an example of a convex continuous function that is not everywhere differentiable, consider f ( x ) = | x | {\displaystyle f(x)=|x|} . This gives f ∗ ( x ∗ ) = sup x ( x x ∗ − | x | ) = max ( sup x ≥ 0 x ( x ∗ − 1 ) , sup x ≤ 0 x ( x ∗ + 1 ) ) , {\displaystyle f^{*}(x^{*})=\sup _{x}(xx^{*}-|x|)=\max \left(\sup _{x\geq 0}x(x^{*}-1),\,\sup _{x\leq 0}x(x^{*}+1)\right),} and thus f ∗ ( x ∗ ) = 0 {\displaystyle f^{*}(x^{*})=0} on its domain I ∗ = [ − 1 , 1 ] {\displaystyle I^{*}=[-1,1]} . === Example 6: several variables === Let f ( x ) = ⟨ x , A x ⟩ + c {\displaystyle f(x)=\langle x,Ax\rangle +c} be defined on X = Rn, where A is a real, positive definite matrix. Then f is convex, and ⟨ p , x ⟩ − f ( x ) = ⟨ p , x ⟩ − ⟨ x , A x ⟩ − c , {\displaystyle \langle p,x\rangle -f(x)=\langle p,x\rangle -\langle x,Ax\rangle -c,} has gradient p − 2Ax and Hessian −2A, which is negative definite; hence the stationary point x = A−1p/2 is a maximum. We have X* = Rn, and f ∗ ( p ) = 1 4 ⟨ p , A − 1 p ⟩ − c . {\displaystyle f^{*}(p)={\frac {1}{4}}\langle p,A^{-1}p\rangle -c.} == Behavior of differentials under Legendre transforms == The Legendre transform is linked to integration by parts, p dx = d(px) − x dp. Let f(x,y) be a function of two independent variables x and y, with the differential d f = ∂ f ∂ x d x + ∂ f ∂ y d y = p d x + v d y .
{\displaystyle df={\frac {\partial f}{\partial x}}\,dx+{\frac {\partial f}{\partial y}}\,dy=p\,dx+v\,dy.} Assume that the function f is convex in x for all y, so that one may perform the Legendre transform on f in x, with p the variable conjugate to x (for information, there is a relation ∂ f ∂ x | x ¯ = p {\displaystyle {\frac {\partial f}{\partial x}}|_{\bar {x}}=p} where x ¯ {\displaystyle {\bar {x}}} is a point in x maximizing or making p x − f ( x , y ) {\displaystyle px-f(x,y)} bounded for given p and y). Since the new independent variable of the transform with respect to f is p, the differentials dx and dy in df devolve to dp and dy in the differential of the transform, i.e., we build another function with its differential expressed in terms of the new basis dp and dy. We thus consider the function g(p, y) = f − px so that d g = d f − p d x − x d p = − x d p + v d y {\displaystyle dg=df-p\,dx-x\,dp=-x\,dp+v\,dy} x = − ∂ g ∂ p {\displaystyle x=-{\frac {\partial g}{\partial p}}} v = ∂ g ∂ y . {\displaystyle v={\frac {\partial g}{\partial y}}.} The function −g(p, y) is the Legendre transform of f(x, y), where only the independent variable x has been supplanted by p. This is widely used in thermodynamics, as illustrated below. == Applications == === Analytical mechanics === A Legendre transform is used in classical mechanics to derive the Hamiltonian formulation from the Lagrangian formulation, and conversely. A typical Lagrangian has the form L ( v , q ) = 1 2 ⟨ v , M v ⟩ − V ( q ) , {\displaystyle L(v,q)={\tfrac {1}{2}}\langle v,Mv\rangle -V(q),} where ( v , q ) {\displaystyle (v,q)} are coordinates on Rn × Rn, M is a positive definite real matrix, and ⟨ x , y ⟩ = ∑ j x j y j . {\displaystyle \langle x,y\rangle =\sum _{j}x_{j}y_{j}.} For every q fixed, L ( v , q ) {\displaystyle L(v,q)} is a convex function of v {\displaystyle v} , while V ( q ) {\displaystyle V(q)} plays the role of a constant. 
Hence the Legendre transform of L ( v , q ) {\displaystyle L(v,q)} as a function of v {\displaystyle v} is the Hamiltonian function, H ( p , q ) = 1 2 ⟨ p , M − 1 p ⟩ + V ( q ) . {\displaystyle H(p,q)={\tfrac {1}{2}}\langle p,M^{-1}p\rangle +V(q).} In a more general setting, ( v , q ) {\displaystyle (v,q)} are local coordinates on the tangent bundle T M {\displaystyle T{\mathcal {M}}} of a manifold M {\displaystyle {\mathcal {M}}} . For each q, L ( v , q ) {\displaystyle L(v,q)} is a convex function of the tangent space Vq. The Legendre transform gives the Hamiltonian H ( p , q ) {\displaystyle H(p,q)} as a function of the coordinates (p, q) of the cotangent bundle T ∗ M {\displaystyle T^{*}{\mathcal {M}}} ; the inner product used to define the Legendre transform is inherited from the pertinent canonical symplectic structure. In this abstract setting, the Legendre transformation corresponds to the tautological one-form. === Thermodynamics === The strategy behind the use of Legendre transforms in thermodynamics is to shift from a function that depends on a variable to a new (conjugate) function that depends on a new variable, the conjugate of the original one. The new variable is the partial derivative of the original function with respect to the original variable. The new function is the difference between the original function and the product of the old and new variables. Typically, this transformation is useful because it shifts the dependence of, e.g., the energy from an extensive variable to its conjugate intensive variable, which can often be controlled more easily in a physical experiment. 
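This recipe can be sketched numerically. The following Python sketch (all names and the quartic test function are illustrative, not from the article) tabulates a convex f, forms the conjugate variable p = f′(x), and builds the new function g = f − px, checking the differential identity dg/dp = −x noted in the section on differentials above:

```python
import numpy as np

# Illustrative convex function f(x) = x^4, standing in for a potential like U(V).
f = lambda x: x**4
fp = lambda x: 4 * x**3          # conjugate variable p = f'(x)

x = np.linspace(0.1, 2.0, 2000)
p = fp(x)                        # new variable: the slope of f
g = f(x) - p * x                 # new function: original minus product of old and new variables

# The pair (p, g) is the transformed description; one identity to check is dg/dp = -x.
dg_dp = np.gradient(g, p)        # numerical derivative on the non-uniform p grid
assert np.allclose(dg_dp[1:-1], -x[1:-1], rtol=1e-3)
```

Here g is (up to sign convention) the Legendre transform of f; with f(x) = x⁴ and p = 4x³ one gets g = −3x⁴ exactly, and the finite-difference check confirms dg/dp = −x away from the grid endpoints.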
For example, the internal energy U is an explicit function of the extensive variables entropy S, volume V, and chemical composition Ni (e.g., i = 1 , 2 , 3 , … {\displaystyle i=1,2,3,\ldots } ) U = U ( S , V , { N i } ) , {\displaystyle U=U\left(S,V,\{N_{i}\}\right),} which has a total differential d U = T d S − P d V + ∑ μ i d N i {\displaystyle dU=T\,dS-P\,dV+\sum \mu _{i}\,dN_{i}} where T = ∂ U ∂ S | V , N i f o r a l l i v a l u e s , P = − ∂ U ∂ V | S , N i f o r a l l i v a l u e s , μ i = ∂ U ∂ N i | S , V , N j f o r a l l j ≠ i {\displaystyle T=\left.{\frac {\partial U}{\partial S}}\right\vert _{V,N_{i\ for\ all\ i\ values}},P=\left.-{\frac {\partial U}{\partial V}}\right\vert _{S,N_{i\ for\ all\ i\ values}},\mu _{i}=\left.{\frac {\partial U}{\partial N_{i}}}\right\vert _{S,V,N_{j\ for\ all\ j\neq i}}} . (The subscripts are not required by the definition of partial derivatives, but are kept here to indicate which variables are held constant.) Stipulating some common reference state, by using the (non-standard) Legendre transform of the internal energy U with respect to volume V, the enthalpy H may be obtained as follows. To get the (standard) Legendre transform U ∗ {\textstyle U^{*}} of the internal energy U with respect to volume V, the function u ( p , S , V , { N i } ) = p V − U {\textstyle u\left(p,S,V,\{{{N}_{i}}\}\right)=pV-U} is defined first, and is then maximized (or bounded) over V. To do this, the condition ∂ u ∂ V = p − ∂ U ∂ V = 0 → p = ∂ U ∂ V {\textstyle {\frac {\partial u}{\partial V}}=p-{\frac {\partial U}{\partial V}}=0\to p={\frac {\partial U}{\partial V}}} needs to be satisfied, so U ∗ = ∂ U ∂ V V − U {\textstyle U^{*}={\frac {\partial U}{\partial V}}V-U} is obtained. This approach is justified because U is a linear function with respect to V (so a convex function on V) by the definition of extensive variables.
The non-standard Legendre transform here is obtained by negating the standard version, so − U ∗ = H = U − ∂ U ∂ V V = U + P V {\textstyle -U^{*}=H=U-{\frac {\partial U}{\partial V}}V=U+PV} . H is definitely a state function as it is obtained by adding PV (P and V as state variables) to a state function U = U ( S , V , { N i } ) {\textstyle U=U\left(S,V,\{N_{i}\}\right)} , so its differential is an exact differential. Because of d H = T d S + V d P + ∑ μ i d N i {\textstyle dH=T\,dS+V\,dP+\sum \mu _{i}\,dN_{i}} and the fact that it must be an exact differential, H = H ( S , P , { N i } ) {\displaystyle H=H(S,P,\{N_{i}\})} . The enthalpy is suitable for description of processes in which the pressure is controlled from the surroundings. It is likewise possible to shift the dependence of the energy from the extensive variable of entropy, S, to the (often more convenient) intensive variable T, resulting in the Helmholtz and Gibbs free energies. The Helmholtz free energy A, and Gibbs energy G, are obtained by performing Legendre transforms of the internal energy and enthalpy, respectively, A = U − T S , {\displaystyle A=U-TS~,} G = H − T S = U + P V − T S . {\displaystyle G=H-TS=U+PV-TS~.} The Helmholtz free energy is often the most useful thermodynamic potential when temperature and volume are controlled from the surroundings, while the Gibbs energy is often the most useful when temperature and pressure are controlled from the surroundings. === Variable capacitor === As another example from physics, consider a parallel conductive plate capacitor, in which the plates can move relative to one another. Such a capacitor would allow transfer of the electric energy which is stored in the capacitor into external mechanical work, done by the force acting on the plates. One may think of the electric charge as analogous to the "charge" of a gas in a cylinder, with the resulting mechanical force exerted on a piston. 
Compute the force on the plates as a function of x, the distance that separates them. To find the force, compute the potential energy, and then apply the definition of force as the gradient of the potential energy function. The electrostatic potential energy stored in a capacitor of capacitance C(x), with charges +Q and −Q on its two conductive plates, is (using the definition of capacitance, C = Q V {\textstyle C={\frac {Q}{V}}} ), U ( Q , x ) = 1 2 Q V ( Q , x ) = 1 2 Q 2 C ( x ) , {\displaystyle U(Q,\mathbf {x} )={\frac {1}{2}}QV(Q,\mathbf {x} )={\frac {1}{2}}{\frac {Q^{2}}{C(\mathbf {x} )}},~} where the dependence on the area of the plates, the dielectric constant of the insulation material between the plates, and the separation x are abstracted away as the capacitance C(x). (For a parallel plate capacitor, this is proportional to the area of the plates and inversely proportional to the separation.) The force F between the plates due to the electric field created by the charge separation is then F ( x ) = − d U d x . {\displaystyle \mathbf {F} (\mathbf {x} )=-{\frac {dU}{d\mathbf {x} }}~.} If the capacitor is not connected to any electric circuit, then the electric charges on the plates remain constant and the voltage varies when the plates move with respect to each other, and the force is the negative gradient of the electrostatic potential energy as F ( x ) = 1 2 d C ( x ) d x Q 2 C ( x ) 2 = 1 2 d C ( x ) d x V ( x ) 2 {\displaystyle \mathbf {F} (\mathbf {x} )={\frac {1}{2}}{\frac {dC(\mathbf {x} )}{d\mathbf {x} }}{\frac {Q^{2}}{{C(\mathbf {x} )}^{2}}}={\frac {1}{2}}{\frac {dC(\mathbf {x} )}{d\mathbf {x} }}V(\mathbf {x} )^{2}} where V ( Q , x ) = V ( x ) {\textstyle V(Q,\mathbf {x} )=V(\mathbf {x} )} as the charge is fixed in this configuration.
Suppose instead that the voltage between the plates V is maintained constant as the plates move, by connection to a battery, which is a reservoir for electric charges at a constant potential difference. Then the charge Q {\textstyle Q} is the variable instead of the voltage; Q {\textstyle Q} and V {\textstyle V} are Legendre conjugates of each other. To find the force, first compute the non-standard Legendre transform U ∗ {\textstyle U^{*}} with respect to Q {\textstyle Q} (again using C = Q V {\textstyle C={\frac {Q}{V}}} ), U ∗ = U − ∂ U ∂ Q | x ⋅ Q = U − 1 2 C ( x ) ∂ Q 2 ∂ Q | x ⋅ Q = U − Q V = 1 2 Q V − Q V = − 1 2 Q V = − 1 2 V 2 C ( x ) . {\displaystyle U^{*}=U-\left.{\frac {\partial U}{\partial Q}}\right|_{\mathbf {x} }\cdot Q=U-{\frac {1}{2C(\mathbf {x} )}}\left.{\frac {\partial Q^{2}}{\partial Q}}\right|_{\mathbf {x} }\cdot Q=U-QV={\frac {1}{2}}QV-QV=-{\frac {1}{2}}QV=-{\frac {1}{2}}V^{2}C(\mathbf {x} ).} This transformation is possible because U {\textstyle U} is a convex (quadratic) function of Q {\textstyle Q} . The force now becomes the negative gradient of this Legendre transform, resulting in the same force obtained from the original function U {\textstyle U} , F ( x ) = − d U ∗ d x = 1 2 d C ( x ) d x V 2 . {\displaystyle \mathbf {F} (\mathbf {x} )=-{\frac {dU^{*}}{d\mathbf {x} }}={\frac {1}{2}}{\frac {dC(\mathbf {x} )}{d\mathbf {x} }}V^{2}.} The two conjugate energies U {\textstyle U} and U ∗ {\textstyle U^{*}} happen to stand opposite to each other (their signs are opposite), only because of the linearity of the capacitance—except now Q is no longer a constant. They reflect the two different pathways of storing energy into the capacitor, resulting in, for instance, the same "pull" between a capacitor's plates. === Probability theory === In large deviations theory, the rate function is defined as the Legendre transformation of the logarithm of the moment generating function of a random variable.
An important application of the rate function is in the calculation of tail probabilities of sums of i.i.d. random variables, in particular in Cramér's theorem. If X n {\displaystyle X_{n}} are i.i.d. random variables, let S n = X 1 + ⋯ + X n {\displaystyle S_{n}=X_{1}+\cdots +X_{n}} be the associated random walk and M ( ξ ) {\displaystyle M(\xi )} the moment generating function of X 1 {\displaystyle X_{1}} . For ξ ∈ R {\displaystyle \xi \in \mathbb {R} } , E [ e ξ S n ] = M ( ξ ) n {\displaystyle E[e^{\xi S_{n}}]=M(\xi )^{n}} . Hence, by Markov's inequality, one has for ξ ≥ 0 {\displaystyle \xi \geq 0} and a ∈ R {\displaystyle a\in \mathbb {R} } P ( S n / n > a ) ≤ e − n ξ a M ( ξ ) n = exp [ − n ( ξ a − Λ ( ξ ) ) ] {\displaystyle P(S_{n}/n>a)\leq e^{-n\xi a}M(\xi )^{n}=\exp[-n(\xi a-\Lambda (\xi ))]} where Λ ( ξ ) = log M ( ξ ) {\displaystyle \Lambda (\xi )=\log M(\xi )} . Since the left-hand side is independent of ξ {\displaystyle \xi } , we may take the infimum of the right-hand side, which leads one to consider the supremum of ξ a − Λ ( ξ ) {\displaystyle \xi a-\Lambda (\xi )} , i.e., the Legendre transform of Λ {\displaystyle \Lambda } , evaluated at x = a {\displaystyle x=a} . === Microeconomics === The Legendre transformation arises naturally in microeconomics in the process of finding the supply S(P) of some product given a fixed price P on the market, knowing the cost function C(Q), i.e. the cost for the producer to produce Q units of the given product. A simple theory explains the shape of the supply curve based solely on the cost function. Let us suppose the market price for one unit of our product is P. For a company selling this good, the best strategy is to adjust the production Q so that its profit is maximized. We can maximize the profit profit = revenue − costs = P Q − C ( Q ) {\displaystyle {\text{profit}}={\text{revenue}}-{\text{costs}}=PQ-C(Q)} by differentiating with respect to Q and solving P − C ′ ( Q opt ) = 0.
{\displaystyle P-C'(Q_{\text{opt}})=0.} Qopt represents the optimal quantity Q of goods that the producer is willing to supply, which is indeed the supply itself: S ( P ) = Q opt ( P ) = ( C ′ ) − 1 ( P ) . {\displaystyle S(P)=Q_{\text{opt}}(P)=(C')^{-1}(P).} If we consider the maximal profit as a function of price, profit max ( P ) {\displaystyle {\text{profit}}_{\text{max}}(P)} , we see that it is the Legendre transform of the cost function C ( Q ) {\displaystyle C(Q)} . == Geometric interpretation == For a strictly convex function, the Legendre transformation can be interpreted as a mapping between the graph of the function and the family of tangents of the graph. (For a function of one variable, the tangents are well-defined at all but at most countably many points, since a convex function is differentiable at all but at most countably many points.) The equation of a line with slope p {\displaystyle p} and y {\displaystyle y} -intercept b {\displaystyle b} is given by y = p x + b {\displaystyle y=px+b} . For this line to be tangent to the graph of a function f {\displaystyle f} at the point ( x 0 , f ( x 0 ) ) {\displaystyle \left(x_{0},f(x_{0})\right)} requires f ( x 0 ) = p x 0 + b {\displaystyle f(x_{0})=px_{0}+b} and p = f ′ ( x 0 ) . {\displaystyle p=f'(x_{0}).} Being the derivative of a strictly convex function, the function f ′ {\displaystyle f'} is strictly monotone and thus injective. The second equation can be solved for x 0 = f ′ − 1 ( p ) , {\textstyle x_{0}=f^{\prime -1}(p),} allowing elimination of x 0 {\displaystyle x_{0}} from the first, and solving for the y {\displaystyle y} -intercept b {\displaystyle b} of the tangent as a function of its slope p , {\displaystyle p,} b = f ( x 0 ) − p x 0 = f ( f ′ − 1 ( p ) ) − p ⋅ f ′ − 1 ( p ) = − f ⋆ ( p ) {\textstyle b=f(x_{0})-px_{0}=f\left(f^{\prime -1}(p)\right)-p\cdot f^{\prime -1}(p)=-f^{\star }(p)} where f ⋆ {\displaystyle f^{\star }} denotes the Legendre transform of f . 
{\displaystyle f.} The family of tangent lines of the graph of f {\displaystyle f} parameterized by the slope p {\displaystyle p} is therefore given by y = p x − f ⋆ ( p ) , {\textstyle y=px-f^{\star }(p),} or, written implicitly, by the solutions of the equation F ( x , y , p ) = y + f ⋆ ( p ) − p x = 0 . {\displaystyle F(x,y,p)=y+f^{\star }(p)-px=0~.} The graph of the original function can be reconstructed from this family of lines as the envelope of this family by demanding ∂ F ( x , y , p ) ∂ p = f ⋆ ′ ( p ) − x = 0. {\displaystyle {\frac {\partial F(x,y,p)}{\partial p}}=f^{\star \prime }(p)-x=0.} Eliminating p {\displaystyle p} from these two equations gives y = x ⋅ f ⋆ ′ − 1 ( x ) − f ⋆ ( f ⋆ ′ − 1 ( x ) ) . {\displaystyle y=x\cdot f^{\star \prime -1}(x)-f^{\star }\left(f^{\star \prime -1}(x)\right).} Identifying y {\displaystyle y} with f ( x ) {\displaystyle f(x)} and recognizing the right side of the preceding equation as the Legendre transform of f ⋆ , {\displaystyle f^{\star },} yield f ( x ) = f ⋆ ⋆ ( x ) . {\textstyle f(x)=f^{\star \star }(x)~.} == Legendre transformation in more than one dimension == For a differentiable real-valued function on an open convex subset U of Rn the Legendre conjugate of the pair (U, f) is defined to be the pair (V, g), where V is the image of U under the gradient mapping Df, and g is the function on V given by the formula g ( y ) = ⟨ y , x ⟩ − f ( x ) , x = ( D f ) − 1 ( y ) {\displaystyle g(y)=\left\langle y,x\right\rangle -f(x),\qquad x=\left(Df\right)^{-1}(y)} where ⟨ u , v ⟩ = ∑ k = 1 n u k ⋅ v k {\displaystyle \left\langle u,v\right\rangle =\sum _{k=1}^{n}u_{k}\cdot v_{k}} is the scalar product on Rn. The multidimensional transform can be interpreted as an encoding of the convex hull of the function's epigraph in terms of its supporting hyperplanes. This can be seen as consequence of the following two observations. 
On the one hand, the hyperplane tangent to the epigraph of f {\displaystyle f} at some point ( x , f ( x ) ) ∈ U × R {\displaystyle (\mathbf {x} ,f(\mathbf {x} ))\in U\times \mathbb {R} } has normal vector ( ∇ f ( x ) , − 1 ) ∈ R n + 1 {\displaystyle (\nabla f(\mathbf {x} ),-1)\in \mathbb {R} ^{n+1}} . On the other hand, any closed convex set C ∈ R m {\displaystyle C\in \mathbb {R} ^{m}} can be characterized via the set of its supporting hyperplanes by the equations x ⋅ n = h C ( n ) {\displaystyle \mathbf {x} \cdot \mathbf {n} =h_{C}(\mathbf {n} )} , where h C ( n ) {\displaystyle h_{C}(\mathbf {n} )} is the support function of C {\displaystyle C} . But the definition of Legendre transform via the maximization matches precisely that of the support function, that is, f ∗ ( x ) = h epi ⁡ ( f ) ( x , − 1 ) {\displaystyle f^{*}(\mathbf {x} )=h_{\operatorname {epi} (f)}(\mathbf {x} ,-1)} . We thus conclude that the Legendre transform characterizes the epigraph in the sense that the tangent plane to the epigraph at any point ( x , f ( x ) ) {\displaystyle (\mathbf {x} ,f(\mathbf {x} ))} is given explicitly by { z ∈ R n + 1 : z ⋅ x = f ∗ ( x ) } . {\displaystyle \{\mathbf {z} \in \mathbb {R} ^{n+1}:\,\,\mathbf {z} \cdot \mathbf {x} =f^{*}(\mathbf {x} )\}.} Alternatively, if X is a vector space and Y is its dual vector space, then for each point x of X and y of Y, there is a natural identification of the cotangent spaces T*Xx with Y and T*Yy with X. If f is a real differentiable function over X, then its exterior derivative, df, is a section of the cotangent bundle T*X and as such, we can construct a map from X to Y. Similarly, if g is a real differentiable function over Y, then dg defines a map from Y to X. If both maps happen to be inverses of each other, we say we have a Legendre transform. The notion of the tautological one-form is commonly used in this setting. 
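As a numerical sanity check of the several-variables formula from Example 6 above, one can compare the closed form f*(p) = ¼⟨p, A⁻¹p⟩ − c with ⟨p, x⟩ − f(x) evaluated at the stationary point x = A⁻¹p/2. This sketch assumes NumPy; the particular matrix, constant, and random seed are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(3, 3))
A = B @ B.T + 3 * np.eye(3)      # real, symmetric, positive definite
c = 1.7
p = rng.normal(size=3)

f = lambda x: x @ A @ x + c      # f(x) = <x, Ax> + c

x_star = 0.5 * np.linalg.solve(A, p)          # stationary point of <p,x> - f(x)
via_max = p @ x_star - f(x_star)              # transform via the maximizer
via_formula = 0.25 * p @ np.linalg.solve(A, p) - c   # closed form from Example 6

assert np.isclose(via_max, via_formula)
```

Both routes agree, confirming f*(p) = ¼⟨p, A⁻¹p⟩ − c for this positive definite A.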
When the function is not differentiable, the Legendre transform can still be extended, and is known as the Legendre-Fenchel transformation. In this more general setting, a few properties are lost: for example, the Legendre transform is no longer its own inverse (unless there are extra assumptions, like convexity). == Legendre transformation on manifolds == Let M {\textstyle M} be a smooth manifold, let E {\displaystyle E} and π : E → M {\textstyle \pi :E\to M} be a vector bundle on M {\displaystyle M} and its associated bundle projection, respectively. Let L : E → R {\textstyle L:E\to \mathbb {R} } be a smooth function. We think of L {\textstyle L} as a Lagrangian by analogy with the classical case where M = R {\textstyle M=\mathbb {R} } , E = T M = R × R {\textstyle E=TM=\mathbb {R} \times \mathbb {R} } and L ( x , v ) = 1 2 m v 2 − V ( x ) {\textstyle L(x,v)={\frac {1}{2}}mv^{2}-V(x)} for some positive number m ∈ R {\textstyle m\in \mathbb {R} } and function V : M → R {\textstyle V:M\to \mathbb {R} } . As usual, the dual of E {\textstyle E} is denoted by E ∗ {\textstyle E^{*}} . The fiber of π {\textstyle \pi } over x ∈ M {\textstyle x\in M} is denoted E x {\textstyle E_{x}} , and the restriction of L {\textstyle L} to E x {\textstyle E_{x}} is denoted by L | E x : E x → R {\textstyle L|_{E_{x}}:E_{x}\to \mathbb {R} } . The Legendre transformation of L {\textstyle L} is the smooth morphism F L : E → E ∗ {\displaystyle \mathbf {F} L:E\to E^{*}} defined by F L ( v ) = d ( L | E x ) v ∈ E x ∗ {\textstyle \mathbf {F} L(v)=d(L|_{E_{x}})_{v}\in E_{x}^{*}} , where x = π ( v ) {\textstyle x=\pi (v)} . Here we use the fact that since E x {\textstyle E_{x}} is a vector space, T v ( E x ) {\textstyle T_{v}(E_{x})} can be identified with E x {\textstyle E_{x}} .
In other words, F L ( v ) ∈ E x ∗ {\textstyle \mathbf {F} L(v)\in E_{x}^{*}} is the covector that sends w ∈ E x {\textstyle w\in E_{x}} to the directional derivative d d t | t = 0 L ( v + t w ) ∈ R {\textstyle \left.{\frac {d}{dt}}\right|_{t=0}L(v+tw)\in \mathbb {R} } . To describe the Legendre transformation locally, let U ⊆ M {\textstyle U\subseteq M} be a coordinate chart over which E {\textstyle E} is trivial. Picking a trivialization of E {\textstyle E} over U {\textstyle U} , we obtain charts E U ≅ U × R r {\textstyle E_{U}\cong U\times \mathbb {R} ^{r}} and E U ∗ ≅ U × R r {\textstyle E_{U}^{*}\cong U\times \mathbb {R} ^{r}} . In terms of these charts, we have F L ( x ; v 1 , … , v r ) = ( x ; p 1 , … , p r ) {\textstyle \mathbf {F} L(x;v_{1},\dotsc ,v_{r})=(x;p_{1},\dotsc ,p_{r})} , where p i = ∂ L ∂ v i ( x ; v 1 , … , v r ) {\displaystyle p_{i}={\frac {\partial L}{\partial v_{i}}}(x;v_{1},\dotsc ,v_{r})} for all i = 1 , … , r {\textstyle i=1,\dots ,r} . If, as in the classical case, the restriction of L : E → R {\textstyle L:E\to \mathbb {R} } to each fiber E x {\textstyle E_{x}} is strictly convex and bounded below by a positive definite quadratic form minus a constant, then the Legendre transform F L : E → E ∗ {\textstyle \mathbf {F} L:E\to E^{*}} is a diffeomorphism. Suppose that F L {\textstyle \mathbf {F} L} is a diffeomorphism and let H : E ∗ → R {\textstyle H:E^{*}\to \mathbb {R} } be the "Hamiltonian" function defined by H ( p ) = p ⋅ v − L ( v ) , {\displaystyle H(p)=p\cdot v-L(v),} where v = ( F L ) − 1 ( p ) {\textstyle v=(\mathbf {F} L)^{-1}(p)} . Using the natural isomorphism E ≅ E ∗ ∗ {\textstyle E\cong E^{**}} , we may view the Legendre transformation of H {\textstyle H} as a map F H : E ∗ → E {\textstyle \mathbf {F} H:E^{*}\to E} . Then we have ( F L ) − 1 = F H . 
{\displaystyle (\mathbf {F} L)^{-1}=\mathbf {F} H.} == Further properties == === Scaling properties === The Legendre transformation has the following scaling properties: For a > 0, f ( x ) = a ⋅ g ( x ) ⇒ f ⋆ ( p ) = a ⋅ g ⋆ ( p a ) {\displaystyle f(x)=a\cdot g(x)\Rightarrow f^{\star }(p)=a\cdot g^{\star }\left({\frac {p}{a}}\right)} f ( x ) = g ( a ⋅ x ) ⇒ f ⋆ ( p ) = g ⋆ ( p a ) . {\displaystyle f(x)=g(a\cdot x)\Rightarrow f^{\star }(p)=g^{\star }\left({\frac {p}{a}}\right).} It follows that if a function is homogeneous of degree r then its image under the Legendre transformation is a homogeneous function of degree s, where 1/r + 1/s = 1. (Since f(x) = xr/r, with r > 1, implies f*(p) = ps/s.) Thus, the only monomial whose degree is invariant under Legendre transform is the quadratic. === Behavior under translation === f ( x ) = g ( x ) + b ⇒ f ⋆ ( p ) = g ⋆ ( p ) − b {\displaystyle f(x)=g(x)+b\Rightarrow f^{\star }(p)=g^{\star }(p)-b} f ( x ) = g ( x + y ) ⇒ f ⋆ ( p ) = g ⋆ ( p ) − p ⋅ y {\displaystyle f(x)=g(x+y)\Rightarrow f^{\star }(p)=g^{\star }(p)-p\cdot y} === Behavior under inversion === f ( x ) = g − 1 ( x ) ⇒ f ⋆ ( p ) = − p ⋅ g ⋆ ( 1 p ) {\displaystyle f(x)=g^{-1}(x)\Rightarrow f^{\star }(p)=-p\cdot g^{\star }\left({\frac {1}{p}}\right)} === Behavior under linear transformations === Let A : Rn → Rm be a linear transformation. For any convex function f on Rn, one has ( A f ) ⋆ = f ⋆ A ⋆ {\displaystyle (Af)^{\star }=f^{\star }A^{\star }} where A* is the adjoint operator of A defined by ⟨ A x , y ⋆ ⟩ = ⟨ x , A ⋆ y ⋆ ⟩ , {\displaystyle \left\langle Ax,y^{\star }\right\rangle =\left\langle x,A^{\star }y^{\star }\right\rangle ,} and Af is the push-forward of f along A ( A f ) ( y ) = inf { f ( x ) : x ∈ X , A x = y } . 
{\displaystyle (Af)(y)=\inf\{f(x):x\in X,Ax=y\}.} A closed convex function f is symmetric with respect to a given set G of orthogonal linear transformations, f ( A x ) = f ( x ) , ∀ x , ∀ A ∈ G {\displaystyle f(Ax)=f(x),\;\forall x,\;\forall A\in G} if and only if f* is symmetric with respect to G. === Infimal convolution === The infimal convolution of two functions f and g is defined as ( f ⋆ inf g ) ( x ) = inf { f ( x − y ) + g ( y ) | y ∈ R n } . {\displaystyle \left(f\star _{\inf }g\right)(x)=\inf \left\{f(x-y)+g(y)\,|\,y\in \mathbf {R} ^{n}\right\}.} Let f1, ..., fm be proper convex functions on Rn. Then ( f 1 ⋆ inf ⋯ ⋆ inf f m ) ⋆ = f 1 ⋆ + ⋯ + f m ⋆ . {\displaystyle \left(f_{1}\star _{\inf }\cdots \star _{\inf }f_{m}\right)^{\star }=f_{1}^{\star }+\cdots +f_{m}^{\star }.} === Fenchel's inequality === For any function f and its convex conjugate f * Fenchel's inequality (also known as the Fenchel–Young inequality) holds for every x ∈ X and p ∈ X*, i.e., independent x, p pairs, ⟨ p , x ⟩ ≤ f ( x ) + f ⋆ ( p ) . {\displaystyle \left\langle p,x\right\rangle \leq f(x)+f^{\star }(p).} == See also == Dual curve Projective duality Young's inequality for products Convex conjugate Moreau's theorem Integration by parts Fenchel's duality theorem == References == Courant, Richard; Hilbert, David (2008). Methods of Mathematical Physics. Vol. 2. John Wiley & Sons. ISBN 978-0471504399. Arnol'd, Vladimir Igorevich (1989). Mathematical Methods of Classical Mechanics (2nd ed.). Springer. ISBN 0-387-96890-3. Fenchel, W. (1949). "On conjugate convex functions", Can. J. Math 1: 73-77. Rockafellar, R. Tyrrell (1996) [1970]. Convex Analysis. Princeton University Press. ISBN 0-691-01586-4. Zia, R. K. P.; Redish, E. F.; McKay, S. R. (2009). "Making sense of the Legendre transform". American Journal of Physics. 77 (7): 614. arXiv:0806.1147. Bibcode:2009AmJPh..77..614Z. doi:10.1119/1.3119512. S2CID 37549350. == Further reading == Nielsen, Frank (2010-09-01). 
"Legendre transformation and information geometry" (PDF). Retrieved 2016-01-24. Touchette, Hugo (2005-07-27). "Legendre-Fenchel transforms in a nutshell" (PDF). Retrieved 2016-01-24. Touchette, Hugo (2006-11-21). "Elements of convex analysis" (PDF). Archived from the original (PDF) on 2016-02-01. Retrieved 2016-01-24. == External links == Legendre transform with figures at maze5.net Legendre and Legendre-Fenchel transforms in a step-by-step explanation at onmyphd.com
Wikipedia/Legendre_transformation
The differentiation of trigonometric functions is the mathematical process of finding the derivative of a trigonometric function, or its rate of change with respect to a variable. For example, the derivative of the sine function is written sin′(a) = cos(a), meaning that the rate of change of sin(x) at a particular angle x = a is given by the cosine of that angle. All derivatives of circular trigonometric functions can be found from those of sin(x) and cos(x) by means of the quotient rule applied to functions such as tan(x) = sin(x)/cos(x). Knowing these derivatives, the derivatives of the inverse trigonometric functions are found using implicit differentiation. == Proofs of derivatives of trigonometric functions == === Limit of sin(θ)/θ as θ tends to 0 === The diagram at right shows a circle with centre O and radius r = 1. Let two radii OA and OB make an arc of θ radians. Since we are considering the limit as θ tends to zero, we may assume θ is a small positive number, say 0 < θ < ⁠1/2⁠ π in the first quadrant. In the diagram, let R1 be the triangle OAB, R2 the circular sector OAB, and R3 the triangle OAC. The area of triangle OAB is: A r e a ( R 1 ) = 1 2 | O A | | O B | sin ⁡ θ = 1 2 sin ⁡ θ . {\displaystyle \mathrm {Area} (R_{1})={\tfrac {1}{2}}\ |OA|\ |OB|\sin \theta ={\tfrac {1}{2}}\sin \theta \,.} The area of the circular sector OAB is: A r e a ( R 2 ) = 1 2 θ . {\displaystyle \mathrm {Area} (R_{2})={\tfrac {1}{2}}\theta \,.} The area of the triangle OAC is given by: A r e a ( R 3 ) = 1 2 | O A | | A C | = 1 2 tan ⁡ θ . {\displaystyle \mathrm {Area} (R_{3})={\tfrac {1}{2}}\ |OA|\ |AC|={\tfrac {1}{2}}\tan \theta \,.} Since each region is contained in the next, one has: Area ( R 1 ) < Area ( R 2 ) < Area ( R 3 ) ⟹ 1 2 sin ⁡ θ < 1 2 θ < 1 2 tan ⁡ θ . 
{\displaystyle {\text{Area}}(R_{1})<{\text{Area}}(R_{2})<{\text{Area}}(R_{3})\implies {\tfrac {1}{2}}\sin \theta <{\tfrac {1}{2}}\theta <{\tfrac {1}{2}}\tan \theta \,.} Moreover, since sin θ > 0 in the first quadrant, we may divide through by 1/2 sin θ, giving: 1 < θ sin θ < 1 cos θ ⟹ 1 > sin θ θ > cos θ . {\displaystyle 1<{\frac {\theta }{\sin \theta }}<{\frac {1}{\cos \theta }}\implies 1>{\frac {\sin \theta }{\theta }}>\cos \theta \,.} In the last step we took the reciprocals of the three positive terms, reversing the inequalities. We conclude that for 0 < θ < 1/2 π, the quantity sin(θ)/θ is always less than 1 and always greater than cos(θ). Thus, as θ gets closer to 0, sin(θ)/θ is "squeezed" between a ceiling at height 1 and a floor at height cos θ, which rises towards 1; hence sin(θ)/θ must tend to 1 as θ tends to 0 from the positive side: lim θ → 0 + sin θ θ = 1 . {\displaystyle \lim _{\theta \to 0^{+}}{\frac {\sin \theta }{\theta }}=1\,.} For the case where θ is a small negative number –1/2 π < θ < 0, we use the fact that sine is an odd function: lim θ → 0 − sin θ θ = lim θ → 0 + sin ( − θ ) − θ = lim θ → 0 + − sin θ − θ = lim θ → 0 + sin θ θ = 1 . {\displaystyle \lim _{\theta \to 0^{-}}\!{\frac {\sin \theta }{\theta }}\ =\ \lim _{\theta \to 0^{+}}\!{\frac {\sin(-\theta )}{-\theta }}\ =\ \lim _{\theta \to 0^{+}}\!{\frac {-\sin \theta }{-\theta }}\ =\ \lim _{\theta \to 0^{+}}\!{\frac {\sin \theta }{\theta }}\ =\ 1\,.} === Limit of (cos(θ)-1)/θ as θ tends to 0 === The last section enables us to calculate this new limit relatively easily. This is done by employing a simple trick. In this calculation, the sign of θ is unimportant. lim θ → 0 cos θ − 1 θ = lim θ → 0 ( cos θ − 1 θ ) ( cos θ + 1 cos θ + 1 ) = lim θ → 0 cos 2 θ − 1 θ ( cos θ + 1 ) .
{\displaystyle \lim _{\theta \to 0}\,{\frac {\cos \theta -1}{\theta }}\ =\ \lim _{\theta \to 0}\left({\frac {\cos \theta -1}{\theta }}\right)\!\!\left({\frac {\cos \theta +1}{\cos \theta +1}}\right)\ =\ \lim _{\theta \to 0}\,{\frac {\cos ^{2}\!\theta -1}{\theta \,(\cos \theta +1)}}.} Using cos2θ – 1 = –sin2θ, the fact that the limit of a product is the product of limits, and the limit result from the previous section, we find that: lim θ → 0 cos ⁡ θ − 1 θ = lim θ → 0 − sin 2 ⁡ θ θ ( cos ⁡ θ + 1 ) = ( − lim θ → 0 sin ⁡ θ θ ) ( lim θ → 0 sin ⁡ θ cos ⁡ θ + 1 ) = ( − 1 ) ( 0 2 ) = 0 . {\displaystyle \lim _{\theta \to 0}\,{\frac {\cos \theta -1}{\theta }}\ =\ \lim _{\theta \to 0}\,{\frac {-\sin ^{2}\theta }{\theta (\cos \theta +1)}}\ =\ \left(-\lim _{\theta \to 0}{\frac {\sin \theta }{\theta }}\right)\!\left(\lim _{\theta \to 0}\,{\frac {\sin \theta }{\cos \theta +1}}\right)\ =\ (-1)\left({\frac {0}{2}}\right)=0\,.} === Limit of tan(θ)/θ as θ tends to 0 === Using the limit for the sine function, the fact that the tangent function is odd, and the fact that the limit of a product is the product of limits, we find: lim θ → 0 tan ⁡ θ θ = ( lim θ → 0 sin ⁡ θ θ ) ( lim θ → 0 1 cos ⁡ θ ) = ( 1 ) ( 1 ) = 1 . {\displaystyle \lim _{\theta \to 0}{\frac {\tan \theta }{\theta }}\ =\ \left(\lim _{\theta \to 0}{\frac {\sin \theta }{\theta }}\right)\!\left(\lim _{\theta \to 0}{\frac {1}{\cos \theta }}\right)\ =\ (1)(1)\ =\ 1\,.} === Derivative of the sine function === We calculate the derivative of the sine function from the limit definition: d d θ sin ⁡ θ = lim δ → 0 sin ⁡ ( θ + δ ) − sin ⁡ θ δ . {\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\,\sin \theta =\lim _{\delta \to 0}{\frac {\sin(\theta +\delta )-\sin \theta }{\delta }}.} Using the angle addition formula sin(α+β) = sin α cos β + sin β cos α, we have: d d θ sin ⁡ θ = lim δ → 0 sin ⁡ θ cos ⁡ δ + sin ⁡ δ cos ⁡ θ − sin ⁡ θ δ = lim δ → 0 ( sin ⁡ δ δ cos ⁡ θ + cos ⁡ δ − 1 δ sin ⁡ θ ) . 
{\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\,\sin \theta =\lim _{\delta \to 0}{\frac {\sin \theta \cos \delta +\sin \delta \cos \theta -\sin \theta }{\delta }}=\lim _{\delta \to 0}\left({\frac {\sin \delta }{\delta }}\cos \theta +{\frac {\cos \delta -1}{\delta }}\sin \theta \right).} Using the limits for the sine and cosine functions: d d θ sin ⁡ θ = ( 1 ) cos ⁡ θ + ( 0 ) sin ⁡ θ = cos ⁡ θ . {\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\,\sin \theta =(1)\cos \theta +(0)\sin \theta =\cos \theta \,.} === Derivative of the cosine function === ==== From the definition of derivative ==== We again calculate the derivative of the cosine function from the limit definition: d d θ cos ⁡ θ = lim δ → 0 cos ⁡ ( θ + δ ) − cos ⁡ θ δ . {\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\,\cos \theta =\lim _{\delta \to 0}{\frac {\cos(\theta +\delta )-\cos \theta }{\delta }}.} Using the angle addition formula cos(α+β) = cos α cos β – sin α sin β, we have: d d θ cos ⁡ θ = lim δ → 0 cos ⁡ θ cos ⁡ δ − sin ⁡ θ sin ⁡ δ − cos ⁡ θ δ = lim δ → 0 ( cos ⁡ δ − 1 δ cos ⁡ θ − sin ⁡ δ δ sin ⁡ θ ) . {\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\,\cos \theta =\lim _{\delta \to 0}{\frac {\cos \theta \cos \delta -\sin \theta \sin \delta -\cos \theta }{\delta }}=\lim _{\delta \to 0}\left({\frac {\cos \delta -1}{\delta }}\cos \theta \,-\,{\frac {\sin \delta }{\delta }}\sin \theta \right).} Using the limits for the sine and cosine functions: d d θ cos ⁡ θ = ( 0 ) cos ⁡ θ − ( 1 ) sin ⁡ θ = − sin ⁡ θ . 
{\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\,\cos \theta =(0)\cos \theta -(1)\sin \theta =-\sin \theta \,.} ==== From the chain rule ==== To compute the derivative of the cosine function from the chain rule, first observe the following three facts: cos ⁡ θ = sin ⁡ ( π 2 − θ ) {\displaystyle \cos \theta =\sin \left({\tfrac {\pi }{2}}-\theta \right)} sin ⁡ θ = cos ⁡ ( π 2 − θ ) {\displaystyle \sin \theta =\cos \left({\tfrac {\pi }{2}}-\theta \right)} d d θ sin ⁡ θ = cos ⁡ θ {\displaystyle {\tfrac {\operatorname {d} }{\operatorname {d} \!\theta }}\sin \theta =\cos \theta } The first and the second are trigonometric identities, and the third is proven above. Using these three facts, we can write the following, d d θ cos ⁡ θ = d d θ sin ⁡ ( π 2 − θ ) {\displaystyle {\tfrac {\operatorname {d} }{\operatorname {d} \!\theta }}\cos \theta ={\tfrac {\operatorname {d} }{\operatorname {d} \!\theta }}\sin \left({\tfrac {\pi }{2}}-\theta \right)} We can differentiate this using the chain rule. Letting f ( x ) = sin ⁡ x , g ( θ ) = π 2 − θ {\displaystyle f(x)=\sin x,\ \ g(\theta )={\tfrac {\pi }{2}}-\theta } , we have: d d θ f ( g ( θ ) ) = f ′ ( g ( θ ) ) ⋅ g ′ ( θ ) = cos ⁡ ( π 2 − θ ) ⋅ ( 0 − 1 ) = − sin ⁡ θ {\displaystyle {\tfrac {\operatorname {d} }{\operatorname {d} \!\theta }}f\!\left(g\!\left(\theta \right)\right)=f^{\prime }\!\left(g\!\left(\theta \right)\right)\cdot g^{\prime }\!\left(\theta \right)=\cos \left({\tfrac {\pi }{2}}-\theta \right)\cdot (0-1)=-\sin \theta } . Therefore, we have proven that d d θ cos ⁡ θ = − sin ⁡ θ {\displaystyle {\tfrac {\operatorname {d} }{\operatorname {d} \!\theta }}\cos \theta =-\sin \theta } . === Derivative of the tangent function === ==== From the definition of derivative ==== To calculate the derivative of the tangent function tan θ, we use first principles. By definition: d d θ tan ⁡ θ = lim δ → 0 ( tan ⁡ ( θ + δ ) − tan ⁡ θ δ ) . 
{\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\,\tan \theta =\lim _{\delta \to 0}\left({\frac {\tan(\theta +\delta )-\tan \theta }{\delta }}\right).} Using the well-known angle formula tan(α+β) = (tan α + tan β) / (1 - tan α tan β), we have: d d θ tan ⁡ θ = lim δ → 0 [ tan ⁡ θ + tan ⁡ δ 1 − tan ⁡ θ tan ⁡ δ − tan ⁡ θ δ ] = lim δ → 0 [ tan ⁡ θ + tan ⁡ δ − tan ⁡ θ + tan 2 ⁡ θ tan ⁡ δ δ ( 1 − tan ⁡ θ tan ⁡ δ ) ] . {\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\,\tan \theta =\lim _{\delta \to 0}\left[{\frac {{\frac {\tan \theta +\tan \delta }{1-\tan \theta \tan \delta }}-\tan \theta }{\delta }}\right]=\lim _{\delta \to 0}\left[{\frac {\tan \theta +\tan \delta -\tan \theta +\tan ^{2}\theta \tan \delta }{\delta \left(1-\tan \theta \tan \delta \right)}}\right].} Using the fact that the limit of a product is the product of the limits: d d θ tan ⁡ θ = lim δ → 0 tan ⁡ δ δ × lim δ → 0 ( 1 + tan 2 ⁡ θ 1 − tan ⁡ θ tan ⁡ δ ) . {\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\,\tan \theta =\lim _{\delta \to 0}{\frac {\tan \delta }{\delta }}\times \lim _{\delta \to 0}\left({\frac {1+\tan ^{2}\theta }{1-\tan \theta \tan \delta }}\right).} Using the limit for the tangent function, and the fact that tan δ tends to 0 as δ tends to 0: d d θ tan ⁡ θ = 1 × 1 + tan 2 ⁡ θ 1 − 0 = 1 + tan 2 ⁡ θ . {\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\,\tan \theta =1\times {\frac {1+\tan ^{2}\theta }{1-0}}=1+\tan ^{2}\theta .} We see immediately that: d d θ tan ⁡ θ = 1 + sin 2 ⁡ θ cos 2 ⁡ θ = cos 2 ⁡ θ + sin 2 ⁡ θ cos 2 ⁡ θ = 1 cos 2 ⁡ θ = sec 2 ⁡ θ . 
{\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\,\tan \theta =1+{\frac {\sin ^{2}\theta }{\cos ^{2}\theta }}={\frac {\cos ^{2}\theta +\sin ^{2}\theta }{\cos ^{2}\theta }}={\frac {1}{\cos ^{2}\theta }}=\sec ^{2}\theta \,.} ==== From the quotient rule ==== One can also compute the derivative of the tangent function using the quotient rule. d d θ tan ⁡ θ = d d θ sin ⁡ θ cos ⁡ θ = ( sin ⁡ θ ) ′ ⋅ cos ⁡ θ − sin ⁡ θ ⋅ ( cos ⁡ θ ) ′ cos 2 ⁡ θ = cos 2 ⁡ θ + sin 2 ⁡ θ cos 2 ⁡ θ {\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\tan \theta ={\frac {\operatorname {d} }{\operatorname {d} \!\theta }}{\frac {\sin \theta }{\cos \theta }}={\frac {\left(\sin \theta \right)^{\prime }\cdot \cos \theta -\sin \theta \cdot \left(\cos \theta \right)^{\prime }}{\cos ^{2}\theta }}={\frac {\cos ^{2}\theta +\sin ^{2}\theta }{\cos ^{2}\theta }}} The numerator can be simplified to 1 by the Pythagorean identity, giving us, 1 cos 2 ⁡ θ = sec 2 ⁡ θ {\displaystyle {\frac {1}{\cos ^{2}\theta }}=\sec ^{2}\theta } Therefore, d d θ tan ⁡ θ = sec 2 ⁡ θ {\displaystyle {\frac {\operatorname {d} }{\operatorname {d} \!\theta }}\tan \theta =\sec ^{2}\theta } == Proofs of derivatives of inverse trigonometric functions == The following derivatives are found by setting a variable y equal to the inverse trigonometric function that we wish to take the derivative of. Using implicit differentiation and then solving for dy/dx, the derivative of the inverse function is found in terms of y. To convert dy/dx back into being in terms of x, we can draw a reference triangle on the unit circle, letting θ be y. Using the Pythagorean theorem and the definition of the regular trigonometric functions, we can finally express dy/dx in terms of x. 
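The limits and derivatives established above can be spot-checked numerically with a symmetric difference quotient. The sketch below is illustrative only; the helper name `central_diff` and the step size `h` are choices of this example, not part of the derivations:

```python
import math

def central_diff(f, x, h=1e-6):
    # symmetric difference quotient: approximates f'(x) with O(h^2) error
    return (f(x + h) - f(x - h)) / (2 * h)

# the three limits as theta tends to 0, evaluated at a small theta
t = 1e-7
assert abs(math.sin(t) / t - 1) < 1e-9        # sin(t)/t -> 1
assert abs((math.cos(t) - 1) / t) < 1e-6      # (cos(t)-1)/t -> 0
assert abs(math.tan(t) / t - 1) < 1e-9        # tan(t)/t -> 1

# the derivatives derived above: sin' = cos, cos' = -sin, tan' = sec^2
theta = 0.7
assert abs(central_diff(math.sin, theta) - math.cos(theta)) < 1e-8
assert abs(central_diff(math.cos, theta) + math.sin(theta)) < 1e-8
assert abs(central_diff(math.tan, theta) - 1 / math.cos(theta) ** 2) < 1e-8
```

The symmetric quotient is preferred over the one-sided quotient here because its error shrinks quadratically in `h`, so a modest step size already agrees with the exact derivatives to many digits.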
=== Differentiating the inverse sine function === We let y = arcsin ⁡ x {\displaystyle y=\arcsin x\,\!} Where − π 2 ≤ y ≤ π 2 {\displaystyle -{\frac {\pi }{2}}\leq y\leq {\frac {\pi }{2}}} Then sin ⁡ y = x {\displaystyle \sin y=x\,\!} Taking the derivative with respect to x {\displaystyle x} on both sides and solving for dy/dx: d d x sin ⁡ y = d d x x {\displaystyle {d \over dx}\sin y={d \over dx}x} cos ⁡ y ⋅ d y d x = 1 {\displaystyle \cos y\cdot {dy \over dx}=1\,\!} Substituting cos ⁡ y = 1 − sin 2 ⁡ y {\displaystyle \cos y={\sqrt {1-\sin ^{2}y}}} in from above, 1 − sin 2 ⁡ y ⋅ d y d x = 1 {\displaystyle {\sqrt {1-\sin ^{2}y}}\cdot {dy \over dx}=1} Substituting x = sin ⁡ y {\displaystyle x=\sin y} in from above, 1 − x 2 ⋅ d y d x = 1 {\displaystyle {\sqrt {1-x^{2}}}\cdot {dy \over dx}=1} d y d x = 1 1 − x 2 {\displaystyle {dy \over dx}={\frac {1}{\sqrt {1-x^{2}}}}} === Differentiating the inverse cosine function === We let y = arccos ⁡ x {\displaystyle y=\arccos x\,\!} Where 0 ≤ y ≤ π {\displaystyle 0\leq y\leq \pi } Then cos ⁡ y = x {\displaystyle \cos y=x\,\!} Taking the derivative with respect to x {\displaystyle x} on both sides and solving for dy/dx: d d x cos ⁡ y = d d x x {\displaystyle {d \over dx}\cos y={d \over dx}x} − sin ⁡ y ⋅ d y d x = 1 {\displaystyle -\sin y\cdot {dy \over dx}=1} Substituting sin ⁡ y = 1 − cos 2 ⁡ y {\displaystyle \sin y={\sqrt {1-\cos ^{2}y}}\,\!} in from above, we get − 1 − cos 2 ⁡ y ⋅ d y d x = 1 {\displaystyle -{\sqrt {1-\cos ^{2}y}}\cdot {dy \over dx}=1} Substituting x = cos ⁡ y {\displaystyle x=\cos y\,\!} in from above, we get − 1 − x 2 ⋅ d y d x = 1 {\displaystyle -{\sqrt {1-x^{2}}}\cdot {dy \over dx}=1} d y d x = − 1 1 − x 2 {\displaystyle {dy \over dx}=-{\frac {1}{\sqrt {1-x^{2}}}}} Alternatively, once the derivative of arcsin ⁡ x {\displaystyle \arcsin x} is established, the derivative of arccos ⁡ x {\displaystyle \arccos x} follows immediately by differentiating the identity arcsin ⁡ x + arccos ⁡ x = π / 2 
{\displaystyle \arcsin x+\arccos x=\pi /2} so that ( arccos ⁡ x ) ′ = − ( arcsin ⁡ x ) ′ {\displaystyle (\arccos x)'=-(\arcsin x)'} . === Differentiating the inverse tangent function === We let y = arctan ⁡ x {\displaystyle y=\arctan x\,\!} Where − π 2 < y < π 2 {\displaystyle -{\frac {\pi }{2}}<y<{\frac {\pi }{2}}} Then tan ⁡ y = x {\displaystyle \tan y=x\,\!} Taking the derivative with respect to x {\displaystyle x} on both sides and solving for dy/dx: d d x tan ⁡ y = d d x x {\displaystyle {d \over dx}\tan y={d \over dx}x} Left side: d d x tan ⁡ y = sec 2 ⁡ y ⋅ d y d x = ( 1 + tan 2 ⁡ y ) d y d x {\displaystyle {d \over dx}\tan y=\sec ^{2}y\cdot {dy \over dx}=(1+\tan ^{2}y){dy \over dx}} using the Pythagorean identity Right side: d d x x = 1 {\displaystyle {d \over dx}x=1} Therefore, ( 1 + tan 2 ⁡ y ) d y d x = 1 {\displaystyle (1+\tan ^{2}y){dy \over dx}=1} Substituting x = tan ⁡ y {\displaystyle x=\tan y\,\!} in from above, we get ( 1 + x 2 ) d y d x = 1 {\displaystyle (1+x^{2}){dy \over dx}=1} d y d x = 1 1 + x 2 {\displaystyle {dy \over dx}={\frac {1}{1+x^{2}}}} === Differentiating the inverse cotangent function === We let y = arccot ⁡ x {\displaystyle y=\operatorname {arccot} x} where 0 < y < π {\displaystyle 0<y<\pi } . 
Then cot ⁡ y = x {\displaystyle \cot y=x} Taking the derivative with respect to x {\displaystyle x} on both sides and solving for dy/dx: d d x cot ⁡ y = d d x x {\displaystyle {\frac {d}{dx}}\cot y={\frac {d}{dx}}x} Left side: d d x cot ⁡ y = − csc 2 ⁡ y ⋅ d y d x = − ( 1 + cot 2 ⁡ y ) d y d x {\displaystyle {d \over dx}\cot y=-\csc ^{2}y\cdot {dy \over dx}=-(1+\cot ^{2}y){dy \over dx}} using the Pythagorean identity Right side: d d x x = 1 {\displaystyle {d \over dx}x=1} Therefore, − ( 1 + cot 2 ⁡ y ) d y d x = 1 {\displaystyle -(1+\cot ^{2}y){\frac {dy}{dx}}=1} Substituting x = cot ⁡ y {\displaystyle x=\cot y} , − ( 1 + x 2 ) d y d x = 1 {\displaystyle -(1+x^{2}){\frac {dy}{dx}}=1} d y d x = − 1 1 + x 2 {\displaystyle {\frac {dy}{dx}}=-{\frac {1}{1+x^{2}}}} Alternatively, since the derivative of arctan ⁡ x {\displaystyle \arctan x} is derived as shown above, using the identity arctan ⁡ x + arccot ⁡ x = π 2 {\displaystyle \arctan x+\operatorname {arccot} x={\dfrac {\pi }{2}}} it follows immediately that d d x arccot ⁡ x = d d x ( π 2 − arctan ⁡ x ) = − 1 1 + x 2 {\displaystyle {\begin{aligned}{\dfrac {d}{dx}}\operatorname {arccot} x&={\dfrac {d}{dx}}\left({\dfrac {\pi }{2}}-\arctan x\right)\\&=-{\dfrac {1}{1+x^{2}}}\end{aligned}}} === Differentiating the inverse secant function === ==== Using implicit differentiation ==== Let y = arcsec ⁡ x ∣ | x | ≥ 1 {\displaystyle y=\operatorname {arcsec} x\ \mid |x|\geq 1} Then x = sec ⁡ y ∣ y ∈ [ 0 , π 2 ) ∪ ( π 2 , π ] {\displaystyle x=\sec y\mid \ y\in \left[0,{\frac {\pi }{2}}\right)\cup \left({\frac {\pi }{2}},\pi \right]} d x d y = sec ⁡ y tan ⁡ y = | x | x 2 − 1 {\displaystyle {\frac {dx}{dy}}=\sec y\tan y=|x|{\sqrt {x^{2}-1}}} (The absolute value in the expression is necessary as the product of secant and tangent in the interval of y is always nonnegative, while the radical x 2 − 1 {\displaystyle {\sqrt {x^{2}-1}}} is always nonnegative by definition of the principal square root, so the remaining factor must also be 
nonnegative, which is achieved by using the absolute value of x.) d y d x = 1 | x | x 2 − 1 {\displaystyle {\frac {dy}{dx}}={\frac {1}{|x|{\sqrt {x^{2}-1}}}}} ==== Using the chain rule ==== Alternatively, the derivative of arcsecant may be derived from the derivative of arccosine using the chain rule. Let y = arcsec ⁡ x = arccos ⁡ ( 1 x ) {\displaystyle y=\operatorname {arcsec} x=\arccos \left({\frac {1}{x}}\right)} Where | x | ≥ 1 {\displaystyle |x|\geq 1} and y ∈ [ 0 , π 2 ) ∪ ( π 2 , π ] {\displaystyle y\in \left[0,{\frac {\pi }{2}}\right)\cup \left({\frac {\pi }{2}},\pi \right]} Then, applying the chain rule to arccos ⁡ ( 1 x ) {\displaystyle \arccos \left({\frac {1}{x}}\right)} : d y d x = − 1 1 − ( 1 x ) 2 ⋅ ( − 1 x 2 ) = 1 x 2 1 − 1 x 2 = 1 x 2 x 2 − 1 x 2 = 1 x 2 x 2 − 1 = 1 | x | x 2 − 1 {\displaystyle {\frac {dy}{dx}}=-{\frac {1}{\sqrt {1-({\frac {1}{x}})^{2}}}}\cdot \left(-{\frac {1}{x^{2}}}\right)={\frac {1}{x^{2}{\sqrt {1-{\frac {1}{x^{2}}}}}}}={\frac {1}{x^{2}{\frac {\sqrt {x^{2}-1}}{\sqrt {x^{2}}}}}}={\frac {1}{{\sqrt {x^{2}}}{\sqrt {x^{2}-1}}}}={\frac {1}{|x|{\sqrt {x^{2}-1}}}}} === Differentiating the inverse cosecant function === ==== Using implicit differentiation ==== Let y = arccsc ⁡ x ∣ | x | ≥ 1 {\displaystyle y=\operatorname {arccsc} x\ \mid |x|\geq 1} Then x = csc ⁡ y ∣ y ∈ [ − π 2 , 0 ) ∪ ( 0 , π 2 ] {\displaystyle x=\csc y\ \mid \ y\in \left[-{\frac {\pi }{2}},0\right)\cup \left(0,{\frac {\pi }{2}}\right]} d x d y = − csc ⁡ y cot ⁡ y = − | x | x 2 − 1 {\displaystyle {\frac {dx}{dy}}=-\csc y\cot y=-|x|{\sqrt {x^{2}-1}}} (The absolute value in the expression is necessary as the product of cosecant and cotangent in the interval of y is always nonnegative, while the radical x 2 − 1 {\displaystyle {\sqrt {x^{2}-1}}} is always nonnegative by definition of the principal square root, so the remaining factor must also be nonnegative, which is achieved by using the absolute value of x.) 
d y d x = − 1 | x | x 2 − 1 {\displaystyle {\frac {dy}{dx}}={\frac {-1}{|x|{\sqrt {x^{2}-1}}}}} ==== Using the chain rule ==== Alternatively, the derivative of arccosecant may be derived from the derivative of arcsine using the chain rule. Let y = arccsc ⁡ x = arcsin ⁡ ( 1 x ) {\displaystyle y=\operatorname {arccsc} x=\arcsin \left({\frac {1}{x}}\right)} Where | x | ≥ 1 {\displaystyle |x|\geq 1} and y ∈ [ − π 2 , 0 ) ∪ ( 0 , π 2 ] {\displaystyle y\in \left[-{\frac {\pi }{2}},0\right)\cup \left(0,{\frac {\pi }{2}}\right]} Then, applying the chain rule to arcsin ⁡ ( 1 x ) {\displaystyle \arcsin \left({\frac {1}{x}}\right)} : d y d x = 1 1 − ( 1 x ) 2 ⋅ ( − 1 x 2 ) = − 1 x 2 1 − 1 x 2 = − 1 x 2 x 2 − 1 x 2 = − 1 x 2 x 2 − 1 = − 1 | x | x 2 − 1 {\displaystyle {\frac {dy}{dx}}={\frac {1}{\sqrt {1-({\frac {1}{x}})^{2}}}}\cdot \left(-{\frac {1}{x^{2}}}\right)=-{\frac {1}{x^{2}{\sqrt {1-{\frac {1}{x^{2}}}}}}}=-{\frac {1}{x^{2}{\frac {\sqrt {x^{2}-1}}{\sqrt {x^{2}}}}}}=-{\frac {1}{{\sqrt {x^{2}}}{\sqrt {x^{2}-1}}}}=-{\frac {1}{|x|{\sqrt {x^{2}-1}}}}} == See also == Calculus – Branch of mathematics Derivative – Instantaneous rate of change (mathematics) Differentiation rules – Rules for computing derivatives of functions General Leibniz rule – Generalization of the product rule in calculus Inverse functions and differentiation – Formula for the derivative of an inverse functionPages displaying short descriptions of redirect targets Linearity of differentiation – Calculus property List of integrals of inverse trigonometric functions List of trigonometric identities Trigonometry – Area of geometry, about angles and lengths == References == == Bibliography == Handbook of Mathematical Functions, Edited by Abramowitz and Stegun, National Bureau of Standards, Applied Mathematics Series, 55 (1964)
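The inverse-function derivatives proved in this article can likewise be sanity-checked numerically. In the sketch below, `central_diff` and the helper `arcsec` (defined via arccos(1/x), as in the chain-rule derivation above) are choices of this example:

```python
import math

def central_diff(f, x, h=1e-6):
    # symmetric difference quotient approximating f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.4
assert abs(central_diff(math.asin, x) - 1 / math.sqrt(1 - x * x)) < 1e-7  # arcsin' = 1/sqrt(1-x^2)
assert abs(central_diff(math.acos, x) + 1 / math.sqrt(1 - x * x)) < 1e-7  # arccos' = -arcsin'
assert abs(central_diff(math.atan, x) - 1 / (1 + x * x)) < 1e-7           # arctan' = 1/(1+x^2)

# arcsec x = arccos(1/x); its derivative needs |x|, as derived above
def arcsec(x):
    return math.acos(1 / x)

for x in (2.0, -3.0):
    assert abs(central_diff(arcsec, x) - 1 / (abs(x) * math.sqrt(x * x - 1))) < 1e-7
```

Note that checking `arcsec` at a negative argument exercises exactly the absolute-value subtlety discussed in the implicit-differentiation proof.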
Wikipedia/Differentiation_of_trigonometric_functions
In mathematics, the sign function or signum function (from signum, Latin for "sign") is a function that has the value −1, +1 or 0 according to whether the sign of a given real number is positive or negative, or the given number is itself zero. In mathematical notation the sign function is often represented as sgn ⁡ x {\displaystyle \operatorname {sgn} x} or sgn ⁡ ( x ) {\displaystyle \operatorname {sgn}(x)} . == Definition == The signum function of a real number x {\displaystyle x} is a piecewise function which is defined as follows: sgn ⁡ x := { − 1 if x < 0 , 0 if x = 0 , 1 if x > 0. {\displaystyle \operatorname {sgn} x:={\begin{cases}-1&{\text{if }}x<0,\\0&{\text{if }}x=0,\\1&{\text{if }}x>0.\end{cases}}} The law of trichotomy states that every real number must be positive, negative or zero. The signum function denotes which unique category a number falls into by mapping it to one of the values −1, +1 or 0, which can then be used in mathematical expressions or further calculations. For example: sgn ⁡ ( 2 ) = + 1 , sgn ⁡ ( π ) = + 1 , sgn ⁡ ( − 8 ) = − 1 , sgn ⁡ ( − 1 2 ) = − 1 , sgn ⁡ ( 0 ) = 0 . {\displaystyle {\begin{array}{lcr}\operatorname {sgn}(2)&=&+1\,,\\\operatorname {sgn}(\pi )&=&+1\,,\\\operatorname {sgn}(-8)&=&-1\,,\\\operatorname {sgn}(-{\frac {1}{2}})&=&-1\,,\\\operatorname {sgn}(0)&=&0\,.\end{array}}} == Basic properties == Any real number can be expressed as the product of its absolute value and its sign: x = | x | sgn ⁡ x . {\displaystyle x=|x|\operatorname {sgn} x\,.} It follows that whenever x {\displaystyle x} is not equal to 0 we have sgn ⁡ x = x | x | = | x | x . {\displaystyle \operatorname {sgn} x={\frac {x}{|x|}}={\frac {|x|}{x}}\,.} Similarly, for any real number x {\displaystyle x} , | x | = x sgn ⁡ x . 
{\displaystyle |x|=x\operatorname {sgn} x\,.} We can also be certain that: sgn ⁡ ( x y ) = ( sgn ⁡ x ) ( sgn ⁡ y ) , {\displaystyle \operatorname {sgn}(xy)=(\operatorname {sgn} x)(\operatorname {sgn} y)\,,} and so sgn ⁡ ( x n ) = ( sgn ⁡ x ) n . {\displaystyle \operatorname {sgn}(x^{n})=(\operatorname {sgn} x)^{n}\,.} == Some algebraic identities == The signum can also be written using the Iverson bracket notation: sgn ⁡ x = − [ x < 0 ] + [ x > 0 ] . {\displaystyle \operatorname {sgn} x=-[x<0]+[x>0]\,.} The signum can also be written using the floor and the absolute value functions: sgn ⁡ x = ⌊ x | x | + 1 ⌋ − ⌊ − x | x | + 1 ⌋ . {\displaystyle \operatorname {sgn} x={\Biggl \lfloor }{\frac {x}{|x|+1}}{\Biggr \rfloor }-{\Biggl \lfloor }{\frac {-x}{|x|+1}}{\Biggr \rfloor }\,.} If 0 0 {\displaystyle 0^{0}} is accepted to be equal to 1, the signum can also be written for all real numbers as sgn ⁡ x = 0 ( − x + | x | ) − 0 ( x + | x | ) . {\displaystyle \operatorname {sgn} x=0^{\left(-x+\left\vert x\right\vert \right)}-0^{\left(x+\left\vert x\right\vert \right)}\,.} == Properties in mathematical analysis == === Discontinuity at zero === Although the sign function takes the value −1 when x {\displaystyle x} is negative, the ringed point (0, −1) in the plot of sgn ⁡ x {\displaystyle \operatorname {sgn} x} indicates that this is not the case when x = 0 {\displaystyle x=0} . Instead, the value jumps abruptly to the solid point at (0, 0) where sgn ⁡ ( 0 ) = 0 {\displaystyle \operatorname {sgn}(0)=0} . There is then a similar jump to sgn ⁡ ( x ) = + 1 {\displaystyle \operatorname {sgn}(x)=+1} when x {\displaystyle x} is positive. Either jump demonstrates visually that the sign function sgn ⁡ x {\displaystyle \operatorname {sgn} x} is discontinuous at zero, even though it is continuous at any point where x {\displaystyle x} is either positive or negative. 
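The algebraic identities above can be verified mechanically. A minimal implementation is sketched below; the name `sgn` and the `(x > 0) - (x < 0)` comparison trick are choices of this example, not a standard library function:

```python
import math

def sgn(x):
    # -1, 0 or +1 according to the trichotomy of x
    return (x > 0) - (x < 0)

for x in (-8, -0.5, 0, 2, math.pi):
    assert x == abs(x) * sgn(x)      # x = |x| sgn x
    assert abs(x) == x * sgn(x)      # |x| = x sgn x
    # floor/absolute-value identity for sgn
    assert sgn(x) == math.floor(x / (abs(x) + 1)) - math.floor(-x / (abs(x) + 1))
    for y in (-2, 0, 5):
        assert sgn(x * y) == sgn(x) * sgn(y)  # sgn is multiplicative
```

The comparison trick works because Python booleans are integers, so the subtraction lands exactly on −1, 0, or +1 in the three trichotomy cases.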
These observations are confirmed by any of the various equivalent formal definitions of continuity in mathematical analysis. A function f ( x ) {\displaystyle f(x)} , such as sgn ⁡ ( x ) , {\displaystyle \operatorname {sgn}(x),} is continuous at a point x = a {\displaystyle x=a} if the value f ( a ) {\displaystyle f(a)} can be approximated arbitrarily closely by the sequence of values f ( a 1 ) , f ( a 2 ) , f ( a 3 ) , … , {\displaystyle f(a_{1}),f(a_{2}),f(a_{3}),\dots ,} where the a n {\displaystyle a_{n}} make up any infinite sequence which becomes arbitrarily close to a {\displaystyle a} as n {\displaystyle n} becomes sufficiently large. In the notation of mathematical limits, continuity of f {\displaystyle f} at a {\displaystyle a} requires that f ( a n ) → f ( a ) {\displaystyle f(a_{n})\to f(a)} as n → ∞ {\displaystyle n\to \infty } for any sequence ( a n ) n = 1 ∞ {\displaystyle \left(a_{n}\right)_{n=1}^{\infty }} for which a n → a . {\displaystyle a_{n}\to a.} The arrow symbol can be read to mean approaches, or tends to, and it applies to the sequence as a whole. This criterion fails for the sign function at a = 0 {\displaystyle a=0} . For example, we can choose a n {\displaystyle a_{n}} to be the sequence 1 , 1 2 , 1 3 , 1 4 , … , {\displaystyle 1,{\tfrac {1}{2}},{\tfrac {1}{3}},{\tfrac {1}{4}},\dots ,} which tends towards zero as n {\displaystyle n} increases towards infinity. In this case, a n → a {\displaystyle a_{n}\to a} as required, but sgn ⁡ ( a ) = 0 {\displaystyle \operatorname {sgn}(a)=0} and sgn ⁡ ( a n ) = + 1 {\displaystyle \operatorname {sgn}(a_{n})=+1} for each n , {\displaystyle n,} so that sgn ⁡ ( a n ) → 1 ≠ sgn ⁡ ( a ) {\displaystyle \operatorname {sgn}(a_{n})\to 1\neq \operatorname {sgn}(a)} . This counterexample confirms more formally the discontinuity of sgn ⁡ x {\displaystyle \operatorname {sgn} x} at zero that is visible in the plot. 
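The failing sequence criterion can be demonstrated directly in code. This is a sketch; the cutoff of 1000 terms stands in for the infinite sequence:

```python
def sgn(x):
    return (x > 0) - (x < 0)

# a_n = 1/n tends to 0, yet sgn(a_n) is constantly +1,
# so sgn(a_n) -> 1 while sgn(0) = 0: sgn is discontinuous at zero
assert all(sgn(1 / n) == 1 for n in range(1, 1001))
assert sgn(0) == 0
```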
Despite the sign function having a very simple form, the step change at zero causes difficulties for traditional calculus techniques, which are quite stringent in their requirements. Continuity is a frequent constraint. One solution can be to approximate the sign function by a smooth continuous function; others might involve less stringent approaches that build on classical methods to accommodate larger classes of function. === Smooth approximations and limits === The signum function can be given as a number of different (pointwise) limits: sgn ⁡ x = lim n → ∞ 1 − 2 − n x 1 + 2 − n x = lim n → ∞ 2 π arctan ⁡ ( n x ) = lim n → ∞ tanh ⁡ ( n x ) = lim ε → 0 x x 2 + ε 2 . {\displaystyle {\begin{aligned}\operatorname {sgn} x&=\lim _{n\to \infty }{\frac {1-2^{-nx}}{1+2^{-nx}}}\\&=\lim _{n\to \infty }{\frac {2}{\pi }}\operatorname {arctan} (nx)\\&=\lim _{n\to \infty }\tanh(nx)\\&=\lim _{\varepsilon \to 0}{\frac {x}{\sqrt {x^{2}+\varepsilon ^{2}}}}.\end{aligned}}} Here, tanh {\displaystyle \tanh } is the hyperbolic tangent, and arctan {\displaystyle \operatorname {arctan} } is the inverse tangent. The last of these is the derivative of x 2 + ε 2 {\displaystyle {\sqrt {x^{2}+\varepsilon ^{2}}}} . This is motivated by the fact that the expression above is exactly equal to sgn ⁡ x {\displaystyle \operatorname {sgn} x} for all nonzero x {\displaystyle x} when ε = 0 {\displaystyle \varepsilon =0} , and has the advantage of simple generalization to higher-dimensional analogues of the sign function (for example, the partial derivatives of x 2 + y 2 {\displaystyle {\sqrt {x^{2}+y^{2}}}} ). See Heaviside step function § Analytic approximations. === Differentiation === The signum function sgn ⁡ x {\displaystyle \operatorname {sgn} x} is differentiable everywhere except when x = 0. {\displaystyle x=0.} Its derivative is zero when x {\displaystyle x} is non-zero: d ( sgn ⁡ x ) d x = 0 for x ≠ 0 . 
{\displaystyle {\frac {{\text{d}}\,(\operatorname {sgn} x)}{{\text{d}}x}}=0\qquad {\text{for }}x\neq 0\,.} This follows from the differentiability of any constant function, for which the derivative is always zero on its domain of definition. The signum sgn ⁡ x {\displaystyle \operatorname {sgn} x} acts as a constant function when it is restricted to the negative open region x < 0 , {\displaystyle x<0,} where it equals −1. It can similarly be regarded as a constant function within the positive open region x > 0 , {\displaystyle x>0,} where the corresponding constant is +1. Although these are two different constant functions, their derivative is equal to zero in each case. It is not possible to define a classical derivative at x = 0 {\displaystyle x=0} , because there is a discontinuity there. Although it is not differentiable at x = 0 {\displaystyle x=0} in the ordinary sense, under the generalized notion of differentiation in distribution theory, the derivative of the signum function is two times the Dirac delta function. This can be demonstrated using the identity sgn ⁡ x = 2 H ( x ) − 1 , {\displaystyle \operatorname {sgn} x=2H(x)-1\,,} where H ( x ) {\displaystyle H(x)} is the Heaviside step function using the standard H ( 0 ) = 1 2 {\displaystyle H(0)={\frac {1}{2}}} formalism. Using this identity, it is easy to derive the distributional derivative: d sgn ⁡ x d x = 2 d H ( x ) d x = 2 δ ( x ) . {\displaystyle {\frac {{\text{d}}\operatorname {sgn} x}{{\text{d}}x}}=2{\frac {{\text{d}}H(x)}{{\text{d}}x}}=2\delta (x)\,.} === Integration === The signum function has a definite integral between any pair of finite values a and b, even when the interval of integration includes zero. The resulting integral for a and b is then equal to the difference between their absolute values: ∫ a b ( sgn ⁡ x ) d x = | b | − | a | . 
{\displaystyle \int _{a}^{b}(\operatorname {sgn} x)\,{\text{d}}x=|b|-|a|\,.} In fact, the signum function is the derivative of the absolute value function, except where there is an abrupt change in gradient at zero: d | x | d x = sgn ⁡ x for x ≠ 0 . {\displaystyle {\frac {{\text{d}}|x|}{{\text{d}}x}}=\operatorname {sgn} x\qquad {\text{for }}x\neq 0\,.} We can understand this as before by considering the definition of the absolute value | x | {\displaystyle |x|} on the separate regions x > 0 {\displaystyle x>0} and x < 0. {\displaystyle x<0.} For example, the absolute value function is identical to x {\displaystyle x} in the region x > 0 , {\displaystyle x>0,} whose derivative is the constant value +1, which equals the value of sgn ⁡ x {\displaystyle \operatorname {sgn} x} there. Because the absolute value is a convex function, there is at least one subderivative at every point, including at the origin. Everywhere except zero, the resulting subdifferential consists of a single value, equal to the value of the sign function. In contrast, there are many subderivatives at zero, with just one of them taking the value sgn ⁡ ( 0 ) = 0 {\displaystyle \operatorname {sgn}(0)=0} . A subderivative value 0 occurs here because the absolute value function is at a minimum. The full family of valid subderivatives at zero constitutes the subdifferential interval [ − 1 , 1 ] {\displaystyle [-1,1]} , which might be thought of informally as "filling in" the graph of the sign function with a vertical line through the origin, making it continuous as a two dimensional curve. In integration theory, the signum function is a weak derivative of the absolute value function. Weak derivatives are equivalent if they are equal almost everywhere, making them impervious to isolated anomalies at a single point. This includes the change in gradient of the absolute value function at zero, which prohibits there being a classical derivative. 
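The definite-integral identity above is easy to confirm numerically. In this sketch, the midpoint-rule integrator `midpoint_integral` and its resolution `n` are choices of the example; the rule is very accurate here because sgn is piecewise constant, with at most one cell straddling the jump at zero:

```python
def sgn(x):
    return (x > 0) - (x < 0)

def midpoint_integral(f, a, b, n=100_000):
    # midpoint rule on n equal cells
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

# integral of sgn from a to b equals |b| - |a|, even across zero
for a, b in ((-2.0, 3.0), (1.0, 4.0), (-5.0, -1.0)):
    assert abs(midpoint_integral(sgn, a, b) - (abs(b) - abs(a))) < 1e-3
```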
=== Fourier transform === The Fourier transform of the signum function is P V ∫ − ∞ ∞ ( sgn ⁡ x ) e − i k x d x = 2 i k for k ≠ 0 , {\displaystyle PV\int _{-\infty }^{\infty }(\operatorname {sgn} x)e^{-ikx}{\text{d}}x={\frac {2}{ik}}\qquad {\text{for }}k\neq 0,} where P V {\displaystyle PV} means taking the Cauchy principal value. == Generalizations == === Complex signum === The signum function can be generalized to complex numbers as: sgn ⁡ z = z | z | {\displaystyle \operatorname {sgn} z={\frac {z}{|z|}}} for any complex number z {\displaystyle z} except z = 0 {\displaystyle z=0} . The signum of a given complex number z {\displaystyle z} is the point on the unit circle of the complex plane that is nearest to z {\displaystyle z} . Then, for z ≠ 0 {\displaystyle z\neq 0} , sgn ⁡ z = e i arg ⁡ z , {\displaystyle \operatorname {sgn} z=e^{i\arg z}\,,} where arg {\displaystyle \arg } is the complex argument function. For reasons of symmetry, and to keep this a proper generalization of the signum function on the reals, also in the complex domain one usually defines, for z = 0 {\displaystyle z=0} : sgn ⁡ ( 0 + 0 i ) = 0 {\displaystyle \operatorname {sgn}(0+0i)=0} Another generalization of the sign function for real and complex expressions is csgn {\displaystyle {\text{csgn}}} , which is defined as: csgn ⁡ z = { 1 if R e ( z ) > 0 , − 1 if R e ( z ) < 0 , sgn ⁡ I m ( z ) if R e ( z ) = 0 {\displaystyle \operatorname {csgn} z={\begin{cases}1&{\text{if }}\mathrm {Re} (z)>0,\\-1&{\text{if }}\mathrm {Re} (z)<0,\\\operatorname {sgn} \mathrm {Im} (z)&{\text{if }}\mathrm {Re} (z)=0\end{cases}}} where Re ( z ) {\displaystyle {\text{Re}}(z)} is the real part of z {\displaystyle z} and Im ( z ) {\displaystyle {\text{Im}}(z)} is the imaginary part of z {\displaystyle z} . We then have (for z ≠ 0 {\displaystyle z\neq 0} ): csgn ⁡ z = z z 2 = z 2 z . 
{\displaystyle \operatorname {csgn} z={\frac {z}{\sqrt {z^{2}}}}={\frac {\sqrt {z^{2}}}{z}}.} === Polar decomposition of matrices === Thanks to the Polar decomposition theorem, a matrix A ∈ K n × n {\displaystyle {\boldsymbol {A}}\in \mathbb {K} ^{n\times n}} ( n ∈ N {\displaystyle n\in \mathbb {N} } and K ∈ { R , C } {\displaystyle \mathbb {K} \in \{\mathbb {R} ,\mathbb {C} \}} ) can be decomposed as a product Q P {\displaystyle {\boldsymbol {Q}}{\boldsymbol {P}}} where Q {\displaystyle {\boldsymbol {Q}}} is a unitary matrix and P {\displaystyle {\boldsymbol {P}}} is a self-adjoint, or Hermitian, positive semidefinite matrix, both in K n × n {\displaystyle \mathbb {K} ^{n\times n}} . If A {\displaystyle {\boldsymbol {A}}} is invertible then such a decomposition is unique and Q {\displaystyle {\boldsymbol {Q}}} plays the role of A {\displaystyle {\boldsymbol {A}}} 's signum. A dual construction is given by the decomposition A = S R {\displaystyle {\boldsymbol {A}}={\boldsymbol {S}}{\boldsymbol {R}}} where R {\displaystyle {\boldsymbol {R}}} is unitary, but generally different from Q {\displaystyle {\boldsymbol {Q}}} . This leads to each invertible matrix having a unique left-signum Q {\displaystyle {\boldsymbol {Q}}} and right-signum R {\displaystyle {\boldsymbol {R}}} . In the special case where K = R , n = 2 , {\displaystyle \mathbb {K} =\mathbb {R} ,\ n=2,} and the (invertible) matrix A = [ a − b b a ] {\displaystyle {\boldsymbol {A}}=\left[{\begin{array}{rr}a&-b\\b&a\end{array}}\right]} , which identifies with the (nonzero) complex number a + i b = c {\displaystyle a+\mathrm {i} b=c} , then the signum matrices satisfy Q = R = [ a − b b a ] / | c | {\displaystyle {\boldsymbol {Q}}={\boldsymbol {R}}=\left[{\begin{array}{rr}a&-b\\b&a\end{array}}\right]/|c|} and identify with the complex signum of c {\displaystyle c} , sgn ⁡ c = c / | c | {\displaystyle \operatorname {sgn} c=c/|c|} . 
In this sense, polar decomposition generalizes to matrices the signum-modulus decomposition of complex numbers. === Signum as a generalized function === At real values of x {\displaystyle x} , it is possible to define a generalized function–version of the signum function, ε ( x ) {\displaystyle \varepsilon (x)} such that ε ( x ) 2 = 1 {\displaystyle \varepsilon (x)^{2}=1} everywhere, including at the point x = 0 {\displaystyle x=0} , unlike sgn {\displaystyle \operatorname {sgn} } , for which ( sgn ⁡ 0 ) 2 = 0 {\displaystyle (\operatorname {sgn} 0)^{2}=0} . This generalized signum allows construction of the algebra of generalized functions, but the price of such generalization is the loss of commutativity. In particular, the generalized signum anticommutes with the Dirac delta function ε ( x ) δ ( x ) + δ ( x ) ε ( x ) = 0 ; {\displaystyle \varepsilon (x)\delta (x)+\delta (x)\varepsilon (x)=0\,;} in addition, ε ( x ) {\displaystyle \varepsilon (x)} cannot be evaluated at x = 0 {\displaystyle x=0} ; and the special name, ε {\displaystyle \varepsilon } is necessary to distinguish it from the function sgn {\displaystyle \operatorname {sgn} } . ( ε ( 0 ) {\displaystyle \varepsilon (0)} is not defined, but sgn ⁡ 0 = 0 {\displaystyle \operatorname {sgn} 0=0} .) == See also == Absolute value Heaviside step function Negative number Rectangular function Sigmoid function (Hard sigmoid) Step function (Piecewise constant function) Three-way comparison Zero crossing Polar decomposition == Notes ==
In mathematics, the trigonometric functions (also called circular functions, angle functions or goniometric functions) are real functions which relate an angle of a right-angled triangle to ratios of two side lengths. They are widely used in all sciences that are related to geometry, such as navigation, solid mechanics, celestial mechanics, geodesy, and many others. They are among the simplest periodic functions, and as such are also widely used for studying periodic phenomena through Fourier analysis. The trigonometric functions most widely used in modern mathematics are the sine, the cosine, and the tangent functions. Their reciprocals are respectively the cosecant, the secant, and the cotangent functions, which are less used. Each of these six trigonometric functions has a corresponding inverse function, and an analog among the hyperbolic functions. The oldest definitions of trigonometric functions, related to right-angle triangles, define them only for acute angles. To extend the sine and cosine functions to functions whose domain is the whole real line, geometrical definitions using the standard unit circle (i.e., a circle with radius 1 unit) are often used; then the domain of the other functions is the real line with some isolated points removed. Modern definitions express trigonometric functions as infinite series or as solutions of differential equations. This allows extending the domain of sine and cosine functions to the whole complex plane, and the domain of the other trigonometric functions to the complex plane with some isolated points removed. == Notation == Conventionally, an abbreviation of each trigonometric function's name is used as its symbol in formulas. Today, the most common versions of these abbreviations are "sin" for sine, "cos" for cosine, "tan" or "tg" for tangent, "sec" for secant, "csc" or "cosec" for cosecant, and "cot" or "ctg" for cotangent. 
Historically, these abbreviations were first used in prose sentences to indicate particular line segments or their lengths related to an arc of an arbitrary circle, and later to indicate ratios of lengths, but as the function concept developed in the 17th–18th century, they began to be considered as functions of real-number-valued angle measures, and written with functional notation, for example sin(x). Parentheses are still often omitted to reduce clutter, but are sometimes necessary; for example the expression sin ⁡ x + y {\displaystyle \sin x+y} would typically be interpreted to mean ( sin ⁡ x ) + y , {\displaystyle (\sin x)+y,} so parentheses are required to express sin ⁡ ( x + y ) . {\displaystyle \sin(x+y).} A positive integer appearing as a superscript after the symbol of the function denotes exponentiation, not function composition. For example sin 2 ⁡ x {\displaystyle \sin ^{2}x} and sin 2 ⁡ ( x ) {\displaystyle \sin ^{2}(x)} denote ( sin ⁡ x ) 2 , {\displaystyle (\sin x)^{2},} not sin ⁡ ( sin ⁡ x ) . {\displaystyle \sin(\sin x).} This differs from the (historically later) general functional notation in which f 2 ( x ) = ( f ∘ f ) ( x ) = f ( f ( x ) ) . {\displaystyle f^{2}(x)=(f\circ f)(x)=f(f(x)).} In contrast, the superscript − 1 {\displaystyle -1} is commonly used to denote the inverse function, not the reciprocal. For example sin − 1 ⁡ x {\displaystyle \sin ^{-1}x} and sin − 1 ⁡ ( x ) {\displaystyle \sin ^{-1}(x)} denote the inverse trigonometric function alternatively written arcsin ⁡ x . {\displaystyle \arcsin x\,.} The equation θ = sin − 1 ⁡ x {\displaystyle \theta =\sin ^{-1}x} implies sin ⁡ θ = x , {\displaystyle \sin \theta =x,} not θ ⋅ sin ⁡ x = 1. {\displaystyle \theta \cdot \sin x=1.} In this case, the superscript could be considered as denoting a composed or iterated function, but negative superscripts other than − 1 {\displaystyle {-1}} are not in common use. 
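These superscript conventions are easy to confirm numerically. A short Python sketch, using the standard `math` module:

```python
import math

x = 0.7
# A positive superscript denotes a power: sin^2 x means (sin x)^2, not sin(sin x).
assert math.isclose(math.sin(x) ** 2, 1 - math.cos(x) ** 2)      # Pythagorean identity
assert not math.isclose(math.sin(x) ** 2, math.sin(math.sin(x)))  # not composition

# The superscript -1 denotes the inverse function: sin^{-1} x is arcsin x.
theta = math.asin(x)                      # theta = sin^{-1} x
assert math.isclose(math.sin(theta), x)   # so sin(theta) = x ...
assert not math.isclose(theta, 1 / math.sin(x))  # ... and theta is not 1 / sin x
```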
== Right-angled triangle definitions == If the acute angle θ is given, then any right triangles that have an angle of θ are similar to each other. This means that the ratio of any two side lengths depends only on θ. Thus these six ratios define six functions of θ, which are the trigonometric functions. In the following definitions, the hypotenuse is the length of the side opposite the right angle, opposite represents the side opposite the given angle θ, and adjacent represents the side between the angle θ and the right angle. Various mnemonics can be used to remember these definitions. In a right-angled triangle, the sum of the two acute angles is a right angle, that is, 90° or ⁠π/2⁠ radians. Therefore sin ⁡ ( θ ) {\displaystyle \sin(\theta )} and cos ⁡ ( 90 ∘ − θ ) {\displaystyle \cos(90^{\circ }-\theta )} represent the same ratio, and thus are equal. This identity and analogous relationships between the other trigonometric functions are summarized in the following table. == Radians versus degrees == In geometric applications, the argument of a trigonometric function is generally the measure of an angle. For this purpose, any angular unit is convenient. One common unit is degrees, in which a right angle is 90° and a complete turn is 360° (particularly in elementary mathematics). However, in calculus and mathematical analysis, the trigonometric functions are generally regarded more abstractly as functions of real or complex numbers, rather than angles. In fact, the functions sin and cos can be defined for all complex numbers in terms of the exponential function, via power series, or as solutions to differential equations given particular initial values (see below), without reference to any geometric notions. The other four trigonometric functions (tan, cot, sec, csc) can be defined as quotients and reciprocals of sin and cos, except where zero occurs in the denominator. 
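The six ratio definitions and the cofunction identity from the right-angled triangle section can be verified concretely. A Python sketch, assuming a 3-4-5 right triangle (an arbitrary illustrative choice):

```python
import math

# Hypothetical right triangle with legs 3 and 4 and hypotenuse 5;
# theta is the acute angle opposite the side of length 3.
opposite, adjacent, hypotenuse = 3.0, 4.0, 5.0
theta = math.atan2(opposite, adjacent)

# The six ratio definitions
assert math.isclose(math.sin(theta), opposite / hypotenuse)
assert math.isclose(math.cos(theta), adjacent / hypotenuse)
assert math.isclose(math.tan(theta), opposite / adjacent)
assert math.isclose(1 / math.sin(theta), hypotenuse / opposite)  # cosecant
assert math.isclose(1 / math.cos(theta), hypotenuse / adjacent)  # secant
assert math.isclose(1 / math.tan(theta), adjacent / opposite)    # cotangent

# Cofunction identity: sin(theta) = cos(90 degrees - theta)
assert math.isclose(math.sin(theta), math.cos(math.radians(90) - theta))
```

Note that `math.sin` expects radians, so degree arguments must pass through `math.radians` first, matching the convention discussed above.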
It can be proved, for real arguments, that these definitions coincide with elementary geometric definitions if the argument is regarded as an angle in radians. Moreover, these definitions result in simple expressions for the derivatives and indefinite integrals for the trigonometric functions. Thus, in settings beyond elementary geometry, radians are regarded as the mathematically natural unit for describing angle measures. When radians (rad) are employed, the angle is given as the length of the arc of the unit circle subtended by it: the angle that subtends an arc of length 1 on the unit circle is 1 rad (≈ 57.3°), and a complete turn (360°) is an angle of 2π (≈ 6.28) rad. For real number x, the notation sin x, cos x, etc. refers to the value of the trigonometric functions evaluated at an angle of x rad. If units of degrees are intended, the degree sign must be explicitly shown (sin x°, cos x°, etc.). Using this standard notation, the argument x for the trigonometric functions satisfies the relationship x = (180x/π)°, so that, for example, sin π = sin 180° when we take x = π. In this way, the degree symbol can be regarded as a mathematical constant such that 1° = π/180 ≈ 0.0175. == Unit-circle definitions == The six trigonometric functions can be defined as coordinate values of points on the Euclidean plane that are related to the unit circle, which is the circle of radius one centered at the origin O of this coordinate system. While right-angled triangle definitions allow for the definition of the trigonometric functions for angles between 0 and π 2 {\textstyle {\frac {\pi }{2}}} radians (90°), the unit circle definitions allow the domain of trigonometric functions to be extended to all positive and negative real numbers. Let L {\displaystyle {\mathcal {L}}} be the ray obtained by rotating by an angle θ the positive half of the x-axis (counterclockwise rotation for θ > 0 , {\displaystyle \theta >0,} and clockwise rotation for θ < 0 {\displaystyle \theta <0} ). 
This ray intersects the unit circle at the point A = ( x A , y A ) . {\displaystyle \mathrm {A} =(x_{\mathrm {A} },y_{\mathrm {A} }).} The ray L , {\displaystyle {\mathcal {L}},} extended to a line if necessary, intersects the line of equation x = 1 {\displaystyle x=1} at point B = ( 1 , y B ) , {\displaystyle \mathrm {B} =(1,y_{\mathrm {B} }),} and the line of equation y = 1 {\displaystyle y=1} at point C = ( x C , 1 ) . {\displaystyle \mathrm {C} =(x_{\mathrm {C} },1).} The tangent line to the unit circle at the point A is perpendicular to L , {\displaystyle {\mathcal {L}},} and intersects the y- and x-axes at points D = ( 0 , y D ) {\displaystyle \mathrm {D} =(0,y_{\mathrm {D} })} and E = ( x E , 0 ) . {\displaystyle \mathrm {E} =(x_{\mathrm {E} },0).} The coordinates of these points give the values of all trigonometric functions for any arbitrary real value of θ in the following manner. The trigonometric functions cos and sin are defined, respectively, as the x- and y-coordinate values of point A. That is, cos ⁡ θ = x A {\displaystyle \cos \theta =x_{\mathrm {A} }\quad } and sin ⁡ θ = y A . {\displaystyle \quad \sin \theta =y_{\mathrm {A} }.} In the range 0 ≤ θ ≤ π / 2 {\displaystyle 0\leq \theta \leq \pi /2} , this definition coincides with the right-angled triangle definition, by taking the right-angled triangle to have the unit radius OA as hypotenuse. And since the equation x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} holds for all points P = ( x , y ) {\displaystyle \mathrm {P} =(x,y)} on the unit circle, this definition of cosine and sine also satisfies the Pythagorean identity. cos 2 ⁡ θ + sin 2 ⁡ θ = 1. {\displaystyle \cos ^{2}\theta +\sin ^{2}\theta =1.} The other trigonometric functions can be found along the unit circle as tan ⁡ θ = y B {\displaystyle \tan \theta =y_{\mathrm {B} }\quad } and cot ⁡ θ = x C , {\displaystyle \quad \cot \theta =x_{\mathrm {C} },} csc ⁡ θ = y D {\displaystyle \csc \theta \ =y_{\mathrm {D} }\quad } and sec ⁡ θ = x E . 
{\displaystyle \quad \sec \theta =x_{\mathrm {E} }.} By applying the Pythagorean identity and geometric proof methods, these definitions can readily be shown to coincide with the definitions of tangent, cotangent, secant and cosecant in terms of sine and cosine, that is tan ⁡ θ = sin ⁡ θ cos ⁡ θ , cot ⁡ θ = cos ⁡ θ sin ⁡ θ , sec ⁡ θ = 1 cos ⁡ θ , csc ⁡ θ = 1 sin ⁡ θ . {\displaystyle \tan \theta ={\frac {\sin \theta }{\cos \theta }},\quad \cot \theta ={\frac {\cos \theta }{\sin \theta }},\quad \sec \theta ={\frac {1}{\cos \theta }},\quad \csc \theta ={\frac {1}{\sin \theta }}.} Since a rotation of an angle of ± 2 π {\displaystyle \pm 2\pi } does not change the position or size of a shape, the points A, B, C, D, and E are the same for two angles whose difference is an integer multiple of 2 π {\displaystyle 2\pi } . Thus trigonometric functions are periodic functions with period 2 π {\displaystyle 2\pi } . That is, the equalities sin ⁡ θ = sin ⁡ ( θ + 2 k π ) {\displaystyle \sin \theta =\sin \left(\theta +2k\pi \right)\quad } and cos ⁡ θ = cos ⁡ ( θ + 2 k π ) {\displaystyle \quad \cos \theta =\cos \left(\theta +2k\pi \right)} hold for any angle θ and any integer k. The same is true for the four other trigonometric functions. By observing the sign and the monotonicity of the functions sine, cosine, cosecant, and secant in the four quadrants, one can show that 2 π {\displaystyle 2\pi } is the smallest value for which they are periodic (i.e., 2 π {\displaystyle 2\pi } is the fundamental period of these functions). However, after a rotation by an angle π {\displaystyle \pi } , the points B and C already return to their original position, so that the tangent function and the cotangent function have a fundamental period of π {\displaystyle \pi } . 
That is, the equalities tan ⁡ θ = tan ⁡ ( θ + k π ) {\displaystyle \tan \theta =\tan(\theta +k\pi )\quad } and cot ⁡ θ = cot ⁡ ( θ + k π ) {\displaystyle \quad \cot \theta =\cot(\theta +k\pi )} hold for any angle θ and any integer k. == Algebraic values == The algebraic expressions for the most important angles are as follows: sin ⁡ 0 = sin ⁡ 0 ∘ = 0 2 = 0 {\displaystyle \sin 0=\sin 0^{\circ }\quad ={\frac {\sqrt {0}}{2}}=0} (zero angle) sin ⁡ π 6 = sin ⁡ 30 ∘ = 1 2 = 1 2 {\displaystyle \sin {\frac {\pi }{6}}=\sin 30^{\circ }={\frac {\sqrt {1}}{2}}={\frac {1}{2}}} sin ⁡ π 4 = sin ⁡ 45 ∘ = 2 2 = 1 2 {\displaystyle \sin {\frac {\pi }{4}}=\sin 45^{\circ }={\frac {\sqrt {2}}{2}}={\frac {1}{\sqrt {2}}}} sin ⁡ π 3 = sin ⁡ 60 ∘ = 3 2 {\displaystyle \sin {\frac {\pi }{3}}=\sin 60^{\circ }={\frac {\sqrt {3}}{2}}} sin ⁡ π 2 = sin ⁡ 90 ∘ = 4 2 = 1 {\displaystyle \sin {\frac {\pi }{2}}=\sin 90^{\circ }={\frac {\sqrt {4}}{2}}=1} (right angle) Writing the numerators as square roots of consecutive non-negative integers, with a denominator of 2, provides an easy way to remember the values. Such simple expressions generally do not exist for other angles which are rational multiples of a right angle. For an angle which, measured in degrees, is a multiple of three, the exact trigonometric values of the sine and the cosine may be expressed in terms of square roots. These values of the sine and the cosine may thus be constructed by ruler and compass. For an angle of an integer number of degrees, the sine and the cosine may be expressed in terms of square roots and the cube root of a non-real complex number. Galois theory allows a proof that, if the angle is not a multiple of 3°, non-real cube roots are unavoidable. For an angle which, expressed in degrees, is a rational number, the sine and the cosine are algebraic numbers, which may be expressed in terms of nth roots. This results from the fact that the Galois groups of the cyclotomic polynomials are cyclic. 
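The √n/2 mnemonic for the special angles can be checked directly. A minimal Python sketch:

```python
import math

# sin of 0, 30, 45, 60, 90 degrees equals sqrt(n)/2 for n = 0, 1, 2, 3, 4,
# and the cofunction identity gives the cosines of the same angles in reverse.
for n, deg in enumerate([0, 30, 45, 60, 90]):
    assert math.isclose(math.sin(math.radians(deg)), math.sqrt(n) / 2, abs_tol=1e-15)
    assert math.isclose(math.cos(math.radians(90 - deg)), math.sqrt(n) / 2, abs_tol=1e-15)
```

The `abs_tol` is needed because, for example, `math.cos(math.radians(90))` is a value on the order of 1e-17 rather than exactly zero.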
For an angle which, expressed in degrees, is not a rational number, either the angle or both the sine and the cosine are transcendental numbers. This is a corollary of Baker's theorem, proved in 1966. If the sine of an angle is a rational number then the cosine is not necessarily a rational number, and vice versa. However, if the tangent of an angle is rational then both the sine and cosine of the double angle will be rational. === Simple algebraic values === The following table lists the sines, cosines, and tangents of multiples of 15 degrees from 0 to 90 degrees. == Definitions in analysis == G. H. Hardy noted in his 1908 work A Course of Pure Mathematics that the definition of the trigonometric functions in terms of the unit circle is not satisfactory, because it depends implicitly on a notion of angle that can be measured by a real number. Thus in modern analysis, trigonometric functions are usually constructed without reference to geometry. Various ways exist in the literature for defining the trigonometric functions in a manner suitable for analysis; they include: Using the "geometry" of the unit circle, which requires formulating the arc length of a circle (or area of a sector) analytically. By a power series, which is particularly well-suited to complex variables. By using an infinite product expansion. By inverting the inverse trigonometric functions, which can be defined as integrals of algebraic or rational functions. As solutions of a differential equation. === Definition by differential equations === Sine and cosine can be defined as the unique solution to the initial value problem: d d x sin ⁡ x = cos ⁡ x , d d x cos ⁡ x = − sin ⁡ x , sin ⁡ ( 0 ) = 0 , cos ⁡ ( 0 ) = 1. 
{\displaystyle {\frac {d}{dx}}\sin x=\cos x,\ {\frac {d}{dx}}\cos x=-\sin x,\ \sin(0)=0,\ \cos(0)=1.} Differentiating again, d 2 d x 2 sin ⁡ x = d d x cos ⁡ x = − sin ⁡ x {\textstyle {\frac {d^{2}}{dx^{2}}}\sin x={\frac {d}{dx}}\cos x=-\sin x} and d 2 d x 2 cos ⁡ x = − d d x sin ⁡ x = − cos ⁡ x {\textstyle {\frac {d^{2}}{dx^{2}}}\cos x=-{\frac {d}{dx}}\sin x=-\cos x} , so both sine and cosine are solutions of the same ordinary differential equation y ″ + y = 0 . {\displaystyle y''+y=0\,.} Sine is the unique solution with y(0) = 0 and y′(0) = 1; cosine is the unique solution with y(0) = 1 and y′(0) = 0. One can then prove, as a theorem, that solutions cos , sin {\displaystyle \cos ,\sin } are periodic, having the same period. Writing this period as 2 π {\displaystyle 2\pi } is then a definition of the real number π {\displaystyle \pi } which is independent of geometry. Applying the quotient rule to the tangent tan ⁡ x = sin ⁡ x / cos ⁡ x {\displaystyle \tan x=\sin x/\cos x} , d d x tan ⁡ x = cos 2 ⁡ x + sin 2 ⁡ x cos 2 ⁡ x = 1 + tan 2 ⁡ x , {\displaystyle {\frac {d}{dx}}\tan x={\frac {\cos ^{2}x+\sin ^{2}x}{\cos ^{2}x}}=1+\tan ^{2}x\,,} so the tangent function satisfies the ordinary differential equation y ′ = 1 + y 2 . {\displaystyle y'=1+y^{2}\,.} It is the unique solution with y(0) = 0. === Power series expansion === The basic trigonometric functions can be defined by the following power series expansions. These series are also known as the Taylor series or Maclaurin series of these trigonometric functions: sin ⁡ x = x − x 3 3 ! + x 5 5 ! − x 7 7 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n ( 2 n + 1 ) ! x 2 n + 1 cos ⁡ x = 1 − x 2 2 ! + x 4 4 ! − x 6 6 ! + ⋯ = ∑ n = 0 ∞ ( − 1 ) n ( 2 n ) ! x 2 n . 
{\displaystyle {\begin{aligned}\sin x&=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}+\cdots \\[6mu]&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)!}}x^{2n+1}\\[8pt]\cos x&=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-{\frac {x^{6}}{6!}}+\cdots \\[6mu]&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n)!}}x^{2n}.\end{aligned}}} The radius of convergence of these series is infinite. Therefore, the sine and the cosine can be extended to entire functions (also called "sine" and "cosine"), which are (by definition) complex-valued functions that are defined and holomorphic on the whole complex plane. Term-by-term differentiation shows that the sine and cosine defined by the series obey the differential equation discussed previously, and conversely one can obtain these series from elementary recursion relations derived from the differential equation. Being defined as fractions of entire functions, the other trigonometric functions may be extended to meromorphic functions, that is, functions that are holomorphic in the whole complex plane, except some isolated points called poles. Here, the poles are the numbers of the form ( 2 k + 1 ) π 2 {\textstyle (2k+1){\frac {\pi }{2}}} for the tangent and the secant, or k π {\displaystyle k\pi } for the cotangent and the cosecant, where k is an arbitrary integer. Recurrence relations may also be computed for the coefficients of the Taylor series of the other trigonometric functions. These series have a finite radius of convergence. Their coefficients have a combinatorial interpretation: they enumerate alternating permutations of finite sets. More precisely, defining Un, the nth up/down number, Bn, the nth Bernoulli number, and En, the nth Euler number, one has the following series expansions: tan ⁡ x = ∑ n = 0 ∞ U 2 n + 1 ( 2 n + 1 ) ! x 2 n + 1 = ∑ n = 1 ∞ ( − 1 ) n − 1 2 2 n ( 2 2 n − 1 ) B 2 n ( 2 n ) ! x 2 n − 1 = x + 1 3 x 3 + 2 15 x 5 + 17 315 x 7 + ⋯ , for | x | < π 2 . 
{\displaystyle {\begin{aligned}\tan x&{}=\sum _{n=0}^{\infty }{\frac {U_{2n+1}}{(2n+1)!}}x^{2n+1}\\[8mu]&{}=\sum _{n=1}^{\infty }{\frac {(-1)^{n-1}2^{2n}\left(2^{2n}-1\right)B_{2n}}{(2n)!}}x^{2n-1}\\[5mu]&{}=x+{\frac {1}{3}}x^{3}+{\frac {2}{15}}x^{5}+{\frac {17}{315}}x^{7}+\cdots ,\qquad {\text{for }}|x|<{\frac {\pi }{2}}.\end{aligned}}} csc ⁡ x = ∑ n = 0 ∞ ( − 1 ) n + 1 2 ( 2 2 n − 1 − 1 ) B 2 n ( 2 n ) ! x 2 n − 1 = x − 1 + 1 6 x + 7 360 x 3 + 31 15120 x 5 + ⋯ , for 0 < | x | < π . {\displaystyle {\begin{aligned}\csc x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n+1}2\left(2^{2n-1}-1\right)B_{2n}}{(2n)!}}x^{2n-1}\\[5mu]&=x^{-1}+{\frac {1}{6}}x+{\frac {7}{360}}x^{3}+{\frac {31}{15120}}x^{5}+\cdots ,\qquad {\text{for }}0<|x|<\pi .\end{aligned}}} sec ⁡ x = ∑ n = 0 ∞ U 2 n ( 2 n ) ! x 2 n = ∑ n = 0 ∞ ( − 1 ) n E 2 n ( 2 n ) ! x 2 n = 1 + 1 2 x 2 + 5 24 x 4 + 61 720 x 6 + ⋯ , for | x | < π 2 . {\displaystyle {\begin{aligned}\sec x&=\sum _{n=0}^{\infty }{\frac {U_{2n}}{(2n)!}}x^{2n}=\sum _{n=0}^{\infty }{\frac {(-1)^{n}E_{2n}}{(2n)!}}x^{2n}\\[5mu]&=1+{\frac {1}{2}}x^{2}+{\frac {5}{24}}x^{4}+{\frac {61}{720}}x^{6}+\cdots ,\qquad {\text{for }}|x|<{\frac {\pi }{2}}.\end{aligned}}} cot ⁡ x = ∑ n = 0 ∞ ( − 1 ) n 2 2 n B 2 n ( 2 n ) ! x 2 n − 1 = x − 1 − 1 3 x − 1 45 x 3 − 2 945 x 5 − ⋯ , for 0 < | x | < π . 
{\displaystyle {\begin{aligned}\cot x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}2^{2n}B_{2n}}{(2n)!}}x^{2n-1}\\[5mu]&=x^{-1}-{\frac {1}{3}}x-{\frac {1}{45}}x^{3}-{\frac {2}{945}}x^{5}-\cdots ,\qquad {\text{for }}0<|x|<\pi .\end{aligned}}} === Continued fraction expansion === The following continued fractions are valid in the whole complex plane: sin ⁡ x = x 1 + x 2 2 ⋅ 3 − x 2 + 2 ⋅ 3 x 2 4 ⋅ 5 − x 2 + 4 ⋅ 5 x 2 6 ⋅ 7 − x 2 + ⋱ {\displaystyle \sin x={\cfrac {x}{1+{\cfrac {x^{2}}{2\cdot 3-x^{2}+{\cfrac {2\cdot 3x^{2}}{4\cdot 5-x^{2}+{\cfrac {4\cdot 5x^{2}}{6\cdot 7-x^{2}+\ddots }}}}}}}}} cos ⁡ x = 1 1 + x 2 1 ⋅ 2 − x 2 + 1 ⋅ 2 x 2 3 ⋅ 4 − x 2 + 3 ⋅ 4 x 2 5 ⋅ 6 − x 2 + ⋱ {\displaystyle \cos x={\cfrac {1}{1+{\cfrac {x^{2}}{1\cdot 2-x^{2}+{\cfrac {1\cdot 2x^{2}}{3\cdot 4-x^{2}+{\cfrac {3\cdot 4x^{2}}{5\cdot 6-x^{2}+\ddots }}}}}}}}} tan ⁡ x = x 1 − x 2 3 − x 2 5 − x 2 7 − ⋱ = 1 1 x − 1 3 x − 1 5 x − 1 7 x − ⋱ {\displaystyle \tan x={\cfrac {x}{1-{\cfrac {x^{2}}{3-{\cfrac {x^{2}}{5-{\cfrac {x^{2}}{7-\ddots }}}}}}}}={\cfrac {1}{{\cfrac {1}{x}}-{\cfrac {1}{{\cfrac {3}{x}}-{\cfrac {1}{{\cfrac {5}{x}}-{\cfrac {1}{{\cfrac {7}{x}}-\ddots }}}}}}}}} The last one was used in the historically first proof that π is irrational. === Partial fraction expansion === There is a series representation as partial fraction expansion where just translated reciprocal functions are summed up, such that the poles of the cotangent function and the reciprocal functions match: π cot ⁡ π x = lim N → ∞ ∑ n = − N N 1 x + n . {\displaystyle \pi \cot \pi x=\lim _{N\to \infty }\sum _{n=-N}^{N}{\frac {1}{x+n}}.} This identity can be proved with the Herglotz trick. Combining the (–n)th with the nth term leads to an absolutely convergent series: π cot ⁡ π x = 1 x + 2 x ∑ n = 1 ∞ 1 x 2 − n 2 . 
{\displaystyle \pi \cot \pi x={\frac {1}{x}}+2x\sum _{n=1}^{\infty }{\frac {1}{x^{2}-n^{2}}}.} Similarly, one can find a partial fraction expansion for the secant, cosecant and tangent functions: π csc ⁡ π x = ∑ n = − ∞ ∞ ( − 1 ) n x + n = 1 x + 2 x ∑ n = 1 ∞ ( − 1 ) n x 2 − n 2 , {\displaystyle \pi \csc \pi x=\sum _{n=-\infty }^{\infty }{\frac {(-1)^{n}}{x+n}}={\frac {1}{x}}+2x\sum _{n=1}^{\infty }{\frac {(-1)^{n}}{x^{2}-n^{2}}},} π 2 csc 2 ⁡ π x = ∑ n = − ∞ ∞ 1 ( x + n ) 2 , {\displaystyle \pi ^{2}\csc ^{2}\pi x=\sum _{n=-\infty }^{\infty }{\frac {1}{(x+n)^{2}}},} π sec ⁡ π x = ∑ n = 0 ∞ ( − 1 ) n ( 2 n + 1 ) ( n + 1 2 ) 2 − x 2 , {\displaystyle \pi \sec \pi x=\sum _{n=0}^{\infty }(-1)^{n}{\frac {(2n+1)}{(n+{\tfrac {1}{2}})^{2}-x^{2}}},} π tan ⁡ π x = 2 x ∑ n = 0 ∞ 1 ( n + 1 2 ) 2 − x 2 . {\displaystyle \pi \tan \pi x=2x\sum _{n=0}^{\infty }{\frac {1}{(n+{\tfrac {1}{2}})^{2}-x^{2}}}.} === Infinite product expansion === The following infinite product for the sine is due to Leonhard Euler, and is of great importance in complex analysis: sin ⁡ z = z ∏ n = 1 ∞ ( 1 − z 2 n 2 π 2 ) , z ∈ C . {\displaystyle \sin z=z\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{n^{2}\pi ^{2}}}\right),\quad z\in \mathbb {C} .} This may be obtained from the partial fraction decomposition of cot ⁡ z {\displaystyle \cot z} given above, which is the logarithmic derivative of sin ⁡ z {\displaystyle \sin z} . From this, it can be deduced also that cos ⁡ z = ∏ n = 1 ∞ ( 1 − z 2 ( n − 1 / 2 ) 2 π 2 ) , z ∈ C . {\displaystyle \cos z=\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{(n-1/2)^{2}\pi ^{2}}}\right),\quad z\in \mathbb {C} .} === Euler's formula and the exponential function === Euler's formula relates sine and cosine to the exponential function: e i x = cos ⁡ x + i sin ⁡ x . {\displaystyle e^{ix}=\cos x+i\sin x.} This formula is commonly considered for real values of x, but it remains true for all complex values. 
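Euler's formula can be verified numerically with Python's standard `cmath` module, both for a real and for a complex argument:

```python
import cmath
import math

x = 1.2
# Euler's formula for a real argument: e^{ix} = cos x + i sin x
assert cmath.isclose(cmath.exp(1j * x), complex(math.cos(x), math.sin(x)))

# The formula remains true for complex arguments.
z = 0.5 + 2.0j
assert cmath.isclose(cmath.exp(1j * z), cmath.cos(z) + 1j * cmath.sin(z))
```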
Proof: Let f 1 ( x ) = cos ⁡ x + i sin ⁡ x , {\displaystyle f_{1}(x)=\cos x+i\sin x,} and f 2 ( x ) = e i x . {\displaystyle f_{2}(x)=e^{ix}.} One has d f j ( x ) / d x = i f j ( x ) {\displaystyle df_{j}(x)/dx=if_{j}(x)} for j = 1, 2. The quotient rule implies thus that d / d x ( f 1 ( x ) / f 2 ( x ) ) = 0 {\displaystyle d/dx\,(f_{1}(x)/f_{2}(x))=0} . Therefore, f 1 ( x ) / f 2 ( x ) {\displaystyle f_{1}(x)/f_{2}(x)} is a constant function, which equals 1, as f 1 ( 0 ) = f 2 ( 0 ) = 1. {\displaystyle f_{1}(0)=f_{2}(0)=1.} This proves the formula. One has e i x = cos ⁡ x + i sin ⁡ x e − i x = cos ⁡ x − i sin ⁡ x . {\displaystyle {\begin{aligned}e^{ix}&=\cos x+i\sin x\\[5pt]e^{-ix}&=\cos x-i\sin x.\end{aligned}}} Solving this linear system in sine and cosine, one can express them in terms of the exponential function: sin ⁡ x = e i x − e − i x 2 i cos ⁡ x = e i x + e − i x 2 . {\displaystyle {\begin{aligned}\sin x&={\frac {e^{ix}-e^{-ix}}{2i}}\\[5pt]\cos x&={\frac {e^{ix}+e^{-ix}}{2}}.\end{aligned}}} When x is real, this may be rewritten as cos ⁡ x = Re ⁡ ( e i x ) , sin ⁡ x = Im ⁡ ( e i x ) . {\displaystyle \cos x=\operatorname {Re} \left(e^{ix}\right),\qquad \sin x=\operatorname {Im} \left(e^{ix}\right).} Most trigonometric identities can be proved by expressing trigonometric functions in terms of the complex exponential function by using above formulas, and then using the identity e a + b = e a e b {\displaystyle e^{a+b}=e^{a}e^{b}} for simplifying the result. Euler's formula can also be used to define the basic trigonometric function directly, as follows, using the language of topological groups. The set U {\displaystyle U} of complex numbers of unit modulus is a compact and connected topological group, which has a neighborhood of the identity that is homeomorphic to the real line. Therefore, it is isomorphic as a topological group to the one-dimensional torus group R / Z {\displaystyle \mathbb {R} /\mathbb {Z} } , via an isomorphism e : R / Z → U . 
{\displaystyle e:\mathbb {R} /\mathbb {Z} \to U.} In pedestrian terms e ( t ) = exp ⁡ ( 2 π i t ) {\displaystyle e(t)=\exp(2\pi it)} , and this isomorphism is unique up to taking complex conjugates. For a nonzero real number a {\displaystyle a} (the base), the function t ↦ e ( t / a ) {\displaystyle t\mapsto e(t/a)} defines an isomorphism of the group R / a Z → U {\displaystyle \mathbb {R} /a\mathbb {Z} \to U} . The real and imaginary parts of e ( t / a ) {\displaystyle e(t/a)} are the cosine and sine, where a {\displaystyle a} is used as the base for measuring angles. For example, when a = 2 π {\displaystyle a=2\pi } , we get the measure in radians, and the usual trigonometric functions. When a = 360 {\displaystyle a=360} , we get the sine and cosine of angles measured in degrees. Note that a = 2 π {\displaystyle a=2\pi } is the unique value at which the derivative d d t e ( t / a ) {\displaystyle {\frac {d}{dt}}e(t/a)} becomes a unit vector with positive imaginary part at t = 0 {\displaystyle t=0} . This fact can, in turn, be used to define the constant 2 π {\displaystyle 2\pi } . === Definition via integration === Another way to define the trigonometric functions in analysis is using integration. For a real number t {\displaystyle t} , put θ ( t ) = ∫ 0 t d τ 1 + τ 2 = arctan ⁡ t {\displaystyle \theta (t)=\int _{0}^{t}{\frac {d\tau }{1+\tau ^{2}}}=\arctan t} where this integral defines the inverse tangent function. Also, π {\displaystyle \pi } is defined by 1 2 π = ∫ 0 ∞ d τ 1 + τ 2 {\displaystyle {\frac {1}{2}}\pi =\int _{0}^{\infty }{\frac {d\tau }{1+\tau ^{2}}}} , a definition that goes back to Karl Weierstrass. On the interval − π / 2 < θ < π / 2 {\displaystyle -\pi /2<\theta <\pi /2} , the trigonometric functions are defined by inverting the relation θ = arctan ⁡ t {\displaystyle \theta =\arctan t} . 
Thus we define the trigonometric functions by tan ⁡ θ = t , cos ⁡ θ = ( 1 + t 2 ) − 1 / 2 , sin ⁡ θ = t ( 1 + t 2 ) − 1 / 2 {\displaystyle \tan \theta =t,\quad \cos \theta =(1+t^{2})^{-1/2},\quad \sin \theta =t(1+t^{2})^{-1/2}} where the point ( t , θ ) {\displaystyle (t,\theta )} is on the graph of θ = arctan ⁡ t {\displaystyle \theta =\arctan t} and the positive square root is taken. This defines the trigonometric functions on ( − π / 2 , π / 2 ) {\displaystyle (-\pi /2,\pi /2)} . The definition can be extended to all real numbers by first observing that, as θ → π / 2 {\displaystyle \theta \to \pi /2} , t → ∞ {\displaystyle t\to \infty } , and so cos ⁡ θ = ( 1 + t 2 ) − 1 / 2 → 0 {\displaystyle \cos \theta =(1+t^{2})^{-1/2}\to 0} and sin ⁡ θ = t ( 1 + t 2 ) − 1 / 2 → 1 {\displaystyle \sin \theta =t(1+t^{2})^{-1/2}\to 1} . Thus cos ⁡ θ {\displaystyle \cos \theta } and sin ⁡ θ {\displaystyle \sin \theta } are extended continuously so that cos ⁡ ( π / 2 ) = 0 , sin ⁡ ( π / 2 ) = 1 {\displaystyle \cos(\pi /2)=0,\sin(\pi /2)=1} . Now the conditions cos ⁡ ( θ + π ) = − cos ⁡ ( θ ) {\displaystyle \cos(\theta +\pi )=-\cos(\theta )} and sin ⁡ ( θ + π ) = − sin ⁡ ( θ ) {\displaystyle \sin(\theta +\pi )=-\sin(\theta )} define the sine and cosine as periodic functions with period 2 π {\displaystyle 2\pi } , for all real numbers. Proving the basic properties of sine and cosine, including the fact that sine and cosine are analytic, one may first establish the addition formulae. 
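The integration-based definition above can be checked numerically: integrate 1/(1 + τ²) to obtain θ, then compare tan θ, cos θ, and sin θ against the stated formulas. A Python sketch using a simple midpoint-rule quadrature (the helper `theta_of_t` and the step count are illustrative choices, not part of the definition):

```python
import math

def theta_of_t(t: float, steps: int = 100_000) -> float:
    """Midpoint-rule approximation of the integral of 1/(1 + tau^2) from 0 to t,
    which the text identifies with arctan t."""
    h = t / steps
    return sum(h / (1 + ((k + 0.5) * h) ** 2) for k in range(steps))

t = 1.7
theta = theta_of_t(t)
assert math.isclose(theta, math.atan(t), rel_tol=1e-6)

# Inverting theta = arctan t gives the trigonometric functions on (-pi/2, pi/2):
assert math.isclose(math.tan(theta), t, rel_tol=1e-5)
assert math.isclose(math.cos(theta), (1 + t * t) ** -0.5, rel_tol=1e-6)
assert math.isclose(math.sin(theta), t * (1 + t * t) ** -0.5, rel_tol=1e-6)
```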
First, arctan ⁡ s + arctan ⁡ t = arctan ⁡ s + t 1 − s t {\displaystyle \arctan s+\arctan t=\arctan {\frac {s+t}{1-st}}} holds, provided arctan ⁡ s + arctan ⁡ t ∈ ( − π / 2 , π / 2 ) {\displaystyle \arctan s+\arctan t\in (-\pi /2,\pi /2)} , since arctan ⁡ s + arctan ⁡ t = ∫ − s t d τ 1 + τ 2 = ∫ 0 s + t 1 − s t d τ 1 + τ 2 {\displaystyle \arctan s+\arctan t=\int _{-s}^{t}{\frac {d\tau }{1+\tau ^{2}}}=\int _{0}^{\frac {s+t}{1-st}}{\frac {d\tau }{1+\tau ^{2}}}} after the substitution τ → s + τ 1 − s τ {\displaystyle \tau \to {\frac {s+\tau }{1-s\tau }}} . In particular, the limiting case as s → ∞ {\displaystyle s\to \infty } gives arctan ⁡ t + π 2 = arctan ⁡ ( − 1 / t ) , t ∈ ( − ∞ , 0 ) . {\displaystyle \arctan t+{\frac {\pi }{2}}=\arctan(-1/t),\quad t\in (-\infty ,0).} Thus we have sin ⁡ ( θ + π 2 ) = − 1 t 1 + ( − 1 / t ) 2 = − 1 1 + t 2 = − cos ⁡ ( θ ) {\displaystyle \sin \left(\theta +{\frac {\pi }{2}}\right)={\frac {-1}{t{\sqrt {1+(-1/t)^{2}}}}}={\frac {-1}{\sqrt {1+t^{2}}}}=-\cos(\theta )} and cos ⁡ ( θ + π 2 ) = 1 1 + ( − 1 / t ) 2 = t 1 + t 2 = sin ⁡ ( θ ) . {\displaystyle \cos \left(\theta +{\frac {\pi }{2}}\right)={\frac {1}{\sqrt {1+(-1/t)^{2}}}}={\frac {t}{\sqrt {1+t^{2}}}}=\sin(\theta ).} So the sine and cosine functions are related by translation over a quarter period π / 2 {\displaystyle \pi /2} . === Definitions using functional equations === One can also define the trigonometric functions using various functional equations. For example, the sine and the cosine form the unique pair of continuous functions that satisfy the difference formula cos ⁡ ( x − y ) = cos ⁡ x cos ⁡ y + sin ⁡ x sin ⁡ y {\displaystyle \cos(x-y)=\cos x\cos y+\sin x\sin y\,} and the added condition 0 < x cos ⁡ x < sin ⁡ x < x for 0 < x < 1. 
{\displaystyle 0<x\cos x<\sin x<x\quad {\text{ for }}\quad 0<x<1.} === In the complex plane === The sine and cosine of a complex number z = x + i y {\displaystyle z=x+iy} can be expressed in terms of real sines, cosines, and hyperbolic functions as follows: sin ⁡ z = sin ⁡ x cosh ⁡ y + i cos ⁡ x sinh ⁡ y cos ⁡ z = cos ⁡ x cosh ⁡ y − i sin ⁡ x sinh ⁡ y {\displaystyle {\begin{aligned}\sin z&=\sin x\cosh y+i\cos x\sinh y\\[5pt]\cos z&=\cos x\cosh y-i\sin x\sinh y\end{aligned}}} By taking advantage of domain coloring, it is possible to graph the trigonometric functions as complex-valued functions. Various features unique to the complex functions can be seen from the graph; for example, the sine and cosine functions can be seen to be unbounded as the imaginary part of z {\displaystyle z} becomes larger (since the color white represents infinity), and the fact that the functions contain simple zeros or poles is apparent from the fact that the hue cycles around each zero or pole exactly once. Comparing these graphs with those of the corresponding Hyperbolic functions highlights the relationships between the two. == Periodicity and asymptotes == The sine and cosine functions are periodic, with period 2 π {\displaystyle 2\pi } , which is the smallest positive period: sin ⁡ ( z + 2 π ) = sin ⁡ ( z ) , cos ⁡ ( z + 2 π ) = cos ⁡ ( z ) . {\displaystyle \sin(z+2\pi )=\sin(z),\quad \cos(z+2\pi )=\cos(z).} Consequently, the cosecant and secant also have 2 π {\displaystyle 2\pi } as their period. The functions sine and cosine also have semiperiods π {\displaystyle \pi } , and sin ⁡ ( z + π ) = − sin ⁡ ( z ) , cos ⁡ ( z + π ) = − cos ⁡ ( z ) {\displaystyle \sin(z+\pi )=-\sin(z),\quad \cos(z+\pi )=-\cos(z)} and consequently tan ⁡ ( z + π ) = tan ⁡ ( z ) , cot ⁡ ( z + π ) = cot ⁡ ( z ) . 
{\displaystyle \tan(z+\pi )=\tan(z),\quad \cot(z+\pi )=\cot(z).} Also, sin ⁡ ( x + π / 2 ) = cos ⁡ ( x ) , cos ⁡ ( x + π / 2 ) = − sin ⁡ ( x ) {\displaystyle \sin(x+\pi /2)=\cos(x),\quad \cos(x+\pi /2)=-\sin(x)} (see Complementary angles). The function sin ⁡ ( z ) {\displaystyle \sin(z)} has a unique zero (at z = 0 {\displaystyle z=0} ) in the strip − π < ℜ ( z ) < π {\displaystyle -\pi <\Re (z)<\pi } . The function cos ⁡ ( z ) {\displaystyle \cos(z)} has the pair of zeros z = ± π / 2 {\displaystyle z=\pm \pi /2} in the same strip. Because of the periodicity, the zeros of sine are π Z = { … , − 2 π , − π , 0 , π , 2 π , … } ⊂ C . {\displaystyle \pi \mathbb {Z} =\left\{\dots ,-2\pi ,-\pi ,0,\pi ,2\pi ,\dots \right\}\subset \mathbb {C} .} The zeros of cosine are π 2 + π Z = { … , − 3 π 2 , − π 2 , π 2 , 3 π 2 , … } ⊂ C . {\displaystyle {\frac {\pi }{2}}+\pi \mathbb {Z} =\left\{\dots ,-{\frac {3\pi }{2}},-{\frac {\pi }{2}},{\frac {\pi }{2}},{\frac {3\pi }{2}},\dots \right\}\subset \mathbb {C} .} All of the zeros are simple zeros, and both functions have derivative ± 1 {\displaystyle \pm 1} at each of the zeros. The tangent function tan ⁡ ( z ) = sin ⁡ ( z ) / cos ⁡ ( z ) {\displaystyle \tan(z)=\sin(z)/\cos(z)} has a simple zero at z = 0 {\displaystyle z=0} and vertical asymptotes at z = ± π / 2 {\displaystyle z=\pm \pi /2} , where it has a simple pole of residue − 1 {\displaystyle -1} . Again, owing to the periodicity, the zeros are all the integer multiples of π {\displaystyle \pi } and the poles are odd multiples of π / 2 {\displaystyle \pi /2} , all having the same residue. The poles correspond to vertical asymptotes lim x → ( π / 2 ) − tan ⁡ ( x ) = + ∞ , lim x → ( π / 2 ) + tan ⁡ ( x ) = − ∞ . 
{\displaystyle \lim _{x\to (\pi /2)^{-}}\tan(x)=+\infty ,\quad \lim _{x\to (\pi /2)^{+}}\tan(x)=-\infty .} The cotangent function cot ⁡ ( z ) = cos ⁡ ( z ) / sin ⁡ ( z ) {\displaystyle \cot(z)=\cos(z)/\sin(z)} has a simple pole of residue 1 at the integer multiples of π {\displaystyle \pi } and simple zeros at odd multiples of π / 2 {\displaystyle \pi /2} . The poles correspond to vertical asymptotes lim x → 0 − cot ⁡ ( x ) = − ∞ , lim x → 0 + cot ⁡ ( x ) = + ∞ . {\displaystyle \lim _{x\to 0^{-}}\cot(x)=-\infty ,\quad \lim _{x\to 0^{+}}\cot(x)=+\infty .} == Basic identities == Many identities interrelate the trigonometric functions. This section contains the most basic ones; for more identities, see List of trigonometric identities. These identities may be proved geometrically from the unit-circle definitions or the right-angled-triangle definitions (although, for the latter definitions, care must be taken for angles that are not in the interval [0, π/2], see Proofs of trigonometric identities). For non-geometrical proofs using only tools of calculus, one may use directly the differential equations, in a way that is similar to that of the above proof of Euler's identity. One can also use Euler's identity for expressing all trigonometric functions in terms of complex exponentials and using properties of the exponential function. === Parity === The cosine and the secant are even functions; the other trigonometric functions are odd functions. That is: sin ⁡ ( − x ) = − sin ⁡ x cos ⁡ ( − x ) = cos ⁡ x tan ⁡ ( − x ) = − tan ⁡ x cot ⁡ ( − x ) = − cot ⁡ x csc ⁡ ( − x ) = − csc ⁡ x sec ⁡ ( − x ) = sec ⁡ x . {\displaystyle {\begin{aligned}\sin(-x)&=-\sin x\\\cos(-x)&=\cos x\\\tan(-x)&=-\tan x\\\cot(-x)&=-\cot x\\\csc(-x)&=-\csc x\\\sec(-x)&=\sec x.\end{aligned}}} === Periods === All trigonometric functions are periodic functions of period 2π. This is the smallest period, except for the tangent and the cotangent, which have π as smallest period. 
This means that, for every integer k, one has sin ⁡ ( x + 2 k π ) = sin ⁡ x cos ⁡ ( x + 2 k π ) = cos ⁡ x tan ⁡ ( x + k π ) = tan ⁡ x cot ⁡ ( x + k π ) = cot ⁡ x csc ⁡ ( x + 2 k π ) = csc ⁡ x sec ⁡ ( x + 2 k π ) = sec ⁡ x . {\displaystyle {\begin{array}{lrl}\sin(x+&2k\pi )&=\sin x\\\cos(x+&2k\pi )&=\cos x\\\tan(x+&k\pi )&=\tan x\\\cot(x+&k\pi )&=\cot x\\\csc(x+&2k\pi )&=\csc x\\\sec(x+&2k\pi )&=\sec x.\end{array}}} See Periodicity and asymptotes. === Pythagorean identity === The Pythagorean identity is the expression of the Pythagorean theorem in terms of trigonometric functions. It is sin 2 ⁡ x + cos 2 ⁡ x = 1 {\displaystyle \sin ^{2}x+\cos ^{2}x=1} . Dividing through by either cos 2 ⁡ x {\displaystyle \cos ^{2}x} or sin 2 ⁡ x {\displaystyle \sin ^{2}x} gives tan 2 ⁡ x + 1 = sec 2 ⁡ x {\displaystyle \tan ^{2}x+1=\sec ^{2}x} 1 + cot 2 ⁡ x = csc 2 ⁡ x {\displaystyle 1+\cot ^{2}x=\csc ^{2}x} and sec 2 ⁡ x + csc 2 ⁡ x = sec 2 ⁡ x csc 2 ⁡ x {\displaystyle \sec ^{2}x+\csc ^{2}x=\sec ^{2}x\csc ^{2}x} . === Sum and difference formulas === The sum and difference formulas allow expanding the sine, the cosine, and the tangent of a sum or a difference of two angles in terms of sines and cosines and tangents of the angles themselves. These can be derived geometrically, using arguments that date to Ptolemy (see Angle sum and difference identities). One can also produce them algebraically using Euler's formula. Sum sin ⁡ ( x + y ) = sin ⁡ x cos ⁡ y + cos ⁡ x sin ⁡ y , cos ⁡ ( x + y ) = cos ⁡ x cos ⁡ y − sin ⁡ x sin ⁡ y , tan ⁡ ( x + y ) = tan ⁡ x + tan ⁡ y 1 − tan ⁡ x tan ⁡ y . {\displaystyle {\begin{aligned}\sin \left(x+y\right)&=\sin x\cos y+\cos x\sin y,\\[5mu]\cos \left(x+y\right)&=\cos x\cos y-\sin x\sin y,\\[5mu]\tan(x+y)&={\frac {\tan x+\tan y}{1-\tan x\tan y}}.\end{aligned}}} Difference sin ⁡ ( x − y ) = sin ⁡ x cos ⁡ y − cos ⁡ x sin ⁡ y , cos ⁡ ( x − y ) = cos ⁡ x cos ⁡ y + sin ⁡ x sin ⁡ y , tan ⁡ ( x − y ) = tan ⁡ x − tan ⁡ y 1 + tan ⁡ x tan ⁡ y . 
{\displaystyle {\begin{aligned}\sin \left(x-y\right)&=\sin x\cos y-\cos x\sin y,\\[5mu]\cos \left(x-y\right)&=\cos x\cos y+\sin x\sin y,\\[5mu]\tan(x-y)&={\frac {\tan x-\tan y}{1+\tan x\tan y}}.\end{aligned}}} When the two angles are equal, the sum formulas reduce to simpler equations known as the double-angle formulae. sin ⁡ 2 x = 2 sin ⁡ x cos ⁡ x = 2 tan ⁡ x 1 + tan 2 ⁡ x , cos ⁡ 2 x = cos 2 ⁡ x − sin 2 ⁡ x = 2 cos 2 ⁡ x − 1 = 1 − 2 sin 2 ⁡ x = 1 − tan 2 ⁡ x 1 + tan 2 ⁡ x , tan ⁡ 2 x = 2 tan ⁡ x 1 − tan 2 ⁡ x . {\displaystyle {\begin{aligned}\sin 2x&=2\sin x\cos x={\frac {2\tan x}{1+\tan ^{2}x}},\\[5mu]\cos 2x&=\cos ^{2}x-\sin ^{2}x=2\cos ^{2}x-1=1-2\sin ^{2}x={\frac {1-\tan ^{2}x}{1+\tan ^{2}x}},\\[5mu]\tan 2x&={\frac {2\tan x}{1-\tan ^{2}x}}.\end{aligned}}} These identities can be used to derive the product-to-sum identities. By setting t = tan ⁡ 1 2 θ , {\displaystyle t=\tan {\tfrac {1}{2}}\theta ,} all trigonometric functions of θ {\displaystyle \theta } can be expressed as rational fractions of t {\displaystyle t} : sin ⁡ θ = 2 t 1 + t 2 , cos ⁡ θ = 1 − t 2 1 + t 2 , tan ⁡ θ = 2 t 1 − t 2 . {\displaystyle {\begin{aligned}\sin \theta &={\frac {2t}{1+t^{2}}},\\[5mu]\cos \theta &={\frac {1-t^{2}}{1+t^{2}}},\\[5mu]\tan \theta &={\frac {2t}{1-t^{2}}}.\end{aligned}}} Together with d θ = 2 1 + t 2 d t , {\displaystyle d\theta ={\frac {2}{1+t^{2}}}\,dt,} this is the tangent half-angle substitution, which reduces the computation of integrals and antiderivatives of trigonometric functions to that of rational fractions. === Derivatives and antiderivatives === The derivatives of trigonometric functions result from those of sine and cosine by applying the quotient rule. The values given for the antiderivatives in the following table can be verified by differentiating them. The number C is a constant of integration. 
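The rational parametrization by t = tan(θ/2) above is easy to spot-check numerically. The following sketch (plain Python, standard library only; the helper name is ours) compares the three rational fractions with the library trigonometric functions:

```python
import math

def half_angle_ratios(theta):
    """Express sin, cos, tan of theta as rational fractions of t = tan(theta/2)."""
    t = math.tan(theta / 2)
    return (2 * t / (1 + t**2),        # sin(theta)
            (1 - t**2) / (1 + t**2),   # cos(theta)
            2 * t / (1 - t**2))        # tan(theta)

theta = 0.7  # any angle avoiding the poles of tan and of t = tan(theta/2)
s, c, tn = half_angle_ratios(theta)
assert math.isclose(s, math.sin(theta))
assert math.isclose(c, math.cos(theta))
assert math.isclose(tn, math.tan(theta))
```

This is exactly the change of variables used in the tangent half-angle substitution: an integrand built from sin θ and cos θ becomes a rational function of t.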
Note: For 0 < x < π {\displaystyle 0<x<\pi } the integral of csc ⁡ x {\displaystyle \csc x} can also be written as − arsinh ⁡ ( cot ⁡ x ) , {\displaystyle -\operatorname {arsinh} (\cot x),} and for the integral of sec ⁡ x {\displaystyle \sec x} for − π / 2 < x < π / 2 {\displaystyle -\pi /2<x<\pi /2} as arsinh ⁡ ( tan ⁡ x ) , {\displaystyle \operatorname {arsinh} (\tan x),} where arsinh {\displaystyle \operatorname {arsinh} } is the inverse hyperbolic sine. Alternatively, the derivatives of the 'co-functions' can be obtained using trigonometric identities and the chain rule: d cos ⁡ x d x = d d x sin ⁡ ( π / 2 − x ) = − cos ⁡ ( π / 2 − x ) = − sin ⁡ x , d csc ⁡ x d x = d d x sec ⁡ ( π / 2 − x ) = − sec ⁡ ( π / 2 − x ) tan ⁡ ( π / 2 − x ) = − csc ⁡ x cot ⁡ x , d cot ⁡ x d x = d d x tan ⁡ ( π / 2 − x ) = − sec 2 ⁡ ( π / 2 − x ) = − csc 2 ⁡ x . {\displaystyle {\begin{aligned}{\frac {d\cos x}{dx}}&={\frac {d}{dx}}\sin(\pi /2-x)=-\cos(\pi /2-x)=-\sin x\,,\\{\frac {d\csc x}{dx}}&={\frac {d}{dx}}\sec(\pi /2-x)=-\sec(\pi /2-x)\tan(\pi /2-x)=-\csc x\cot x\,,\\{\frac {d\cot x}{dx}}&={\frac {d}{dx}}\tan(\pi /2-x)=-\sec ^{2}(\pi /2-x)=-\csc ^{2}x\,.\end{aligned}}} == Inverse functions == The trigonometric functions are periodic, and hence not injective, so strictly speaking, they do not have an inverse function. However, on each interval on which a trigonometric function is monotonic, one can define an inverse function, and this defines inverse trigonometric functions as multivalued functions. To define a true inverse function, one must restrict the domain to an interval where the function is monotonic, and is thus bijective from this interval to its image by the function. The common choice for this interval, called the set of principal values, is given in the following table. As usual, the inverse trigonometric functions are denoted with the prefix "arc" before the name or its abbreviation of the function. The notations sin−1, cos−1, etc. 
are often used for arcsin and arccos, etc. When this notation is used, inverse functions could be confused with multiplicative inverses. The notation with the "arc" prefix avoids such a confusion, though "arcsec" for arcsecant can be confused with "arcsecond". Just like the sine and cosine, the inverse trigonometric functions can also be expressed in terms of infinite series. They can also be expressed in terms of complex logarithms. == Applications == === Angles and sides of a triangle === In this section A, B, C denote the three (interior) angles of a triangle, and a, b, c denote the lengths of the respective opposite edges. They are related by various formulas, which are named by the trigonometric functions they involve. ==== Law of sines ==== The law of sines states that for an arbitrary triangle with sides a, b, and c and angles opposite those sides A, B and C: sin ⁡ A a = sin ⁡ B b = sin ⁡ C c = 2 Δ a b c , {\displaystyle {\frac {\sin A}{a}}={\frac {\sin B}{b}}={\frac {\sin C}{c}}={\frac {2\Delta }{abc}},} where Δ is the area of the triangle, or, equivalently, a sin ⁡ A = b sin ⁡ B = c sin ⁡ C = 2 R , {\displaystyle {\frac {a}{\sin A}}={\frac {b}{\sin B}}={\frac {c}{\sin C}}=2R,} where R is the triangle's circumradius. It can be proved by dividing the triangle into two right ones and using the above definition of sine. The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. This is a common situation occurring in triangulation, a technique to determine unknown distances by measuring two angles and an accessible enclosed distance. ==== Law of cosines ==== The law of cosines (also known as the cosine formula or cosine rule) is an extension of the Pythagorean theorem: c 2 = a 2 + b 2 − 2 a b cos ⁡ C , {\displaystyle c^{2}=a^{2}+b^{2}-2ab\cos C,} or equivalently, cos ⁡ C = a 2 + b 2 − c 2 2 a b . 
{\displaystyle \cos C={\frac {a^{2}+b^{2}-c^{2}}{2ab}}.} In this formula the angle at C is opposite to the side c. This theorem can be proved by dividing the triangle into two right ones and using the Pythagorean theorem. The law of cosines can be used to determine a side of a triangle if two sides and the angle between them are known. It can also be used to find the cosine of each angle (and consequently the angles themselves) if the lengths of all the sides are known. ==== Law of tangents ==== The law of tangents says that: tan ⁡ A − B 2 tan ⁡ A + B 2 = a − b a + b {\displaystyle {\frac {\tan {\frac {A-B}{2}}}{\tan {\frac {A+B}{2}}}}={\frac {a-b}{a+b}}} . ==== Law of cotangents ==== If s is the triangle's semiperimeter, (a + b + c)/2, and r is the radius of the triangle's incircle, then rs is the triangle's area. Therefore, Heron's formula implies that: r = 1 s ( s − a ) ( s − b ) ( s − c ) {\displaystyle r={\sqrt {{\frac {1}{s}}(s-a)(s-b)(s-c)}}} . The law of cotangents says that: cot ⁡ A 2 = s − a r {\displaystyle \cot {\frac {A}{2}}={\frac {s-a}{r}}} It follows that cot ⁡ A 2 s − a = cot ⁡ B 2 s − b = cot ⁡ C 2 s − c = 1 r . {\displaystyle {\frac {\cot {\dfrac {A}{2}}}{s-a}}={\frac {\cot {\dfrac {B}{2}}}{s-b}}={\frac {\cot {\dfrac {C}{2}}}{s-c}}={\frac {1}{r}}.} === Periodic functions === The trigonometric functions are also important in physics. The sine and the cosine functions, for example, are used to describe simple harmonic motion, which models many natural phenomena, such as the movement of a mass attached to a spring and, for small angles, the pendular motion of a mass hanging by a string. The sine and cosine functions are one-dimensional projections of uniform circular motion. Trigonometric functions also prove to be useful in the study of general periodic functions. The characteristic wave patterns of periodic functions are useful for modeling recurring phenomena such as sound or light waves. 
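To make the connection to simple harmonic motion concrete (a sketch with our own choice of amplitude and angular frequency, not values from the text): the one-dimensional projection x(t) = A cos(ωt) of uniform circular motion satisfies the simple-harmonic equation x″ = −ω²x, which a central finite difference confirms numerically.

```python
import math

A, omega = 2.0, 3.0  # assumed amplitude and angular frequency

def x(t):
    """Horizontal projection of uniform circular motion: simple harmonic motion."""
    return A * math.cos(omega * t)

# Central finite-difference estimate of the second derivative x''(t).
t, h = 0.4, 1e-4
x_second = (x(t - h) - 2 * x(t) + x(t + h)) / h**2

# Simple harmonic motion: acceleration is proportional to minus the displacement.
assert math.isclose(x_second, -omega**2 * x(t), rel_tol=1e-5)
```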
Under rather general conditions, a periodic function f (x) can be expressed as a sum of sine waves or cosine waves in a Fourier series. Denoting the sine or cosine basis functions by φk, the expansion of the periodic function f (t) takes the form: f ( t ) = ∑ k = 1 ∞ c k φ k ( t ) . {\displaystyle f(t)=\sum _{k=1}^{\infty }c_{k}\varphi _{k}(t).} For example, the square wave can be written as the Fourier series f square ( t ) = 4 π ∑ k = 1 ∞ sin ⁡ ( ( 2 k − 1 ) t ) 2 k − 1 . {\displaystyle f_{\text{square}}(t)={\frac {4}{\pi }}\sum _{k=1}^{\infty }{\sin {\big (}(2k-1)t{\big )} \over 2k-1}.} In the animation of a square wave at top right, it can be seen that just a few terms already produce a fairly good approximation. The superposition of several terms in the expansion of a sawtooth wave is shown underneath. == History == While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was defined by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE). The functions of sine and versine (1 – cosine) are closely related to the jyā and koti-jyā functions used in Gupta period Indian astronomy (Aryabhatiya, Surya Siddhanta), via translation from Sanskrit to Arabic and then from Arabic to Latin. (See Aryabhata's sine table.) All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines, used in solving triangles. Al-Khwārizmī (c. 780–850) produced tables of sines and cosines. Circa 860, Habash al-Hasib al-Marwazi defined the tangent and the cotangent, and produced their tables. Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) defined the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°. 
The trigonometric functions were later studied by mathematicians including Omar Khayyám, Bhāskara II, Nasir al-Din al-Tusi, Jamshīd al-Kāshī (14th century), Ulugh Beg (14th century), Regiomontanus (1464), Rheticus, and Rheticus' student Valentinus Otho. Madhava of Sangamagrama (c. 1400) made early strides in the analysis of trigonometric functions in terms of infinite series. (See Madhava series and Madhava's sine table.) The tangent function was brought to Europe by Giovanni Bianchini in 1467 in trigonometry tables he created to support the calculation of stellar coordinates. The terms tangent and secant were first introduced by the Danish mathematician Thomas Fincke in his book Geometria rotundi (1583). The 17th-century French mathematician Albert Girard made the first published use of the abbreviations sin, cos, and tan in his book Trigonométrie. In a paper published in 1682, Gottfried Leibniz proved that sin x is not an algebraic function of x. Though the trigonometric functions are defined as ratios of sides of a right triangle, and thus appear to be rational functions, Leibniz's result established that they are actually transcendental functions of their argument. The task of assimilating circular functions into algebraic expressions was accomplished by Euler in his Introduction to the Analysis of the Infinite (1748). His method was to show that the sine and cosine functions are alternating series formed from the even and odd terms respectively of the exponential series. He presented "Euler's formula", as well as near-modern abbreviations (sin., cos., tang., cot., sec., and cosec.). A few functions were common historically, but are now seldom used, such as the chord, versine (which appeared in the earliest tables), haversine, coversine, half-tangent (tangent of half an angle), and exsecant. List of trigonometric identities shows more relations between these functions. 
crd ⁡ θ = 2 sin ⁡ 1 2 θ , vers ⁡ θ = 1 − cos ⁡ θ = 2 sin 2 ⁡ 1 2 θ , hav ⁡ θ = 1 2 vers ⁡ θ = sin 2 ⁡ 1 2 θ , covers ⁡ θ = 1 − sin ⁡ θ = vers ⁡ ( 1 2 π − θ ) , exsec ⁡ θ = sec ⁡ θ − 1. {\displaystyle {\begin{aligned}\operatorname {crd} \theta &=2\sin {\tfrac {1}{2}}\theta ,\\[5mu]\operatorname {vers} \theta &=1-\cos \theta =2\sin ^{2}{\tfrac {1}{2}}\theta ,\\[5mu]\operatorname {hav} \theta &={\tfrac {1}{2}}\operatorname {vers} \theta =\sin ^{2}{\tfrac {1}{2}}\theta ,\\[5mu]\operatorname {covers} \theta &=1-\sin \theta =\operatorname {vers} {\bigl (}{\tfrac {1}{2}}\pi -\theta {\bigr )},\\[5mu]\operatorname {exsec} \theta &=\sec \theta -1.\end{aligned}}} Historically, trigonometric functions were often combined with logarithms in compound functions like the logarithmic sine, logarithmic cosine, logarithmic secant, logarithmic cosecant, logarithmic tangent and logarithmic cotangent. == Etymology == The word sine derives from Latin sinus, meaning "bend; bay", and more specifically "the hanging fold of the upper part of a toga", "the bosom of a garment", which was chosen as the translation of what was interpreted as the Arabic word jaib, meaning "pocket" or "fold" in the twelfth-century translations of works by Al-Battani and al-Khwārizmī into Medieval Latin. The choice was based on a misreading of the Arabic written form j-y-b (جيب), which itself originated as a transliteration from Sanskrit jīvā, which along with its synonym jyā (the standard Sanskrit term for the sine) translates to "bowstring", being in turn adopted from Ancient Greek χορδή "string". The word tangent comes from Latin tangens meaning "touching", since the line touches the circle of unit radius, whereas secant stems from Latin secans—"cutting"—since the line cuts the circle. 
The prefix "co-" (in "cosine", "cotangent", "cosecant") is found in Edmund Gunter's Canon triangulorum (1620), which defines the cosinus as an abbreviation of the sinus complementi (sine of the complementary angle) and proceeds to define the cotangens similarly.
Wikipedia/Secant_function
In mathematics, the Gudermannian function relates a hyperbolic angle measure ψ {\textstyle \psi } to a circular angle measure ϕ {\textstyle \phi } called the gudermannian of ψ {\textstyle \psi } and denoted gd ⁡ ψ {\textstyle \operatorname {gd} \psi } . The Gudermannian function reveals a close relationship between the circular functions and hyperbolic functions. It was introduced in the 1760s by Johann Heinrich Lambert, and later named for Christoph Gudermann who also described the relationship between circular and hyperbolic functions in 1830. The gudermannian is sometimes called the hyperbolic amplitude as a limiting case of the Jacobi elliptic amplitude am ⁡ ( ψ , m ) {\textstyle \operatorname {am} (\psi ,m)} when parameter m = 1. {\textstyle m=1.} The real Gudermannian function is typically defined for − ∞ < ψ < ∞ {\textstyle -\infty <\psi <\infty } to be the integral of the hyperbolic secant ϕ = gd ⁡ ψ ≡ ∫ 0 ψ sech ⁡ t d t = arctan ⁡ ( sinh ⁡ ψ ) . {\displaystyle \phi =\operatorname {gd} \psi \equiv \int _{0}^{\psi }\operatorname {sech} t\,\mathrm {d} t=\operatorname {arctan} (\sinh \psi ).} The real inverse Gudermannian function can be defined for − 1 2 π < ϕ < 1 2 π {\textstyle -{\tfrac {1}{2}}\pi <\phi <{\tfrac {1}{2}}\pi } as the integral of the (circular) secant ψ = gd − 1 ⁡ ϕ = ∫ 0 ϕ sec ⁡ t d t = arsinh ⁡ ( tan ⁡ ϕ ) . {\displaystyle \psi =\operatorname {gd} ^{-1}\phi =\int _{0}^{\phi }\operatorname {sec} t\,\mathrm {d} t=\operatorname {arsinh} (\tan \phi ).} The hyperbolic angle measure ψ = gd − 1 ⁡ ϕ {\displaystyle \psi =\operatorname {gd} ^{-1}\phi } is called the anti-gudermannian of ϕ {\displaystyle \phi } or sometimes the lambertian of ϕ {\displaystyle \phi } , denoted ψ = lam ⁡ ϕ . 
{\displaystyle \psi =\operatorname {lam} \phi .} In the context of geodesy and navigation for latitude ϕ {\textstyle \phi } , k gd − 1 ⁡ ϕ {\displaystyle k\operatorname {gd} ^{-1}\phi } (scaled by arbitrary constant k {\textstyle k} ) was historically called the meridional part of ϕ {\displaystyle \phi } (French: latitude croissante). It is the vertical coordinate of the Mercator projection. The two angle measures ϕ {\textstyle \phi } and ψ {\textstyle \psi } are related by a common stereographic projection s = tan ⁡ 1 2 ϕ = tanh ⁡ 1 2 ψ , {\displaystyle s=\tan {\tfrac {1}{2}}\phi =\tanh {\tfrac {1}{2}}\psi ,} and this identity can serve as an alternative definition for gd {\textstyle \operatorname {gd} } and gd − 1 {\textstyle \operatorname {gd} ^{-1}} valid throughout the complex plane: gd ⁡ ψ = 2 arctan ( tanh ⁡ 1 2 ψ ) , gd − 1 ⁡ ϕ = 2 artanh ( tan ⁡ 1 2 ϕ ) . {\displaystyle {\begin{aligned}\operatorname {gd} \psi &={2\arctan }{\bigl (}\tanh {\tfrac {1}{2}}\psi \,{\bigr )},\\[5mu]\operatorname {gd} ^{-1}\phi &={2\operatorname {artanh} }{\bigl (}\tan {\tfrac {1}{2}}\phi \,{\bigr )}.\end{aligned}}} == Circular–hyperbolic identities == We can evaluate the integral of the hyperbolic secant using the stereographic projection (hyperbolic half-tangent) as a change of variables: gd ⁡ ψ ≡ ∫ 0 ψ 1 cosh ⁡ t d t = ∫ 0 tanh ⁡ 1 2 ψ 1 − u 2 1 + u 2 2 d u 1 − u 2 ( u = tanh ⁡ 1 2 t ) = 2 ∫ 0 tanh ⁡ 1 2 ψ 1 1 + u 2 d u = 2 arctan ( tanh ⁡ 1 2 ψ ) , tan ⁡ 1 2 gd ⁡ ψ = tanh ⁡ 1 2 ψ . 
{\displaystyle {\begin{aligned}\operatorname {gd} \psi &\equiv \int _{0}^{\psi }{\frac {1}{\operatorname {cosh} t}}\mathrm {d} t=\int _{0}^{\tanh {\frac {1}{2}}\psi }{\frac {1-u^{2}}{1+u^{2}}}{\frac {2\,\mathrm {d} u}{1-u^{2}}}\qquad {\bigl (}u=\tanh {\tfrac {1}{2}}t{\bigr )}\\[8mu]&=2\int _{0}^{\tanh {\frac {1}{2}}\psi }{\frac {1}{1+u^{2}}}\mathrm {d} u={2\arctan }{\bigl (}\tanh {\tfrac {1}{2}}\psi \,{\bigr )},\\[5mu]\tan {\tfrac {1}{2}}{\operatorname {gd} \psi }&=\tanh {\tfrac {1}{2}}\psi .\end{aligned}}} Letting ϕ = gd ⁡ ψ {\textstyle \phi =\operatorname {gd} \psi } and s = tan ⁡ 1 2 ϕ = tanh ⁡ 1 2 ψ {\textstyle s=\tan {\tfrac {1}{2}}\phi =\tanh {\tfrac {1}{2}}\psi } we can derive a number of identities between hyperbolic functions of ψ {\textstyle \psi } and circular functions of ϕ . {\textstyle \phi .} s = tan ⁡ 1 2 ϕ = tanh ⁡ 1 2 ψ , 2 s 1 + s 2 = sin ⁡ ϕ = tanh ⁡ ψ , 1 + s 2 2 s = csc ⁡ ϕ = coth ⁡ ψ , 1 − s 2 1 + s 2 = cos ⁡ ϕ = sech ⁡ ψ , 1 + s 2 1 − s 2 = sec ⁡ ϕ = cosh ⁡ ψ , 2 s 1 − s 2 = tan ⁡ ϕ = sinh ⁡ ψ , 1 − s 2 2 s = cot ⁡ ϕ = csch ⁡ ψ . {\displaystyle {\begin{aligned}s&=\tan {\tfrac {1}{2}}\phi =\tanh {\tfrac {1}{2}}\psi ,\\[6mu]{\frac {2s}{1+s^{2}}}&=\sin \phi =\tanh \psi ,\quad &{\frac {1+s^{2}}{2s}}&=\csc \phi =\coth \psi ,\\[10mu]{\frac {1-s^{2}}{1+s^{2}}}&=\cos \phi =\operatorname {sech} \psi ,\quad &{\frac {1+s^{2}}{1-s^{2}}}&=\sec \phi =\cosh \psi ,\\[10mu]{\frac {2s}{1-s^{2}}}&=\tan \phi =\sinh \psi ,\quad &{\frac {1-s^{2}}{2s}}&=\cot \phi =\operatorname {csch} \psi .\\[8mu]\end{aligned}}} These are commonly used as expressions for gd {\displaystyle \operatorname {gd} } and gd − 1 {\displaystyle \operatorname {gd} ^{-1}} for real values of ψ {\displaystyle \psi } and ϕ {\displaystyle \phi } with | ϕ | < 1 2 π . {\displaystyle |\phi |<{\tfrac {1}{2}}\pi .} For example, the numerically well-behaved formulas gd ⁡ ψ = arctan ⁡ ( sinh ⁡ ψ ) , gd − 1 ⁡ ϕ = arsinh ⁡ ( tan ⁡ ϕ ) . 
{\displaystyle {\begin{aligned}\operatorname {gd} \psi &=\operatorname {arctan} (\sinh \psi ),\\[6mu]\operatorname {gd} ^{-1}\phi &=\operatorname {arsinh} (\tan \phi ).\end{aligned}}} (Note, for | ϕ | > 1 2 π {\displaystyle |\phi |>{\tfrac {1}{2}}\pi } and for complex arguments, care must be taken choosing branches of the inverse functions.) We can also express ψ {\textstyle \psi } and ϕ {\textstyle \phi } in terms of s : {\textstyle s\colon } 2 arctan ⁡ s = ϕ = gd ⁡ ψ , 2 artanh ⁡ s = gd − 1 ⁡ ϕ = ψ . {\displaystyle {\begin{aligned}2\arctan s&=\phi =\operatorname {gd} \psi ,\\[6mu]2\operatorname {artanh} s&=\operatorname {gd} ^{-1}\phi =\psi .\\[6mu]\end{aligned}}} If we expand tan ⁡ 1 2 {\textstyle \tan {\tfrac {1}{2}}} and tanh ⁡ 1 2 {\textstyle \tanh {\tfrac {1}{2}}} in terms of the exponential, then we can see that s , {\textstyle s,} exp ⁡ ϕ i , {\displaystyle \exp \phi i,} and exp ⁡ ψ {\displaystyle \exp \psi } are all Möbius transformations of each-other (specifically, rotations of the Riemann sphere): s = i 1 − e ϕ i 1 + e ϕ i = e ψ − 1 e ψ + 1 , i s − i s + i = exp ⁡ ϕ i = e ψ − i e ψ + i , 1 + s 1 − s = i i + e ϕ i i − e ϕ i = exp ⁡ ψ . {\displaystyle {\begin{aligned}s&=i{\frac {1-e^{\phi i}}{1+e^{\phi i}}}={\frac {e^{\psi }-1}{e^{\psi }+1}},\\[10mu]i{\frac {s-i}{s+i}}&=\exp \phi i\quad ={\frac {e^{\psi }-i}{e^{\psi }+i}},\\[10mu]{\frac {1+s}{1-s}}&=i{\frac {i+e^{\phi i}}{i-e^{\phi i}}}\,=\exp \psi .\end{aligned}}} For real values of ψ {\textstyle \psi } and ϕ {\textstyle \phi } with | ϕ | < 1 2 π {\displaystyle |\phi |<{\tfrac {1}{2}}\pi } , these Möbius transformations can be written in terms of trigonometric functions in several ways, exp ⁡ ψ = sec ⁡ ϕ + tan ⁡ ϕ = tan ⁡ 1 2 ( 1 2 π + ϕ ) = 1 + tan ⁡ 1 2 ϕ 1 − tan ⁡ 1 2 ϕ = 1 + sin ⁡ ϕ 1 − sin ⁡ ϕ , exp ⁡ ϕ i = sech ⁡ ψ + i tanh ⁡ ψ = tanh ⁡ 1 2 ( − 1 2 π i + ψ ) = 1 + i tanh ⁡ 1 2 ψ 1 − i tanh ⁡ 1 2 ψ = 1 + i sinh ⁡ ψ 1 − i sinh ⁡ ψ . 
{\displaystyle {\begin{aligned}\exp \psi &=\sec \phi +\tan \phi =\tan {\tfrac {1}{2}}{\bigl (}{\tfrac {1}{2}}\pi +\phi {\bigr )}\\[6mu]&={\frac {1+\tan {\tfrac {1}{2}}\phi }{1-\tan {\tfrac {1}{2}}\phi }}={\sqrt {\frac {1+\sin \phi }{1-\sin \phi }}},\\[12mu]\exp \phi i&=\operatorname {sech} \psi +i\tanh \psi =\tanh {\tfrac {1}{2}}{\bigl (}{-{\tfrac {1}{2}}}\pi i+\psi {\bigr )}\\[6mu]&={\frac {1+i\tanh {\tfrac {1}{2}}\psi }{1-i\tanh {\tfrac {1}{2}}\psi }}={\sqrt {\frac {1+i\sinh \psi }{1-i\sinh \psi }}}.\end{aligned}}} These give further expressions for gd {\displaystyle \operatorname {gd} } and gd − 1 {\displaystyle \operatorname {gd} ^{-1}} for real arguments with | ϕ | < 1 2 π . {\displaystyle |\phi |<{\tfrac {1}{2}}\pi .} For example, gd ⁡ ψ = 2 arctan ⁡ e ψ − 1 2 π , gd − 1 ⁡ ϕ = log ⁡ ( sec ⁡ ϕ + tan ⁡ ϕ ) . {\displaystyle {\begin{aligned}\operatorname {gd} \psi &=2\arctan e^{\psi }-{\tfrac {1}{2}}\pi ,\\[6mu]\operatorname {gd} ^{-1}\phi &=\log(\sec \phi +\tan \phi ).\end{aligned}}} == Complex values == As a function of a complex variable, z ↦ w = gd ⁡ z {\textstyle z\mapsto w=\operatorname {gd} z} conformally maps the infinite strip | Im ⁡ z | ≤ 1 2 π {\textstyle \left|\operatorname {Im} z\right|\leq {\tfrac {1}{2}}\pi } to the infinite strip | Re ⁡ w | ≤ 1 2 π , {\textstyle \left|\operatorname {Re} w\right|\leq {\tfrac {1}{2}}\pi ,} while w ↦ z = gd − 1 ⁡ w {\textstyle w\mapsto z=\operatorname {gd} ^{-1}w} conformally maps the infinite strip | Re ⁡ w | ≤ 1 2 π {\textstyle \left|\operatorname {Re} w\right|\leq {\tfrac {1}{2}}\pi } to the infinite strip | Im ⁡ z | ≤ 1 2 π . {\textstyle \left|\operatorname {Im} z\right|\leq {\tfrac {1}{2}}\pi .} Analytically continued by reflections to the whole complex plane, z ↦ w = gd ⁡ z {\textstyle z\mapsto w=\operatorname {gd} z} is a periodic function of period 2 π i {\textstyle 2\pi i} which sends any infinite strip of "height" 2 π i {\textstyle 2\pi i} onto the strip − π < Re ⁡ w ≤ π . 
{\textstyle -\pi <\operatorname {Re} w\leq \pi .} Likewise, extended to the whole complex plane, w ↦ z = gd − 1 ⁡ w {\textstyle w\mapsto z=\operatorname {gd} ^{-1}w} is a periodic function of period 2 π {\textstyle 2\pi } which sends any infinite strip of "width" 2 π {\textstyle 2\pi } onto the strip − π < Im ⁡ z ≤ π . {\textstyle -\pi <\operatorname {Im} z\leq \pi .} For all points in the complex plane, these functions can be correctly written as: gd ⁡ z = 2 arctan ( tanh ⁡ 1 2 z ) , gd − 1 ⁡ w = 2 artanh ( tan ⁡ 1 2 w ) . {\displaystyle {\begin{aligned}\operatorname {gd} z&={2\arctan }{\bigl (}\tanh {\tfrac {1}{2}}z\,{\bigr )},\\[5mu]\operatorname {gd} ^{-1}w&={2\operatorname {artanh} }{\bigl (}\tan {\tfrac {1}{2}}w\,{\bigr )}.\end{aligned}}} For the gd {\textstyle \operatorname {gd} } and gd − 1 {\textstyle \operatorname {gd} ^{-1}} functions to remain invertible with these extended domains, we might consider each to be a multivalued function (perhaps Gd {\textstyle \operatorname {Gd} } and Gd − 1 {\textstyle \operatorname {Gd} ^{-1}} , with gd {\textstyle \operatorname {gd} } and gd − 1 {\textstyle \operatorname {gd} ^{-1}} the principal branch) or consider their domains and codomains as Riemann surfaces. If u + i v = gd ⁡ ( x + i y ) , {\textstyle u+iv=\operatorname {gd} (x+iy),} then the real and imaginary components u {\textstyle u} and v {\textstyle v} can be found by: tan ⁡ u = sinh ⁡ x cos ⁡ y , tanh ⁡ v = sin ⁡ y cosh ⁡ x . {\displaystyle \tan u={\frac {\sinh x}{\cos y}},\quad \tanh v={\frac {\sin y}{\cosh x}}.} (In practical implementation, make sure to use the 2-argument arctangent, u = atan2 ⁡ ( sinh ⁡ x , cos ⁡ y ) {\textstyle u=\operatorname {atan2} (\sinh x,\cos y)} .) Likewise, if x + i y = gd − 1 ⁡ ( u + i v ) , {\textstyle x+iy=\operatorname {gd} ^{-1}(u+iv),} then components x {\textstyle x} and y {\textstyle y} can be found by: tanh ⁡ x = sin ⁡ u cosh ⁡ v , tan ⁡ y = sinh ⁡ v cos ⁡ u . 
{\displaystyle \tanh x={\frac {\sin u}{\cosh v}},\quad \tan y={\frac {\sinh v}{\cos u}}.} Multiplying these together reveals the additional identity tanh ⁡ x tan ⁡ y = tan ⁡ u tanh ⁡ v . {\displaystyle \tanh x\,\tan y=\tan u\,\tanh v.} === Symmetries === The two functions can be thought of as rotations or reflections of each-other, with a similar relationship as sinh ⁡ i z = i sin ⁡ z {\textstyle \sinh iz=i\sin z} between sine and hyperbolic sine: gd ⁡ i z = i gd − 1 ⁡ z , gd − 1 ⁡ i z = i gd ⁡ z . {\displaystyle {\begin{aligned}\operatorname {gd} iz&=i\operatorname {gd} ^{-1}z,\\[5mu]\operatorname {gd} ^{-1}iz&=i\operatorname {gd} z.\end{aligned}}} The functions are both odd and they commute with complex conjugation. That is, a reflection across the real or imaginary axis in the domain results in the same reflection in the codomain: gd ⁡ ( − z ) = − gd ⁡ z , gd ⁡ z ¯ = gd ⁡ z ¯ , gd ⁡ ( − z ¯ ) = − gd ⁡ z ¯ , gd − 1 ⁡ ( − z ) = − gd − 1 ⁡ z , gd − 1 ⁡ z ¯ = gd − 1 ⁡ z ¯ , gd − 1 ⁡ ( − z ¯ ) = − gd − 1 ⁡ z ¯ . {\displaystyle {\begin{aligned}\operatorname {gd} (-z)&=-\operatorname {gd} z,&\quad \operatorname {gd} {\bar {z}}&={\overline {\operatorname {gd} z}},&\quad \operatorname {gd} (-{\bar {z}})&=-{\overline {\operatorname {gd} z}},\\[5mu]\operatorname {gd} ^{-1}(-z)&=-\operatorname {gd} ^{-1}z,&\quad \operatorname {gd} ^{-1}{\bar {z}}&={\overline {\operatorname {gd} ^{-1}z}},&\quad \operatorname {gd} ^{-1}(-{\bar {z}})&=-{\overline {\operatorname {gd} ^{-1}z}}.\end{aligned}}} The functions are periodic, with periods 2 π i {\textstyle 2\pi i} and 2 π {\textstyle 2\pi } : gd ⁡ ( z + 2 π i ) = gd ⁡ z , gd − 1 ⁡ ( z + 2 π ) = gd − 1 ⁡ z . 
{\displaystyle {\begin{aligned}\operatorname {gd} (z+2\pi i)&=\operatorname {gd} z,\\[5mu]\operatorname {gd} ^{-1}(z+2\pi )&=\operatorname {gd} ^{-1}z.\end{aligned}}} A translation in the domain of gd {\textstyle \operatorname {gd} } by ± π i {\textstyle \pm \pi i} results in a half-turn rotation and translation in the codomain by one of ± π , {\textstyle \pm \pi ,} and vice versa for gd − 1 : {\textstyle \operatorname {gd} ^{-1}\colon } gd ⁡ ( ± π i + z ) = { π − gd ⁡ z if Re ⁡ z ≥ 0 , − π − gd ⁡ z if Re ⁡ z < 0 , gd − 1 ⁡ ( ± π + z ) = { π i − gd − 1 ⁡ z if Im ⁡ z ≥ 0 , − π i − gd − 1 ⁡ z if Im ⁡ z < 0. {\displaystyle {\begin{aligned}\operatorname {gd} ({\pm \pi i}+z)&={\begin{cases}\pi -\operatorname {gd} z\quad &{\mbox{if }}\ \ \operatorname {Re} z\geq 0,\\[5mu]-\pi -\operatorname {gd} z\quad &{\mbox{if }}\ \ \operatorname {Re} z<0,\end{cases}}\\[15mu]\operatorname {gd} ^{-1}({\pm \pi }+z)&={\begin{cases}\pi i-\operatorname {gd} ^{-1}z\quad &{\mbox{if }}\ \ \operatorname {Im} z\geq 0,\\[3mu]-\pi i-\operatorname {gd} ^{-1}z\quad &{\mbox{if }}\ \ \operatorname {Im} z<0.\end{cases}}\end{aligned}}} A reflection in the domain of gd {\textstyle \operatorname {gd} } across either of the lines x ± 1 2 π i {\textstyle x\pm {\tfrac {1}{2}}\pi i} results in a reflection in the codomain across one of the lines ± 1 2 π + y i , {\textstyle \pm {\tfrac {1}{2}}\pi +yi,} and vice versa for gd − 1 : {\textstyle \operatorname {gd} ^{-1}\colon } gd ⁡ ( ± π i + z ¯ ) = { π − gd ⁡ z ¯ if Re ⁡ z ≥ 0 , − π − gd ⁡ z ¯ if Re ⁡ z < 0 , gd − 1 ⁡ ( ± π − z ¯ ) = { π i + gd − 1 ⁡ z ¯ if Im ⁡ z ≥ 0 , − π i + gd − 1 ⁡ z ¯ if Im ⁡ z < 0. 
{\displaystyle {\begin{aligned}\operatorname {gd} ({\pm \pi i}+{\bar {z}})&={\begin{cases}\pi -{\overline {\operatorname {gd} z}}\quad &{\mbox{if }}\ \ \operatorname {Re} z\geq 0,\\[5mu]-\pi -{\overline {\operatorname {gd} z}}\quad &{\mbox{if }}\ \ \operatorname {Re} z<0,\end{cases}}\\[15mu]\operatorname {gd} ^{-1}({\pm \pi }-{\bar {z}})&={\begin{cases}\pi i+{\overline {\operatorname {gd} ^{-1}z}}\quad &{\mbox{if }}\ \ \operatorname {Im} z\geq 0,\\[3mu]-\pi i+{\overline {\operatorname {gd} ^{-1}z}}\quad &{\mbox{if }}\ \ \operatorname {Im} z<0.\end{cases}}\end{aligned}}} This is related to the identity tanh ⁡ 1 2 ( π i ± z ) = tan ⁡ 1 2 ( π ∓ gd ⁡ z ) . {\displaystyle \tanh {\tfrac {1}{2}}({\pi i}\pm z)=\tan {\tfrac {1}{2}}({\pi }\mp \operatorname {gd} z).} === Specific values === A few specific values (where ∞ {\textstyle \infty } indicates the limit at one end of the infinite strip): gd ⁡ ( 0 ) = 0 , gd ( ± log ( 2 + 3 ) ) = ± 1 3 π , gd ⁡ ( π i ) = π , gd ( ± 1 3 π i ) = ± log ( 2 + 3 ) i , gd ⁡ ( ± ∞ ) = ± 1 2 π , gd ( ± log ( 1 + 2 ) ) = ± 1 4 π , gd ( ± 1 2 π i ) = ± ∞ i , gd ( ± 1 4 π i ) = ± log ( 1 + 2 ) i , gd ( log ( 1 + 2 ) ± 1 2 π i ) = 1 2 π ± log ( 1 + 2 ) i , gd ( − log ( 1 + 2 ) ± 1 2 π i ) = − 1 2 π ± log ( 1 + 2 ) i . 
{\displaystyle {\begin{aligned}\operatorname {gd} (0)&=0,&\quad {\operatorname {gd} }{\bigl (}{\pm {\log }{\bigl (}2+{\sqrt {3}}{\bigr )}}{\bigr )}&=\pm {\tfrac {1}{3}}\pi ,\\[5mu]\operatorname {gd} (\pi i)&=\pi ,&\quad {\operatorname {gd} }{\bigl (}{\pm {\tfrac {1}{3}}}\pi i{\bigr )}&=\pm {\log }{\bigl (}2+{\sqrt {3}}{\bigr )}i,\\[5mu]\operatorname {gd} ({\pm \infty })&=\pm {\tfrac {1}{2}}\pi ,&\quad {\operatorname {gd} }{\bigl (}{\pm {\log }{\bigl (}1+{\sqrt {2}}{\bigr )}}{\bigr )}&=\pm {\tfrac {1}{4}}\pi ,\\[5mu]{\operatorname {gd} }{\bigl (}{\pm {\tfrac {1}{2}}}\pi i{\bigr )}&=\pm \infty i,&\quad {\operatorname {gd} }{\bigl (}{\pm {\tfrac {1}{4}}}\pi i{\bigr )}&=\pm {\log }{\bigl (}1+{\sqrt {2}}{\bigr )}i,\\[5mu]&&{\operatorname {gd} }{\bigl (}{\log }{\bigl (}1+{\sqrt {2}}{\bigr )}\pm {\tfrac {1}{2}}\pi i{\bigr )}&={\tfrac {1}{2}}\pi \pm {\log }{\bigl (}1+{\sqrt {2}}{\bigr )}i,\\[5mu]&&{\operatorname {gd} }{\bigl (}{-\log }{\bigl (}1+{\sqrt {2}}{\bigr )}\pm {\tfrac {1}{2}}\pi i{\bigr )}&=-{\tfrac {1}{2}}\pi \pm {\log }{\bigl (}1+{\sqrt {2}}{\bigr )}i.\end{aligned}}} == Derivatives == As the Gudermannian and inverse Gudermannian functions can be defined as the antiderivatives of the hyperbolic secant and circular secant functions, respectively, their derivatives are those secant functions: d d z gd ⁡ z = sech ⁡ z , d d z gd − 1 ⁡ z = sec ⁡ z . 
{\displaystyle {\begin{aligned}{\frac {\mathrm {d} }{\mathrm {d} z}}\operatorname {gd} z&=\operatorname {sech} z,\\[10mu]{\frac {\mathrm {d} }{\mathrm {d} z}}\operatorname {gd} ^{-1}z&=\sec z.\end{aligned}}} == Argument-addition identities == By combining hyperbolic and circular argument-addition identities, tanh ⁡ ( z + w ) = tanh ⁡ z + tanh ⁡ w 1 + tanh ⁡ z tanh ⁡ w , tan ⁡ ( z + w ) = tan ⁡ z + tan ⁡ w 1 − tan ⁡ z tan ⁡ w , {\displaystyle {\begin{aligned}\tanh(z+w)&={\frac {\tanh z+\tanh w}{1+\tanh z\,\tanh w}},\\[10mu]\tan(z+w)&={\frac {\tan z+\tan w}{1-\tan z\,\tan w}},\end{aligned}}} with the circular–hyperbolic identity, tan ⁡ 1 2 ( gd ⁡ z ) = tanh ⁡ 1 2 z , {\displaystyle \tan {\tfrac {1}{2}}(\operatorname {gd} z)=\tanh {\tfrac {1}{2}}z,} we have the Gudermannian argument-addition identities: gd ⁡ ( z + w ) = 2 arctan ⁡ tan ⁡ 1 2 ( gd ⁡ z ) + tan ⁡ 1 2 ( gd ⁡ w ) 1 + tan ⁡ 1 2 ( gd ⁡ z ) tan ⁡ 1 2 ( gd ⁡ w ) , gd − 1 ⁡ ( z + w ) = 2 artanh ⁡ tanh ⁡ 1 2 ( gd − 1 ⁡ z ) + tanh ⁡ 1 2 ( gd − 1 ⁡ w ) 1 − tanh ⁡ 1 2 ( gd − 1 ⁡ z ) tanh ⁡ 1 2 ( gd − 1 ⁡ w ) . {\displaystyle {\begin{aligned}\operatorname {gd} (z+w)&=2\arctan {\frac {\tan {\tfrac {1}{2}}(\operatorname {gd} z)+\tan {\tfrac {1}{2}}(\operatorname {gd} w)}{1+\tan {\tfrac {1}{2}}(\operatorname {gd} z)\,\tan {\tfrac {1}{2}}(\operatorname {gd} w)}},\\[12mu]\operatorname {gd} ^{-1}(z+w)&=2\operatorname {artanh} {\frac {\tanh {\tfrac {1}{2}}(\operatorname {gd} ^{-1}z)+\tanh {\tfrac {1}{2}}(\operatorname {gd} ^{-1}w)}{1-\tanh {\tfrac {1}{2}}(\operatorname {gd} ^{-1}z)\,\tanh {\tfrac {1}{2}}(\operatorname {gd} ^{-1}w)}}.\end{aligned}}} Further argument-addition identities can be written in terms of other circular functions, but they require greater care in choosing branches in inverse functions. 
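The argument-addition identity above is easy to check numerically. The following Python sketch (the helper names `gd` and `gd_add` are illustrative, not from any library) builds the addition formula from the half-angle relation tan ½(gd z) = tanh ½z and compares it with the direct principal-branch formula gd z = 2 arctan(tanh ½z):

```python
import cmath

def gd(z):
    # Principal-branch Gudermannian: gd(z) = 2*arctan(tanh(z/2))
    return 2 * cmath.atan(cmath.tanh(z / 2))

def gd_add(z, w):
    # Argument-addition identity via the half-angle relation
    # tan((1/2) gd z) = tanh((1/2) z)
    a = cmath.tan(gd(z) / 2)
    b = cmath.tan(gd(w) / 2)
    return 2 * cmath.atan((a + b) / (1 + a * b))

# For arguments in the principal strip the two routes agree
z, w = 0.4, 0.7
assert abs(gd_add(z, w) - gd(z + w)) < 1e-12
```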
Notably, gd ⁡ ( z + w ) = u + v , where tan ⁡ u = sinh ⁡ z cosh ⁡ w , tan ⁡ v = sinh ⁡ w cosh ⁡ z , gd − 1 ⁡ ( z + w ) = u + v , where tanh ⁡ u = sin ⁡ z cos ⁡ w , tanh ⁡ v = sin ⁡ w cos ⁡ z , {\displaystyle {\begin{aligned}\operatorname {gd} (z+w)&=u+v,\quad {\text{where}}\ \tan u={\frac {\sinh z}{\cosh w}},\ \tan v={\frac {\sinh w}{\cosh z}},\\[10mu]\operatorname {gd} ^{-1}(z+w)&=u+v,\quad {\text{where}}\ \tanh u={\frac {\sin z}{\cos w}},\ \tanh v={\frac {\sin w}{\cos z}},\end{aligned}}} which can be used to derive the per-component computation for the complex Gudermannian and inverse Gudermannian. In the specific case z = w , {\textstyle z=w,} double-argument identities are gd ⁡ ( 2 z ) = 2 arctan ⁡ ( sin ⁡ ( gd ⁡ z ) ) , gd − 1 ⁡ ( 2 z ) = 2 artanh ⁡ ( sinh ⁡ ( gd − 1 ⁡ z ) ) . {\displaystyle {\begin{aligned}\operatorname {gd} (2z)&=2\arctan(\sin(\operatorname {gd} z)),\\[5mu]\operatorname {gd} ^{-1}(2z)&=2\operatorname {artanh} (\sinh(\operatorname {gd} ^{-1}z)).\end{aligned}}} == Taylor series == The Taylor series near zero, valid for complex values z {\textstyle z} with | z | < 1 2 π , {\textstyle |z|<{\tfrac {1}{2}}\pi ,} are gd ⁡ z = ∑ k = 0 ∞ E k ( k + 1 ) ! z k + 1 = z − 1 6 z 3 + 1 24 z 5 − 61 5040 z 7 + 277 72576 z 9 − … , gd − 1 ⁡ z = ∑ k = 0 ∞ | E k | ( k + 1 ) ! z k + 1 = z + 1 6 z 3 + 1 24 z 5 + 61 5040 z 7 + 277 72576 z 9 + … , {\displaystyle {\begin{aligned}\operatorname {gd} z&=\sum _{k=0}^{\infty }{\frac {E_{k}}{(k+1)!}}z^{k+1}=z-{\frac {1}{6}}z^{3}+{\frac {1}{24}}z^{5}-{\frac {61}{5040}}z^{7}+{\frac {277}{72576}}z^{9}-\dots ,\\[10mu]\operatorname {gd} ^{-1}z&=\sum _{k=0}^{\infty }{\frac {|E_{k}|}{(k+1)!}}z^{k+1}=z+{\frac {1}{6}}z^{3}+{\frac {1}{24}}z^{5}+{\frac {61}{5040}}z^{7}+{\frac {277}{72576}}z^{9}+\dots ,\end{aligned}}} where the numbers E k {\textstyle E_{k}} are the Euler secant numbers, 1, 0, -1, 0, 5, 0, -61, 0, 1385 ... (sequences A122045, A000364, and A028296 in the OEIS). These series were first computed by James Gregory in 1671. 
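The Taylor coefficients above can be verified against the closed form. A minimal Python sketch (assuming the principal-branch formula gd z = 2 arctan(tanh ½z); the function names are illustrative):

```python
import math

# Euler secant numbers E_0..E_8 (odd-indexed entries are zero)
E = [1, 0, -1, 0, 5, 0, -61, 0, 1385]

def gd_series(z):
    # Partial sum of the series gd(z) = sum_k E_k / (k+1)! * z^(k+1)
    return sum(e / math.factorial(k + 1) * z ** (k + 1) for k, e in enumerate(E))

def gd(z):
    # Closed form for comparison: gd(z) = 2*arctan(tanh(z/2))
    return 2 * math.atan(math.tanh(z / 2))

# With terms up to z^9 the truncation error at z = 0.2 is far below 1e-9
z = 0.2
assert abs(gd_series(z) - gd(z)) < 1e-9
```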
Because the Gudermannian and inverse Gudermannian functions are the integrals of the hyperbolic secant and secant functions, the numerators E k {\textstyle E_{k}} and | E k | {\textstyle |E_{k}|} are the same as the numerators of the Taylor series for sech and sec, respectively, but shifted by one place. The reduced unsigned numerators are 1, 1, 1, 61, 277, ... and the reduced denominators are 1, 6, 24, 5040, 72576, ... (sequences A091912 and A136606 in the OEIS). == History == The function and its inverse are related to the Mercator projection. The vertical coordinate in the Mercator projection is called isometric latitude, and is often denoted ψ . {\textstyle \psi .} In terms of latitude ϕ {\textstyle \phi } on the sphere (expressed in radians) the isometric latitude can be written ψ = gd − 1 ⁡ ϕ = ∫ 0 ϕ sec ⁡ t d t . {\displaystyle \psi =\operatorname {gd} ^{-1}\phi =\int _{0}^{\phi }\sec t\,\mathrm {d} t.} The inverse from the isometric latitude to spherical latitude is ϕ = gd ⁡ ψ . {\textstyle \phi =\operatorname {gd} \psi .} (Note: on an ellipsoid of revolution, the relation between geodetic latitude and isometric latitude is slightly more complicated.) Gerardus Mercator plotted his celebrated map in 1569, but the precise method of construction was not revealed. In 1599, Edward Wright described a method for constructing a Mercator projection numerically from trigonometric tables, but did not produce a closed formula. The closed formula was published in 1668 by James Gregory. The Gudermannian function per se was introduced by Johann Heinrich Lambert in the 1760s at the same time as the hyperbolic functions. He called it the "transcendent angle", and it went by various names until 1862 when Arthur Cayley suggested it be given its current name as a tribute to Christoph Gudermann's work in the 1830s on the theory of special functions. 
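The isometric-latitude relation ψ = gd⁻¹ φ given above can be sketched for the spherical (non-ellipsoidal) case. This Python fragment uses the equivalent closed forms ln tan(¼π + ½φ) and 2 arctan(exp ψ) − ½π; the function names are illustrative:

```python
import math

def isometric_latitude(phi):
    # Spherical Mercator y-coordinate: psi = gd^{-1}(phi) = ln(tan(pi/4 + phi/2))
    return math.log(math.tan(math.pi / 4 + phi / 2))

def latitude(psi):
    # Inverse map back to geographic latitude: phi = gd(psi)
    return 2 * math.atan(math.exp(psi)) - math.pi / 2

# Round trip at 60 degrees north
phi = math.radians(60.0)
psi = isometric_latitude(phi)
assert abs(math.degrees(latitude(psi)) - 60.0) < 1e-9
```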
Gudermann had published articles in Crelle's Journal that were later collected in a book which expounded sinh {\textstyle \sinh } and cosh {\textstyle \cosh } to a wide audience (although represented by the symbols S i n {\textstyle {\mathfrak {Sin}}} and C o s {\textstyle {\mathfrak {Cos}}} ). The notation gd {\textstyle \operatorname {gd} } was introduced by Cayley who starts by calling ϕ = gd ⁡ u {\textstyle \phi =\operatorname {gd} u} the Jacobi elliptic amplitude am ⁡ u {\textstyle \operatorname {am} u} in the degenerate case where the elliptic modulus is m = 1 , {\textstyle m=1,} so that 1 − m sin 2 ⁡ ϕ {\textstyle {\sqrt {1-m\sin \!^{2}\,\phi }}} reduces to cos ⁡ ϕ . {\textstyle \cos \phi .} This is the inverse of the integral of the secant function. Using Cayley's notation, u = ∫ 0 d ϕ cos ⁡ ϕ = log ⁡ tan ⁡ ( 1 4 π + 1 2 ϕ ) . {\displaystyle u=\int _{0}{\frac {d\phi }{\cos \phi }}={\log \,\tan }{\bigl (}{\tfrac {1}{4}}\pi +{\tfrac {1}{2}}\phi {\bigr )}.} He then derives "the definition of the transcendent", gd ⁡ u = 1 i log ⁡ tan ⁡ ( 1 4 π + 1 2 u i ) , {\displaystyle \operatorname {gd} u={{\frac {1}{i}}\log \,\tan }{\bigl (}{\tfrac {1}{4}}\pi +{\tfrac {1}{2}}ui{\bigr )},} observing that "although exhibited in an imaginary form, [it] is a real function of u {\textstyle u} ". The Gudermannian and its inverse were used to make trigonometric tables of circular functions also function as tables of hyperbolic functions. Given a hyperbolic angle ψ {\textstyle \psi } , hyperbolic functions could be found by first looking up ϕ = gd ⁡ ψ {\textstyle \phi =\operatorname {gd} \psi } in a Gudermannian table and then looking up the appropriate circular function of ϕ {\textstyle \phi } , or by directly locating ψ {\textstyle \psi } in an auxiliary gd − 1 {\displaystyle \operatorname {gd} ^{-1}} column of the trigonometric table. == Generalization == The Gudermannian function can be thought of as mapping points on one branch of a hyperbola to points on a semicircle. 
Points on one sheet of an n-dimensional hyperboloid of two sheets can be likewise mapped onto an n-dimensional hemisphere via stereographic projection. The hemisphere model of hyperbolic space uses such a map to represent hyperbolic space. == Applications == The angle of parallelism function in hyperbolic geometry is the complement of the Gudermannian, Π ( ψ ) = 1 2 π − gd ⁡ ψ . {\displaystyle {\mathit {\Pi }}(\psi )={\tfrac {1}{2}}\pi -\operatorname {gd} \psi .} On a Mercator projection a line of constant latitude is parallel to the equator (on the projection) at a distance proportional to the anti-Gudermannian of the latitude. The Gudermannian function (with a complex argument) may be used to define the transverse Mercator projection. The Gudermannian function appears in a non-periodic solution of the inverted pendulum. The Gudermannian function appears in a moving mirror solution of the dynamical Casimir effect. If an infinite number of infinitely long, equidistant, parallel, coplanar, straight wires are kept at equal potentials with alternating signs, the potential-flux distribution in a cross-sectional plane perpendicular to the wires is the complex Gudermannian function. The Gudermannian function is a sigmoid function, and as such is sometimes used as an activation function in machine learning. The (scaled and shifted) Gudermannian function is the cumulative distribution function of the hyperbolic secant distribution. A function based on the Gudermannian provides a good model for the shape of spiral galaxy arms. == See also == Tractrix Catenary § Catenary of equal strength == Notes == == References == == External links == Penn, Michael (2020) "the Gudermannian function!" on YouTube.
Wikipedia/Gudermannian_function
In mathematics, a conjecture is a conclusion or a proposition that is proffered on a tentative basis without proof. Some conjectures, such as the Riemann hypothesis or Fermat's conjecture (now a theorem, proven in 1995 by Andrew Wiles), have shaped much of mathematical history as new areas of mathematics are developed in order to prove them. == Resolution of conjectures == === Proof === Formal mathematics is based on provable truth. In mathematics, any number of cases supporting a universally quantified conjecture, no matter how large, is insufficient for establishing the conjecture's veracity, since a single counterexample could immediately bring down the conjecture. Mathematical journals sometimes publish the minor results of research teams having extended the search for a counterexample farther than previously done. For instance, the Collatz conjecture, which concerns whether or not certain sequences of integers terminate, has been tested for all integers up to 1.2 × 10¹² (1.2 trillion). However, the failure to find a counterexample after extensive search does not constitute a proof that the conjecture is true—because the conjecture might be false but with a very large minimal counterexample. Nevertheless, mathematicians often regard a conjecture as strongly supported by evidence even though not yet proved. That evidence may be of various kinds, such as verification of consequences of it or strong interconnections with known results. A conjecture is considered proven only when it has been shown that it is logically impossible for it to be false. There are various methods of doing so; see methods of mathematical proof for more details. One method of proof, applicable when there are only a finite number of cases that could lead to counterexamples, is known as "brute force": in this approach, all possible cases are considered and shown not to give counterexamples. 
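The kind of finite verification mentioned above for the Collatz conjecture can be sketched in a few lines of Python (the function name and step limit here are illustrative); such a search can support the conjecture but, as noted, can never prove it:

```python
def collatz_terminates(n, max_steps=10_000):
    # Iterate the Collatz map n -> n/2 (n even) or 3n+1 (n odd) until reaching 1,
    # giving up after max_steps so the loop always ends
    steps = 0
    while n != 1 and steps < max_steps:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return n == 1

# Exhausting a finite range supports the conjecture but proves nothing beyond it
assert all(collatz_terminates(n) for n in range(1, 10_000))
```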
On some occasions, the number of cases is quite large, in which case a brute-force proof may require as a practical matter the use of a computer algorithm to check all the cases. For example, the validity of the 1976 and 1997 brute-force proofs of the four color theorem by computer was initially doubted, but was eventually confirmed in 2005 by theorem-proving software. When a conjecture has been proven, it is no longer a conjecture but a theorem. Many important theorems were once conjectures, such as the Geometrization theorem (which resolved the Poincaré conjecture), Fermat's Last Theorem, and others. === Disproof === Conjectures disproven through counterexample are sometimes referred to as false conjectures (cf. the Pólya conjecture and Euler's sum of powers conjecture). In the case of the latter, the first counterexample found for the n=4 case involved numbers in the millions, although it has been subsequently found that the minimal counterexample is actually smaller. === Independent conjectures === Not every conjecture ends up being proven true or false. The continuum hypothesis, which tries to ascertain the relative cardinality of certain infinite sets, was eventually shown to be independent from the generally accepted set of Zermelo–Fraenkel axioms of set theory. It is therefore possible to adopt this statement, or its negation, as a new axiom in a consistent manner (much as Euclid's parallel postulate can be taken either as true or false in an axiomatic system for geometry). In this case, if a proof uses this statement, researchers will often look for a new proof that does not require the hypothesis (in the same way that it is desirable that statements in Euclidean geometry be proved using only the axioms of neutral geometry, i.e. without the parallel postulate). 
The one major exception to this in practice is the axiom of choice, as the majority of researchers usually do not worry whether a result requires it—unless they are studying this axiom in particular. == Conditional proofs == Sometimes, a conjecture is called a hypothesis when it is used frequently and repeatedly as an assumption in proofs of other results. For example, the Riemann hypothesis is a conjecture from number theory that — amongst other things — makes predictions about the distribution of prime numbers. Few number theorists doubt that the Riemann hypothesis is true. In fact, in anticipation of its eventual proof, some have even proceeded to develop further proofs which are contingent on the truth of this conjecture. These are called conditional proofs: the conjectures assumed appear in the hypotheses of the theorem, for the time being. These "proofs", however, would fall apart if it turned out that the hypothesis was false, so there is considerable interest in verifying the truth or falsity of conjectures of this type. == Important examples == === Fermat's Last Theorem === In number theory, Fermat's Last Theorem (sometimes called Fermat's conjecture, especially in older texts) states that no three positive integers a {\displaystyle a} , b {\displaystyle b} , and c {\displaystyle c} can satisfy the equation a n + b n = c n {\displaystyle a^{n}+b^{n}=c^{n}} for any integer value of n {\displaystyle n} greater than two. This theorem was first conjectured by Pierre de Fermat in 1637 in the margin of a copy of Arithmetica, where he claimed that he had a proof that was too large to fit in the margin. The first successful proof was released in 1994 by Andrew Wiles, and formally published in 1995, after 358 years of effort by mathematicians. The unsolved problem stimulated the development of algebraic number theory in the 19th century, and the proof of the modularity theorem in the 20th century. 
It is among the most notable theorems in the history of mathematics, and prior to its proof it was in the Guinness Book of World Records for "most difficult mathematical problems". === Four color theorem === In mathematics, the four color theorem, or the four color map theorem, states that given any separation of a plane into contiguous regions, producing a figure called a map, no more than four colors are required to color the regions of the map—so that no two adjacent regions have the same color. Two regions are called adjacent if they share a common boundary that is not a corner, where corners are the points shared by three or more regions. For example, in the map of the United States of America, Utah and Arizona are adjacent, but Utah and New Mexico, which only share a point that also belongs to Arizona and Colorado, are not. Möbius mentioned the problem in his lectures as early as 1840. The conjecture was first proposed on October 23, 1852 when Francis Guthrie, while trying to color the map of counties of England, noticed that only four different colors were needed. The five color theorem, which has a short elementary proof, states that five colors suffice to color a map and was proven in the late 19th century; however, proving that four colors suffice turned out to be significantly harder. A number of false proofs and false counterexamples have appeared since the first statement of the four color theorem in 1852. The four color theorem was ultimately proven in 1976 by Kenneth Appel and Wolfgang Haken. It was the first major theorem to be proved using a computer. Appel and Haken's approach started by showing that there is a particular set of 1,936 maps, each of which cannot be part of a smallest-sized counterexample to the four color theorem (i.e., if they did appear, one could make a smaller counter-example). Appel and Haken used a special-purpose computer program to confirm that each of these maps had this property. 
Additionally, any map that could potentially be a counterexample must have a portion that looks like one of these 1,936 maps. Showing this with hundreds of pages of hand analysis, Appel and Haken concluded that no smallest counterexample exists, because any such counterexample must contain, yet cannot contain, one of these 1,936 maps. This contradiction means there are no counterexamples at all and that the theorem is therefore true. Initially, their proof was not accepted by all mathematicians, because the computer-assisted proof was infeasible for a human to check by hand. However, the proof has since then gained wider acceptance, although doubts still remain. === Hauptvermutung === The Hauptvermutung (German for main conjecture) of geometric topology is the conjecture that any two triangulations of a triangulable space have a common refinement, a single triangulation that is a subdivision of both of them. It was originally formulated in 1908, by Steinitz and Tietze. This conjecture is now known to be false. The non-manifold version was disproved by John Milnor in 1961 using Reidemeister torsion. The manifold version is true in dimensions m ≤ 3. The cases m = 2 and 3 were proved by Tibor Radó and Edwin E. Moise in the 1920s and 1950s, respectively. === Weil conjectures === In mathematics, the Weil conjectures were some highly influential proposals by André Weil (1949) on the generating functions (known as local zeta-functions) derived from counting the number of points on algebraic varieties over finite fields. A variety V over a finite field with q elements has a finite number of rational points, as well as points over every finite field with qᵏ elements containing that field. The generating function has coefficients derived from the numbers Nₖ of points over the (essentially unique) field with qᵏ elements. Weil conjectured that such zeta-functions should be rational functions, should satisfy a form of functional equation, and should have their zeroes in restricted places. 
The last two parts were quite consciously modeled on the Riemann zeta function and Riemann hypothesis. The rationality was proved by Dwork (1960), the functional equation by Grothendieck (1965), and the analogue of the Riemann hypothesis was proved by Deligne (1974). === Poincaré conjecture === In mathematics, the Poincaré conjecture is a theorem about the characterization of the 3-sphere, which is the hypersphere that bounds the unit ball in four-dimensional space. The conjecture states that: Every simply connected, closed 3-manifold is homeomorphic to the 3-sphere. An equivalent form of the conjecture involves a coarser form of equivalence than homeomorphism called homotopy equivalence: if a 3-manifold is homotopy equivalent to the 3-sphere, then it is necessarily homeomorphic to it. Originally conjectured by Henri Poincaré in 1904, the theorem concerns a space that locally looks like ordinary three-dimensional space but is connected, finite in size, and lacks any boundary (a closed 3-manifold). The Poincaré conjecture claims that if such a space has the additional property that each loop in the space can be continuously tightened to a point, then it is necessarily a three-dimensional sphere. An analogous result has been known in higher dimensions for some time. After nearly a century of effort by mathematicians, Grigori Perelman presented a proof of the conjecture in three papers made available in 2002 and 2003 on arXiv. The proof followed on from the program of Richard S. Hamilton to use the Ricci flow to attempt to solve the problem. Hamilton later introduced a modification of the standard Ricci flow, called Ricci flow with surgery to systematically excise singular regions as they develop, in a controlled way, but was unable to prove this method "converged" in three dimensions. Perelman completed this portion of the proof. Several teams of mathematicians have verified that Perelman's proof is correct. 
The Poincaré conjecture, before being proven, was one of the most important open questions in topology. === Riemann hypothesis === In mathematics, the Riemann hypothesis, proposed by Bernhard Riemann (1859), is a conjecture that the non-trivial zeros of the Riemann zeta function all have real part 1/2. The name is also used for some closely related analogues, such as the Riemann hypothesis for curves over finite fields. The Riemann hypothesis implies results about the distribution of prime numbers. Along with suitable generalizations, some mathematicians consider it the most important unresolved problem in pure mathematics. The Riemann hypothesis, along with the Goldbach conjecture, is part of Hilbert's eighth problem in David Hilbert's list of 23 unsolved problems; it is also one of the Clay Mathematics Institute Millennium Prize Problems. === P versus NP problem === The P versus NP problem is a major unsolved problem in computer science. Informally, it asks whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer; it is widely conjectured that the answer is no. It was essentially first mentioned in a 1956 letter written by Kurt Gödel to John von Neumann. Gödel asked whether a certain NP-complete problem could be solved in quadratic or linear time. The precise statement of the P=NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" and is considered by many to be the most important open problem in the field. It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute to carry a US$1,000,000 prize for the first correct solution. 
=== Other conjectures === Goldbach's conjecture The twin prime conjecture The Collatz conjecture The Manin conjecture The Maldacena conjecture The Euler conjecture, proposed by Euler in the 18th century but for which counterexamples for a number of exponents (starting with n=4) were found beginning in the mid-20th century The Hardy–Littlewood conjectures are a pair of conjectures concerning the distribution of prime numbers, the first of which expands upon the aforementioned twin prime conjecture. Neither has been proven or disproven, but it has been proven that both cannot simultaneously be true (i.e., at least one must be false). It has not been proven which one is false, but it is widely believed that the first conjecture is true and the second one is false. The Langlands program is a far-reaching web of these ideas of 'unifying conjectures' that link different subfields of mathematics (e.g. between number theory and representation theory of Lie groups). Some of these conjectures have since been proved. == In other sciences == Karl Popper pioneered the use of the term "conjecture" in scientific philosophy. Conjecture is related to hypothesis, which in science refers to a testable conjecture. == See also == Bold hypothesis Futures studies Hypotheticals List of conjectures Ramanujan machine == References == === Works cited === Deligne, Pierre (1974), "La conjecture de Weil. I", Publications Mathématiques de l'IHÉS, 43 (43): 273–307, doi:10.1007/BF02684373, ISSN 1618-1913, MR 0340258, S2CID 123139343 Dwork, Bernard (1960), "On the rationality of the zeta function of an algebraic variety", American Journal of Mathematics, 82 (3): 631–648, doi:10.2307/2372974, ISSN 0002-9327, JSTOR 2372974, MR 0140494 Grothendieck, Alexander (1995) [1965], "Formule de Lefschetz et rationalité des fonctions L", Séminaire Bourbaki, vol. 9, Paris: Société Mathématique de France, pp. 
41–55, MR 1608788 == External links == Media related to Conjectures at Wikimedia Commons Open Problem Garden Unsolved Problems web site
Wikipedia/Conjecture
In electromagnetism and electronics, electromotive force (also electromotance, abbreviated emf, denoted E {\displaystyle {\mathcal {E}}} ) is an energy transfer to an electric circuit per unit of electric charge, measured in volts. Devices called electrical transducers provide an emf by converting other forms of energy into electrical energy. Other types of electrical equipment also produce an emf, such as batteries, which convert chemical energy, and generators, which convert mechanical energy. This energy conversion is achieved by physical forces applying physical work on electric charges. However, electromotive force itself is not a physical force, and ISO/IEC standards have deprecated the term in favor of source voltage or source tension instead (denoted U s {\displaystyle U_{s}} ). An electronic–hydraulic analogy may view emf as the mechanical work done to water by a pump, which results in a pressure difference (analogous to voltage). In electromagnetic induction, emf can be defined around a closed loop of a conductor as the electromagnetic work that would be done on an elementary electric charge (such as an electron) if it travels once around the loop. For two-terminal devices modeled as a Thévenin equivalent circuit, an equivalent emf can be measured as the open-circuit voltage between the two terminals. This emf can drive an electric current if an external circuit is attached to the terminals, in which case the device becomes the voltage source of that circuit. Although an emf gives rise to a voltage and can be measured as a voltage and may sometimes informally be called a "voltage", they are not the same phenomenon (see § Distinction with potential difference). == Overview == Devices that can provide emf include electrochemical cells, thermoelectric devices, solar cells, photodiodes, electrical generators, inductors, transformers and even Van de Graaff generators. In nature, emf is generated when magnetic field fluctuations occur through a surface. 
For example, the shifting of the Earth's magnetic field during a geomagnetic storm induces currents in an electrical grid as the lines of the magnetic field are shifted about and cut across the conductors. In a battery, the charge separation that gives rise to a potential difference (voltage) between the terminals is accomplished by chemical reactions at the electrodes that convert chemical potential energy into electromagnetic potential energy. A voltaic cell can be thought of as having a "charge pump" of atomic dimensions at each electrode, that is: A (chemical) source of emf can be thought of as a kind of charge pump that acts to move positive charges from a point of low potential through its interior to a point of high potential. … By chemical, mechanical or other means, the source of emf performs work d W {\textstyle {\mathit {d}}W} on that charge to move it to the high-potential terminal. The emf E {\textstyle {\mathcal {E}}} of the source is defined as the work d W {\textstyle {\mathit {d}}W} done per charge d q {\textstyle dq} . E = d W d q {\textstyle {\mathcal {E}}={\frac {{\mathit {d}}W}{{\mathit {d}}q}}} . In an electrical generator, a time-varying magnetic field inside the generator creates an electric field via electromagnetic induction, which creates a potential difference between the generator terminals. Charge separation takes place within the generator because electrons flow away from one terminal toward the other, until, in the open-circuit case, an electric field is developed that makes further charge separation impossible. The emf is countered by the electrical voltage due to charge separation. If a load is attached, this voltage can drive a current. The general principle governing the emf in such electrical machines is Faraday's law of induction. == History == In 1801, Alessandro Volta introduced the term "force motrice électrique" to describe the active agent of a battery (which he had invented around 1798). 
This is called the "electromotive force" in English. Around 1830, Michael Faraday established that chemical reactions at each of two electrode–electrolyte interfaces provide the "seat of emf" for the voltaic cell. That is, these reactions drive the current and are not an endless source of energy as the earlier, obsolete theory held. In the open-circuit case, charge separation continues until the electrical field from the separated charges is sufficient to arrest the reactions. Years earlier, Alessandro Volta, who had measured a contact potential difference at the metal–metal (electrode–electrode) interface of his cells, held the incorrect opinion that contact alone (without taking into account a chemical reaction) was the origin of the emf. The emf of a cell is independent of the size of the cell but depends on the nature of the electrolyte used. == Notation and units of measurement == Electromotive force (EMF) is typically denoted by the symbol ℰ (script E). It represents the energy provided by a source per unit electric charge. The standard unit of EMF in the International System of Units (SI) is the volt (V). In a device without internal resistance, if an electric charge q {\displaystyle q} passing through that device gains an energy W {\displaystyle W} via work, the net emf for that device is the energy gained per unit charge: {\textstyle {\tfrac {W}{q}}.} Like other measures of energy per charge, emf uses the SI unit volt, which is equivalent to a joule (SI unit of energy) per coulomb (SI unit of charge). The unit of electromotive force in electrostatic units is the statvolt (in the centimeter–gram–second system of units equal in amount to an erg per electrostatic unit of charge). == Formal definitions == Inside a source of emf (such as a battery) that is open-circuited, a charge separation occurs between the negative terminal N and the positive terminal P. 
This leads to an electrostatic field {\displaystyle {\boldsymbol {E}}_{\mathrm {open\ circuit} }} that points from P to N, whereas the emf of the source must be able to drive current from N to P when connected to a circuit. This led Max Abraham to introduce the concept of a nonelectrostatic field {\displaystyle {\boldsymbol {E}}'} that exists only inside the source of emf. In the open-circuit case, {\displaystyle {\boldsymbol {E}}'=-{\boldsymbol {E}}_{\mathrm {open\ circuit} }} , while when the source is connected to a circuit the electric field {\displaystyle {\boldsymbol {E}}} inside the source changes but {\displaystyle {\boldsymbol {E}}'} remains essentially the same. In the open-circuit case, the conservative electrostatic field created by separation of charge exactly cancels the forces producing the emf. Mathematically: {\displaystyle {\mathcal {E}}_{\mathrm {source} }=\int _{N}^{P}{\boldsymbol {E}}'\cdot \mathrm {d} {\boldsymbol {\ell }}=-\int _{N}^{P}{\boldsymbol {E}}_{\mathrm {open\ circuit} }\cdot \mathrm {d} {\boldsymbol {\ell }}=V_{P}-V_{N}\ ,} where {\displaystyle {\boldsymbol {E}}_{\mathrm {open\ circuit} }} is the conservative electrostatic field created by the charge separation associated with the emf, {\displaystyle \mathrm {d} {\boldsymbol {\ell }}} is an element of the path from terminal N to terminal P, ' {\displaystyle \cdot } ' denotes the vector dot product, and {\displaystyle V} is the electric scalar potential. This emf is the work done on a unit charge by the source's nonelectrostatic field {\displaystyle {\boldsymbol {E}}'} when the charge moves from N to P. 
When the source is connected to a load, its emf is just {\displaystyle {\mathcal {E}}_{\mathrm {source} }=\int _{N}^{P}{\boldsymbol {E}}'\cdot \mathrm {d} {\boldsymbol {\ell }}\ ,} and no longer has a simple relation to the electric field {\displaystyle {\boldsymbol {E}}} inside it. In the case of a closed path in the presence of a varying magnetic field, the integral of the electric field around the (stationary) closed loop {\displaystyle C} may be nonzero. Then, the "induced emf" (often called the "induced voltage") in the loop is: {\displaystyle {\mathcal {E}}_{C}=\oint _{C}{\boldsymbol {E}}\cdot \mathrm {d} {\boldsymbol {\ell }}=-{\frac {d\Phi _{C}}{dt}}=-{\frac {d}{dt}}\oint _{C}{\boldsymbol {A}}\cdot \mathrm {d} {\boldsymbol {\ell }}\ ,} where {\displaystyle {\boldsymbol {E}}} is the entire electric field, conservative and non-conservative, the integral is around an arbitrary, but stationary, closed curve {\displaystyle C} through which there is a time-varying magnetic flux {\displaystyle \Phi _{C}} , and {\displaystyle {\boldsymbol {A}}} is the vector potential. The electrostatic field does not contribute to the net emf around a circuit because the electrostatic portion of the electric field is conservative (i.e., the work done against the field around a closed path is zero; see Kirchhoff's voltage law, which remains valid as long as the circuit elements remain at rest and radiation is ignored). That is, the "induced emf" (like the emf of a battery connected to a load) is not a "voltage" in the sense of a difference in the electric scalar potential. If the loop {\displaystyle C} is a conductor that carries current {\displaystyle I} in the direction of integration around the loop, and the magnetic flux is due to that current, we have {\displaystyle \Phi _{B}=LI} , where {\displaystyle L} is the self-inductance of the loop. 
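The loop relation above can be illustrated numerically. A minimal sketch, assuming a sinusoidal flux through a stationary loop (all numerical values are hypothetical, not from the article):

```python
import math

# Illustration of the induced-emf relation ε_C = -dΦ_C/dt for an assumed
# sinusoidal flux Φ(t) = Φ0·sin(ωt) through a stationary loop.
PHI0 = 2e-3               # peak flux, Wb (hypothetical)
OMEGA = 2 * math.pi * 50  # 50 Hz supply, rad/s

def flux(t):
    return PHI0 * math.sin(OMEGA * t)

def emf_analytic(t):
    # -dΦ/dt evaluated in closed form
    return -PHI0 * OMEGA * math.cos(OMEGA * t)

def emf_numeric(t, dt=1e-7):
    # central-difference estimate of -dΦ/dt
    return -(flux(t + dt) - flux(t - dt)) / (2 * dt)

t = 1.3e-3
print(emf_analytic(t), emf_numeric(t))  # the two estimates agree closely

# When the flux is produced by the loop's own current, Φ = L·I, so the
# induced emf becomes ε = -L·dI/dt.
```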
If in addition, the loop includes a coil that extends from point 1 to 2, such that the magnetic flux is largely localized to that region, it is customary to speak of that region as an inductor, and to consider that its emf is localized to that region. Then, we can consider a different loop {\displaystyle C'} that consists of the coiled conductor from 1 to 2, and an imaginary line down the center of the coil from 2 back to 1. The magnetic flux, and emf, in loop {\displaystyle C'} is essentially the same as that in loop {\displaystyle C} : {\displaystyle {\mathcal {E}}_{C}={\mathcal {E}}_{C'}=-{\frac {d\Phi _{C'}}{dt}}=-L{\frac {dI}{dt}}=\oint _{C}{\boldsymbol {E}}\cdot \mathrm {d} {\boldsymbol {\ell }}=\int _{1}^{2}{\boldsymbol {E}}_{\mathrm {conductor} }\cdot \mathrm {d} {\boldsymbol {\ell }}-\int _{1}^{2}{\boldsymbol {E}}_{\mathrm {center\ line} }\cdot \mathrm {d} {\boldsymbol {\ell }}\ .} For a good conductor, {\displaystyle {\boldsymbol {E}}_{\mathrm {conductor} }} is negligible, so we have, to a good approximation, {\displaystyle L{\frac {dI}{dt}}=\int _{1}^{2}{\boldsymbol {E}}_{\mathrm {center\ line} }\cdot \mathrm {d} {\boldsymbol {\ell }}=V_{1}-V_{2}\ ,} where {\displaystyle V} is the electric scalar potential along the centerline between points 1 and 2. 
Thus, we can associate an effective "voltage drop" {\displaystyle L\ dI/dt} with an inductor (even though our basic understanding of induced emf is based on the vector potential rather than the scalar potential), and consider it as a load element in Kirchhoff's voltage law, {\displaystyle \sum {\mathcal {E}}_{\mathrm {source} }=\sum _{\mathrm {load\ elements} }\mathrm {voltage\ drops} ,} where now the induced emf is not considered to be a source emf. This definition can be extended to arbitrary sources of emf and paths {\displaystyle C} moving with velocity {\displaystyle {\boldsymbol {v}}} through the electric field {\displaystyle {\boldsymbol {E}}} and magnetic field {\displaystyle {\boldsymbol {B}}} : {\displaystyle {\begin{aligned}{\mathcal {E}}&=\oint _{C}\left[{\boldsymbol {E}}+{\boldsymbol {v}}\times {\boldsymbol {B}}\right]\cdot \mathrm {d} {\boldsymbol {\ell }}\\&\qquad +{\frac {1}{q}}\oint _{C}\mathrm {Effective\ chemical\ forces\ \cdot } \ \mathrm {d} {\boldsymbol {\ell }}\\&\qquad \qquad +{\frac {1}{q}}\oint _{C}\mathrm {Effective\ thermal\ forces\ \cdot } \ \mathrm {d} {\boldsymbol {\ell }}\ ,\end{aligned}}} which is mainly a conceptual equation, because the determination of the "effective forces" is difficult. The term {\displaystyle \oint _{C}\left[{\boldsymbol {E}}+{\boldsymbol {v}}\times {\boldsymbol {B}}\right]\cdot \mathrm {d} {\boldsymbol {\ell }}} is often called a "motional emf". 
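Treating the inductor's L dI/dt as a load element in Kirchhoff's voltage law can be sketched for a series R–L circuit driven by a constant source emf; the component values below are illustrative assumptions:

```python
import math

# Kirchhoff's voltage law with the inductor as a load element:
# for a series R-L circuit, ε = I·R + L·dI/dt.  Values are assumed.
EMF = 12.0  # source emf, V
R = 6.0     # resistance, Ω
L = 0.5     # inductance, H

def simulate(t_end, dt=1e-5):
    """Forward-Euler integration of dI/dt = (EMF - I*R) / L from I(0) = 0."""
    i, t = 0.0, 0.0
    while t < t_end:
        i += dt * (EMF - i * R) / L
        t += dt
    return i

t = 0.2  # seconds
i_analytic = (EMF / R) * (1.0 - math.exp(-R * t / L))  # textbook solution
print(simulate(t), i_analytic)  # both ≈ 1.82 A
```

The numerical and closed-form currents agree, confirming that the emf equals the sum of the resistive drop and the inductive "voltage drop" at every instant.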
== In (electrochemical) thermodynamics == When multiplied by an amount of charge d Q {\displaystyle dQ} the emf E {\displaystyle {\mathcal {E}}} yields a thermodynamic work term E d Q {\displaystyle {\mathcal {E}}\,dQ} that is used in the formalism for the change in Gibbs energy when charge is passed in a battery: d G = − S d T + V d P + E d Q , {\displaystyle dG=-S\,dT+V\,dP+{\mathcal {E}}\,dQ\ ,} where G {\displaystyle G} is the Gibbs free energy, S {\displaystyle S} is the entropy, V {\displaystyle V} is the system volume, P {\displaystyle P} is its pressure and T {\displaystyle T} is its absolute temperature. The combination ( E , Q ) {\displaystyle ({\mathcal {E}},Q)} is an example of a conjugate pair of variables. At constant pressure the above relationship produces a Maxwell relation that links the change in open cell voltage with temperature T {\displaystyle T} (a measurable quantity) to the change in entropy S {\displaystyle S} when charge is passed isothermally and isobarically. The latter is closely related to the reaction entropy of the electrochemical reaction that lends the battery its power. This Maxwell relation is: ( ∂ E ∂ T ) Q = − ( ∂ S ∂ Q ) T {\displaystyle \left({\frac {\partial {\mathcal {E}}}{\partial T}}\right)_{Q}=-\left({\frac {\partial S}{\partial Q}}\right)_{T}} If a mole of ions goes into solution (for example, in a Daniell cell, as discussed below) the charge through the external circuit is: Δ Q = − n 0 F 0 , {\displaystyle \Delta Q=-n_{0}F_{0}\ ,} where n 0 {\displaystyle n_{0}} is the number of electrons/ion, and F 0 {\displaystyle F_{0}} is the Faraday constant and the minus sign indicates discharge of the cell. 
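A numerical sketch of these relations, using n0 = 2 electrons per zinc ion and, as assumed inputs, the Daniell-cell values quoted later in the article (ℰ = 1.0934 V at 298 K, dℰ/dT = −4.53×10⁻⁴ V/K); the Gibbs-energy and enthalpy expressions used are the ones given in the following paragraph:

```python
# Check of the cell-thermodynamics relations, with Daniell-cell numbers
# taken as assumptions from elsewhere in the article.
F0 = 96485.0        # Faraday constant, C/mol
N0 = 2              # electrons per ion (Zn2+)
EMF = 1.0934        # cell emf at 298 K, V
DEMF_DT = -4.53e-4  # temperature coefficient, V/K
T = 298.0           # K

delta_Q = -N0 * F0                         # charge through the circuit per mole, C
delta_G = -N0 * F0 * EMF                   # Gibbs energy change, J/mol
delta_H = -N0 * F0 * (EMF - T * DEMF_DT)   # enthalpy of reaction, J/mol

print(f"ΔG ≈ {delta_G / 1000:.0f} kJ/mol")  # about -211 kJ/mol
print(f"ΔH ≈ {delta_H / 1000:.0f} kJ/mol")  # about -237 kJ/mol
```

The computed ΔG is close to the ≈213 kJ of electrical energy per mole of zinc quoted earlier for this cell.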
Assuming constant pressure and volume, the thermodynamic properties of the cell are related strictly to the behavior of its emf by: Δ H = − n 0 F 0 ( E − T d E d T ) , {\displaystyle \Delta H=-n_{0}F_{0}\left({\mathcal {E}}-T{\frac {d{\mathcal {E}}}{dT}}\right)\,,} where Δ H {\displaystyle \Delta H} is the enthalpy of reaction. The quantities on the right are all directly measurable. Assuming constant temperature and pressure: Δ G = − n 0 F 0 E {\displaystyle \Delta G=-n_{0}F_{0}{\mathcal {E}}} which is used in the derivation of the Nernst equation. == Distinction with potential difference == Although an electrical potential difference (voltage) is sometimes called an emf, they are formally distinct concepts: Potential difference is a more general term that includes emf. Emf is the cause of a potential difference. In a circuit of a voltage source and a resistor, the sum of the source's applied voltage plus the ohmic voltage drop through the resistor is zero. But the resistor provides no emf, only the voltage source does: For a circuit using a battery source, the emf is due solely to the chemical forces in the battery. For a circuit using an electric generator, the emf is due solely to time-varying magnetic forces within the generator. Both a 1 volt emf and a 1 volt potential difference correspond to 1 joule per coulomb of charge. In the case of an open circuit, the electric charge that has been separated by the mechanism generating the emf creates an electric field opposing the separation mechanism. For example, the chemical reaction in a voltaic cell stops when the opposing electric field at each electrode is strong enough to arrest the reactions. A larger opposing field can reverse the reactions in what are called reversible cells. The electric charge that has been separated creates an electric potential difference that can (in many cases) be measured with a voltmeter between the terminals of the device, when not connected to a load. 
The magnitude of the emf for the battery (or other source) is the value of this open-circuit voltage. When the battery is charging or discharging, the emf itself cannot be measured directly using the external voltage because some voltage is lost inside the source. It can, however, be inferred from a measurement of the current {\displaystyle I} and potential difference {\displaystyle V} , provided that the internal resistance {\displaystyle R} already has been measured: E = V load + I R . {\displaystyle {\mathcal {E}}=V_{\text{load}}+IR\,.} "Potential difference" is not the same as "induced emf" (often called "induced voltage"). The potential difference (difference in the electric scalar potential) between two points A and B is independent of the path we take from A to B. If a voltmeter always measured the potential difference between A and B, then the position of the voltmeter would make no difference. However, it is quite possible for the measurement by a voltmeter between points A and B to depend on the position of the voltmeter, if a time-dependent magnetic field is present. For example, consider an infinitely long solenoid using an AC current to generate a varying flux in the interior of the solenoid. Outside the solenoid we have two resistors connected in a ring around the solenoid. The resistor on the left is 100 Ω and the one on the right is 200 Ω; they are connected at the top and bottom at points A and B. The induced voltage, by Faraday's law, is {\displaystyle V} , so the current {\displaystyle I=V/(100+200).} Therefore, the voltage across the 100 Ω resistor is {\displaystyle 100\ I} and the voltage across the 200 Ω resistor is {\displaystyle 200\ I} . Although the two resistors are connected at both ends, {\displaystyle V_{AB}} measured with the voltmeter to the left of the solenoid is not the same as {\displaystyle V_{AB}} measured with the voltmeter to the right of the solenoid. 
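The two-voltmeter situation above can be made concrete with a short sketch; the value of the induced emf is an assumption chosen for round numbers:

```python
# Two-resistor ring around a solenoid, with an assumed induced emf of 3 V
# around the loop.  The same terminal pair A, B yields different voltmeter
# readings depending on which side of the solenoid the meter sits.
EMF = 3.0                        # induced emf around the loop, V (assumed)
R_LEFT, R_RIGHT = 100.0, 200.0   # resistances, Ω

I = EMF / (R_LEFT + R_RIGHT)     # current around the ring, here 0.01 A

v_left = R_LEFT * I    # reading of a voltmeter left of the solenoid, ≈ 1 V
v_right = R_RIGHT * I  # reading of a voltmeter right of the solenoid, ≈ 2 V

# Both meters connect to the same points A and B, yet they disagree:
# with dΦ/dt ≠ 0, the line integral of E between A and B is path-dependent.
print(v_left, v_right)
```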
== Generation == === Chemical sources === The question of how batteries (galvanic cells) generate an emf occupied scientists for most of the 19th century. The "seat of the electromotive force" was eventually determined in 1889 by Walther Nernst to be primarily at the interfaces between the electrodes and the electrolyte. Atoms in molecules or solids are held together by chemical bonding, which stabilizes the molecule or solid (i.e. reduces its energy). When molecules or solids of relatively high energy are brought together, a spontaneous chemical reaction can occur that rearranges the bonding and reduces the (free) energy of the system. In batteries, coupled half-reactions, often involving metals and their ions, occur in tandem, with a gain of electrons (termed "reduction") by one conductive electrode and loss of electrons (termed "oxidation") by another (reduction-oxidation or redox reactions). The spontaneous overall reaction can only occur if electrons move through an external wire between the electrodes. The electrical energy given off is the free energy lost by the chemical reaction system. As an example, a Daniell cell consists of a zinc anode (an electron collector) that is oxidized as it dissolves into a zinc sulfate solution. The dissolving zinc leaves behind its electrons in the electrode according to the oxidation reaction (s = solid electrode; aq = aqueous solution): {\displaystyle \mathrm {Zn_{(s)}\rightarrow Zn_{(aq)}^{2+}+2e^{-}\ } } The zinc sulfate is the electrolyte in that half cell. It is a solution which contains zinc cations {\displaystyle \mathrm {Zn} ^{2+}} and sulfate anions {\displaystyle \mathrm {SO} _{4}^{2-}} with charges that balance to zero. 
In the other half cell, the copper cations in a copper sulfate electrolyte move to the copper cathode to which they attach themselves as they adopt electrons from the copper electrode by the reduction reaction: {\displaystyle \mathrm {Cu_{(aq)}^{2+}+2e^{-}\rightarrow Cu_{(s)}\ } } which leaves a deficit of electrons on the copper cathode. The difference of excess electrons on the anode and deficit of electrons on the cathode creates an electrical potential between the two electrodes. (A detailed discussion of the microscopic process of electron transfer between an electrode and the ions in an electrolyte may be found in Conway.) The electrical energy released by this reaction (213 kJ per 65.4 g of zinc) can be attributed mostly to the 207 kJ weaker bonding (smaller magnitude of the cohesive energy) of zinc, which has filled 3d- and 4s-orbitals, compared to copper, which has an unfilled orbital available for bonding. If the cathode and anode are connected by an external conductor, electrons pass through that external circuit (light bulb in figure), while ions pass through the salt bridge to maintain charge balance until the anode and cathode reach electrical equilibrium of zero volts as chemical equilibrium is reached in the cell. In the process the zinc anode is dissolved while the copper electrode is plated with copper. The salt bridge has to close the electrical circuit while preventing the copper ions from moving to the zinc electrode and being reduced there without generating an external current. It is not made of salt but of material able to wick cations and anions (a dissociated salt) into the solutions. The flow of positively charged cations along the bridge is equivalent to the same number of negative charges flowing in the opposite direction. If the light bulb is removed (open circuit) the emf between the electrodes is opposed by the electric field due to the charge separation, and the reactions stop. 
For this particular cell chemistry, at 298 K (room temperature), the emf E {\displaystyle {\mathcal {E}}} = 1.0934 V, with a temperature coefficient of d E / d T {\displaystyle d{\mathcal {E}}/dT} = −4.53×10−4 V/K. ==== Voltaic cells ==== Volta developed the voltaic cell about 1792, and presented his work March 20, 1800. Volta correctly identified the role of dissimilar electrodes in producing the voltage, but incorrectly dismissed any role for the electrolyte. Volta ordered the metals in a 'tension series', "that is to say in an order such that any one in the list becomes positive when in contact with any one that succeeds, but negative by contact with any one that precedes it." A typical symbolic convention in a schematic of this circuit ( –||– ) would have a long electrode 1 and a short electrode 2, to indicate that electrode 1 dominates. Volta's law about opposing electrode emfs implies that, given ten electrodes (for example, zinc and nine other materials), 45 unique combinations of voltaic cells (10 × 9/2) can be created. ==== Typical values ==== The electromotive force produced by primary (single-use) and secondary (rechargeable) cells is usually of the order of a few volts. The figures quoted below are nominal, because emf varies according to the size of the load and the state of exhaustion of the cell. ==== Other chemical sources ==== Other chemical sources include fuel cells. === Electromagnetic induction === Electromagnetic induction is the production of a circulating electric field by a time-dependent magnetic field. A time-dependent magnetic field can be produced either by motion of a magnet relative to a circuit, by motion of a circuit relative to another circuit (at least one of these must be carrying an electric current), or by changing the electric current in a fixed circuit. The effect on the circuit itself, of changing the electric current, is known as self-induction; the effect on another circuit is known as mutual induction. 
For a given circuit, the electromagnetically induced emf is determined purely by the rate of change of the magnetic flux through the circuit according to Faraday's law of induction. An emf is induced in a coil or conductor whenever there is change in the flux linkages. Depending on the way in which the changes are brought about, there are two types: When the conductor is moved in a stationary magnetic field to procure a change in the flux linkage, the emf is statically induced. The electromotive force generated by motion is often referred to as motional emf. When the change in flux linkage arises from a change in the magnetic field around the stationary conductor, the emf is dynamically induced. The electromotive force generated by a time-varying magnetic field is often referred to as transformer emf. === Contact potentials === When solids of two different materials are in contact, thermodynamic equilibrium requires that one of the solids assume a higher electrical potential than the other. This is called the contact potential. Dissimilar metals in contact produce what is known also as a contact electromotive force or Galvani potential. The magnitude of this potential difference is often expressed as a difference in Fermi levels in the two solids when they are at charge neutrality, where the Fermi level (a name for the chemical potential of an electron system) describes the energy necessary to remove an electron from the body to some common point (such as ground). If there is an energy advantage in taking an electron from one body to the other, such a transfer will occur. The transfer causes a charge separation, with one body gaining electrons and the other losing electrons. This charge transfer causes a potential difference between the bodies, which partly cancels the potential originating from the contact, and eventually equilibrium is reached. 
At thermodynamic equilibrium, the Fermi levels are equal (the electron removal energy is identical) and there is now a built-in electrostatic potential between the bodies. The original difference in Fermi levels, before contact, is referred to as the emf. The contact potential cannot drive steady current through a load attached to its terminals because that current would involve a charge transfer. No mechanism exists to continue such transfer and, hence, maintain a current, once equilibrium is attained. One might inquire why the contact potential does not appear in Kirchhoff's law of voltages as one contribution to the sum of potential drops. The customary answer is that any circuit involves not only a particular diode or junction, but also all the contact potentials due to wiring and so forth around the entire circuit. The sum of all the contact potentials is zero, and so they may be ignored in Kirchhoff's law. === Solar cell === Operation of a solar cell can be understood from its equivalent circuit. Photons with energy greater than the bandgap of the semiconductor create mobile electron–hole pairs. Charge separation occurs because of a pre-existing electric field associated with the p-n junction. This electric field is created from a built-in potential, which arises from the contact potential between the two different materials in the junction. The charge separation between positive holes and negative electrons across the p–n diode yields a forward voltage, the photo voltage, between the illuminated diode terminals, which drives current through any attached load. Photo voltage is sometimes referred to as the photo emf, distinguishing between the effect and the cause. ==== Solar cell current–voltage relationship ==== Two internal current losses I S H + I D {\displaystyle I_{SH}+I_{D}} limit the total current I {\displaystyle I} available to the external circuit. 
The light-induced charge separation eventually creates a forward current I S H {\displaystyle I_{SH}} through the cell's internal resistance R S H {\displaystyle R_{SH}} in the direction opposite the light-induced current I L {\displaystyle I_{L}} . In addition, the induced voltage tends to forward bias the junction, which at high enough voltages will cause a recombination current I D {\displaystyle I_{D}} in the diode opposite the light-induced current. When the output is short-circuited, the output voltage is zeroed, and so the voltage across the diode is smallest. Thus, short-circuiting results in the smallest I S H + I D {\displaystyle I_{SH}+I_{D}} losses and consequently the maximum output current, which for a high-quality solar cell is approximately equal to the light-induced current I L {\displaystyle I_{L}} . Approximately this same current is obtained for forward voltages up to the point where the diode conduction becomes significant. The current delivered by the illuminated diode to the external circuit can be simplified (based on certain assumptions) to: I = I L − I 0 ( e V m V T − 1 ) . {\displaystyle I=I_{L}-I_{0}\left(e^{\frac {V}{m\ V_{\mathrm {T} }}}-1\right)\ .} I 0 {\displaystyle I_{0}} is the reverse saturation current. Two parameters that depend on the solar cell construction and to some degree upon the voltage itself are the ideality factor m and the thermal voltage V T = k T q {\displaystyle V_{\mathrm {T} }={\tfrac {kT}{q}}} , which is about 26 millivolts at room temperature. ==== Solar cell photo emf ==== Solving the illuminated diode's above simplified current–voltage relationship for output voltage yields: V = m V T ln ⁡ ( I L − I I 0 + 1 ) , {\displaystyle V=m\ V_{\mathrm {T} }\ln \left({\frac {I_{\text{L}}-I}{I_{0}}}+1\right)\ ,} which is plotted against I / I 0 {\displaystyle I/I_{0}} in the figure. 
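The simplified current–voltage relation and its inversion can be sketched as follows; all parameter values here are illustrative assumptions, not taken from the article:

```python
import math

# Simplified illuminated-diode relation from the text,
#   I = I_L - I_0·(exp(V / (m·V_T)) - 1),
# and its inversion for the output voltage.  Parameter values are assumed.
V_T = 0.026   # thermal voltage kT/q at room temperature, ≈ 26 mV
M = 1.5       # ideality factor (assumed)
I_L = 3.0     # light-induced current, A (assumed)
I_0 = 1e-9    # reverse saturation current, A (assumed)

def current(v):
    """Output current of the illuminated diode at output voltage v."""
    return I_L - I_0 * (math.exp(v / (M * V_T)) - 1.0)

def voltage(i):
    """Output voltage at output current i (inverse of current())."""
    return M * V_T * math.log((I_L - i) / I_0 + 1.0)

# Photo emf = open-circuit voltage, obtained by zeroing the output current.
v_oc = voltage(0.0)
print(f"V_oc = {v_oc:.3f} V")  # grows only logarithmically with I_L/I_0

# The two relations are mutual inverses:
print(abs(current(voltage(1.0)) - 1.0) < 1e-9)
```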
The solar cell's photo emf {\displaystyle {\mathcal {E}}_{\mathrm {photo} }} has the same value as the open-circuit voltage {\displaystyle V_{oc}} , which is determined by zeroing the output current {\displaystyle I} : {\displaystyle {\mathcal {E}}_{\mathrm {photo} }=V_{\text{oc}}=m\ V_{\mathrm {T} }\ln \left({\frac {I_{\text{L}}}{I_{0}}}+1\right)\ .} It has a logarithmic dependence on the light-induced current {\displaystyle I_{L}} and is where the junction's forward bias voltage is just enough that the forward current completely balances the light-induced current. For silicon junctions, it is typically not much more than 0.5 volts, while for high-quality silicon panels it can exceed 0.7 volts in direct sunlight. When driving a resistive load, the output voltage can be determined using Ohm's law and will lie between the short-circuit value of zero volts and the open-circuit voltage {\displaystyle V_{oc}} . When that resistance is small enough such that {\displaystyle I\approx I_{L}} (the near-vertical part of the two illustrated curves), the solar cell acts more like a current generator than a voltage generator, since the current drawn is nearly fixed over a range of output voltages. This contrasts with batteries, which act more like voltage generators. === Other sources that generate emf === A transformer coupling two circuits may be considered a source of emf for one of the circuits, just as if it were caused by an electrical generator; this is the origin of the term "transformer emf". For converting sound waves into voltage signals: a microphone generates an emf from a moving diaphragm. a magnetic pickup generates an emf from a varying magnetic field produced by an instrument. a piezoelectric sensor generates an emf from strain on a piezoelectric crystal. Devices that use temperature to produce emfs include thermocouples and thermopiles. 
Any electrical transducer that converts some other form of physical energy into electrical energy can serve as a source of emf. == See also == Counter-electromotive force Electric battery Electrochemical cell Electrolytic cell Galvanic cell Magnetomotive force Voltaic pile == References == == Further reading == George F. Barker, "On the measurement of electromotive force". Proceedings of the American Philosophical Society Held at Philadelphia for Promoting Useful Knowledge, American Philosophical Society. January 19, 1883. Andrew Gray, "Absolute Measurements in Electricity and Magnetism", Electromotive force. Macmillan and co., 1884. Charles Albert Perkins, "Outlines of Electricity and Magnetism", Measurement of Electromotive Force. Henry Holt and co., 1896. John Livingston Rutgers Morgan, "The Elements of Physical Chemistry", Electromotive force. J. Wiley, 1899. "Abhandlungen zur Thermodynamik, von H. Helmholtz. Hrsg. von Max Planck". (Tr. "Papers to thermodynamics, on H. Helmholtz. Hrsg. by Max Planck".) Leipzig, W. Engelmann, Of Ostwald classical author of the accurate sciences series. New consequence. No. 124, 1902. Theodore William Richards and Gustavus Edward Behr, jr., "The electromotive force of iron under varying conditions, and the effect of occluded hydrogen". Carnegie Institution of Washington publication series, 1906. LCCN 07-3935 Henry S. Carhart, "Thermo-electromotive force in electric cells, the thermo-electromotive force between a metal and a solution of one of its salts". New York, D. Van Nostrand company, 1920. LCCN 20-20413 Hazel Rossotti, "Chemical applications of potentiometry". London, Princeton, N.J., Van Nostrand, 1969. ISBN 0-442-07048-9 LCCN 69-11985 Nabendu S. Choudhury, 1973. "Electromotive force measurements on cells involving beta-alumina solid electrolyte". NASA technical note, D-7322. John O'M. Bockris; Amulya K. N. Reddy (1973). "Electrodics". Modern Electrochemistry: An Introduction to an Interdisciplinary Area (2 ed.). Springer. ISBN 978-0-306-25002-6. Roberts, Dana (1983). 
"How batteries work: A gravitational analog". Am. J. Phys. 51 (9): 829. Bibcode:1983AmJPh..51..829R. doi:10.1119/1.13128. G. W. Burns, et al., "Temperature-electromotive force reference functions and tables for the letter-designated thermocouple types based on the ITS-90". Gaithersburg, MD : U.S. Dept. of Commerce, National Institute of Standards and Technology, Washington, Supt. of Docs., U.S. G.P.O., 1993. Norio Sato (1998). "Semiconductor photoelectrodes". Electrochemistry at metal and semiconductor electrodes (2nd ed.). Elsevier. p. 326 ff. ISBN 978-0-444-82806-4. Hai, Pham Nam; Ohya, Shinobu; Tanaka, Masaaki; Barnes, Stewart E.; Maekawa, Sadamichi (2009-03-08). "Electromotive force and huge magnetoresistance in magnetic tunnel junctions". Nature. 458 (7237): 489–92. Bibcode:2009Natur.458..489H. doi:10.1038/nature07879. PMID 19270681. S2CID 4320209.
Wikipedia/Electromotive_force
In mathematical physics, the Whitham equation is a non-local model for non-linear dispersive waves. The equation is notated as follows: ∂ η ∂ t + α η ∂ η ∂ x + ∫ − ∞ + ∞ K ( x − ξ ) ∂ η ( ξ , t ) ∂ ξ d ξ = 0 , {\displaystyle {\frac {\partial \eta }{\partial t}}+\alpha \,\eta \,{\frac {\partial \eta }{\partial x}}+\int _{-\infty }^{+\infty }K(x-\xi )\,{\frac {\partial \eta (\xi ,t)}{\partial \xi }}\,{\text{d}}\xi =0,} This integro-differential equation for the oscillatory variable η(x,t) is named after Gerald Whitham, who introduced it in 1967 as a model to study the breaking of non-linear dispersive water waves. Wave breaking – bounded solutions with unbounded derivatives – for the Whitham equation has recently been proven. For a certain choice of the kernel K(x − ξ) it becomes the Fornberg–Whitham equation. == Water waves == The kernel K is determined by the phase speed c(k) of linear waves, using the Fourier transform (and its inverse) with respect to the space coordinate x and in terms of the wavenumber k. For surface gravity waves, the phase speed c(k) as a function of wavenumber k is taken as: c ww ( k ) = g k tanh ⁡ ( k h ) , {\displaystyle c_{\text{ww}}(k)={\sqrt {{\frac {g}{k}}\,\tanh(kh)}},} while α ww = 3 2 g h , {\displaystyle \alpha _{\text{ww}}={\frac {3}{2}}{\sqrt {\frac {g}{h}}},} with g the gravitational acceleration and h the mean water depth. The associated kernel Kww(s) is, using the inverse Fourier transform: K ww ( s ) = 1 2 π ∫ − ∞ + ∞ c ww ( k ) e i k s d k = 1 2 π ∫ − ∞ + ∞ c ww ( k ) cos ⁡ ( k s ) d k , {\displaystyle K_{\text{ww}}(s)={\frac {1}{2\pi }}\int _{-\infty }^{+\infty }c_{\text{ww}}(k)\,{\text{e}}^{iks}\,{\text{d}}k={\frac {1}{2\pi }}\int _{-\infty }^{+\infty }c_{\text{ww}}(k)\,\cos(ks)\,{\text{d}}k,} since cww is an even function of the wavenumber k. 
The Korteweg–de Vries equation (KdV equation) emerges when retaining the first two terms of a series expansion of cww(k) for long waves with kh ≪ 1: c kdv ( k ) = g h ( 1 − 1 6 k 2 h 2 ) , {\displaystyle c_{\text{kdv}}(k)={\sqrt {gh}}\left(1-{\frac {1}{6}}k^{2}h^{2}\right),} K kdv ( s ) = g h ( δ ( s ) + 1 6 h 2 δ ′ ′ ( s ) ) , {\displaystyle K_{\text{kdv}}(s)={\sqrt {gh}}\left(\delta (s)+{\frac {1}{6}}h^{2}\,\delta ^{\prime \prime }(s)\right),} α kdv = 3 2 g h , {\displaystyle \alpha _{\text{kdv}}={\frac {3}{2}}{\sqrt {\frac {g}{h}}},} with δ(s) the Dirac delta function. Bengt Fornberg and Gerald Whitham studied the kernel Kfw(s) – non-dimensionalised using g and h: K fw ( s ) = 1 2 ν e − ν | s | {\displaystyle K_{\text{fw}}(s)={\frac {1}{2}}\nu {\text{e}}^{-\nu |s|}} and c fw = ν 2 ν 2 + k 2 , {\displaystyle c_{\text{fw}}={\frac {\nu ^{2}}{\nu ^{2}+k^{2}}},} with α fw = 3 2 . {\displaystyle \alpha _{\text{fw}}={\frac {3}{2}}.} The resulting integro-differential equation can be reduced to the partial differential equation known as the Fornberg–Whitham equation: ( ∂ 2 ∂ x 2 − ν 2 ) ( ∂ η ∂ t + 3 2 η ∂ η ∂ x ) + ∂ η ∂ x = 0. {\displaystyle \left({\frac {\partial ^{2}}{\partial x^{2}}}-\nu ^{2}\right)\left({\frac {\partial \eta }{\partial t}}+{\frac {3}{2}}\,\eta \,{\frac {\partial \eta }{\partial x}}\right)+{\frac {\partial \eta }{\partial x}}=0.} This equation is shown to allow for peakon solutions – as a model for waves of limiting height – as well as the occurrence of wave breaking (shock waves, absent in e.g. solutions of the Korteweg–de Vries equation). == Notes and references == === Notes === === References ===
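The accuracy of this long-wave truncation can be illustrated numerically. The following sketch (not part of the original references; the values of g and h are illustrative) compares the full water-wave phase speed with its KdV approximation:

```python
import math

def c_ww(k, g=9.81, h=1.0):
    # Full water-wave phase speed: c = sqrt((g/k) * tanh(k h))
    return math.sqrt(g / k * math.tanh(k * h))

def c_kdv(k, g=9.81, h=1.0):
    # KdV approximation: first two terms of the long-wave expansion,
    # c = sqrt(g h) * (1 - (k h)^2 / 6)
    return math.sqrt(g * h) * (1.0 - k**2 * h**2 / 6.0)

# For long waves (k h << 1) the two phase speeds agree closely;
# for shorter waves the KdV parabola departs from the full dispersion.
for k in (0.01, 0.1, 1.0):
    print(k, c_ww(k), c_kdv(k))
```

For kh = 0.01 the two speeds agree to better than a part in a thousand, while at kh = 1 the KdV value is already noticeably too low, which is why the Whitham equation retains the full dispersion relation.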
Wikipedia/Whitham_equation
In computational neuroscience, the Wilson–Cowan model describes the dynamics of interactions between populations of very simple excitatory and inhibitory model neurons. It was developed by Hugh R. Wilson and Jack D. Cowan, and extensions of the model have been widely used in modeling neuronal populations. The model is important historically because it uses phase plane methods and numerical solutions to describe the responses of neuronal populations to stimuli. Because the model neurons are simple, only elementary limit cycle behavior, i.e. neural oscillations, and stimulus-dependent evoked responses are predicted. The key findings include the existence of multiple stable states, and hysteresis, in the population response. == Mathematical description == The Wilson–Cowan model considers a homogeneous population of interconnected neurons of excitatory and inhibitory subtypes. All cells receive the same number of excitatory and inhibitory afferents, that is, all cells receive the same average excitation, x(t). The aim is to analyze the evolution in time of the numbers of excitatory and inhibitory cells firing at time t, E ( t ) {\displaystyle E(t)} and I ( t ) {\displaystyle I(t)} respectively. 
The equations that describe this evolution are the Wilson–Cowan model: E ( t + τ ) = [ 1 − ∫ t − r t E ( t ′ ) d t ′ ] S e ( ∫ − ∞ t α ( t − t ′ ) [ c 1 E ( t ′ ) − c 2 I ( t ′ ) + P ( t ′ ) ] d t ′ ) {\displaystyle E(t+\tau )=\left[1-\int _{t-r}^{t}E(t')dt'\right]\;S_{e}\left(\int _{-\infty }^{t}\alpha (t-t')[c_{1}E(t')-c_{2}I(t')+P(t')]dt'\right)} I ( t + τ ) = [ 1 − ∫ t − r t I ( t ′ ) d t ′ ] S i ( ∫ − ∞ t α ( t − t ′ ) [ c 3 E ( t ′ ) − c 4 I ( t ′ ) + Q ( t ′ ) ] d t ′ ) {\displaystyle I(t+\tau )=\left[1-\int _{t-r}^{t}I(t')dt'\right]\;S_{i}\left(\int _{-\infty }^{t}\alpha (t-t')[c_{3}E(t')-c_{4}I(t')+Q(t')]dt'\right)} where: S e { } {\displaystyle S_{e}\{\}} and S i { } {\displaystyle S_{i}\{\}} are functions of sigmoid form that depend on the distribution of the trigger thresholds (see below) α ( t ) {\displaystyle \alpha (t)} is the stimulus decay function c 1 {\displaystyle c_{1}} and c 2 {\displaystyle c_{2}} are the connectivity coefficients giving, respectively, the average number of excitatory and inhibitory synapses per excitatory cell; c 3 {\displaystyle c_{3}} and c 4 {\displaystyle c_{4}} are their counterparts for inhibitory cells P ( t ) {\displaystyle P(t)} and Q ( t ) {\displaystyle Q(t)} are the external inputs to the excitatory/inhibitory populations. If θ {\displaystyle \theta } denotes a cell's threshold potential and D ( θ ) {\displaystyle D(\theta )} is the distribution of thresholds over all cells, then the expected proportion of neurons receiving an excitation at or above threshold level per unit time is: S ( x ) = ∫ 0 x D ( θ ) d θ {\displaystyle S(x)=\int _{0}^{x}D(\theta )d\theta } , which is a function of sigmoid form if D ( ) {\displaystyle D()} is unimodal. 
If, instead of all cells receiving the same excitatory input with different thresholds, we consider that all cells have the same threshold but a different number of afferent synapses per cell, with C ( w ) {\displaystyle C(w)} the distribution of the number of afferent synapses, a variant of the function S ( ) {\displaystyle S()} must be used: S ( x ) = ∫ θ x ∞ C ( w ) d w {\displaystyle S(x)=\int _{\frac {\theta }{x}}^{\infty }C(w)dw} === Derivation of the model === If we denote by r {\displaystyle r} the refractory period after a trigger, the proportion of cells in the refractory period is ∫ t − r t E ( t ′ ) d t ′ {\displaystyle \int _{t-r}^{t}E(t')dt'} and the proportion of sensitive (able to trigger) cells is 1 − ∫ t − r t E ( t ′ ) d t ′ {\displaystyle 1-\int _{t-r}^{t}E(t')dt'} . The average excitation level of an excitatory cell at time t {\displaystyle t} is: x ( t ) = ∫ − ∞ t α ( t − t ′ ) [ c 1 E ( t ′ ) − c 2 I ( t ′ ) + P ( t ′ ) ] d t ′ {\displaystyle x(t)=\int _{-\infty }^{t}\alpha (t-t')[c_{1}E(t')-c_{2}I(t')+P(t')]dt'} Thus, the proportion of cells firing at time t + τ {\displaystyle t+\tau } , E ( t + τ ) {\displaystyle E(t+\tau )} , is the product of the proportion of cells not in the refractory interval, 1 − ∫ t − r t E ( t ′ ) d t ′ {\displaystyle 1-\int _{t-r}^{t}E(t')dt'} , and the proportion that have reached the excitatory level, S e ( x ( t ) ) {\displaystyle S_{e}(x(t))} , assuming these terms are uncorrelated; this yields the product on the right-hand side of the first equation of the model. The same rationale applies to inhibitory cells, yielding the second equation. 
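The statement that a unimodal threshold distribution D(θ) yields a sigmoid-shaped S(x) can be illustrated numerically. The sketch below assumes a Gaussian D(θ) with an illustrative mean and width; the trapezoidal quadrature and parameter values are not taken from the original papers:

```python
import math

def D(theta, mean=1.0, sd=0.3):
    # Assumed unimodal (Gaussian) distribution of trigger thresholds.
    return math.exp(-((theta - mean) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))

def S(x, n=2000):
    # S(x) = integral of D(theta) d(theta) from 0 to x, trapezoidal rule.
    if x <= 0:
        return 0.0
    h = x / n
    total = 0.5 * (D(0.0) + D(x))
    for i in range(1, n):
        total += D(i * h)
    return total * h

# S rises from near 0, through roughly 1/2 at the mode of D, towards 1:
# the sigmoid form stated in the text.
vals = [S(x) for x in (0.2, 1.0, 2.5)]
print(vals)
```

Evaluating S below, at, and above the mode of D shows the characteristic S-shape: small, about one half, and close to one, respectively.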
=== Simplification of the model assuming time coarse graining === Under the assumption of time coarse graining, the model simplifies; the new equations are: τ d E ¯ d t = − E ¯ + ( 1 − r E ¯ ) S e [ k c 1 E ¯ ( t ) − k c 2 I ¯ ( t ) + k P ( t ) ] {\displaystyle \tau {\frac {d{\bar {E}}}{dt}}=-{\bar {E}}+(1-r{\bar {E}})S_{e}[kc_{1}{\bar {E}}(t)-kc_{2}{\bar {I}}(t)+kP(t)]} τ ′ d I ¯ d t = − I ¯ + ( 1 − r ′ I ¯ ) S i [ k ′ c 3 E ¯ ( t ) − k ′ c 4 I ¯ ( t ) + k ′ Q ( t ) ] {\displaystyle \tau '{\frac {d{\bar {I}}}{dt}}=-{\bar {I}}+(1-r'{\bar {I}})S_{i}[k'c_{3}{\bar {E}}(t)-k'c_{4}{\bar {I}}(t)+k'Q(t)]} where the barred terms are the time coarse-grained versions of the original ones. == Application to epilepsy == The determination of three concepts is fundamental to an understanding of hypersynchronization of neurophysiological activity at the global (system) level: The mechanism by which normal (baseline) neurophysiological activity evolves into hypersynchronization of large regions of the brain during epileptic seizures The key factors that govern the rate of expansion of hypersynchronized regions The large-scale dynamics of electrophysiological activity patterns A canonical analysis of these issues, developed in 2008 by Shusterman and Troy using the Wilson–Cowan model, predicts qualitative and quantitative features of epileptiform activity. In particular, it accurately predicts the propagation speed of epileptic seizures (which is approximately 4–7 times slower than normal brain wave activity) in a human subject with chronically implanted electroencephalographic electrodes. === Transition into hypersynchronization === The transition from the normal state of brain activity to epileptic seizures was not formulated theoretically until 2008, when a theoretical path from a baseline state to large-scale self-sustained oscillations, which spread out uniformly from the point of stimulus, was mapped for the first time. 
A realistic state of baseline physiological activity has been defined, using the following two-component definition: (1) A time-independent component represented by subthreshold excitatory activity E and superthreshold inhibitory activity I. (2) A time-varying component which may include single-pulse waves, multipulse waves, or periodic waves caused by spontaneous neuronal activity. This baseline state represents activity of the brain in the state of relaxation, in which neurons receive some level of spontaneous, weak stimulation by small, naturally present concentrations of neurohormonal substances. In waking adults this state is commonly associated with alpha rhythm, whereas slower (theta and delta) rhythms are usually observed during deeper relaxation and sleep. To describe this general setting, a 3-variable ( u , I , v ) {\displaystyle (u,I,v)} spatially dependent extension of the classical Wilson–Cowan model can be utilized. Under appropriate initial conditions, the excitatory component, u, dominates over the inhibitory component, I, and the three-variable system reduces to the two-variable Pinto–Ermentrout type model ∂ u ∂ t = u − v + ∫ R 2 ω ( x − x ′ , y − y ′ ) f ( u − θ ) d x d y + ζ ( x , y , t ) , {\displaystyle {\partial u \over \partial t}=u-v+\int _{R^{2}}\omega (x-x',y-y')f(u-\theta )\,dxdy+\zeta (x,y,t),} ∂ v ∂ t = ϵ ( β u − v ) . {\displaystyle {\partial v \over \partial t}=\epsilon (\beta u-v).} The variable v governs the recovery of excitation u; ϵ > 0 {\displaystyle \epsilon >0} and β > 0 {\displaystyle \beta >0} determine the rate of change of recovery. The connection function ω ( x , y ) {\displaystyle \omega (x,y)} is positive, continuous, symmetric, and has the typical form ω = A e − λ x 2 + y 2 {\displaystyle \omega =Ae^{-\lambda {\sqrt {x^{2}+y^{2}}}}} . In the analysis of Shusterman and Troy, ( A , λ ) = ( 2.1 , 1 ) . 
{\displaystyle (A,\lambda )=(2.1,1).} The firing rate function, which is generally accepted to have a sharply increasing sigmoidal shape, is approximated by f ( u − θ ) = H ( u − θ ) {\displaystyle f(u-\theta )=H(u-\theta )} , where H denotes the Heaviside function; ζ ( x , y , t ) {\displaystyle \zeta (x,y,t)} is a short-time stimulus. This ( u , v ) {\displaystyle (u,v)} system has been successfully used in a wide variety of neuroscience research studies. In particular, it predicted the existence of spiral waves, which can occur during seizures; this theoretical prediction was subsequently confirmed experimentally using optical imaging of slices from the rat cortex. === Rate of expansion === The expansion of hypersynchronized regions exhibiting large-amplitude stable bulk oscillations occurs when the oscillations coexist with the stable rest state ( u , v ) = ( 0 , 0 ) {\displaystyle (u,v)=(0,0)} . To understand the mechanism responsible for the expansion, it is necessary to linearize the ( u , v ) {\displaystyle (u,v)} system around ( 0 , 0 ) {\displaystyle (0,0)} when ϵ > 0 {\displaystyle \epsilon >0} is held fixed. The linearized system exhibits subthreshold decaying oscillations whose frequency increases as β {\displaystyle \beta } increases. At a critical value β ∗ {\displaystyle \beta ^{*}} where the oscillation frequency is high enough, bistability occurs in the ( u , v ) {\displaystyle (u,v)} system: a stable, spatially independent, periodic solution (bulk oscillation) and a stable rest state coexist over a continuous range of parameters. 
When β ≥ β ∗ {\displaystyle \beta \geq \beta ^{*}} where bulk oscillations occur, "The rate of expansion of the hypersynchronization region is determined by an interplay between two key features: (i) the speed c of waves that form and propagate outward from the edge of the region, and (ii) the concave shape of the graph of the activation variable u as it rises, during each bulk oscillation cycle, from the rest state u=0 to the activation threshold. Numerical experiments show that during the rise of u towards threshold, as the rate of vertical increase slows down, over time interval Δ t , {\displaystyle \Delta t,} due to the concave component, the stable solitary wave emanating from the region causes the region to expand spatially at a Rate proportional to the wave speed. From this initial observation it is natural to expect that the proportionality constant should be the fraction of the time that the solution is concave during one cycle." Therefore, when β ≥ β ∗ {\displaystyle \beta \geq \beta ^{*}} , the rate of expansion of the region is estimated by R a t e = ( Δ t / T ) ∗ c ( 1 ) {\displaystyle Rate=(\Delta t/T)*c~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(1)} where Δ t {\displaystyle \Delta t} is the length of subthreshold time interval, T is period of the periodic solution; c is the speed of waves emanating from the hypersynchronization region. A realistic value of c, derived by Wilson et al., is c=22.4 mm/s. How to evaluate the ratio Δ t / T ? {\displaystyle \Delta t/T?} To determine values for Δ t / T {\displaystyle \Delta t/T} it is necessary to analyze the underlying bulk oscillation which satisfies the spatially independent system d u d t = u − v + H ( u − θ ) , {\displaystyle {{du} \over {dt}}=u-v+H(u-\theta ),} d v d t = ϵ ( β u − v ) . 
{\displaystyle {{dv} \over {dt}}=\epsilon (\beta u-v).} This system is derived using standard functions and parameter values ω = 2.1 e − λ x 2 + y 2 {\displaystyle \omega =2.1e^{-\lambda {\sqrt {x^{2}+y^{2}}}}} , ϵ = 0.1 {\displaystyle \epsilon =0.1} and θ = 0.1 {\displaystyle \theta =0.1} Bulk oscillations occur when β ≥ β ∗ = 12.61 {\displaystyle \beta \geq \beta ^{*}=12.61} . When 12.61 ≤ β ≤ 17 {\displaystyle 12.61\leq \beta \leq 17} , Shusterman and Troy analyzed the bulk oscillations and found 0.136 ≤ Δ t / T ≤ 0.238 {\displaystyle 0.136\leq \Delta t/T\leq 0.238} . This gives the range 3.046 m m / s ≤ R a t e ≤ 5.331 m m / s ( 2 ) {\displaystyle 3.046mm/s\leq Rate\leq 5.331mm/s~~~~~~~~~~~~(2)} Since 0.136 ≤ Δ t / T ≤ 0.238 {\displaystyle 0.136\leq \Delta t/T\leq 0.238} , Eq. (1) shows that the migration Rate is a fraction of the traveling wave speed, which is consistent with experimental and clinical observations regarding the slow spread of epileptic activity. This migration mechanism also provides a plausible explanation for the spread and sustenance of epileptiform activity without a driving source that, despite a number of experimental studies, has never been observed. === Comparing theoretical and experimental migration rates === The rate of migration of hypersynchronous activity that was experimentally recorded during seizures in a human subject, using chronically implanted subdural electrodes on the surface of the left temporal lobe, has been estimated as R a t e ≈ 4 m m / s {\displaystyle Rate\approx 4mm/s} , which is consistent with the theoretically predicted range given above in (2). The ratio R a t e / c {\displaystyle Rate/c} in formula (1) shows that the leading edge of the region of synchronous seizure activity migrates approximately 4–7 times more slowly than normal brain wave activity, which is in agreement with the experimental data described above. 
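The range (2) follows from formula (1) by plain arithmetic, which can be checked in a few lines using the values quoted in the text:

```python
c = 22.4                            # traveling-wave speed in mm/s (Wilson et al.)
ratio_lo, ratio_hi = 0.136, 0.238   # bounds on dt/T found by Shusterman and Troy

rate_lo = ratio_lo * c              # lower bound of formula (1)
rate_hi = ratio_hi * c              # upper bound of formula (1)
print(rate_lo, rate_hi)             # approximately 3.046 and 5.331 mm/s, i.e. range (2)

# The leading edge therefore moves c/Rate times slower than the waves
# themselves, which works out to the quoted factor of roughly 4-7.
slowdown_min = c / rate_hi
slowdown_max = c / rate_lo
```

The experimentally observed rate of about 4 mm/s falls inside this window, and both bounds are indeed 4–7 times below the wave speed c.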
To summarize, mathematical modeling and theoretical analysis of large-scale electrophysiological activity provide tools for predicting the spread and migration of hypersynchronous brain activity, which can be useful for diagnostic evaluation and management of patients with epilepsy. It might be also useful for predicting migration and spread of electrical activity over large regions of the brain that occur during deep sleep (Delta wave), cognitive activity and in other functional settings. == References ==
Wikipedia/Wilson-Cowan_model
The Heaviside step function, or the unit step function, usually denoted by H or θ (but sometimes u, 1 or 𝟙), is a step function named after Oliver Heaviside, the value of which is zero for negative arguments and one for positive arguments. Different conventions concerning the value H(0) are in use. It is an example of the general class of step functions, all of which can be represented as linear combinations of translations of this one. The function was originally developed in operational calculus for the solution of differential equations, where it represents a signal that switches on at a specified time and stays switched on indefinitely. Heaviside developed the operational calculus as a tool in the analysis of telegraphic communications and represented the function as 1. == Formulation == Taking the convention that H(0) = 1, the Heaviside function may be defined as: a piecewise function: H ( x ) := { 1 , x ≥ 0 0 , x < 0 {\displaystyle H(x):={\begin{cases}1,&x\geq 0\\0,&x<0\end{cases}}} using the Iverson bracket notation: H ( x ) := [ x ≥ 0 ] {\displaystyle H(x):=[x\geq 0]} an indicator function: H ( x ) := 1 x ≥ 0 = 1 R + ( x ) {\displaystyle H(x):=\mathbf {1} _{x\geq 0}=\mathbf {1} _{\mathbb {R} _{+}}(x)} For the alternative convention that H(0) = ⁠1/2⁠, it may be expressed as: a piecewise function: H ( x ) := { 1 , x > 0 1 2 , x = 0 0 , x < 0 {\displaystyle H(x):={\begin{cases}1,&x>0\\{\frac {1}{2}},&x=0\\0,&x<0\end{cases}}} a linear transformation of the sign function, H ( x ) := 1 2 ( sgn x + 1 ) {\displaystyle H(x):={\frac {1}{2}}\left({\mbox{sgn}}\,x+1\right)} the arithmetic mean of two Iverson brackets, H ( x ) := [ x ≥ 0 ] + [ x > 0 ] 2 {\displaystyle H(x):={\frac {[x\geq 0]+[x>0]}{2}}} a one-sided limit of the two-argument arctangent H ( x ) =: lim ϵ → 0 + atan2 ( ϵ , − x ) π {\displaystyle H(x)=:\lim _{\epsilon \to 0^{+}}{\frac {{\mbox{atan2}}(\epsilon ,-x)}{\pi }}} a hyperfunction H ( x ) =: ( 1 − 1 2 π i log ⁡ z , − 1 2 π i log ⁡ z ) {\displaystyle 
H(x)=:\left(1-{\frac {1}{2\pi i}}\log z,\ -{\frac {1}{2\pi i}}\log z\right)} or equivalently H ( x ) =: ( − log − z 2 π i , − log − z 2 π i ) {\displaystyle H(x)=:\left(-{\frac {\log -z}{2\pi i}},-{\frac {\log -z}{2\pi i}}\right)} where log z is the principal value of the complex logarithm of z Other definitions which are undefined at H(0) include: a piecewise function: H ( x ) := { 1 , x > 0 0 , x < 0 {\displaystyle H(x):={\begin{cases}1,&x>0\\0,&x<0\end{cases}}} the derivative of the ramp function: H ( x ) := d d x max { x , 0 } for x ≠ 0 {\displaystyle H(x):={\frac {d}{dx}}\max\{x,0\}\quad {\mbox{for }}x\neq 0} in terms of the absolute value function as H ( x ) = x + | x | 2 x {\displaystyle H(x)={\frac {x+|x|}{2x}}} == Relationship with Dirac delta == The Dirac delta function is the weak derivative of the Heaviside function: δ ( x ) = d d x H ( x ) . {\displaystyle \delta (x)={\frac {d}{dx}}H(x).} Hence the Heaviside function can be considered to be the integral of the Dirac delta function. This is sometimes written as H ( x ) := ∫ − ∞ x δ ( s ) d s {\displaystyle H(x):=\int _{-\infty }^{x}\delta (s)\,ds} although this expansion may not hold (or even make sense) for x = 0, depending on which formalism one uses to give meaning to integrals involving δ. In this context, the Heaviside function is the cumulative distribution function of a random variable which is almost surely 0. (See Constant random variable.) == Analytic approximations == Approximations to the Heaviside step function are of use in biochemistry and neuroscience, where logistic approximations of step functions (such as the Hill and the Michaelis–Menten equations) may be used to approximate binary cellular switches in response to chemical signals. 
For a smooth approximation to the step function, one can use the logistic function H ( x ) ≈ 1 2 + 1 2 tanh ⁡ k x = 1 1 + e − 2 k x , {\displaystyle H(x)\approx {\tfrac {1}{2}}+{\tfrac {1}{2}}\tanh kx={\frac {1}{1+e^{-2kx}}},} where a larger k corresponds to a sharper transition at x = 0. If we take H(0) = ⁠1/2⁠, equality holds in the limit: H ( x ) = lim k → ∞ 1 2 ( 1 + tanh ⁡ k x ) = lim k → ∞ 1 1 + e − 2 k x . {\displaystyle H(x)=\lim _{k\to \infty }{\tfrac {1}{2}}(1+\tanh kx)=\lim _{k\to \infty }{\frac {1}{1+e^{-2kx}}}.} There are many other smooth, analytic approximations to the step function. Among the possibilities are: H ( x ) = lim k → ∞ ( 1 2 + 1 π arctan ⁡ k x ) H ( x ) = lim k → ∞ ( 1 2 + 1 2 erf ⁡ k x ) {\displaystyle {\begin{aligned}H(x)&=\lim _{k\to \infty }\left({\tfrac {1}{2}}+{\tfrac {1}{\pi }}\arctan kx\right)\\H(x)&=\lim _{k\to \infty }\left({\tfrac {1}{2}}+{\tfrac {1}{2}}\operatorname {erf} kx\right)\end{aligned}}} These limits hold pointwise and in the sense of distributions. In general, however, pointwise convergence need not imply distributional convergence, and vice versa distributional convergence need not imply pointwise convergence. (However, if all members of a pointwise convergent sequence of functions are uniformly bounded by some "nice" function, then convergence holds in the sense of distributions too.) In general, any cumulative distribution function of a continuous probability distribution that is peaked around zero and has a parameter that controls for variance can serve as an approximation, in the limit as the variance approaches zero. For example, all three of the above approximations are cumulative distribution functions of common probability distributions: the logistic, Cauchy and normal distributions, respectively. 
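The pointwise convergence of the logistic approximation can be checked numerically; a brief sketch:

```python
import math

def H_logistic(x, k):
    # Smooth approximation to the Heaviside step: 1 / (1 + exp(-2 k x))
    return 1.0 / (1.0 + math.exp(-2.0 * k * x))

# As k grows, the approximation approaches the step function:
# near 0 for x < 0, exactly 1/2 at x = 0, and near 1 for x > 0
# (the half-maximum convention).
for k in (1, 10, 100):
    print(k, H_logistic(-0.1, k), H_logistic(0.0, k), H_logistic(0.1, k))
```

At x = 0 the value is 1/2 for every k, matching H(0) = 1/2 in the limit statement above; away from zero the values sharpen monotonically towards 0 and 1.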
== Non-analytic approximations == Approximations to the Heaviside step function can also be made with smooth transition functions that are not analytic; for example, for 1 ≤ m → ∞ {\displaystyle 1\leq m\to \infty } : f ( x ) = { 1 2 ( 1 + tanh ⁡ ( m 2 x 1 − x 2 ) ) , | x | < 1 1 , x ≥ 1 0 , x ≤ − 1 {\displaystyle {\begin{aligned}f(x)&={\begin{cases}{\displaystyle {\frac {1}{2}}\left(1+\tanh \left(m{\frac {2x}{1-x^{2}}}\right)\right)},&|x|<1\\\\1,&x\geq 1\\0,&x\leq -1\end{cases}}\end{aligned}}} == Integral representations == Often an integral representation of the Heaviside step function is useful: H ( x ) = lim ε → 0 + − 1 2 π i ∫ − ∞ ∞ 1 τ + i ε e − i x τ d τ = lim ε → 0 + 1 2 π i ∫ − ∞ ∞ 1 τ − i ε e i x τ d τ . {\displaystyle {\begin{aligned}H(x)&=\lim _{\varepsilon \to 0^{+}}-{\frac {1}{2\pi i}}\int _{-\infty }^{\infty }{\frac {1}{\tau +i\varepsilon }}e^{-ix\tau }d\tau \\&=\lim _{\varepsilon \to 0^{+}}{\frac {1}{2\pi i}}\int _{-\infty }^{\infty }{\frac {1}{\tau -i\varepsilon }}e^{ix\tau }d\tau .\end{aligned}}} where the second representation is easy to deduce from the first, given that the step function is real and thus is its own complex conjugate. == Zero argument == Since H is usually used in integration, and the value of a function at a single point does not affect its integral, it rarely matters which particular value of H(0) is chosen. Indeed, when H is considered as a distribution or an element of L∞ (see Lp space) it does not even make sense to talk of a value at zero, since such objects are only defined almost everywhere. If using some analytic approximation (as in the examples above) then often whatever happens to be the relevant limit at zero is used. There exist various reasons for choosing a particular value. H(0) = ⁠1/2⁠ is often used since the graph then has rotational symmetry; put another way, H − ⁠1/2⁠ is then an odd function. In this case the following relation with the sign function holds for all x: H ( x ) = 1 2 ( 1 + sgn ⁡ x ) . 
{\displaystyle H(x)={\tfrac {1}{2}}(1+\operatorname {sgn} x).} Also, H(x) + H(-x) = 1 for all x. H(0) = 1 is used when H needs to be right-continuous. For instance cumulative distribution functions are usually taken to be right continuous, as are functions integrated against in Lebesgue–Stieltjes integration. In this case H is the indicator function of a closed semi-infinite interval: H ( x ) = 1 [ 0 , ∞ ) ( x ) . {\displaystyle H(x)=\mathbf {1} _{[0,\infty )}(x).} The corresponding probability distribution is the degenerate distribution. H(0) = 0 is used when H needs to be left-continuous. In this case H is an indicator function of an open semi-infinite interval: H ( x ) = 1 ( 0 , ∞ ) ( x ) . {\displaystyle H(x)=\mathbf {1} _{(0,\infty )}(x).} In functional-analysis contexts from optimization and game theory, it is often useful to define the Heaviside function as a set-valued function to preserve the continuity of the limiting functions and ensure the existence of certain solutions. In these cases, the Heaviside function returns a whole interval of possible solutions, H(0) = [0,1]. == Discrete form == An alternative form of the unit step, defined instead as a function H : Z → R {\displaystyle H:\mathbb {Z} \rightarrow \mathbb {R} } (that is, taking in a discrete variable n), is: H [ n ] = { 0 , n < 0 , 1 , n ≥ 0 , {\displaystyle H[n]={\begin{cases}0,&n<0,\\1,&n\geq 0,\end{cases}}} or using the half-maximum convention: H [ n ] = { 0 , n < 0 , 1 2 , n = 0 , 1 , n > 0 , {\displaystyle H[n]={\begin{cases}0,&n<0,\\{\tfrac {1}{2}},&n=0,\\1,&n>0,\end{cases}}} where n is an integer. If n is an integer, then n < 0 must imply that n ≤ −1, while n > 0 must imply that the function attains unity at n = 1. Therefore the "step function" exhibits ramp-like behavior over the domain of [−1, 1], and cannot authentically be a step function, using the half-maximum convention. Unlike the continuous case, the definition of H[0] is significant. 
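As an aside on implementations, NumPy's `numpy.heaviside(x, h0)` takes the value to be used at zero as an explicit second argument, so each of the conventions discussed above can be selected directly:

```python
import numpy as np

x = np.array([-2.0, 0.0, 2.0])

# Each convention for H(0) is selected via the second argument:
print(np.heaviside(x, 0.5))   # half-maximum convention, H(0) = 1/2
print(np.heaviside(x, 1.0))   # right-continuous convention, H(0) = 1
print(np.heaviside(x, 0.0))   # left-continuous convention, H(0) = 0

# With the half-maximum convention, the relation with the sign
# function stated earlier holds for every x:
assert np.allclose(np.heaviside(x, 0.5), 0.5 * (1 + np.sign(x)))
```

This makes the dependence on the H(0) convention explicit in code, rather than leaving it as an implementation detail.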
The discrete-time unit impulse is the first difference of the discrete-time step δ [ n ] = H [ n ] − H [ n − 1 ] . {\displaystyle \delta [n]=H[n]-H[n-1].} This function is the cumulative summation of the Kronecker delta: H [ n ] = ∑ k = − ∞ n δ [ k ] {\displaystyle H[n]=\sum _{k=-\infty }^{n}\delta [k]} where δ [ k ] = δ k , 0 {\displaystyle \delta [k]=\delta _{k,0}} is the discrete unit impulse function. == Antiderivative and derivative == The ramp function is an antiderivative of the Heaviside step function: ∫ − ∞ x H ( ξ ) d ξ = x H ( x ) = max { 0 , x } . {\displaystyle \int _{-\infty }^{x}H(\xi )\,d\xi =xH(x)=\max\{0,x\}\,.} The distributional derivative of the Heaviside step function is the Dirac delta function: d H ( x ) d x = δ ( x ) . {\displaystyle {\frac {dH(x)}{dx}}=\delta (x)\,.} == Fourier transform == The Fourier transform of the Heaviside step function is a distribution. Using one choice of constants for the definition of the Fourier transform we have H ^ ( s ) = lim N → ∞ ∫ − N N e − 2 π i x s H ( x ) d x = 1 2 ( δ ( s ) − i π p . v . ⁡ 1 s ) . {\displaystyle {\hat {H}}(s)=\lim _{N\to \infty }\int _{-N}^{N}e^{-2\pi ixs}H(x)\,dx={\frac {1}{2}}\left(\delta (s)-{\frac {i}{\pi }}\operatorname {p.v.} {\frac {1}{s}}\right).} Here p.v.⁠1/s⁠ is the distribution that takes a test function φ to the Cauchy principal value of ∫ − ∞ ∞ φ ( s ) s d s {\displaystyle \textstyle \int _{-\infty }^{\infty }{\frac {\varphi (s)}{s}}\,ds} . The limit appearing in the integral is also taken in the sense of (tempered) distributions. == Unilateral Laplace transform == The Laplace transform of the Heaviside step function is a meromorphic function. 
Using the unilateral Laplace transform we have: H ^ ( s ) = lim N → ∞ ∫ 0 N e − s x H ( x ) d x = lim N → ∞ ∫ 0 N e − s x d x = 1 s {\displaystyle {\begin{aligned}{\hat {H}}(s)&=\lim _{N\to \infty }\int _{0}^{N}e^{-sx}H(x)\,dx\\&=\lim _{N\to \infty }\int _{0}^{N}e^{-sx}\,dx\\&={\frac {1}{s}}\end{aligned}}} When the bilateral transform is used, the integral can be split in two parts and the result will be the same. == See also == == References == == External links == Digital Library of Mathematical Functions, NIST, [1]. Berg, Ernst Julius (1936). "Unit function". Heaviside's Operational Calculus, as applied to Engineering and Physics. McGraw-Hill Education. p. 5. Calvert, James B. (2002). "Heaviside, Laplace, and the Inversion Integral". University of Denver. Davies, Brian (2002). "Heaviside step function". Integral Transforms and their Applications (3rd ed.). Springer. p. 28. Duff, George F. D.; Naylor, D. (1966). "Heaviside unit function". Differential Equations of Applied Mathematics. John Wiley & Sons. p. 42.
Wikipedia/Heaviside_step_function
Kermack–McKendrick theory is a hypothesis that predicts the number and distribution of cases of an immunizing infectious disease over time as it is transmitted through a population based on characteristics of infectivity and recovery, under a strong-mixing assumption. Building on the research of Ronald Ross and Hilda Hudson, A. G. McKendrick and W. O. Kermack published their theory in a set of three articles from 1927, 1932, and 1933. Kermack–McKendrick theory is one of the sources of the SIR model and other related compartmental models. This theory was the first to explicitly account for the dependence of infection characteristics and transmissibility on the age of infection. Because of their seminal importance to the field of theoretical epidemiology, these articles were republished in the Bulletin of Mathematical Biology in 1991. == Epidemic model (1927) == In its initial form, Kermack–McKendrick theory is a partial differential-equation model that structures the infected population in terms of age-of-infection, while using simple compartments for people who are susceptible (S), infected (I), and recovered/removed (R). Specified initial conditions would change over time according to d S d t = − λ S , {\displaystyle {\frac {dS}{dt}}=-\lambda S,} ∂ i ∂ t + ∂ i ∂ a = δ ( a ) λ S − γ ( a ) i , {\displaystyle {\frac {\partial i}{\partial t}}+{\frac {\partial i}{\partial a}}=\delta (a)\lambda S-\gamma (a)i,} I ( t ) = ∫ 0 ∞ i ( a , t ) d a , {\displaystyle I(t)=\int _{0}^{\infty }i(a,t)\,da,} d R d t = ∫ 0 ∞ γ ( a ) i ( a , t ) d a , {\displaystyle {\frac {dR}{dt}}=\int _{0}^{\infty }\gamma (a)i(a,t)\,da,} where δ ( a ) {\displaystyle \delta (a)} is a Dirac delta-function and the infection pressure λ = ∫ 0 ∞ β ( a ) i ( a , t ) d a . {\displaystyle \lambda =\int _{0}^{\infty }\beta (a)i(a,t)\,da.} This formulation is equivalent to defining the incidence of infection i ( t , 0 ) = λ S {\displaystyle i(t,0)=\lambda S} . 
Only in the special case when the removal rate γ ( a ) {\displaystyle \gamma (a)} and the transmission rate β ( a ) {\displaystyle \beta (a)} are constant for all ages can the epidemic dynamics be expressed in terms of the prevalence I ( t ) {\displaystyle I(t)} , leading to the standard compartmental SIR model. This model only accounts for infection and removal events, which are sufficient to describe a simple epidemic, including the threshold condition necessary for an epidemic to start, but can not explain endemic disease transmission or recurring epidemics. == Endemic disease (1932, 1933) == In their subsequent articles, Kermack and McKendrick extended their theory to allow for birth, migration, and death, as well as imperfect immunity. In modern notation, their model can be represented as d S d t = b 0 + b S S + b I I + b R R − λ S − m S S , {\displaystyle {\frac {dS}{dt}}=b_{0}+b_{S}S+b_{I}I+b_{R}R-\lambda S-m_{S}S,} ∂ i ∂ t + ∂ i ∂ a = δ ( a ) λ ( S + σ R ) − γ ( a ) i − μ ( a ) i − m i ( a ) i , {\displaystyle {\frac {\partial i}{\partial t}}+{\frac {\partial i}{\partial a}}=\delta (a)\lambda (S+\sigma R)-\gamma (a)i-\mu (a)i-m_{i}(a)i,} I ( t ) = ∫ 0 ∞ i ( a , t ) d a {\displaystyle I(t)=\int _{0}^{\infty }i(a,t)\,da} d R d t = ∫ 0 ∞ γ ( a ) i ( a , t ) d a − σ λ R − m R R , {\displaystyle {\frac {dR}{dt}}=\int _{0}^{\infty }\gamma (a)i(a,t)\,da-\sigma \lambda R-m_{R}R,} where b 0 {\displaystyle b_{0}} is the immigration rate of susceptibles, bj is the per-capita birth rate for state j, mj is the per-capita mortality rate of individuals in state j, σ {\displaystyle \sigma } is the relative-risk of infection to recovered individuals who are partially immune, and the infection pressure λ = ∫ 0 ∞ β ( a ) i ( a , t ) d a . {\displaystyle \lambda =\int _{0}^{\infty }\beta (a)i(a,t)\,da.} Kermack and McKendrick were able to show that it admits a stationary solution where disease is endemic, as long as the supply of susceptible individuals is sufficiently large. 
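In that constant-rate special case the model reduces to the standard SIR equations dS/dt = −βSI, dI/dt = βSI − γI, dR/dt = γI, with threshold condition R0 = βS(0)/γ > 1. A minimal sketch (parameter values are illustrative, not from Kermack and McKendrick):

```python
def simulate_sir(beta, gamma, s0=0.99, i0=0.01, dt=0.01, t_end=200.0):
    # Forward-Euler integration of the constant-rate (SIR) special case.
    s, i, r = s0, i0, 0.0
    peak_i = i
    for _ in range(int(t_end / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s += dt * ds
        i += dt * di
        r += dt * dr
        peak_i = max(peak_i, i)
    return s, i, r, peak_i

# Above threshold (R0 = beta*s0/gamma = 2.97): an epidemic takes off.
_, _, r_above, peak_above = simulate_sir(beta=3.0, gamma=1.0)
# Below threshold (R0 = 0.495): the infection declines from the start.
_, _, r_below, peak_below = simulate_sir(beta=0.5, gamma=1.0)
print(peak_above, peak_below)
```

The two runs illustrate the threshold condition: above it, prevalence rises well past its initial value before burning out; below it, prevalence never exceeds the initial seed.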
This model is difficult to analyze in its full generality, and a number of open questions remain regarding its dynamics. == See also == Compartmental models in epidemiology Integro-differential equation == References ==
Wikipedia/Kermack–McKendrick_theory
In the mathematical field of complex analysis, contour integration is a method of evaluating certain integrals along paths in the complex plane. Contour integration is closely related to the calculus of residues, a method of complex analysis. One use for contour integrals is the evaluation of integrals along the real line that are not readily found by using only real variable methods. It also has various applications in physics. Contour integration methods include: direct integration of a complex-valued function along a curve in the complex plane application of the Cauchy integral formula application of the residue theorem These methods can be used on their own, in combination, or together with various limiting processes, to find such integrals or sums. == Curves in the complex plane == In complex analysis, a contour is a type of curve in the complex plane. In contour integration, contours provide a precise definition of the curves on which an integral may be suitably defined. A curve in the complex plane is defined as a continuous function from a closed interval of the real line to the complex plane: z : [ a , b ] → C {\displaystyle z:[a,b]\to \mathbb {C} } . This definition of a curve coincides with the intuitive notion of a curve, but includes a parametrization by a continuous function from a closed interval. This more precise definition allows us to consider what properties a curve must have for it to be useful for integration. In the following subsections we narrow down the set of curves that we can integrate to include only those that can be built up out of a finite number of continuous curves that can be given a direction. Moreover, we will prevent the "pieces" from crossing over themselves, and we require that each piece have a finite (non-vanishing) continuous derivative. 
These requirements correspond to requiring that we consider only curves that can be traced, such as by a pen, in a sequence of even, steady strokes, which stop only to start a new piece of the curve, all without picking up the pen. === Directed smooth curves === Contours are often defined in terms of directed smooth curves. These provide a precise definition of a "piece" of a smooth curve, of which a contour is made. A smooth curve is a curve z : [ a , b ] → C {\displaystyle z:[a,b]\to \mathbb {C} } with a non-vanishing, continuous derivative such that each point is traversed only once (z is one-to-one), with the possible exception of a curve such that the endpoints match ( z ( a ) = z ( b ) {\displaystyle z(a)=z(b)} ). In the case where the endpoints match, the curve is called closed, and the function is required to be one-to-one everywhere else and the derivative must be continuous at the identified point ( z ′ ( a ) = z ′ ( b ) {\displaystyle z'(a)=z'(b)} ). A smooth curve that is not closed is often referred to as a smooth arc. The parametrization of a curve provides a natural ordering of points on the curve: z ( x ) {\displaystyle z(x)} comes before z ( y ) {\displaystyle z(y)} if x < y {\displaystyle x<y} . This leads to the notion of a directed smooth curve. It is most useful to consider curves independent of the specific parametrization. This can be done by considering equivalence classes of smooth curves with the same direction. A directed smooth curve can then be defined as an ordered set of points in the complex plane that is the image of some smooth curve in their natural order (according to the parametrization). Note that not all orderings of the points are the natural ordering of a smooth curve. In fact, a given smooth curve has only two such orderings. Also, a single closed curve can have any point as its endpoint, while a smooth arc has only two choices for its endpoints. 
=== Contours === Contours are the class of curves on which we define contour integration. A contour is a directed curve which is made up of a finite sequence of directed smooth curves whose endpoints are matched to give a single direction. This requires that the sequence of curves γ 1 , … , γ n {\displaystyle \gamma _{1},\dots ,\gamma _{n}} be such that the terminal point of γ i {\displaystyle \gamma _{i}} coincides with the initial point of γ i + 1 {\displaystyle \gamma _{i+1}} for all i {\displaystyle i} such that 1 ≤ i < n {\displaystyle 1\leq i<n} . This includes all directed smooth curves. Also, a single point in the complex plane is considered a contour. The symbol + {\displaystyle +} is often used to denote the piecing of curves together to form a new curve. Thus we could write a contour Γ {\displaystyle \Gamma } that is made up of n {\displaystyle n} curves as Γ = γ 1 + γ 2 + ⋯ + γ n . {\displaystyle \Gamma =\gamma _{1}+\gamma _{2}+\cdots +\gamma _{n}.} == Contour integrals == The contour integral of a complex function f : C → C {\displaystyle f:\mathbb {C} \to \mathbb {C} } is a generalization of the integral for real-valued functions. For continuous functions in the complex plane, the contour integral can be defined in analogy to the line integral by first defining the integral along a directed smooth curve in terms of an integral over a real valued parameter. A more general definition can be given in terms of partitions of the contour in analogy with the partition of an interval and the Riemann integral. In both cases the integral over a contour is defined as the sum of the integrals over the directed smooth curves that make up the contour. === For continuous functions === To define the contour integral in this way one must first consider the integral, over a real variable, of a complex-valued function. Let f : R → C {\displaystyle f:\mathbb {R} \to \mathbb {C} } be a complex-valued function of a real variable, t {\displaystyle t} . 
The real and imaginary parts of f {\displaystyle f} are often denoted as u ( t ) {\displaystyle u(t)} and v ( t ) {\displaystyle v(t)} , respectively, so that f ( t ) = u ( t ) + i v ( t ) . {\displaystyle f(t)=u(t)+iv(t).} Then the integral of the complex-valued function f {\displaystyle f} over the interval [ a , b ] {\displaystyle [a,b]} is given by ∫ a b f ( t ) d t = ∫ a b ( u ( t ) + i v ( t ) ) d t = ∫ a b u ( t ) d t + i ∫ a b v ( t ) d t . {\displaystyle {\begin{aligned}\int _{a}^{b}f(t)\,dt&=\int _{a}^{b}{\big (}u(t)+iv(t){\big )}\,dt\\&=\int _{a}^{b}u(t)\,dt+i\int _{a}^{b}v(t)\,dt.\end{aligned}}} Now, to define the contour integral, let f : C → C {\displaystyle f:\mathbb {C} \to \mathbb {C} } be a continuous function on the directed smooth curve γ {\displaystyle \gamma } . Let z : [ a , b ] → C {\displaystyle z:[a,b]\to \mathbb {C} } be any parametrization of γ {\displaystyle \gamma } that is consistent with its order (direction). Then the integral along γ {\displaystyle \gamma } is denoted ∫ γ f ( z ) d z {\displaystyle \int _{\gamma }f(z)\,dz\,} and is given by ∫ γ f ( z ) d z := ∫ a b f ( z ( t ) ) z ′ ( t ) d t . {\displaystyle \int _{\gamma }f(z)\,dz:=\int _{a}^{b}f{\big (}z(t){\big )}z'(t)\,dt.} This definition is well defined. That is, the result is independent of the parametrization chosen. In the case where the real integral on the right side does not exist the integral along γ {\displaystyle \gamma } is said not to exist. === As a generalization of the Riemann integral === The generalization of the Riemann integral to functions of a complex variable is done in complete analogy to its definition for functions from the real numbers. The partition of a directed smooth curve γ {\displaystyle \gamma } is defined as a finite, ordered set of points on γ {\displaystyle \gamma } . 
The integral over the curve is the limit of finite sums of function values, taken at the points on the partition, in the limit that the maximum distance between any two successive points on the partition (in the two-dimensional complex plane), also known as the mesh, goes to zero. == Direct methods == Direct methods involve the calculation of the integral through methods similar to those in calculating line integrals in multivariate calculus. This means that we use the following method: parametrizing the contour The contour is parametrized by a differentiable complex-valued function of real variables, or the contour is broken up into pieces and parametrized separately. substitution of the parametrization into the integrand Substituting the parametrization into the integrand transforms the integral into an integral of one real variable. direct evaluation The integral is evaluated in a method akin to a real-variable integral. === Example === A fundamental result in complex analysis is that the contour integral of ⁠1/z⁠ is 2πi, where the path of the contour is taken to be the unit circle traversed counterclockwise (or any positively oriented Jordan curve about 0). In the case of the unit circle there is a direct method to evaluate the integral ∮ C 1 z d z . {\displaystyle \oint _{C}{\frac {1}{z}}\,dz.} In evaluating this integral, use the unit circle |z| = 1 as a contour, parametrized by z(t) = eit, with t ∈ [0, 2π], then ⁠dz/dt⁠ = ieit and ∮ C 1 z d z = ∫ 0 2 π 1 e i t i e i t d t = i ∫ 0 2 π 1 d t = i t | 0 2 π = ( 2 π − 0 ) i = 2 π i {\displaystyle \oint _{C}{\frac {1}{z}}\,dz=\int _{0}^{2\pi }{\frac {1}{e^{it}}}ie^{it}\,dt=i\int _{0}^{2\pi }1\,dt=i\,t{\Big |}_{0}^{2\pi }=\left(2\pi -0\right)i=2\pi i} which is the value of the integral. This result applies only to the case in which z is raised to the power −1. For any other integer power n, the same computation gives ∫ 0 2 π i e i ( n + 1 ) t d t = 0 {\displaystyle \int _{0}^{2\pi }ie^{i(n+1)t}\,dt=0} , so the integral around the closed contour vanishes.
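The same parametrization lends itself to a numerical check. The sketch below (plain Python; the function name is my own) applies the midpoint rule to z(t) = e^it and recovers 2πi for the power −1 and zero for other integer powers:

```python
import cmath
import math

def unit_circle_integral(f, n=1000):
    # Midpoint rule for the contour integral of f over the unit circle,
    # using z(t) = exp(it) on [0, 2*pi] and dz = i*exp(it) dt.
    h = 2 * math.pi / n
    total = 0j
    for k in range(n):
        z = cmath.exp(1j * (k + 0.5) * h)
        total += f(z) * 1j * z * h
    return total

val = unit_circle_integral(lambda z: 1 / z)     # close to 2*pi*i
zero = unit_circle_integral(lambda z: z ** 2)   # close to 0 (power != -1)
```

Because the integrand is periodic in t, the midpoint rule here is exact up to rounding error for integer powers.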
== Applications of integral theorems == Integral theorems are also often used to evaluate a contour integral, with the desired real-valued integral obtained at the same time as the contour integral. Integral theorems such as the Cauchy integral formula or the residue theorem are generally applied as follows: a specific contour is chosen: The contour is chosen so that it follows the part of the complex plane on which the real-valued integral is defined, and also encloses the singularities of the integrand, so that application of the Cauchy integral formula or residue theorem is possible. application of Cauchy's integral theorem The integral is reduced to an integration around a small circle about each pole. application of the Cauchy integral formula or residue theorem Applying these integral formulae gives the value of the integral around the whole of the contour. division of the contour into a part along the real line and a remaining part The whole of the contour can be divided into the piece that follows the part of the complex plane describing the real-valued integral as chosen before (call it R), and the piece that crosses the complex plane (call it I). The integral over the whole contour is the sum of the integrals over these pieces. demonstration that the integral that crosses the complex plane plays no part in the sum If the integral I can be shown to be zero, or if the real-valued integral sought is improper and I can be shown to tend to 0 in the appropriate limit, then the integral along R tends to the integral around the whole contour R + I. conclusion Once the above step is established, R, the real-valued integral, can be calculated directly.
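The steps above can be sketched numerically. Taking f(z) = 1/(1 + z²)² as a test integrand (the same function treated in Example 1 below), the following plain-Python sketch (function names are my own) splits a semicircular contour into its straight part R and arc part I, and shows the arc contribution becoming negligible for a large radius while the closed-contour total stays near the residue value π/2:

```python
import cmath
import math

def semicircle_parts(f, radius, n=20000):
    # Midpoint-rule approximations of the two pieces of a semicircular
    # contour: the straight segment on the real line and the upper arc.
    h = 2 * radius / n
    straight = sum(f(complex(-radius + (k + 0.5) * h, 0.0)) * h
                   for k in range(n))
    ht = math.pi / n
    arc = 0j
    for k in range(n):
        z = radius * cmath.exp(1j * (k + 0.5) * ht)
        arc += f(z) * 1j * z * ht   # dz = i*radius*exp(it) dt
    return straight, arc

f = lambda z: 1 / (1 + z * z) ** 2
straight, arc = semicircle_parts(f, radius=50.0)
# straight + arc stays near pi/2, and |arc| is tiny, so the real integral
# itself approaches pi/2 as the radius grows.
```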
=== Example 1 === Consider the integral ∫ − ∞ ∞ 1 ( x 2 + 1 ) 2 d x , {\displaystyle \int _{-\infty }^{\infty }{\frac {1}{\left(x^{2}+1\right)^{2}}}\,dx,} To evaluate this integral, we look at the complex-valued function f ( z ) = 1 ( z 2 + 1 ) 2 {\displaystyle f(z)={\frac {1}{\left(z^{2}+1\right)^{2}}}} which has singularities at i and −i. We choose a contour that will enclose the real-valued integral, here a semicircle with boundary diameter on the real line (going from, say, −a to a) will be convenient. Call this contour C. There are two ways of proceeding, using the Cauchy integral formula or by the method of residues: ==== Using the Cauchy integral formula ==== Note that: ∮ C f ( z ) d z = ∫ − a a f ( z ) d z + ∫ Arc f ( z ) d z {\displaystyle \oint _{C}f(z)\,dz=\int _{-a}^{a}f(z)\,dz+\int _{\text{Arc}}f(z)\,dz} thus ∫ − a a f ( z ) d z = ∮ C f ( z ) d z − ∫ Arc f ( z ) d z {\displaystyle \int _{-a}^{a}f(z)\,dz=\oint _{C}f(z)\,dz-\int _{\text{Arc}}f(z)\,dz} Furthermore, observe that f ( z ) = 1 ( z 2 + 1 ) 2 = 1 ( z + i ) 2 ( z − i ) 2 . {\displaystyle f(z)={\frac {1}{\left(z^{2}+1\right)^{2}}}={\frac {1}{(z+i)^{2}(z-i)^{2}}}.} Since the only singularity in the contour is the one at i, then we can write f ( z ) = 1 ( z + i ) 2 ( z − i ) 2 , {\displaystyle f(z)={\frac {\frac {1}{(z+i)^{2}}}{(z-i)^{2}}},} which puts the function in the form for direct application of the formula. Then, by using Cauchy's integral formula, ∮ C f ( z ) d z = ∮ C 1 ( z + i ) 2 ( z − i ) 2 d z = 2 π i d d z 1 ( z + i ) 2 | z = i = 2 π i [ − 2 ( z + i ) 3 ] z = i = π 2 {\displaystyle \oint _{C}f(z)\,dz=\oint _{C}{\frac {\frac {1}{(z+i)^{2}}}{(z-i)^{2}}}\,dz=2\pi i\,\left.{\frac {d}{dz}}{\frac {1}{(z+i)^{2}}}\right|_{z=i}=2\pi i\left[{\frac {-2}{(z+i)^{3}}}\right]_{z=i}={\frac {\pi }{2}}} We take the first derivative, in the above steps, because the pole is a second-order pole. That is, (z − i) is taken to the second power, so we employ the first derivative of f(z). 
If it were (z − i) taken to the third power, we would use the second derivative and divide by 2!, etc. The case of (z − i) to the first power corresponds to a zero order derivative—just f(z) itself. We need to show that the integral over the arc of the semicircle tends to zero as a → ∞, using the estimation lemma | ∫ Arc f ( z ) d z | ≤ M L {\displaystyle \left|\int _{\text{Arc}}f(z)\,dz\right|\leq ML} where M is an upper bound on |f(z)| along the arc and L the length of the arc. Now, | ∫ Arc f ( z ) d z | ≤ a π ( a 2 − 1 ) 2 → 0 as a → ∞ . {\displaystyle \left|\int _{\text{Arc}}f(z)\,dz\right|\leq {\frac {a\pi }{\left(a^{2}-1\right)^{2}}}\to 0{\text{ as }}a\to \infty .} So ∫ − ∞ ∞ 1 ( x 2 + 1 ) 2 d x = ∫ − ∞ ∞ f ( z ) d z = lim a → + ∞ ∫ − a a f ( z ) d z = π 2 . ◻ {\displaystyle \int _{-\infty }^{\infty }{\frac {1}{\left(x^{2}+1\right)^{2}}}\,dx=\int _{-\infty }^{\infty }f(z)\,dz=\lim _{a\to +\infty }\int _{-a}^{a}f(z)\,dz={\frac {\pi }{2}}.\quad \square } ==== Using the method of residues ==== Consider the Laurent series of f(z) about i, the only singularity we need to consider. We then have f ( z ) = − 1 4 ( z − i ) 2 + − i 4 ( z − i ) + 3 16 + i 8 ( z − i ) + − 5 64 ( z − i ) 2 + ⋯ {\displaystyle f(z)={\frac {-1}{4(z-i)^{2}}}+{\frac {-i}{4(z-i)}}+{\frac {3}{16}}+{\frac {i}{8}}(z-i)+{\frac {-5}{64}}(z-i)^{2}+\cdots } (See the sample Laurent calculation from Laurent series for the derivation of this series.) It is clear by inspection that the residue is −⁠i/4⁠, so, by the residue theorem, we have ∮ C f ( z ) d z = ∮ C 1 ( z 2 + 1 ) 2 d z = 2 π i Res z = i ⁡ f ( z ) = 2 π i ( − i 4 ) = π 2 ◻ {\displaystyle \oint _{C}f(z)\,dz=\oint _{C}{\frac {1}{\left(z^{2}+1\right)^{2}}}\,dz=2\pi i\,\operatorname {Res} _{z=i}f(z)=2\pi i\left(-{\frac {i}{4}}\right)={\frac {\pi }{2}}\quad \square } Thus we get the same result as before. ==== Contour note ==== As an aside, a question can arise whether we do not take the semicircle to include the other singularity, enclosing −i. 
To have the integral along the real axis moving in the correct direction, the contour must travel clockwise, i.e., in a negative direction, reversing the sign of the integral overall. This does not affect the use of the method of residues by series. === Example 2 – Cauchy distribution === The integral ∫ − ∞ ∞ e i t x x 2 + 1 d x {\displaystyle \int _{-\infty }^{\infty }{\frac {e^{itx}}{x^{2}+1}}\,dx} (which arises in probability theory as a scalar multiple of the characteristic function of the Cauchy distribution) resists the techniques of elementary calculus. We will evaluate it by expressing it as a limit of contour integrals along the contour C that goes along the real line from −a to a and then counterclockwise along a semicircle centered at 0 from a to −a. Take a to be greater than 1, so that the imaginary unit i is enclosed within the curve. The contour integral is ∫ C e i t z z 2 + 1 d z . {\displaystyle \int _{C}{\frac {e^{itz}}{z^{2}+1}}\,dz.} Since eitz is an entire function (having no singularities at any point in the complex plane), this function has singularities only where the denominator z2 + 1 is zero. Since z2 + 1 = (z + i)(z − i), that happens only where z = i or z = −i. Only one of those points is in the region bounded by this contour. The residue of f(z) at z = i is lim z → i ( z − i ) f ( z ) = lim z → i ( z − i ) e i t z z 2 + 1 = lim z → i ( z − i ) e i t z ( z − i ) ( z + i ) = lim z → i e i t z z + i = e − t 2 i . {\displaystyle \lim _{z\to i}(z-i)f(z)=\lim _{z\to i}(z-i){\frac {e^{itz}}{z^{2}+1}}=\lim _{z\to i}(z-i){\frac {e^{itz}}{(z-i)(z+i)}}=\lim _{z\to i}{\frac {e^{itz}}{z+i}}={\frac {e^{-t}}{2i}}.} According to the residue theorem, then, we have ∫ C f ( z ) d z = 2 π i Res z = i ⁡ f ( z ) = 2 π i e − t 2 i = π e − t . 
{\displaystyle \int _{C}f(z)\,dz=2\pi i\operatorname {Res} _{z=i}f(z)=2\pi i{\frac {e^{-t}}{2i}}=\pi e^{-t}.} The contour C may be split into a "straight" part and a curved arc, so that ∫ straight + ∫ arc = π e − t , {\displaystyle \int _{\text{straight}}+\int _{\text{arc}}=\pi e^{-t},} and thus ∫ − a a = π e − t − ∫ arc . {\displaystyle \int _{-a}^{a}=\pi e^{-t}-\int _{\text{arc}}.} According to Jordan's lemma, if t > 0 then ∫ arc e i t z z 2 + 1 d z → 0 as a → ∞ . {\displaystyle \int _{\text{arc}}{\frac {e^{itz}}{z^{2}+1}}\,dz\rightarrow 0{\mbox{ as }}a\rightarrow \infty .} Therefore, if t > 0 then ∫ − ∞ ∞ e i t x x 2 + 1 d x = π e − t . {\displaystyle \int _{-\infty }^{\infty }{\frac {e^{itx}}{x^{2}+1}}\,dx=\pi e^{-t}.} A similar argument with an arc that winds around −i rather than i shows that if t < 0 then ∫ − ∞ ∞ e i t x x 2 + 1 d x = π e t , {\displaystyle \int _{-\infty }^{\infty }{\frac {e^{itx}}{x^{2}+1}}\,dx=\pi e^{t},} and finally we have this: ∫ − ∞ ∞ e i t x x 2 + 1 d x = π e − | t | . {\displaystyle \int _{-\infty }^{\infty }{\frac {e^{itx}}{x^{2}+1}}\,dx=\pi e^{-|t|}.} (If t = 0 then the integral yields immediately to real-valued calculus methods and its value is π.) === Example 3 – trigonometric integrals === Certain substitutions can be made to integrals involving trigonometric functions, so the integral is transformed into a rational function of a complex variable and then the above methods can be used in order to evaluate the integral. As an example, consider ∫ − π π 1 1 + 3 ( cos ⁡ t ) 2 d t . {\displaystyle \int _{-\pi }^{\pi }{\frac {1}{1+3(\cos t)^{2}}}\,dt.} We seek to make a substitution of z = eit. Now, recall cos ⁡ t = 1 2 ( e i t + e − i t ) = 1 2 ( z + 1 z ) {\displaystyle \cos t={\frac {1}{2}}\left(e^{it}+e^{-it}\right)={\frac {1}{2}}\left(z+{\frac {1}{z}}\right)} and d z d t = i z , d t = d z i z . 
{\displaystyle {\frac {dz}{dt}}=iz,\ dt={\frac {dz}{iz}}.} Taking C to be the unit circle, we substitute to get: ∮ C 1 1 + 3 ( 1 2 ( z + 1 z ) ) 2 d z i z = ∮ C 1 1 + 3 4 ( z + 1 z ) 2 1 i z d z = ∮ C − i z + 3 4 z ( z + 1 z ) 2 d z = − i ∮ C d z z + 3 4 z ( z 2 + 2 + 1 z 2 ) = − i ∮ C d z z + 3 4 ( z 3 + 2 z + 1 z ) = − i ∮ C d z 3 4 z 3 + 5 2 z + 3 4 z = − i ∮ C 4 3 z 3 + 10 z + 3 z d z = − 4 i ∮ C d z 3 z 3 + 10 z + 3 z = − 4 i ∮ C z 3 z 4 + 10 z 2 + 3 d z = − 4 i ∮ C z 3 ( z + 3 i ) ( z − 3 i ) ( z + i 3 ) ( z − i 3 ) d z = − 4 i 3 ∮ C z ( z + 3 i ) ( z − 3 i ) ( z + i 3 ) ( z − i 3 ) d z . {\displaystyle {\begin{aligned}\oint _{C}{\frac {1}{1+3\left({\frac {1}{2}}\left(z+{\frac {1}{z}}\right)\right)^{2}}}\,{\frac {dz}{iz}}&=\oint _{C}{\frac {1}{1+{\frac {3}{4}}\left(z+{\frac {1}{z}}\right)^{2}}}{\frac {1}{iz}}\,dz\\&=\oint _{C}{\frac {-i}{z+{\frac {3}{4}}z\left(z+{\frac {1}{z}}\right)^{2}}}\,dz\\&=-i\oint _{C}{\frac {dz}{z+{\frac {3}{4}}z\left(z^{2}+2+{\frac {1}{z^{2}}}\right)}}\\&=-i\oint _{C}{\frac {dz}{z+{\frac {3}{4}}\left(z^{3}+2z+{\frac {1}{z}}\right)}}\\&=-i\oint _{C}{\frac {dz}{{\frac {3}{4}}z^{3}+{\frac {5}{2}}z+{\frac {3}{4z}}}}\\&=-i\oint _{C}{\frac {4}{3z^{3}+10z+{\frac {3}{z}}}}\,dz\\&=-4i\oint _{C}{\frac {dz}{3z^{3}+10z+{\frac {3}{z}}}}\\&=-4i\oint _{C}{\frac {z}{3z^{4}+10z^{2}+3}}\,dz\\&=-4i\oint _{C}{\frac {z}{3\left(z+{\sqrt {3}}i\right)\left(z-{\sqrt {3}}i\right)\left(z+{\frac {i}{\sqrt {3}}}\right)\left(z-{\frac {i}{\sqrt {3}}}\right)}}\,dz\\&=-{\frac {4i}{3}}\oint _{C}{\frac {z}{\left(z+{\sqrt {3}}i\right)\left(z-{\sqrt {3}}i\right)\left(z+{\frac {i}{\sqrt {3}}}\right)\left(z-{\frac {i}{\sqrt {3}}}\right)}}\,dz.\end{aligned}}} The singularities to be considered are at ± i 3 . {\displaystyle {\tfrac {\pm i}{\sqrt {3}}}.} Let C1 be a small circle about i 3 , {\displaystyle {\tfrac {i}{\sqrt {3}}},} and C2 be a small circle about − i 3 . 
{\displaystyle {\tfrac {-i}{\sqrt {3}}}.} Then we arrive at the following: − 4 i 3 [ ∮ C 1 z ( z + 3 i ) ( z − 3 i ) ( z + i 3 ) z − i 3 d z + ∮ C 2 z ( z + 3 i ) ( z − 3 i ) ( z − i 3 ) z + i 3 d z ] = − 4 i 3 [ 2 π i [ z ( z + 3 i ) ( z − 3 i ) ( z + i 3 ) ] z = i 3 + 2 π i [ z ( z + 3 i ) ( z − 3 i ) ( z − i 3 ) ] z = − i 3 ] = 8 π 3 [ i 3 ( i 3 + 3 i ) ( i 3 − 3 i ) ( i 3 + i 3 ) + − i 3 ( − i 3 + 3 i ) ( − i 3 − 3 i ) ( − i 3 − i 3 ) ] = 8 π 3 [ i 3 ( 4 3 i ) ( − 2 i 3 ) ( 2 3 i ) + − i 3 ( 2 3 i ) ( − 4 3 i ) ( − 2 3 i ) ] = 8 π 3 [ i 3 i ( 4 3 ) ( 2 3 ) ( 2 3 ) + − i 3 − i ( 2 3 ) ( 4 3 ) ( 2 3 ) ] = 8 π 3 [ 1 3 ( 4 3 ) ( 2 3 ) ( 2 3 ) + 1 3 ( 2 3 ) ( 4 3 ) ( 2 3 ) ] = 8 π 3 [ 1 3 16 3 3 + 1 3 16 3 3 ] = 8 π 3 [ 3 16 + 3 16 ] = π . {\displaystyle {\begin{aligned}&-{\frac {4i}{3}}\left[\oint _{C_{1}}{\frac {\frac {z}{\left(z+{\sqrt {3}}i\right)\left(z-{\sqrt {3}}i\right)\left(z+{\frac {i}{\sqrt {3}}}\right)}}{z-{\frac {i}{\sqrt {3}}}}}\,dz+\oint _{C_{2}}{\frac {\frac {z}{\left(z+{\sqrt {3}}i\right)\left(z-{\sqrt {3}}i\right)\left(z-{\frac {i}{\sqrt {3}}}\right)}}{z+{\frac {i}{\sqrt {3}}}}}\,dz\right]\\={}&-{\frac {4i}{3}}\left[2\pi i\left[{\frac {z}{\left(z+{\sqrt {3}}i\right)\left(z-{\sqrt {3}}i\right)\left(z+{\frac {i}{\sqrt {3}}}\right)}}\right]_{z={\frac {i}{\sqrt {3}}}}+2\pi i\left[{\frac {z}{\left(z+{\sqrt {3}}i\right)\left(z-{\sqrt {3}}i\right)\left(z-{\frac {i}{\sqrt {3}}}\right)}}\right]_{z=-{\frac {i}{\sqrt {3}}}}\right]\\={}&{\frac {8\pi }{3}}\left[{\frac {\frac {i}{\sqrt {3}}}{\left({\frac {i}{\sqrt {3}}}+{\sqrt {3}}i\right)\left({\frac {i}{\sqrt {3}}}-{\sqrt {3}}i\right)\left({\frac {i}{\sqrt {3}}}+{\frac {i}{\sqrt {3}}}\right)}}+{\frac {-{\frac {i}{\sqrt {3}}}}{\left(-{\frac {i}{\sqrt {3}}}+{\sqrt {3}}i\right)\left(-{\frac {i}{\sqrt {3}}}-{\sqrt {3}}i\right)\left(-{\frac {i}{\sqrt {3}}}-{\frac {i}{\sqrt {3}}}\right)}}\right]\\={}&{\frac {8\pi }{3}}\left[{\frac {\frac {i}{\sqrt {3}}}{\left({\frac {4}{\sqrt {3}}}i\right)\left(-{\frac {2}{i{\sqrt 
{3}}}}\right)\left({\frac {2}{{\sqrt {3}}i}}\right)}}+{\frac {-{\frac {i}{\sqrt {3}}}}{\left({\frac {2}{\sqrt {3}}}i\right)\left(-{\frac {4}{\sqrt {3}}}i\right)\left(-{\frac {2}{\sqrt {3}}}i\right)}}\right]\\={}&{\frac {8\pi }{3}}\left[{\frac {\frac {i}{\sqrt {3}}}{i\left({\frac {4}{\sqrt {3}}}\right)\left({\frac {2}{\sqrt {3}}}\right)\left({\frac {2}{\sqrt {3}}}\right)}}+{\frac {-{\frac {i}{\sqrt {3}}}}{-i\left({\frac {2}{\sqrt {3}}}\right)\left({\frac {4}{\sqrt {3}}}\right)\left({\frac {2}{\sqrt {3}}}\right)}}\right]\\={}&{\frac {8\pi }{3}}\left[{\frac {\frac {1}{\sqrt {3}}}{\left({\frac {4}{\sqrt {3}}}\right)\left({\frac {2}{\sqrt {3}}}\right)\left({\frac {2}{\sqrt {3}}}\right)}}+{\frac {\frac {1}{\sqrt {3}}}{\left({\frac {2}{\sqrt {3}}}\right)\left({\frac {4}{\sqrt {3}}}\right)\left({\frac {2}{\sqrt {3}}}\right)}}\right]\\={}&{\frac {8\pi }{3}}\left[{\frac {\frac {1}{\sqrt {3}}}{\frac {16}{3{\sqrt {3}}}}}+{\frac {\frac {1}{\sqrt {3}}}{\frac {16}{3{\sqrt {3}}}}}\right]\\={}&{\frac {8\pi }{3}}\left[{\frac {3}{16}}+{\frac {3}{16}}\right]\\={}&\pi .\end{aligned}}} === Example 3a – trigonometric integrals, the general procedure === The above method may be applied to all integrals of the type ∫ 0 2 π P ( sin ⁡ ( t ) , sin ⁡ ( 2 t ) , … , cos ⁡ ( t ) , cos ⁡ ( 2 t ) , … ) Q ( sin ⁡ ( t ) , sin ⁡ ( 2 t ) , … , cos ⁡ ( t ) , cos ⁡ ( 2 t ) , … ) d t {\displaystyle \int _{0}^{2\pi }{\frac {P{\big (}\sin(t),\sin(2t),\ldots ,\cos(t),\cos(2t),\ldots {\big )}}{Q{\big (}\sin(t),\sin(2t),\ldots ,\cos(t),\cos(2t),\ldots {\big )}}}\,dt} where P and Q are polynomials, i.e. a rational function in trigonometric terms is being integrated. Note that the bounds of integration may as well be π and −π, as in the previous example, or any other pair of endpoints 2π apart. The trick is to use the substitution z = eit where dz = ieit dt and hence 1 i z d z = d t . {\displaystyle {\frac {1}{iz}}\,dz=dt.} This substitution maps the interval [0, 2π] to the unit circle. 
Furthermore, sin ⁡ ( k t ) = e i k t − e − i k t 2 i = z k − z − k 2 i {\displaystyle \sin(kt)={\frac {e^{ikt}-e^{-ikt}}{2i}}={\frac {z^{k}-z^{-k}}{2i}}} and cos ⁡ ( k t ) = e i k t + e − i k t 2 = z k + z − k 2 {\displaystyle \cos(kt)={\frac {e^{ikt}+e^{-ikt}}{2}}={\frac {z^{k}+z^{-k}}{2}}} so that a rational function f(z) in z results from the substitution, and the integral becomes ∮ | z | = 1 f ( z ) 1 i z d z {\displaystyle \oint _{|z|=1}f(z){\frac {1}{iz}}\,dz} which is in turn computed by summing the residues of f(z)⁠1/iz⁠ inside the unit circle. The image at right illustrates this for I = ∫ 0 π 2 1 1 + ( sin ⁡ t ) 2 d t , {\displaystyle I=\int _{0}^{\frac {\pi }{2}}{\frac {1}{1+(\sin t)^{2}}}\,dt,} which we now compute. The first step is to recognize that I = 1 4 ∫ 0 2 π 1 1 + ( sin ⁡ t ) 2 d t . {\displaystyle I={\frac {1}{4}}\int _{0}^{2\pi }{\frac {1}{1+(\sin t)^{2}}}\,dt.} The substitution yields 1 4 ∮ | z | = 1 4 i z z 4 − 6 z 2 + 1 d z = ∮ | z | = 1 i z z 4 − 6 z 2 + 1 d z . {\displaystyle {\frac {1}{4}}\oint _{|z|=1}{\frac {4iz}{z^{4}-6z^{2}+1}}\,dz=\oint _{|z|=1}{\frac {iz}{z^{4}-6z^{2}+1}}\,dz.} The poles of this function are at 1 ± √2 and −1 ± √2. Of these, 1 + √2 and −1 − √2 are outside the unit circle (shown in red, not to scale), whereas 1 − √2 and −1 + √2 are inside the unit circle (shown in blue). The corresponding residues are both equal to −⁠i√2/16⁠, so that the value of the integral is I = 2 π i 2 ( − 2 16 i ) = π 2 4 . {\displaystyle I=2\pi i\;2\left(-{\frac {\sqrt {2}}{16}}i\right)=\pi {\frac {\sqrt {2}}{4}}.} === Example 4 – branch cuts === Consider the real integral ∫ 0 ∞ x x 2 + 6 x + 8 d x . {\displaystyle \int _{0}^{\infty }{\frac {\sqrt {x}}{x^{2}+6x+8}}\,dx.} We can begin by formulating the complex integral ∫ C z z 2 + 6 z + 8 d z = I . {\displaystyle \int _{C}{\frac {\sqrt {z}}{z^{2}+6z+8}}\,dz=I.} We can use the Cauchy integral formula or residue theorem again to obtain the relevant residues. 
However, the important thing to note is that z1/2 = e(Log z)/2, so z1/2 has a branch cut. This affects our choice of the contour C. Normally the logarithm branch cut is defined as the negative real axis; however, this makes the calculation of the integral slightly more complicated, so we define it to be the positive real axis. Then, we use the so-called keyhole contour, which consists of a small circle about the origin of radius ε say, extending to a line segment parallel and close to the positive real axis but not touching it, to an almost full circle, returning to a line segment parallel, close, and below the positive real axis in the negative sense, returning to the small circle in the middle. Note that z = −2 and z = −4 are inside the big circle. These are the two remaining poles, derivable by factoring the denominator of the integrand. The branch point at z = 0 was avoided by detouring around the origin. Let γ be the small circle of radius ε, Γ the larger, with radius R, then ∫ C = ∫ ε R + ∫ Γ + ∫ R ε + ∫ γ . {\displaystyle \int _{C}=\int _{\varepsilon }^{R}+\int _{\Gamma }+\int _{R}^{\varepsilon }+\int _{\gamma }.} It can be shown that the integrals over Γ and γ both tend to zero as ε → 0 and R → ∞, by an estimation argument as above, which leaves two terms. Now since z1/2 = e(Log z)/2, on the segment of the contour below the branch cut the argument of z has gained 2π relative to the segment above it. (The exponent 1/2 halves this gain in argument, producing the constant factor e(1/2)(2πi) = eiπ = −1 in the integrand.) So ∫ R ε z z 2 + 6 z + 8 d z = ∫ R ε e 1 2 Log z z 2 + 6 z + 8 d z = ∫ R ε e 1 2 ( log | z | + i arg z ) z 2 + 6 z + 8 d z = ∫ R ε e 1 2 log | z | e 1 2 ( 2 π i ) z 2 + 6 z + 8 d z = ∫ R ε e 1 2 log | z | e π i z 2 + 6 z + 8 d z = ∫ R ε − z z 2 + 6 z + 8 d z = ∫ ε R z z 2 + 6 z + 8 d z . 
{\displaystyle {\begin{aligned}\int _{R}^{\varepsilon }{\frac {\sqrt {z}}{z^{2}+6z+8}}\,dz&=\int _{R}^{\varepsilon }{\frac {e^{{\frac {1}{2}}\operatorname {Log} z}}{z^{2}+6z+8}}\,dz\\[6pt]&=\int _{R}^{\varepsilon }{\frac {e^{{\frac {1}{2}}(\log |z|+i\arg {z})}}{z^{2}+6z+8}}\,dz\\[6pt]&=\int _{R}^{\varepsilon }{\frac {e^{{\frac {1}{2}}\log |z|}e^{{\frac {1}{2}}(2\pi i)}}{z^{2}+6z+8}}\,dz\\[6pt]&=\int _{R}^{\varepsilon }{\frac {e^{{\frac {1}{2}}\log |z|}e^{\pi i}}{z^{2}+6z+8}}\,dz\\[6pt]&=\int _{R}^{\varepsilon }{\frac {-{\sqrt {z}}}{z^{2}+6z+8}}\,dz\\[6pt]&=\int _{\varepsilon }^{R}{\frac {\sqrt {z}}{z^{2}+6z+8}}\,dz.\end{aligned}}} Therefore: ∫ C z z 2 + 6 z + 8 d z = 2 ∫ 0 ∞ x x 2 + 6 x + 8 d x . {\displaystyle \int _{C}{\frac {\sqrt {z}}{z^{2}+6z+8}}\,dz=2\int _{0}^{\infty }{\frac {\sqrt {x}}{x^{2}+6x+8}}\,dx.} By using the residue theorem or the Cauchy integral formula (first employing the partial fractions method to derive a sum of two simple contour integrals) one obtains π i ( i 2 − i ) = ∫ 0 ∞ x x 2 + 6 x + 8 d x = π ( 1 − 1 2 ) . ◻ {\displaystyle \pi i\left({\frac {i}{\sqrt {2}}}-i\right)=\int _{0}^{\infty }{\frac {\sqrt {x}}{x^{2}+6x+8}}\,dx=\pi \left(1-{\frac {1}{\sqrt {2}}}\right).\quad \square } === Example 5 – the square of the logarithm === This section treats a type of integral of which ∫ 0 ∞ log ⁡ x ( 1 + x 2 ) 2 d x {\displaystyle \int _{0}^{\infty }{\frac {\log x}{\left(1+x^{2}\right)^{2}}}\,dx} is an example. To calculate this integral, one uses the function f ( z ) = ( log ⁡ z 1 + z 2 ) 2 {\displaystyle f(z)=\left({\frac {\log z}{1+z^{2}}}\right)^{2}} and the branch of the logarithm corresponding to −π < arg z ≤ π. We will calculate the integral of f(z) along the keyhole contour shown at right. 
As it turns out this integral is a multiple of the initial integral that we wish to calculate and by the Cauchy residue theorem we have ( ∫ R + ∫ M + ∫ N + ∫ r ) f ( z ) d z = 2 π i ( Res z = i ⁡ f ( z ) + Res z = − i ⁡ f ( z ) ) = 2 π i ( − π 4 + 1 16 i π 2 − π 4 − 1 16 i π 2 ) = − i π 2 . {\displaystyle {\begin{aligned}\left(\int _{R}+\int _{M}+\int _{N}+\int _{r}\right)f(z)\,dz=&\ 2\pi i{\big (}\operatorname {Res} _{z=i}f(z)+\operatorname {Res} _{z=-i}f(z){\big )}\\=&\ 2\pi i\left(-{\frac {\pi }{4}}+{\frac {1}{16}}i\pi ^{2}-{\frac {\pi }{4}}-{\frac {1}{16}}i\pi ^{2}\right)\\=&\ -i\pi ^{2}.\end{aligned}}} Let R be the radius of the large circle, and r the radius of the small one. We will denote the upper line by M, and the lower line by N. As before we take the limit when R → ∞ and r → 0. The contributions from the two circles vanish. For example, one has the following upper bound with the ML lemma: | ∫ R f ( z ) d z | ≤ 2 π R ( log ⁡ R ) 2 + π 2 ( R 2 − 1 ) 2 → 0. {\displaystyle \left|\int _{R}f(z)\,dz\right|\leq 2\pi R{\frac {(\log R)^{2}+\pi ^{2}}{\left(R^{2}-1\right)^{2}}}\to 0.} In order to compute the contributions of M and N we set z = −x + iε on M and z = −x − iε on N, with 0 < x < ∞: − i π 2 = ( ∫ R + ∫ M + ∫ N + ∫ r ) f ( z ) d z = ( ∫ M + ∫ N ) f ( z ) d z ∫ R , ∫ r vanish = − ∫ ∞ 0 ( log ⁡ ( − x + i ε ) 1 + ( − x + i ε ) 2 ) 2 d x − ∫ 0 ∞ ( log ⁡ ( − x − i ε ) 1 + ( − x − i ε ) 2 ) 2 d x = ∫ 0 ∞ ( log ⁡ ( − x + i ε ) 1 + ( − x + i ε ) 2 ) 2 d x − ∫ 0 ∞ ( log ⁡ ( − x − i ε ) 1 + ( − x − i ε ) 2 ) 2 d x = ∫ 0 ∞ ( log ⁡ x + i π 1 + x 2 ) 2 d x − ∫ 0 ∞ ( log ⁡ x − i π 1 + x 2 ) 2 d x ε → 0 = ∫ 0 ∞ ( log ⁡ x + i π ) 2 − ( log ⁡ x − i π ) 2 ( 1 + x 2 ) 2 d x = ∫ 0 ∞ 4 π i log ⁡ x ( 1 + x 2 ) 2 d x = 4 π i ∫ 0 ∞ log ⁡ x ( 1 + x 2 ) 2 d x {\displaystyle {\begin{aligned}-i\pi ^{2}&=\left(\int _{R}+\int _{M}+\int _{N}+\int _{r}\right)f(z)\,dz\\[6pt]&=\left(\int _{M}+\int _{N}\right)f(z)\,dz&&\int _{R},\int _{r}{\mbox{ vanish}}\\[6pt]&=-\int _{\infty 
}^{0}\left({\frac {\log(-x+i\varepsilon )}{1+(-x+i\varepsilon )^{2}}}\right)^{2}\,dx-\int _{0}^{\infty }\left({\frac {\log(-x-i\varepsilon )}{1+(-x-i\varepsilon )^{2}}}\right)^{2}\,dx\\[6pt]&=\int _{0}^{\infty }\left({\frac {\log(-x+i\varepsilon )}{1+(-x+i\varepsilon )^{2}}}\right)^{2}\,dx-\int _{0}^{\infty }\left({\frac {\log(-x-i\varepsilon )}{1+(-x-i\varepsilon )^{2}}}\right)^{2}\,dx\\[6pt]&=\int _{0}^{\infty }\left({\frac {\log x+i\pi }{1+x^{2}}}\right)^{2}\,dx-\int _{0}^{\infty }\left({\frac {\log x-i\pi }{1+x^{2}}}\right)^{2}\,dx&&\varepsilon \to 0\\&=\int _{0}^{\infty }{\frac {(\log x+i\pi )^{2}-(\log x-i\pi )^{2}}{\left(1+x^{2}\right)^{2}}}\,dx\\[6pt]&=\int _{0}^{\infty }{\frac {4\pi i\log x}{\left(1+x^{2}\right)^{2}}}\,dx\\[6pt]&=4\pi i\int _{0}^{\infty }{\frac {\log x}{\left(1+x^{2}\right)^{2}}}\,dx\end{aligned}}} which gives ∫ 0 ∞ log ⁡ x ( 1 + x 2 ) 2 d x = − π 4 . {\displaystyle \int _{0}^{\infty }{\frac {\log x}{\left(1+x^{2}\right)^{2}}}\,dx=-{\frac {\pi }{4}}.} === Example 6 – logarithms and the residue at infinity === We seek to evaluate I = ∫ 0 3 x 3 4 ( 3 − x ) 1 4 5 − x d x . {\displaystyle I=\int _{0}^{3}{\frac {x^{\frac {3}{4}}(3-x)^{\frac {1}{4}}}{5-x}}\,dx.} This requires a close study of f ( z ) = z 3 4 ( 3 − z ) 1 4 . {\displaystyle f(z)=z^{\frac {3}{4}}(3-z)^{\frac {1}{4}}.} We will construct f(z) so that it has a branch cut on [0, 3], shown in red in the diagram. To do this, we choose two branches of the logarithm, setting z 3 4 = exp ⁡ ( 3 4 log ⁡ z ) where − π ≤ arg ⁡ z < π {\displaystyle z^{\frac {3}{4}}=\exp \left({\frac {3}{4}}\log z\right)\quad {\mbox{where }}-\pi \leq \arg z<\pi } and ( 3 − z ) 1 4 = exp ⁡ ( 1 4 log ⁡ ( 3 − z ) ) where 0 ≤ arg ⁡ ( 3 − z ) < 2 π . {\displaystyle (3-z)^{\frac {1}{4}}=\exp \left({\frac {1}{4}}\log(3-z)\right)\quad {\mbox{where }}0\leq \arg(3-z)<2\pi .} The cut of z3⁄4 is therefore (−∞, 0] and the cut of (3 − z)1/4 is (−∞, 3]. It is easy to see that the cut of the product of the two, i.e. 
f(z), is [0, 3], because f(z) is actually continuous across (−∞, 0). This is because when z = −r < 0 and we approach the cut from above, f(z) has the value r 3 4 e 3 4 π i ( 3 + r ) 1 4 e 2 4 π i = r 3 4 ( 3 + r ) 1 4 e 5 4 π i . {\displaystyle r^{\frac {3}{4}}e^{{\frac {3}{4}}\pi i}(3+r)^{\frac {1}{4}}e^{{\frac {2}{4}}\pi i}=r^{\frac {3}{4}}(3+r)^{\frac {1}{4}}e^{{\frac {5}{4}}\pi i}.} When we approach from below, f(z) has the value r 3 4 e − 3 4 π i ( 3 + r ) 1 4 e 0 4 π i = r 3 4 ( 3 + r ) 1 4 e − 3 4 π i . {\displaystyle r^{\frac {3}{4}}e^{-{\frac {3}{4}}\pi i}(3+r)^{\frac {1}{4}}e^{{\frac {0}{4}}\pi i}=r^{\frac {3}{4}}(3+r)^{\frac {1}{4}}e^{-{\frac {3}{4}}\pi i}.} But e − 3 4 π i = e 5 4 π i , {\displaystyle e^{-{\frac {3}{4}}\pi i}=e^{{\frac {5}{4}}\pi i},} so that we have continuity across the cut. This is illustrated in the diagram, where the two black oriented circles are labelled with the corresponding value of the argument of the logarithm used in z3⁄4 and (3 − z)1/4. We will use the contour shown in green in the diagram. To do this we must compute the value of f(z) along the line segments just above and just below the cut. Let z = r (in the limit, i.e. as the two green circles shrink to radius zero), where 0 ≤ r ≤ 3. Along the upper segment, we find that f(z) has the value r 3 4 e 0 4 π i ( 3 − r ) 1 4 e 2 4 π i = i r 3 4 ( 3 − r ) 1 4 {\displaystyle r^{\frac {3}{4}}e^{{\frac {0}{4}}\pi i}(3-r)^{\frac {1}{4}}e^{{\frac {2}{4}}\pi i}=ir^{\frac {3}{4}}(3-r)^{\frac {1}{4}}} and along the lower segment, r 3 4 e 0 4 π i ( 3 − r ) 1 4 e 0 4 π i = r 3 4 ( 3 − r ) 1 4 . {\displaystyle r^{\frac {3}{4}}e^{{\frac {0}{4}}\pi i}(3-r)^{\frac {1}{4}}e^{{\frac {0}{4}}\pi i}=r^{\frac {3}{4}}(3-r)^{\frac {1}{4}}.} It follows that the integral of ⁠f(z)/5 − z⁠ along the upper segment is −iI in the limit, and along the lower segment, I. 
If we can show that the integrals along the two green circles vanish in the limit, then we also have the value of I, by the Cauchy residue theorem. Let the radius of the green circles be ρ, where ρ < 0.001 and ρ → 0, and apply the ML inequality. For the circle CL on the left, we find | ∫ C L f ( z ) 5 − z d z | ≤ 2 π ρ ρ 3 4 3.001 1 4 4.999 ∈ O ( ρ 7 4 ) → 0. {\displaystyle \left|\int _{C_{\mathrm {L} }}{\frac {f(z)}{5-z}}dz\right|\leq 2\pi \rho {\frac {\rho ^{\frac {3}{4}}3.001^{\frac {1}{4}}}{4.999}}\in {\mathcal {O}}\left(\rho ^{\frac {7}{4}}\right)\to 0.} Similarly, for the circle CR on the right, we have | ∫ C R f ( z ) 5 − z d z | ≤ 2 π ρ 3.001 3 4 ρ 1 4 1.999 ∈ O ( ρ 5 4 ) → 0. {\displaystyle \left|\int _{C_{\mathrm {R} }}{\frac {f(z)}{5-z}}dz\right|\leq 2\pi \rho {\frac {3.001^{\frac {3}{4}}\rho ^{\frac {1}{4}}}{1.999}}\in {\mathcal {O}}\left(\rho ^{\frac {5}{4}}\right)\to 0.} Now using the Cauchy residue theorem, we have ( − i + 1 ) I = − 2 π i ( Res z = 5 ⁡ f ( z ) 5 − z + Res z = ∞ ⁡ f ( z ) 5 − z ) . {\displaystyle (-i+1)I=-2\pi i\left(\operatorname {Res} _{z=5}{\frac {f(z)}{5-z}}+\operatorname {Res} _{z=\infty }{\frac {f(z)}{5-z}}\right).} where the minus sign is due to the clockwise direction around the residues. Using the branch of the logarithm from before, clearly Res z = 5 ⁡ f ( z ) 5 − z = − 5 3 4 e 1 4 log ⁡ ( − 2 ) . {\displaystyle \operatorname {Res} _{z=5}{\frac {f(z)}{5-z}}=-5^{\frac {3}{4}}e^{{\frac {1}{4}}\log(-2)}.} The pole is shown in blue in the diagram. The value simplifies to − 5 3 4 e 1 4 ( log ⁡ 2 + π i ) = − e 1 4 π i 5 3 4 2 1 4 . {\displaystyle -5^{\frac {3}{4}}e^{{\frac {1}{4}}(\log 2+\pi i)}=-e^{{\frac {1}{4}}\pi i}5^{\frac {3}{4}}2^{\frac {1}{4}}.} We use the following formula for the residue at infinity: Res z = ∞ ⁡ h ( z ) = Res z = 0 ⁡ ( − 1 z 2 h ( 1 z ) ) . 
{\displaystyle \operatorname {Res} _{z=\infty }h(z)=\operatorname {Res} _{z=0}\left(-{\frac {1}{z^{2}}}h\left({\frac {1}{z}}\right)\right).} Substituting, we find 1 5 − 1 z = − z ( 1 + 5 z + 5 2 z 2 + 5 3 z 3 + ⋯ ) {\displaystyle {\frac {1}{5-{\frac {1}{z}}}}=-z\left(1+5z+5^{2}z^{2}+5^{3}z^{3}+\cdots \right)} and ( 1 z 3 ( 3 − 1 z ) ) 1 4 = 1 z ( 3 z − 1 ) 1 4 = 1 z e 1 4 π i ( 1 − 3 z ) 1 4 , {\displaystyle \left({\frac {1}{z^{3}}}\left(3-{\frac {1}{z}}\right)\right)^{\frac {1}{4}}={\frac {1}{z}}(3z-1)^{\frac {1}{4}}={\frac {1}{z}}e^{{\frac {1}{4}}\pi i}(1-3z)^{\frac {1}{4}},} where we have used the fact that −1 = eπi for the second branch of the logarithm. Next we apply the binomial expansion, obtaining 1 z e 1 4 π i ( 1 − ( 1 / 4 1 ) 3 z + ( 1 / 4 2 ) 3 2 z 2 − ( 1 / 4 3 ) 3 3 z 3 + ⋯ ) . {\displaystyle {\frac {1}{z}}e^{{\frac {1}{4}}\pi i}\left(1-{1/4 \choose 1}3z+{1/4 \choose 2}3^{2}z^{2}-{1/4 \choose 3}3^{3}z^{3}+\cdots \right).} The conclusion is that Res z = ∞ ⁡ f ( z ) 5 − z = e 1 4 π i ( 5 − 3 4 ) = e 1 4 π i 17 4 . {\displaystyle \operatorname {Res} _{z=\infty }{\frac {f(z)}{5-z}}=e^{{\frac {1}{4}}\pi i}\left(5-{\frac {3}{4}}\right)=e^{{\frac {1}{4}}\pi i}{\frac {17}{4}}.} Finally, it follows that the value of I is I = 2 π i e 1 4 π i − 1 + i ( 17 4 − 5 3 4 2 1 4 ) = 2 π 2 − 1 2 ( 17 4 − 5 3 4 2 1 4 ) {\displaystyle I=2\pi i{\frac {e^{{\frac {1}{4}}\pi i}}{-1+i}}\left({\frac {17}{4}}-5^{\frac {3}{4}}2^{\frac {1}{4}}\right)=2\pi 2^{-{\frac {1}{2}}}\left({\frac {17}{4}}-5^{\frac {3}{4}}2^{\frac {1}{4}}\right)} which yields I = π 2 2 ( 17 − 5 3 4 2 9 4 ) = π 2 2 ( 17 − 40 3 4 ) . {\displaystyle I={\frac {\pi }{2{\sqrt {2}}}}\left(17-5^{\frac {3}{4}}2^{\frac {9}{4}}\right)={\frac {\pi }{2{\sqrt {2}}}}\left(17-40^{\frac {3}{4}}\right).} == Evaluation with residue theorem == Using the residue theorem, we can evaluate closed contour integrals. The following are examples on evaluating contour integrals with the residue theorem. 
Using the residue theorem, let us evaluate this contour integral. ∮ C e z z 3 d z {\displaystyle \oint _{C}{\frac {e^{z}}{z^{3}}}\,dz} Recall that the residue theorem states ∮ C f ( z ) d z = 2 π i ⋅ ∑ Res ⁡ ( f , a k ) {\displaystyle \oint _{C}f(z)dz=2\pi i\cdot \sum \operatorname {Res} (f,a_{k})} where Res {\displaystyle \operatorname {Res} } is the residue of f ( z ) {\displaystyle f(z)} , and the a k {\displaystyle a_{k}} are the singularities of f ( z ) {\displaystyle f(z)} lying inside the contour C {\displaystyle C} (with none of them lying directly on C {\displaystyle C} ). f ( z ) {\displaystyle f(z)} has only one pole, 0 {\displaystyle 0} . Its residue there is the coefficient of 1 / z {\displaystyle 1/z} in the Laurent expansion e z z 3 = 1 z 3 + 1 z 2 + 1 2 ! z + ⋯ {\displaystyle {\frac {e^{z}}{z^{3}}}={\frac {1}{z^{3}}}+{\frac {1}{z^{2}}}+{\frac {1}{2!\,z}}+\cdots } , so the residue of f ( z ) {\displaystyle f(z)} is 1 2 {\displaystyle {\tfrac {1}{2}}} : ∮ C f ( z ) d z = ∮ C e z z 3 d z = 2 π i ⋅ Res z = 0 ⁡ f ( z ) = 2 π i Res z = 0 ⁡ e z z 3 = 2 π i ⋅ 1 2 = π i {\displaystyle {\begin{aligned}\oint _{C}f(z)dz&=\oint _{C}{\frac {e^{z}}{z^{3}}}dz\\&=2\pi i\cdot \operatorname {Res} _{z=0}f(z)\\&=2\pi i\operatorname {Res} _{z=0}{\frac {e^{z}}{z^{3}}}\\&=2\pi i\cdot {\frac {1}{2}}\\&=\pi i\end{aligned}}} Thus, using the residue theorem, we can determine: ∮ C e z z 3 d z = π i . {\displaystyle \oint _{C}{\frac {e^{z}}{z^{3}}}dz=\pi i.} == Multivariable contour integrals == To solve multivariable contour integrals (i.e. surface integrals, complex volume integrals, and higher order integrals), we must use the divergence theorem. For now, let ∇ ⋅ {\displaystyle \nabla \cdot } be interchangeable with div {\displaystyle \operatorname {div} } . These will both serve as the divergence of the vector field denoted as F {\displaystyle \mathbf {F} } .
This theorem states: ∫ ⋯ ∫ U ⏟ n div ⁡ ( F ) d V = ∮ ⋯ ∮ ∂ U ⏟ n − 1 F ⋅ n d S {\displaystyle \underbrace {\int \cdots \int _{U}} _{n}\operatorname {div} (\mathbf {F} )\,dV=\underbrace {\oint \cdots \oint _{\partial U}} _{n-1}\mathbf {F} \cdot \mathbf {n} \,dS} In addition, we also need to evaluate ∇ ⋅ F {\displaystyle \nabla \cdot \mathbf {F} } , where ∇ ⋅ F {\displaystyle \nabla \cdot \mathbf {F} } is an alternate notation for div ⁡ ( F ) {\displaystyle \operatorname {div} (\mathbf {F} )} . The divergence in any dimension can be described as div ⁡ ( F ) = ∇ ⋅ F = ( ∂ ∂ u , ∂ ∂ x , ∂ ∂ y , ∂ ∂ z , … ) ⋅ ( F u , F x , F y , F z , … ) = ( ∂ F u ∂ u + ∂ F x ∂ x + ∂ F y ∂ y + ∂ F z ∂ z + ⋯ ) {\displaystyle {\begin{aligned}\operatorname {div} (\mathbf {F} )&=\nabla \cdot \mathbf {F} \\&=\left({\frac {\partial }{\partial u}},{\frac {\partial }{\partial x}},{\frac {\partial }{\partial y}},{\frac {\partial }{\partial z}},\dots \right)\cdot (F_{u},F_{x},F_{y},F_{z},\dots )\\&=\left({\frac {\partial F_{u}}{\partial u}}+{\frac {\partial F_{x}}{\partial x}}+{\frac {\partial F_{y}}{\partial y}}+{\frac {\partial F_{z}}{\partial z}}+\cdots \right)\end{aligned}}} === Example 1 === Let the vector field F = sin ⁡ ( 2 x ) e x + sin ⁡ ( 2 y ) e y + sin ⁡ ( 2 z ) e z {\displaystyle \mathbf {F} =\sin(2x)\mathbf {e} _{x}+\sin(2y)\mathbf {e} _{y}+\sin(2z)\mathbf {e} _{z}} and let it be bounded by the following: 0 ≤ x ≤ 1 0 ≤ y ≤ 3 − 1 ≤ z ≤ 4 {\displaystyle {0\leq x\leq 1}\quad {0\leq y\leq 3}\quad {-1\leq z\leq 4}} The corresponding double contour integral would be set up as such: ∮ ∮ ∂ V F ⋅ n d S {\displaystyle \oint \oint _{\partial V}\mathbf {F} \cdot \mathbf {n} \,dS} We now evaluate ∇ ⋅ F {\displaystyle \nabla \cdot \mathbf {F} } .
Meanwhile, set up the corresponding triple integral, with the innermost integration running over − 1 ≤ z ≤ 4 {\displaystyle -1\leq z\leq 4} and the outermost over 0 ≤ x ≤ 1 {\displaystyle 0\leq x\leq 1} , matching the stated bounds: {\displaystyle {\begin{aligned}&=\iiint _{V}\left({\frac {\partial F_{x}}{\partial x}}+{\frac {\partial F_{y}}{\partial y}}+{\frac {\partial F_{z}}{\partial z}}\right)dV\\[6pt]&=\iiint _{V}\left({\frac {\partial \sin(2x)}{\partial x}}+{\frac {\partial \sin(2y)}{\partial y}}+{\frac {\partial \sin(2z)}{\partial z}}\right)dV\\[6pt]&=\iiint _{V}2\left(\cos(2x)+\cos(2y)+\cos(2z)\right)dV\\[6pt]&=\int _{0}^{1}\int _{0}^{3}\int _{-1}^{4}2(\cos(2x)+\cos(2y)+\cos(2z))\,dz\,dy\,dx\\[6pt]&=\int _{0}^{1}\int _{0}^{3}(10\cos(2x)+10\cos(2y)+\sin(8)+\sin(2))\,dy\,dx\\[6pt]&=\int _{0}^{1}(30\cos(2x)+3\sin(2)+3\sin(8)+5\sin(6))\,dx\\[6pt]&=18\sin(2)+3\sin(8)+5\sin(6)\end{aligned}}} === Example 2 === Let the vector field F = u 4 e u + x 5 e x + y 6 e y + z − 3 e z {\displaystyle \mathbf {F} =u^{4}\mathbf {e} _{u}+x^{5}\mathbf {e} _{x}+y^{6}\mathbf {e} _{y}+z^{-3}\mathbf {e} _{z}} , and remark that there are 4 parameters in this case. Let this vector field be bounded by the following: 0 ≤ x ≤ 1 − 10 ≤ y ≤ 2 π 4 ≤ z ≤ 5 − 1 ≤ u ≤ 3 {\displaystyle {0\leq x\leq 1}\quad {-10\leq y\leq 2\pi }\quad {4\leq z\leq 5}\quad {-1\leq u\leq 3}} To evaluate this, we must utilize the divergence theorem as stated before, and we must evaluate ∇ ⋅ F {\displaystyle \nabla \cdot \mathbf {F} } .
Let d V = d x d y d z d u {\displaystyle dV=dx\,dy\,dz\,du} . Since ∂ u 4 ∂ u = 4 u 3 {\displaystyle {\tfrac {\partial u^{4}}{\partial u}}=4u^{3}} , ∂ x 5 ∂ x = 5 x 4 {\displaystyle {\tfrac {\partial x^{5}}{\partial x}}=5x^{4}} , ∂ y 6 ∂ y = 6 y 5 {\displaystyle {\tfrac {\partial y^{6}}{\partial y}}=6y^{5}} and ∂ z − 3 ∂ z = − 3 z − 4 {\displaystyle {\tfrac {\partial z^{-3}}{\partial z}}=-3z^{-4}} , the divergence theorem gives {\displaystyle {\begin{aligned}\iiiint _{V}\left({\frac {\partial F_{u}}{\partial u}}+{\frac {\partial F_{x}}{\partial x}}+{\frac {\partial F_{y}}{\partial y}}+{\frac {\partial F_{z}}{\partial z}}\right)dV&=\iiiint _{V}\left(4u^{3}+5x^{4}+6y^{5}-{\frac {3}{z^{4}}}\right)dV\\[6pt]&=\int _{-1}^{3}\int _{4}^{5}\int _{-10}^{2\pi }\int _{0}^{1}\left(4u^{3}+5x^{4}+6y^{5}-{\frac {3}{z^{4}}}\right)dx\,dy\,dz\,du.\end{aligned}}} Because the integrand is a sum of single-variable terms, each term may be integrated over its own variable and multiplied by the lengths of the remaining intervals (which are 1, 2 π + 10 {\displaystyle 2\pi +10} , 1 and 4 for the x, y, z and u intervals respectively): {\displaystyle {\begin{aligned}(2\pi +10)\int _{-1}^{3}4u^{3}\,du&=80(2\pi +10)\\4(2\pi +10)\int _{0}^{1}5x^{4}\,dx&=4(2\pi +10)\\4\int _{-10}^{2\pi }6y^{5}\,dy&=4\left(64\pi ^{6}-10^{6}\right)\\4(2\pi +10)\int _{4}^{5}\left(-{\frac {3}{z^{4}}}\right)dz&=-{\frac {61}{2000}}(2\pi +10).\end{aligned}}} Summing the four contributions, ⨌ V ∇ ⋅ F d V = ( 84 − 61 2000 ) ( 2 π + 10 ) + 256 π 6 − 4 ⋅ 10 6 ≈ − 3752517.08. {\displaystyle \iiiint _{V}\nabla \cdot \mathbf {F} \,dV=\left(84-{\frac {61}{2000}}\right)(2\pi +10)+256\pi ^{6}-4\cdot 10^{6}\approx -3752517.08.} Thus, we can evaluate a contour integral with n = 4 {\displaystyle n=4} .
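Stepping back to Example 1, its closed-form value can be cross-checked by direct numerical integration; a sketch using scipy.integrate.tplquad (whose integrand takes its arguments innermost first, here (z, y, x)):

```python
import numpy as np
from scipy.integrate import tplquad

# div F = 2(cos 2x + cos 2y + cos 2z) over the Example 1 box
# 0 <= x <= 1, 0 <= y <= 3, -1 <= z <= 4.
div_F = lambda z, y, x: 2 * (np.cos(2*x) + np.cos(2*y) + np.cos(2*z))
num, err = tplquad(div_F, 0, 1, 0, 3, -1, 4)

exact = 18*np.sin(2) + 3*np.sin(8) + 5*np.sin(6)
print(num, exact)  # both ≈ 17.93835
```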
We can use the same method to evaluate contour integrals for any vector field with n > 4 {\displaystyle n>4} as well. == Integral representation == An integral representation of a function is an expression of the function involving a contour integral. Various integral representations are known for many special functions. Integral representations can be important for theoretical reasons, e.g. giving analytic continuation or functional equations, or sometimes for numerical evaluations. For example, the original definition of the Riemann zeta function ζ(s) via a Dirichlet series, ζ ( s ) = ∑ n = 1 ∞ 1 n s , {\displaystyle \zeta (s)=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}},} is valid only for Re(s) > 1. But ζ ( s ) = − Γ ( 1 − s ) 2 π i ∫ H ( − t ) s − 1 e t − 1 d t , {\displaystyle \zeta (s)=-{\frac {\Gamma (1-s)}{2\pi i}}\int _{H}{\frac {(-t)^{s-1}}{e^{t}-1}}dt,} where the integration is done over the Hankel contour H, is valid for all complex s not equal to 1. == See also == Residue (complex analysis) Cauchy principal value Poisson integral Pochhammer contour == References == == Further reading == Titchmarsh, E. C. (1939), The Theory of Functions (2nd ed.), Oxford University Press; reprinted, 1968, ISBN 0-19-853349-7 Marko Riedel et al., Problème d'intégrale, Les-Mathematiques.net, in French. Marko Riedel et al., Integral by residue, math.stackexchange.com. W W L Chen, Introduction to Complex Analysis Various authors, sin límites ni cotas, es.ciencia.matematicas, in Spanish. == External links == "Complex integration, method of", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Methods_of_contour_integration
In mathematics, an integrodifference equation is a recurrence relation on a function space, of the following form: n t + 1 ( x ) = ∫ Ω k ( x , y ) f ( n t ( y ) ) d y , {\displaystyle n_{t+1}(x)=\int _{\Omega }k(x,y)\,f(n_{t}(y))\,dy,} where { n t } {\displaystyle \{n_{t}\}\,} is a sequence in the function space and Ω {\displaystyle \Omega \,} is the domain of those functions. In most applications, for any y ∈ Ω {\displaystyle y\in \Omega \,} , k ( x , y ) {\displaystyle k(x,y)\,} is a probability density function on Ω {\displaystyle \Omega \,} . Note that in the definition above, n t {\displaystyle n_{t}} can be vector valued, in which case each element of { n t } {\displaystyle \{n_{t}\}} has a scalar valued integrodifference equation associated with it. Integrodifference equations are widely used in mathematical biology, especially theoretical ecology, to model the dispersal and growth of populations. In this case, n t ( x ) {\displaystyle n_{t}(x)} is the population size or density at location x {\displaystyle x} at time t {\displaystyle t} , f ( n t ( x ) ) {\displaystyle f(n_{t}(x))} describes the local population growth at location x {\displaystyle x} , and k ( x , y ) {\displaystyle k(x,y)} is the probability of moving from point y {\displaystyle y} to point x {\displaystyle x} , often referred to as the dispersal kernel. Integrodifference equations are most commonly used to describe univoltine populations, including, but not limited to, many arthropod and annual plant species. However, multivoltine populations can also be modeled with integrodifference equations, as long as the organism has non-overlapping generations. In this case, t {\displaystyle t} is measured not in years, but rather in the time increment between broods. == Convolution kernels and invasion speeds == In one spatial dimension, the dispersal kernel often depends only on the distance between the source and the destination, and can be written as k ( x − y ) {\displaystyle k(x-y)} .
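A single generation of an integrodifference model can be sketched numerically by discretizing the convolution integral on a grid; the Gaussian dispersal kernel and Beverton–Holt growth function below are illustrative assumptions, not part of the general definition:

```python
import numpy as np

# Spatial grid; dx is the quadrature weight for the integral.
x = np.linspace(-50, 50, 2001)
dx = x[1] - x[0]

def kernel(d, sigma=1.0):
    """Gaussian dispersal kernel k(x - y) (illustrative choice)."""
    return np.exp(-d**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

def growth(n, R=2.0, K=1.0):
    """Beverton-Holt local growth f(n) (illustrative choice)."""
    return R * n / (1 + (R - 1) * n / K)

def step(n):
    """One generation: n_{t+1}(x) = integral of k(x - y) f(n_t(y)) dy."""
    return np.convolve(growth(n), kernel(x), mode="same") * dx

n = np.where(np.abs(x) < 1, 0.5, 0.0)  # compact initial population
for _ in range(5):
    n = step(n)

# Density stays below the carrying capacity K = 1 while the population
# spreads outward and its total mass grows generation by generation.
print(n.max(), n.sum() * dx)
```

The symmetric grid keeps `np.convolve(..., mode="same")` aligned with the kernel's center; for large grids an FFT-based convolution would be the usual optimization.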
In this case, some natural conditions on f and k imply that there is a well-defined spreading speed for waves of invasion generated from compact initial conditions. The wave speed is often calculated by studying the linearized equation n t + 1 = ∫ − ∞ ∞ k ( x − y ) R n t ( y ) d y {\displaystyle n_{t+1}=\int _{-\infty }^{\infty }k(x-y)Rn_{t}(y)dy} where R = d f d n | n = 0 {\displaystyle R=\left.{\dfrac {df}{dn}}\right|_{n=0}} . This can be written as the convolution n t + 1 = f ′ ( 0 ) k ∗ n t {\displaystyle n_{t+1}=f'(0)k*n_{t}} Using a moment-generating-function transformation M ( s ) = ∫ − ∞ ∞ e s x n ( x ) d x {\displaystyle M(s)=\int _{-\infty }^{\infty }e^{sx}n(x)dx} it has been shown that the critical wave speed is c ∗ = min w > 0 [ 1 w ln ⁡ ( R ∫ − ∞ ∞ k ( s ) e w s d s ) ] . {\displaystyle c^{*}=\min _{w>0}\left[{\frac {1}{w}}\ln \left(R\int _{-\infty }^{\infty }k(s)e^{ws}ds\right)\right].} Other types of equations used to model population dynamics through space include reaction–diffusion equations and metapopulation equations. However, diffusion equations do not as easily allow for the inclusion of explicit dispersal patterns and are only biologically accurate for populations with overlapping generations. Metapopulation equations differ from integrodifference equations in that they break the population down into discrete patches rather than a continuous landscape. == References ==
Wikipedia/Integrodifference_equation
Chapman–Enskog theory provides a framework in which equations of hydrodynamics for a gas can be derived from the Boltzmann equation. The technique justifies the otherwise phenomenological constitutive relations appearing in hydrodynamical descriptions such as the Navier–Stokes equations. In doing so, expressions for various transport coefficients such as thermal conductivity and viscosity are obtained in terms of molecular parameters. Thus, Chapman–Enskog theory constitutes an important step in the passage from a microscopic, particle-based description to a continuum hydrodynamical one. The theory is named for Sydney Chapman and David Enskog, who introduced it independently in 1916 and 1917. == Description == The starting point of Chapman–Enskog theory is the Boltzmann equation for the 1-particle distribution function f ( r , v , t ) {\displaystyle f(\mathbf {r} ,\mathbf {v} ,t)} : ∂ f ∂ t + v ⋅ ∂ f ∂ r + F m ⋅ ∂ f ∂ v = C ^ f , {\displaystyle {\frac {\partial f}{\partial t}}+\mathbf {v} \cdot {\frac {\partial f}{\partial \mathbf {r} }}+{\frac {\mathbf {F} }{m}}\cdot {\frac {\partial f}{\partial \mathbf {v} }}={\hat {C}}f,} where C ^ {\displaystyle {\hat {C}}} is a nonlinear integral operator which models the evolution of f {\displaystyle f} under interparticle collisions. This nonlinearity makes solving the full Boltzmann equation difficult, and motivates the development of approximate techniques such as the one provided by Chapman–Enskog theory. Given this starting point, the various assumptions underlying the Boltzmann equation carry over to Chapman–Enskog theory as well. The most basic of these requires a separation of scale between the collision duration τ c {\displaystyle \tau _{\mathrm {c} }} and the mean free time between collisions τ f {\displaystyle \tau _{\mathrm {f} }} : τ c ≪ τ f {\displaystyle \tau _{\mathrm {c} }\ll \tau _{\mathrm {f} }} . 
This condition ensures that collisions are well-defined events in space and time, and holds if the dimensionless parameter γ ≡ r c 3 n {\displaystyle \gamma \equiv r_{\mathrm {c} }^{3}n} is small, where r c {\displaystyle r_{\mathrm {c} }} is the range of interparticle interactions and n {\displaystyle n} is the number density. In addition to this assumption, Chapman–Enskog theory also requires that τ f {\displaystyle \tau _{\mathrm {f} }} is much smaller than any extrinsic timescales τ ext {\displaystyle \tau _{\text{ext}}} . These are the timescales associated with the terms on the left hand side of the Boltzmann equation, which describe variations of the gas state over macroscopic lengths. Typically, their values are determined by initial/boundary conditions and/or external fields. This separation of scales implies that the collisional term on the right hand side of the Boltzmann equation is much larger than the streaming terms on the left hand side. Thus, an approximate solution can be found from C ^ f = 0. {\displaystyle {\hat {C}}f=0.} It can be shown that the solution to this equation is a Gaussian: f = n ( r , t ) ( m 2 π k B T ( r , t ) ) 3 / 2 exp ⁡ [ − m | v − v 0 ( r , t ) | 2 2 k B T ( r , t ) ] , {\displaystyle f=n(\mathbf {r} ,t)\left({\frac {m}{2\pi k_{\text{B}}T(\mathbf {r} ,t)}}\right)^{3/2}\exp \left[-{\frac {m{\left|\mathbf {v} -\mathbf {v} _{0}(\mathbf {r} ,t)\right|}^{2}}{2k_{\text{B}}T(\mathbf {r} ,t)}}\right],} where m {\displaystyle m} is the molecule mass and k B {\displaystyle k_{\text{B}}} is the Boltzmann constant. A gas is said to be in local equilibrium if it satisfies this equation. The assumption of local equilibrium leads directly to the Euler equations, which describe fluids without dissipation, i.e. with thermal conductivity and viscosity equal to 0 {\displaystyle 0} . The primary goal of Chapman–Enskog theory is to systematically obtain generalizations of the Euler equations which incorporate dissipation. 
This is achieved by expressing deviations from local equilibrium as a perturbative series in Knudsen number Kn {\displaystyle {\text{Kn}}} , which is small if τ f ≪ τ ext {\displaystyle \tau _{\mathrm {f} }\ll \tau _{\text{ext}}} . Conceptually, the resulting hydrodynamic equations describe the dynamical interplay between free streaming and interparticle collisions. The latter tend to drive the gas towards local equilibrium, while the former acts across spatial inhomogeneities to drive the gas away from local equilibrium. When the Knudsen number is of the order of 1 or greater, the gas in the system being considered cannot be described as a fluid. To first order in Kn {\displaystyle {\text{Kn}}} one obtains the Navier–Stokes equations. Second and third orders give rise, respectively, to the Burnett equations and super-Burnett equations. == Mathematical formulation == Since the Knudsen number does not appear explicitly in the Boltzmann equation, but rather implicitly in terms of the distribution function and boundary conditions, a dummy variable ε {\displaystyle \varepsilon } is introduced to keep track of the appropriate orders in the Chapman–Enskog expansion: ∂ f ∂ t + v ⋅ ∂ f ∂ r + F m ⋅ ∂ f ∂ v = 1 ε C ^ f . {\displaystyle {\frac {\partial f}{\partial t}}+\mathbf {v\cdot } {\frac {\partial f}{\partial \mathbf {r} }}+{\frac {\mathbf {F} }{m}}\cdot {\frac {\partial f}{\partial \mathbf {v} }}={\frac {1}{\varepsilon }}{\hat {C}}f.} Small ε {\displaystyle \varepsilon } implies the collisional term C ^ f {\displaystyle {\hat {C}}f} dominates the streaming term v ⋅ ∂ f ∂ r + F m ⋅ ∂ f ∂ v {\displaystyle \mathbf {v\cdot } {\frac {\partial f}{\partial \mathbf {r} }}+{\frac {\mathbf {F} }{m}}\cdot {\frac {\partial f}{\partial \mathbf {v} }}} , which is the same as saying the Knudsen number is small. Thus, the appropriate form for the Chapman–Enskog expansion is f = f ( 0 ) + ε f ( 1 ) + ε 2 f ( 2 ) + ⋯ . 
{\displaystyle f=f^{(0)}+\varepsilon f^{(1)}+\varepsilon ^{2}f^{(2)}+\cdots \ .} Solutions that can be formally expanded in this way are known as normal solutions to the Boltzmann equation. This class of solutions excludes non-perturbative contributions (such as e − 1 / ε {\displaystyle e^{-1/\varepsilon }} ), which appear in boundary layers or near internal shock layers. Thus, Chapman–Enskog theory is restricted to situations in which such contributions are negligible. Substituting this expansion and equating orders of ε {\displaystyle \varepsilon } leads to the hierarchy J ( f ( 0 ) , f ( 0 ) ) = 0 2 J ( f ( 0 ) , f ( n ) ) = ( ∂ ∂ t + v ⋅ ∂ ∂ r + F m ⋅ ∂ ∂ v ) f ( n − 1 ) − ∑ m = 1 n − 1 J ( f ( m ) , f ( n − m ) ) , n > 0 , {\displaystyle {\begin{aligned}J(f^{(0)},f^{(0)})&=0\\2J(f^{(0)},f^{(n)})&=\left({\frac {\partial }{\partial t}}+\mathbf {v\cdot } {\frac {\partial }{\partial \mathbf {r} }}+{\frac {\mathbf {F} }{m}}\cdot {\frac {\partial }{\partial \mathbf {v} }}\right)f^{(n-1)}-\sum _{m=1}^{n-1}J(f^{(m)},f^{(n-m)}),\qquad n>0,\end{aligned}}} where J {\displaystyle J} is an integral operator, linear in both its arguments, which satisfies J ( f , g ) = J ( g , f ) {\displaystyle J(f,g)=J(g,f)} and J ( f , f ) = C ^ f {\displaystyle J(f,f)={\hat {C}}f} . The solution to the first equation is a Gaussian: f ( 0 ) = n ′ ( r , t ) ( m 2 π k B T ′ ( r , t ) ) 3 / 2 exp ⁡ [ − m | v − v 0 ′ ( r , t ) | 2 2 k B T ′ ( r , t ) ] , {\displaystyle f^{(0)}=n'(\mathbf {r} ,t)\left({\frac {m}{2\pi k_{\text{B}}T'(\mathbf {r} ,t)}}\right)^{3/2}\exp \left[-{\frac {m\left|\mathbf {v} -\mathbf {v} '_{0}(\mathbf {r} ,t)\right|^{2}}{2k_{\text{B}}T'(\mathbf {r} ,t)}}\right],} for some functions n ′ ( r , t ) {\displaystyle n'(\mathbf {r} ,t)} , v 0 ′ ( r , t ) {\displaystyle \mathbf {v} '_{0}(\mathbf {r} ,t)} , and T ′ ( r , t ) {\displaystyle T'(\mathbf {r} ,t)} .
The expression for f ( 0 ) {\displaystyle f^{(0)}} suggests a connection between these functions and the physical hydrodynamic fields defined as moments of f ( r , v , t ) {\displaystyle f(\mathbf {r} ,\mathbf {v} ,t)} : n ( r , t ) = ∫ f ( r , v , t ) d v n ( r , t ) v 0 ( r , t ) = ∫ v f ( r , v , t ) d v n ( r , t ) T ( r , t ) = ∫ m 3 k B | v − v 0 | 2 f ( r , v , t ) d v . {\displaystyle {\begin{aligned}n(\mathbf {r} ,t)&=\int f(\mathbf {r} ,\mathbf {v} ,t)\,d\mathbf {v} \\n(\mathbf {r} ,t)\mathbf {v} _{0}(\mathbf {r} ,t)&=\int \mathbf {v} f(\mathbf {r} ,\mathbf {v} ,t)\,d\mathbf {v} \\n(\mathbf {r} ,t)T(\mathbf {r} ,t)&=\int {\frac {m}{3k_{\text{B}}}}\left|\mathbf {v} -\mathbf {v} _{0}\right|^{2}f(\mathbf {r} ,\mathbf {v} ,t)\,d\mathbf {v} .\end{aligned}}} From a purely mathematical point of view, however, the two sets of functions are not necessarily the same for ε > 0 {\displaystyle \varepsilon >0} (for ε = 0 {\displaystyle \varepsilon =0} they are equal by definition). Indeed, proceeding systematically in the hierarchy, one finds that similarly to f ( 0 ) {\displaystyle f^{(0)}} , each f ( n ) {\displaystyle f^{(n)}} also contains arbitrary functions of r {\displaystyle \mathbf {r} } and t {\displaystyle t} whose relation to the physical hydrodynamic fields is a priori unknown. One of the key simplifying assumptions of Chapman–Enskog theory is to assume that these otherwise arbitrary functions can be written in terms of the exact hydrodynamic fields and their spatial gradients. In other words, the space and time dependence of f {\displaystyle f} enters only implicitly through the hydrodynamic fields. This statement is physically plausible because small Knudsen numbers correspond to the hydrodynamic regime, in which the state of the gas is determined solely by the hydrodynamic fields.
In the case of f ( 0 ) {\displaystyle f^{(0)}} , the functions n ′ ( r , t ) {\displaystyle n'(\mathbf {r} ,t)} , v 0 ′ ( r , t ) {\displaystyle \mathbf {v} '_{0}(\mathbf {r} ,t)} , and T ′ ( r , t ) {\displaystyle T'(\mathbf {r} ,t)} are assumed exactly equal to the physical hydrodynamic fields. While these assumptions are physically plausible, there is the question of whether solutions which satisfy these properties actually exist. More precisely, one must show that solutions exist satisfying ∫ ∑ n = 1 ∞ ε n f ( n ) d v = 0 = ∫ ∑ n = 1 ∞ ε n f ( n ) v 2 d v ∫ ∑ n = 1 ∞ ε n f ( n ) v i d v = 0 , i ∈ { x , y , z } . {\displaystyle {\begin{aligned}\int \sum _{n=1}^{\infty }\varepsilon ^{n}f^{(n)}\,d\mathbf {v} =0=\int \sum _{n=1}^{\infty }\varepsilon ^{n}f^{(n)}\mathbf {v} ^{2}\,d\mathbf {v} \\[1ex]\int \sum _{n=1}^{\infty }\varepsilon ^{n}f^{(n)}v_{i}\,d\mathbf {v} =0,\qquad i\in \{x,y,z\}.\end{aligned}}} Moreover, even if such solutions exist, there remains the additional question of whether they span the complete set of normal solutions to the Boltzmann equation, i.e. do not represent an artificial restriction of the original expansion in ε {\displaystyle \varepsilon } . One of the key technical achievements of Chapman–Enskog theory is to answer both of these questions in the positive. Thus, at least at the formal level, there is no loss of generality in the Chapman–Enskog approach. With these formal considerations established, one can proceed to calculate f ( 1 ) {\displaystyle f^{(1)}} . 
The result is f ( 1 ) = [ − 1 n ( 2 k B T m ) 1 / 2 A ( v ) ⋅ ∇ ln ⁡ T − 2 n B ( v ) : ∇ v 0 ] f ( 0 ) , {\displaystyle f^{(1)}=\left[-{\frac {1}{n}}\left({\frac {2k_{\text{B}}T}{m}}\right)^{1/2}\mathbf {A} (\mathbf {v} )\cdot \nabla \ln T-{\frac {2}{n}}\mathbb {B(\mathbf {v} )\colon \nabla } \mathbf {v} _{0}\right]f^{(0)},} where A ( v ) {\displaystyle \mathbf {A} (\mathbf {v} )} is a vector and B ( v ) {\displaystyle \mathbb {B} (\mathbf {v} )} a tensor, each a solution of a linear inhomogeneous integral equation that can be solved explicitly by a polynomial expansion. Here, the colon denotes the double dot product, T : T ′ = ∑ i , j T i j T j i ′ {\textstyle \mathbb {T} :\mathbb {T'} =\sum _{i,j}T_{ij}T'_{ji}} for tensors T {\displaystyle \mathbb {T} } , T ′ {\displaystyle \mathbb {T'} } . == Predictions == To first order in the Knudsen number, the heat flux q = m 2 ∫ f ( r , v , t ) | v − v 0 | 2 ( v − v 0 ) d v {\textstyle \mathbf {q} ={\frac {m}{2}}\int f(\mathbf {r} ,\mathbf {v} ,t)\,\left|\mathbf {v} -\mathbf {v} _{0}\right|^{2}(\mathbf {v} -\mathbf {v} _{0})\,d\mathbf {v} } is found to obey Fourier's law of heat conduction, q = − λ ∇ T , {\displaystyle \mathbf {q} =-\lambda \nabla T,} and the momentum-flux tensor σ = m ∫ ( v − v 0 ) ( v − v 0 ) T f ( r , v , t ) d v {\textstyle \mathbf {\sigma } =m\int (\mathbf {v} -\mathbf {v} _{0})(\mathbf {v} -\mathbf {v} _{0})^{\mathsf {T}}f(\mathbf {r} ,\mathbf {v} ,t)\,d\mathbf {v} } is that of a Newtonian fluid, σ = p I − μ ( ∇ v 0 + ∇ v 0 T ) + 2 3 μ ( ∇ ⋅ v 0 ) I , {\displaystyle \mathbf {\sigma } =p\mathbb {I} -\mu \left(\nabla \mathbf {v_{0}} +\nabla \mathbf {v_{0}} ^{T}\right)+{\frac {2}{3}}\mu (\nabla \cdot \mathbf {v_{0}} )\mathbb {I} ,} with I {\displaystyle \mathbb {I} } the identity tensor. Here, λ {\displaystyle \lambda } and μ {\displaystyle \mu } are the thermal conductivity and viscosity.
They can be calculated explicitly in terms of molecular parameters by solving a linear integral equation; the table below summarizes the results for a few important molecular models ( m {\displaystyle m} is the molecule mass and k B {\displaystyle k_{\text{B}}} is the Boltzmann constant). With these results, it is straightforward to obtain the Navier–Stokes equations. Taking velocity moments of the Boltzmann equation leads to the exact balance equations for the hydrodynamic fields n ( r , t ) {\displaystyle n(\mathbf {r} ,t)} , v 0 ( r , t ) {\displaystyle \mathbf {v} _{0}(\mathbf {r} ,t)} , and T ( r , t ) {\displaystyle T(\mathbf {r} ,t)} : ∂ n ∂ t + ∇ ⋅ ( n v 0 ) = 0 ∂ v 0 ∂ t + v 0 ⋅ ∇ v 0 − F m + 1 n ∇ ⋅ σ = 0 ∂ T ∂ t + v 0 ⋅ ∇ T + 2 3 k B n ( σ : ∇ v 0 + ∇ ⋅ q ) = 0. {\displaystyle {\begin{aligned}{\frac {\partial n}{\partial t}}+\nabla \cdot \left(n\mathbf {v} _{0}\right)&=0\\{\frac {\partial \mathbf {v} _{0}}{\partial t}}+\mathbf {v} _{0}\cdot \nabla \mathbf {v} _{0}-{\frac {\mathbf {F} }{m}}+{\frac {1}{n}}\nabla \cdot \mathbf {\sigma } &=0\\{\frac {\partial T}{\partial t}}+\mathbf {v} _{0}\cdot \nabla T+{\frac {2}{3k_{\text{B}}n}}\left(\mathbf {\sigma :} \nabla \mathbf {v} _{0}+\nabla \cdot \mathbf {q} \right)&=0.\end{aligned}}} As in the previous section the colon denotes the double dot product, T : T ′ = ∑ i , j T i j T j i ′ {\textstyle \mathbb {T} :\mathbb {T'} =\sum _{i,j}T_{ij}T'_{ji}} . Substituting the Chapman–Enskog expressions for q {\displaystyle \mathbf {q} } and σ {\displaystyle \sigma } , one arrives at the Navier–Stokes equations. === Comparison with experiment === An important prediction of Chapman–Enskog theory is that viscosity, μ {\displaystyle \mu } , is independent of density (this can be seen for each molecular model in table 1, but is actually model-independent). This counterintuitive result traces back to James Clerk Maxwell, who inferred it in 1860 on the basis of more elementary kinetic arguments. 
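For instance, the first of the balance equations above (the continuity equation) can be integrated numerically. The sketch below is a minimal, assumed one-dimensional finite-difference discretisation with periodic boundaries, not part of Chapman–Enskog theory itself; its conservative form guarantees that the total particle number is preserved:

```python
import numpy as np

# Hypothetical 1-D sketch of the continuity equation
#   dn/dt + d(n*v0)/dx = 0
# on a periodic grid with a conservative upwind flux (v0 > 0).
# Illustrative only; grid, time step, and fields are arbitrary choices.
L, N, dt, steps = 1.0, 100, 1e-4, 1000
x = np.linspace(0, L, N, endpoint=False)
dx = L / N
n = 1.0 + 0.1 * np.sin(2 * np.pi * x)   # initial number-density profile
v0 = 0.5 * np.ones(N)                   # uniform advection velocity

for _ in range(steps):
    flux = n * v0
    # conservative update: each cell loses what its neighbour gains
    n = n - dt / dx * (flux - np.roll(flux, 1))

# total particle number is conserved by the conservative scheme
print(abs(n.sum() - 100.0) < 1e-6)
```

Because the fluxes telescope over the periodic grid, the sum of n is conserved to machine precision, mirroring the exact conservation built into the moment equations.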
It is well-verified experimentally for gases at ordinary densities. On the other hand, the theory predicts that μ {\displaystyle \mu } does depend on temperature. For rigid elastic spheres, the predicted scaling is μ ∝ T 1 / 2 {\displaystyle \mu \propto T^{1/2}} , while other models typically show greater variation with temperature. For instance, for molecules repelling each other with force ∝ r − ν {\displaystyle \propto r^{-\nu }} the predicted scaling is μ ∝ T s {\displaystyle \mu \propto T^{s}} , where s = 1 / 2 + 2 / ( ν − 1 ) {\displaystyle s=1/2+2/(\nu -1)} . Taking s = 0.668 {\displaystyle s=0.668} , corresponding to ν ≈ 12.9 {\displaystyle \nu \approx 12.9} , shows reasonable agreement with the experimentally observed scaling for helium. For more complex gases the agreement is not as good, most likely due to the neglect of attractive forces. Indeed, the Lennard-Jones model, which does incorporate attractions, can be brought into closer agreement with experiment (albeit at the cost of a more opaque T {\displaystyle T} dependence; see the Lennard-Jones entry in table 1). To obtain still better agreement with experimental data than the Lennard-Jones model provides, the more flexible Mie potential has been used; the added flexibility of this potential allows accurate prediction of the transport properties of mixtures of a variety of spherically symmetric molecules. Chapman–Enskog theory also predicts a simple relation between thermal conductivity, λ {\displaystyle \lambda } , and viscosity, μ {\displaystyle \mu } , in the form λ = f μ c v {\displaystyle \lambda =f\mu c_{v}} , where c v {\displaystyle c_{v}} is the specific heat at constant volume and f {\displaystyle f} is a purely numerical factor. For spherically symmetric molecules, its value is predicted to be very close to 2.5 {\displaystyle 2.5} in a slightly model-dependent way.
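The temperature exponent quoted above follows directly from the formula s = 1/2 + 2/(ν − 1); a short check (illustrative only):

```python
# Chapman-Enskog scaling of viscosity with temperature for inverse-power
# repulsion F ~ r**(-nu): mu ~ T**s with s = 1/2 + 2/(nu - 1).
def viscosity_exponent(nu: float) -> float:
    return 0.5 + 2.0 / (nu - 1.0)

print(round(viscosity_exponent(12.9), 3))  # 0.668, the helium-like case
print(viscosity_exponent(5.0))             # 1.0 for Maxwell molecules
# rigid elastic spheres correspond to nu -> infinity, giving s -> 1/2
```

The limiting cases bracket the physically observed exponents: real monatomic gases fall between the hard-sphere value 1/2 and the Maxwell-molecule value 1.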
For instance, rigid elastic spheres have f ≈ 2.522 {\displaystyle f\approx 2.522} , and molecules with repulsive force ∝ r − 13 {\displaystyle \propto r^{-13}} have f ≈ 2.511 {\displaystyle f\approx 2.511} (the latter deviation is ignored in table 1). The special case of Maxwell molecules (repulsive force ∝ r − 5 {\displaystyle \propto r^{-5}} ) has f = 2.5 {\displaystyle f=2.5} exactly. Since λ {\displaystyle \lambda } , μ {\displaystyle \mu } , and c v {\displaystyle c_{v}} can be measured directly in experiments, a simple experimental test of Chapman–Enskog theory is to measure f {\displaystyle f} for the spherically symmetric noble gases. Table 2 shows that there is reasonable agreement between theory and experiment. == Extensions == The basic principles of Chapman–Enskog theory can be extended to more diverse physical models, including gas mixtures and molecules with internal degrees of freedom. In the high-density regime, the theory can be adapted to account for collisional transport of momentum and energy, i.e. transport over a molecular diameter during a collision, rather than over a mean free path (in between collisions). Including this mechanism predicts a density dependence of the viscosity at high enough density, which is also observed experimentally. Obtaining the corrections used to account for transport during a collision for soft molecules (i.e. Lennard-Jones or Mie molecules) is in general non-trivial, but success has been achieved at applying Barker-Henderson perturbation theory to accurately describe these effects up to the critical density of various fluid mixtures. One can also carry out the theory to higher order in the Knudsen number. In particular, the second-order contribution f ( 2 ) {\displaystyle f^{(2)}} has been calculated by Burnett. In general circumstances, however, these higher-order corrections may not give reliable improvements to the first-order theory, due to the fact that the Chapman–Enskog expansion does not always converge. 
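The relation λ = f μ c_v can be tested directly against tabulated gas data. In the sketch below, the argon transport values near 273 K are rough, assumed literature-style numbers used purely for illustration, not authoritative data:

```python
# Illustrative check of the Chapman-Enskog relation lambda = f * mu * c_v
# for a monatomic gas. The argon transport values are rough, assumed
# numbers for the purpose of this sketch.
R = 8.314          # J/(mol K), gas constant
M = 0.039948       # kg/mol, molar mass of argon
c_v = 1.5 * R / M  # J/(kg K), specific heat at constant volume (monatomic)

mu = 2.1e-5        # Pa s, viscosity of argon near 273 K (assumed value)
lam = 0.0163       # W/(m K), thermal conductivity near 273 K (assumed value)

f = lam / (mu * c_v)
print(round(f, 2))  # close to the predicted value of about 2.5
```

Even with such rough inputs, the measured ratio lands near 2.5, which is the kind of agreement summarised in table 2.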
(On the other hand, the expansion is thought to be at least asymptotic to solutions of the Boltzmann equation, in which case truncating at low order still gives accurate results.) Even if the higher-order corrections do afford improvement in a given system, the interpretation of the corresponding hydrodynamical equations is still debated. === Revised Enskog theory === The extension of Chapman–Enskog theory for multicomponent mixtures to elevated densities, in particular densities at which the covolume of the mixture is non-negligible, was carried out in a series of works by E. G. D. Cohen and others, and was named Revised Enskog theory (RET). The successful derivation of RET followed several previous attempts, which gave results that were shown to be inconsistent with irreversible thermodynamics. The starting point for developing the RET is a modified form of the Boltzmann equation for the s {\displaystyle s} -particle velocity distribution function, ( ∂ ∂ t + v i ⋅ ∂ ∂ r + F i m i ⋅ ∂ ∂ v i ) f i = ∑ j S i j ( f i , f j ) {\displaystyle \left({\frac {\partial }{\partial t}}+\mathbf {v} _{i}\cdot {\frac {\partial }{\partial \mathbf {r} }}+{\frac {\mathbf {F} _{i}}{m_{i}}}\cdot {\frac {\partial }{\partial \mathbf {v} _{i}}}\right)f_{i}=\sum _{j}S_{ij}(f_{i},f_{j})} where v i ( r , t ) {\displaystyle \mathbf {v} _{i}(\mathbf {r} ,t)} is the velocity of particles of species i {\displaystyle i} , at position r {\displaystyle \mathbf {r} } and time t {\displaystyle t} , m i {\displaystyle m_{i}} is the particle mass, F i {\displaystyle \mathbf {F} _{i}} is the external force, and S i j ( f i , f j ) = ∭ [ g i j ( σ i j k ) f i ′ ( r ) f j ′ ( r + σ i j k ) − g i j ( − σ i j k ) f i ( r ) f j ( r − σ i j k ) ] d τ {\displaystyle S_{ij}(f_{i},f_{j})=\iiint \left[g_{ij}(\sigma _{ij}\mathbf {k} )\,f_{i}'(\mathbf {r} )\,f_{j}'(\mathbf {r} +\sigma _{ij}\mathbf {k} )-g_{ij}(-\sigma _{ij}\mathbf {k} )\,f_{i}(\mathbf {r} )\,f_{j}(\mathbf {r} -\sigma _{ij}\mathbf {k} )\right]d\tau } The difference of this equation from classical Chapman–Enskog theory lies in the streaming operator S i j {\displaystyle S_{ij}} , within which the velocity distributions of the two particles are evaluated at different points in space, separated by σ i j k {\displaystyle \sigma _{ij}\mathbf {k} } , where k {\displaystyle \mathbf {k} } is the unit vector along the line connecting the centres of mass of the two particles. Another significant difference comes from the introduction of the factors g i j {\displaystyle g_{ij}} , which represent the enhanced probability of collisions due to excluded volume. The classical Chapman–Enskog equations are recovered by setting σ i j = 0 {\displaystyle \sigma _{ij}=0} and g i j ( σ i j k ) = 1 {\displaystyle g_{ij}(\sigma _{ij}\mathbf {k} )=1} . A point of significance for the success of the RET is the choice of the factors g i j {\displaystyle g_{ij}} , which are interpreted as the pair distribution function evaluated at the contact distance σ i j {\displaystyle \sigma _{ij}} . Importantly, in order to obtain results in agreement with irreversible thermodynamics, the g i j {\displaystyle g_{ij}} must be treated as functionals of the density fields, rather than as functions of the local density. ==== Results from Revised Enskog theory ==== One of the first results obtained from RET that deviates from classical Chapman–Enskog theory is the equation of state. While classical Chapman–Enskog theory recovers the ideal gas law, RET developed for rigid elastic spheres yields the pressure equation p n k T = 1 + 2 π n 3 ∑ i ∑ j x i x j σ i j 3 g i j , {\displaystyle {\frac {p}{nkT}}=1+{\frac {2\pi n}{3}}\sum _{i}\sum _{j}x_{i}x_{j}\sigma _{ij}^{3}g_{ij},} which is consistent with the Carnahan–Starling equation of state, and reduces to the ideal gas law in the limit of infinite dilution (i.e.
when n ∑ i , j x i x j σ i j 3 ≪ 1 {\textstyle n\sum _{i,j}x_{i}x_{j}\sigma _{ij}^{3}\ll 1} ). For the transport coefficients (viscosity, thermal conductivity, diffusion, and thermal diffusion), RET provides expressions that exactly reduce to those obtained from classical Chapman–Enskog theory in the limit of infinite dilution. However, RET predicts a density dependence of the thermal conductivity, which can be expressed as λ = ( 1 + n α λ ) λ 0 + n 2 T 1 / 2 λ σ {\displaystyle \lambda =(1+n\alpha _{\lambda })\lambda _{0}+n^{2}T^{1/2}\lambda _{\sigma }} where α λ {\displaystyle \alpha _{\lambda }} and λ σ {\displaystyle \lambda _{\sigma }} are relatively weak functions of the composition, temperature and density, and λ 0 {\displaystyle \lambda _{0}} is the thermal conductivity obtained from classical Chapman–Enskog theory. Similarly, the expression obtained for viscosity can be written as μ = ( 1 + n T α μ ) μ 0 + n 2 T 1 / 2 μ σ {\displaystyle \mu =(1+nT\alpha _{\mu })\mu _{0}+n^{2}T^{1/2}\mu _{\sigma }} with α μ {\displaystyle \alpha _{\mu }} and μ σ {\displaystyle \mu _{\sigma }} weak functions of composition, temperature and density, and μ 0 {\displaystyle \mu _{0}} the value obtained from classical Chapman–Enskog theory. For diffusion coefficients and thermal diffusion coefficients the picture is somewhat more complex. However, one of the major advantages of RET over classical Chapman–Enskog theory is that it predicts the dependence of diffusion coefficients on the thermodynamic factors, i.e. the derivatives of the chemical potentials with respect to composition. In addition, RET does not predict a strict dependence of D ∼ 1 n , D T ∼ 1 n {\displaystyle D\sim {\frac {1}{n}},\quad D_{T}\sim {\frac {1}{n}}} at all densities, but rather predicts that the coefficients decrease more slowly with density at high densities, in good agreement with experiments.
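The consistency of the RET hard-sphere pressure equation with the Carnahan–Starling equation of state can be verified for a single component. The sketch below assumes the standard Carnahan–Starling contact value g(σ) = (1 − η/2)/(1 − η)³ with packing fraction η = πnσ³/6, an input not derived in this article:

```python
# Sketch: for one component, the RET hard-sphere pressure equation
#   p/(nkT) = 1 + (2*pi*n/3) * sigma**3 * g(sigma)
# combined with the assumed Carnahan-Starling contact value
#   g(sigma) = (1 - eta/2) / (1 - eta)**3,  eta = pi*n*sigma**3/6,
# reproduces the Carnahan-Starling compressibility factor
#   Z = (1 + eta + eta**2 - eta**3) / (1 - eta)**3.
def Z_ret(eta: float) -> float:
    g_contact = (1 - eta / 2) / (1 - eta) ** 3
    # note (2*pi*n/3)*sigma**3 = 4*eta, so the RET pressure term is 4*eta*g
    return 1 + 4 * eta * g_contact

def Z_cs(eta: float) -> float:
    return (1 + eta + eta ** 2 - eta ** 3) / (1 - eta) ** 3

for eta in (0.0, 0.1, 0.3, 0.45):
    assert abs(Z_ret(eta) - Z_cs(eta)) < 1e-12
print("consistent")
```

At η = 0 both expressions reduce to Z = 1, recovering the ideal gas law in the dilute limit, as stated above.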
These modified density dependencies also lead RET to predict a density dependence of the Soret coefficient, S T = D T D , ( ∂ S T ∂ n ) T ≠ 0 , {\displaystyle S_{T}={\frac {D_{T}}{D}},\quad \left({\frac {\partial S_{T}}{\partial n}}\right)_{T}\neq 0,} while classical Chapman–Enskog theory predicts that the Soret coefficient, like the viscosity and thermal conductivity, is independent of density. ==== Applications ==== While Revised Enskog theory provides many advantages over classical Chapman–Enskog theory, this comes at the price of being significantly more difficult to apply in practice. While classical Chapman–Enskog theory can be applied to arbitrarily complex spherical potentials, given sufficiently accurate and fast integration routines to evaluate the required collision integrals, Revised Enskog Theory, in addition to this, requires knowledge of the contact value of the pair distribution function. For mixtures of hard spheres, this value can be computed without large difficulties, but for more complex intermolecular potentials it is generally non-trivial to obtain. However, some success has been achieved at estimating the contact value of the pair distribution function for Mie fluids (which consists of particles interacting through a generalised Lennard-Jones potential) and using these estimates to predict the transport properties of dense gas mixtures and supercritical fluids. Applying RET to particles interacting through realistic potentials also exposes one to the issue of determining a reasonable "contact diameter" for the soft particles. While these are unambiguously defined for hard spheres, there is still no generally agreed upon value that one should use for the contact diameter of soft particles. == See also == Transport phenomena Kinetic theory of gases Boltzmann equation Navier–Stokes equations Fourier's law Newtonian fluid == Notes == == References == The classic monograph on the topic: Chapman, Sydney; Cowling, T.G. 
(1970), The Mathematical Theory of Non-Uniform Gases (3rd ed.), Cambridge University Press Contains a technical introduction to normal solutions of the Boltzmann equation: Grad, Harold (1958), "Principles of the Kinetic Theory of Gases", in Flügge, S. (ed.), Encyclopedia of Physics, vol. XII, Springer-Verlag, pp. 205–294
Wikipedia/Chapman–Enskog_theory
Economics is a behavioural science that studies the production, distribution, and consumption of goods and services. Economics focuses on the behaviour and interactions of economic agents and how economies work. Microeconomics analyses what is viewed as basic elements within economies, including individual agents and markets, their interactions, and the outcomes of interactions. Individual agents may include, for example, households, firms, buyers, and sellers. Macroeconomics analyses economies as systems where production, distribution, consumption, savings, and investment expenditure interact, and the factors of production affecting them, such as labour, capital, land, and enterprise, inflation, economic growth, and public policies that impact these elements. It also seeks to analyse and describe the global economy. Other broad distinctions within economics include those between positive economics, describing "what is", and normative economics, advocating "what ought to be"; between economic theory and applied economics; between rational and behavioural economics; and between mainstream economics and heterodox economics. Economic analysis can be applied throughout society, including business, finance, cybersecurity, health care, engineering and government. It is also applied to such diverse subjects as crime, education, the family, feminism, law, philosophy, politics, religion, social institutions, war, science, and the environment. == Definitions of economics == The earlier term for the discipline was "political economy", but since the late 19th century, it has commonly been called "economics". The term is ultimately derived from Ancient Greek οἰκονομία (oikonomia), a term for the "way (nomos) to run a household (oikos)", or in other words the know-how of an οἰκονομικός (oikonomikos), or "household or homestead manager". Derived terms such as "economy" can therefore often mean "frugal" or "thrifty".
By extension then, "political economy" was the way to manage a polis or state. There are a variety of modern definitions of economics; some reflect evolving views of the subject or different views among economists. Scottish philosopher Adam Smith (1776) defined what was then called political economy as "an inquiry into the nature and causes of the wealth of nations", in particular as: a branch of the science of a statesman or legislator [with the twofold objectives of providing] a plentiful revenue or subsistence for the people ... [and] to supply the state or commonwealth with a revenue for the publick services. Jean-Baptiste Say (1803), distinguishing the subject matter from its public-policy uses, defined it as the science of production, distribution, and consumption of wealth. On the satirical side, Thomas Carlyle (1849) coined "the dismal science" as an epithet for classical economics, in this context, commonly linked to the pessimistic analysis of Malthus (1798). John Stuart Mill (1844) delimited the subject matter further: The science which traces the laws of such of the phenomena of society as arise from the combined operations of mankind for the production of wealth, in so far as those phenomena are not modified by the pursuit of any other object. Alfred Marshall provided a still widely cited definition in his textbook Principles of Economics (1890) that extended analysis beyond wealth and from the societal to the microeconomic level: Economics is a study of man in the ordinary business of life. It enquires how he gets his income and how he uses it. Thus, it is on the one side, the study of wealth and on the other and more important side, a part of the study of man. Lionel Robbins (1932) developed implications of what has been termed "[p]erhaps the most commonly accepted current definition of the subject": Economics is the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses. 
Robbins described the definition as not classificatory in "pick[ing] out certain kinds of behaviour" but rather analytical in "focus[ing] attention on a particular aspect of behaviour, the form imposed by the influence of scarcity." He affirmed that previous economists had usually centred their studies on the analysis of wealth: how wealth is created (production), distributed, and consumed; and how wealth can grow. But he said that economics can be used to study other things, such as war, that are outside its usual focus. This is because war has winning as its goal (a sought-after end), generates both costs and benefits, and uses resources (human life and other costs) to attain that goal. If the war is not winnable or if the expected costs outweigh the benefits, the deciding actors (assuming they are rational) may never go to war (a decision) but rather explore other alternatives. Economics cannot be defined as the science that studies wealth, war, crime, education, and any other field to which economic analysis can be applied; rather, it is the science that studies a particular common aspect of each of those subjects (they all use scarce resources to attain a sought-after end). Some subsequent comments criticised the definition as overly broad in failing to limit its subject matter to analysis of markets. From the 1960s, however, such comments abated as the economic theory of maximizing behaviour and rational-choice modelling expanded the domain of the subject to areas previously treated in other fields. There are other criticisms as well, such as scarcity not accounting for the macroeconomics of high unemployment. Gary Becker, a contributor to the expansion of economics into new areas, described the approach he favoured as "combin[ing the] assumptions of maximizing behaviour, stable preferences, and market equilibrium, used relentlessly and unflinchingly."
One commentary characterises the remark as making economics an approach rather than a subject matter but with great specificity as to the "choice process and the type of social interaction that [such] analysis involves." The same source reviews a range of definitions included in principles of economics textbooks and concludes that the lack of agreement need not affect the subject matter that the texts treat. Among economists more generally, it argues that a particular definition presented may reflect the direction toward which the author believes economics is evolving, or should evolve. Many economists, including Nobel Prize winners James M. Buchanan and Ronald Coase, reject the method-based definition of Robbins and continue to prefer definitions like those of Say, in terms of its subject matter. Ha-Joon Chang has, for example, argued that the definition of Robbins would make economics very peculiar because all other sciences define themselves in terms of the area or object of inquiry rather than the methodology. In biology, for instance, it is not said that all organisms should be studied with DNA analysis. People study living organisms in many different ways: some perform DNA analysis, others analyse anatomy, and still others build game-theoretic models of animal behaviour. But they are all called biology because they all study living organisms. According to Ha-Joon Chang, this view that the economy can and should be studied in only one way (for example, by studying only rational choices), and going even one step further and basically redefining economics as a theory of everything, is peculiar. == History of economic thought == === From antiquity through the physiocrats === Questions regarding distribution of resources are found throughout the writings of the Boeotian poet Hesiod and several economic historians have described Hesiod as the "first economist".
However, the word Oikos, the Greek word from which the word economy derives, was used for issues regarding how to manage a household (which was understood to be the landowner, his family, and his slaves) rather than to refer to some normative societal system of distribution of resources, which is a more recent phenomenon. Xenophon, the author of the Oeconomicus, is credited by philologists as the source of the word economy. Joseph Schumpeter described 16th- and 17th-century scholastic writers, including Tomás de Mercado, Luis de Molina, and Juan de Lugo, as "coming nearer than any other group to being the 'founders' of scientific economics" as to monetary, interest, and value theory within a natural-law perspective. Two groups, who later were called "mercantilists" and "physiocrats", more directly influenced the subsequent development of the subject. Both groups were associated with the rise of economic nationalism and modern capitalism in Europe. Mercantilism was an economic doctrine that flourished from the 16th to the 18th century in a prolific pamphlet literature, whether of merchants or statesmen. It held that a nation's wealth depended on its accumulation of gold and silver. Nations without access to mines could obtain gold and silver from trade only by selling goods abroad and restricting imports other than of gold and silver. The doctrine called for importing inexpensive raw materials to be used in manufacturing goods, which could be exported, and for state regulation to impose protective tariffs on foreign manufactured goods and prohibit manufacturing in the colonies. Physiocrats, a group of 18th-century French thinkers and writers, developed the idea of the economy as a circular flow of income and output. Physiocrats believed that only agricultural production generated a clear surplus over cost, so that agriculture was the basis of all wealth.
Thus, they opposed the mercantilist policy of promoting manufacturing and trade at the expense of agriculture, including import tariffs. Physiocrats advocated replacing administratively costly tax collections with a single tax on income of land owners. In reaction against copious mercantilist trade regulations, the physiocrats advocated a policy of laissez-faire, which called for minimal government intervention in the economy. Adam Smith (1723–1790) was an early economic theorist. Smith was harshly critical of the mercantilists but described the physiocratic system "with all its imperfections" as "perhaps the purest approximation to the truth that has yet been published" on the subject. === Classical political economy === The publication of Adam Smith's The Wealth of Nations in 1776, has been described as "the effective birth of economics as a separate discipline." The book identified land, labour, and capital as the three factors of production and the major contributors to a nation's wealth, as distinct from the physiocratic idea that only agriculture was productive. Smith discusses potential benefits of specialisation by division of labour, including increased labour productivity and gains from trade, whether between town and country or across countries. His "theorem" that "the division of labor is limited by the extent of the market" has been described as the "core of a theory of the functions of firm and industry" and a "fundamental principle of economic organization." To Smith has also been ascribed "the most important substantive proposition in all of economics" and foundation of resource-allocation theory—that, under competition, resource owners (of labour, land, and capital) seek their most profitable uses, resulting in an equal rate of return for all uses in equilibrium (adjusted for apparent differences arising from such factors as training and unemployment). 
In an argument that includes "one of the most famous passages in all economics," Smith represents every individual as trying to employ any capital they might command for their own advantage, not that of the society, and for the sake of profit, which is necessary at some level for employing capital in domestic industry, and positively related to the value of produce. In this: He generally, indeed, neither intends to promote the public interest, nor knows how much he is promoting it. By preferring the support of domestic to that of foreign industry, he intends only his own security; and by directing that industry in such a manner as its produce may be of the greatest value, he intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention. Nor is it always the worse for the society that it was no part of it. By pursuing his own interest he frequently promotes that of the society more effectually than when he really intends to promote it. The Reverend Thomas Robert Malthus (1798) used the concept of diminishing returns to explain low living standards. Human population, he argued, tended to increase geometrically, outstripping the production of food, which increased arithmetically. The force of a rapidly growing population against a limited amount of land meant diminishing returns to labour. The result, he claimed, was chronically low wages, which prevented the standard of living for most of the population from rising above the subsistence level. Economist Julian Simon has criticised Malthus's conclusions. While Adam Smith emphasised production and income, David Ricardo (1817) focused on the distribution of income among landowners, workers, and capitalists. Ricardo saw an inherent conflict between landowners on the one hand and labour and capital on the other. 
He posited that the growth of population and capital, pressing against a fixed supply of land, pushes up rents and holds down wages and profits. Ricardo was also the first to state and prove the principle of comparative advantage, according to which each country should specialise in producing and exporting goods in which it has a lower relative cost of production, rather than relying only on its own production. It has been termed a "fundamental analytical explanation" for gains from trade. Coming at the end of the classical tradition, John Stuart Mill (1848) parted company with the earlier classical economists on the inevitability of the distribution of income produced by the market system. Mill pointed to a distinct difference between the market's two roles: allocation of resources and distribution of income. The market might be efficient in allocating resources but not in distributing income, he wrote, making it necessary for society to intervene. Value theory was important in classical theory. Smith wrote that the "real price of every thing ... is the toil and trouble of acquiring it". Smith maintained that, with rent and profit, other costs besides wages also enter the price of a commodity. Other classical economists presented variations on Smith, termed the 'labour theory of value'. Classical economics focused on the tendency of any market economy to settle in a final stationary state made up of a constant stock of physical wealth (capital) and a constant population size.
The labour theory of value held that the value of an exchanged commodity was determined by the labour that went into its production, and the theory of surplus value demonstrated how workers were only paid a proportion of the value their work had created. Marxian economics was further developed by Karl Kautsky (1854–1938)'s The Economic Doctrines of Karl Marx and The Class Struggle (Erfurt Program), Rudolf Hilferding's (1877–1941) Finance Capital, Vladimir Lenin (1870–1924)'s The Development of Capitalism in Russia and Imperialism, the Highest Stage of Capitalism, and Rosa Luxemburg (1871–1919)'s The Accumulation of Capital. === Neoclassical economics === At its inception as a social science, economics was defined and discussed at length as the study of production, distribution, and consumption of wealth by Jean-Baptiste Say in his Treatise on Political Economy or, The Production, Distribution, and Consumption of Wealth (1803). These three items were considered only in relation to the increase or diminution of wealth, and not in reference to their processes of execution. Say's definition has survived in part up to the present, modified by substituting the word "wealth" for "goods and services" meaning that wealth may include non-material objects as well. One hundred and thirty years later, Lionel Robbins noticed that this definition no longer sufficed, because many economists were making theoretical and philosophical inroads in other areas of human activity. In his Essay on the Nature and Significance of Economic Science, he proposed a definition of economics as a study of human behaviour, subject to and constrained by scarcity, which forces people to choose, allocate scarce resources to competing ends, and economise (seeking the greatest welfare while avoiding the wasting of scarce resources). According to Robbins: "Economics is the science which studies human behavior as a relationship between ends and scarce means which have alternative uses". 
Robbins' definition eventually became widely accepted by mainstream economists, and found its way into current textbooks. Although far from unanimous, most mainstream economists would accept some version of Robbins' definition, even though many have raised serious objections to the scope and method of economics, emanating from that definition. A body of theory later termed "neoclassical economics" formed from about 1870 to 1910. The term "economics" was popularised by such neoclassical economists as Alfred Marshall and Mary Paley Marshall as a concise synonym for "economic science" and a substitute for the earlier "political economy". This corresponded to the influence on the subject of mathematical methods used in the natural sciences. Neoclassical economics systematically integrated supply and demand as joint determinants of both price and quantity in market equilibrium, influencing the allocation of output and income distribution. It rejected classical economics' labour theory of value in favour of a marginal utility theory of value on the demand side and a more comprehensive theory of costs on the supply side. In the 20th century, neoclassical theorists departed from an earlier idea that suggested measuring total utility for a society, opting instead for ordinal utility, which posits behaviour-based relations across individuals. In microeconomics, neoclassical economics represents incentives and costs as playing a pervasive role in shaping decision making. An immediate example of this is the consumer theory of individual demand, which isolates how prices (as costs) and income affect quantity demanded. In macroeconomics it is reflected in an early and lasting neoclassical synthesis with Keynesian macroeconomics. Neoclassical economics is occasionally referred to as orthodox economics, whether by its critics or sympathisers.
Modern mainstream economics builds on neoclassical economics but with many refinements that either supplement or generalise earlier analysis, such as econometrics, game theory, analysis of market failure and imperfect competition, and the neoclassical model of economic growth for analysing long-run variables affecting national income. Neoclassical economics studies the behaviour of individuals, households, and organisations (called economic actors, players, or agents), when they manage or use scarce resources, which have alternative uses, to achieve desired ends. Agents are assumed to act rationally, have multiple desirable ends in sight, limited resources to obtain these ends, a set of stable preferences, a definite overall guiding objective, and the capability of making a choice. There exists an economic problem, subject to study by economic science, when a decision (choice) is made by one or more players to attain the best possible outcome. === Keynesian economics === Keynesian economics derives from John Maynard Keynes, in particular his book The General Theory of Employment, Interest and Money (1936), which ushered in contemporary macroeconomics as a distinct field. The book focused on determinants of national income in the short run when prices are relatively inflexible. Keynes attempted to explain in broad theoretical detail why high labour-market unemployment might not be self-correcting due to low "effective demand" and why even price flexibility and monetary policy might be unavailing. The term "revolutionary" has been applied to the book in its impact on economic analysis. During the following decades, many economists followed Keynes' ideas and expanded on his works. John Hicks and Alvin Hansen developed the IS–LM model which was a simple formalisation of some of Keynes' insights on the economy's short-run equilibrium. 
Franco Modigliani and James Tobin developed important theories of private consumption and investment, respectively, two major components of aggregate demand. Lawrence Klein built the first large-scale macroeconometric model, applying Keynesian thinking systematically to the US economy. === Post-WWII economics === Immediately after World War II, Keynesianism was the dominant economic view of the United States establishment and its allies, while Marxian economics was the dominant economic view of the Soviet Union nomenklatura and its allies. ==== Monetarism ==== Monetarism appeared in the 1950s and 1960s, its intellectual leader being Milton Friedman. Monetarists contended that monetary policy and other monetary shocks, as represented by the growth in the money stock, were an important cause of economic fluctuations, and consequently that monetary policy was more important than fiscal policy for purposes of stabilisation. Friedman was also skeptical about the ability of central banks to conduct a sensible active monetary policy in practice, advocating instead using simple rules such as a steady rate of money growth. Monetarism rose to prominence in the 1970s and 1980s, when several major central banks followed a monetarist-inspired policy, but was later abandoned because the results were unsatisfactory. ==== New classical economics ==== A more fundamental challenge to the prevailing Keynesian paradigm came in the 1970s from new classical economists like Robert Lucas, Thomas Sargent and Edward Prescott. They introduced the notion of rational expectations in economics, which had profound implications for many economic discussions, among which were the so-called Lucas critique and the presentation of real business cycle models. ==== New Keynesians ==== During the 1980s, a group of researchers known as New Keynesian economists appeared, including among others George Akerlof, Janet Yellen, Gregory Mankiw and Olivier Blanchard. 
They adopted the principle of rational expectations and other monetarist or new classical ideas such as building upon models employing micro foundations and optimizing behaviour, but simultaneously emphasised the importance of various market failures for the functioning of the economy, as had Keynes. Not least, they proposed various reasons that potentially explained the empirically observed features of price and wage rigidity, usually made to be endogenous features of the models, rather than simply assumed as in older Keynesian-style ones. ==== New neoclassical synthesis ==== After decades of often heated discussions between Keynesians, monetarists, new classical and new Keynesian economists, a synthesis emerged by the 2000s, often given the name the new neoclassical synthesis. It integrated the rational expectations and optimizing framework of the new classical theory with a new Keynesian role for nominal rigidities and other market imperfections like imperfect information in goods, labour and credit markets. The monetarist importance of monetary policy in stabilizing the economy and in particular controlling inflation was recognised as well as the traditional Keynesian insistence that fiscal policy could also play an influential role in affecting aggregate demand. Methodologically, the synthesis led to a new class of applied models, known as dynamic stochastic general equilibrium or DSGE models, descending from real business cycles models, but extended with several new Keynesian and other features. These models proved useful and influential in the design of modern monetary policy and are now standard workhorses in most central banks. ==== After the 2008 financial crisis ==== After the 2008 financial crisis, macroeconomic research has put greater emphasis on understanding and integrating the financial system into models of the general economy and shedding light on the ways in which problems in the financial sector can turn into major macroeconomic recessions. 
In this and other research branches, inspiration from behavioural economics has started playing a more important role in mainstream economic theory. Also, heterogeneity among the economic agents, e.g. differences in income, plays an increasing role in recent economic research. === Other schools and approaches === Other schools or trends of thought referring to a particular style of economics practised at and disseminated from well-defined groups of academicians that have become known worldwide, include the Freiburg School, the School of Lausanne, the Stockholm school and the Chicago school of economics. During the 1970s and 1980s mainstream economics was sometimes separated into the Saltwater approach of those universities along the Eastern and Western coasts of the US, and the Freshwater, or Chicago school approach. Within macroeconomics there is, in general order of their historical appearance in the literature: classical economics, neoclassical economics, Keynesian economics, the neoclassical synthesis, monetarism, new classical economics, New Keynesian economics and the new neoclassical synthesis. Besides the mainstream development of economic thought, various alternative or heterodox economic theories have evolved over time, positioning themselves in contrast to mainstream theory. These include: the Austrian School, emphasizing human action, property rights and the freedom to contract and transact to have a thriving and successful economy. It also emphasises that the state should play as small a role as possible (if any) in the regulation of economic activity between two transacting parties. Friedrich Hayek and Ludwig von Mises are the two most prominent representatives of the Austrian school. Post-Keynesian economics concentrates on macroeconomic rigidities and adjustment processes. It is generally associated with the University of Cambridge and the work of Joan Robinson. 
Ecological economics, like environmental economics, studies the interactions between human economies and the ecosystems in which they are embedded, but, in contrast to environmental economics, takes an oppositional position towards general mainstream economic principles. A major difference between the two subdisciplines is their assumptions about the substitution possibilities between human-made and natural capital. Additionally, alternative developments include Marxian economics, constitutional economics, institutional economics, evolutionary economics, dependency theory, structuralist economics, world systems theory, econophysics, econodynamics, feminist economics and biophysical economics. Feminist economics emphasises the role that gender plays in economies, challenging analyses that render gender invisible or support gender-oppressive economic systems. The goal is to create economic research and policy analysis that is inclusive and gender-aware to encourage gender equality and improve the well-being of marginalised groups. == Methodology == === Theoretical research === Mainstream economic theory relies upon analytical economic models. When creating theories, the objective is to find assumptions which are at least as simple in information requirements, more precise in predictions, and more fruitful in generating additional research than prior theories. While neoclassical economic theory constitutes both the dominant or orthodox theoretical as well as methodological framework, economic theory can also take the form of other schools of thought such as in heterodox economic theories. In microeconomics, principal concepts include supply and demand, marginalism, rational choice theory, opportunity cost, budget constraints, utility, and the theory of the firm. 
Early macroeconomic models focused on modelling the relationships between aggregate variables, but as the relationships appeared to change over time, macroeconomists, including new Keynesians, reformulated their models with microfoundations, in which microeconomic concepts play a major part. Sometimes an economic hypothesis is only qualitative, not quantitative. Expositions of economic reasoning often use two-dimensional graphs to illustrate theoretical relationships. At a higher level of generality, mathematical economics is the application of mathematical methods to represent theories and analyse problems in economics. Paul Samuelson's treatise Foundations of Economic Analysis (1947) exemplifies the method, particularly as to maximizing behavioural relations of agents reaching equilibrium. The book focused on examining the class of statements called operationally meaningful theorems in economics, which are theorems that can conceivably be refuted by empirical data. === Empirical research === Economic theories are frequently tested empirically, largely through the use of econometrics using economic data. The controlled experiments common to the physical sciences are difficult and uncommon in economics, and instead broad data is observationally studied; this type of testing is typically regarded as less rigorous than controlled experimentation, and the conclusions typically more tentative. However, the field of experimental economics is growing, and increasing use is being made of natural experiments. Statistical methods such as regression analysis are common. Practitioners use such methods to estimate the size, economic significance, and statistical significance ("signal strength") of the hypothesised relation(s) and to adjust for noise from other variables. By such means, a hypothesis may gain acceptance, although in a probabilistic, rather than certain, sense. Acceptance is dependent upon the falsifiable hypothesis surviving tests. 
Use of commonly accepted methods need not produce a final conclusion or even a consensus on a particular question, given different tests, data sets, and prior beliefs. Experimental economics has promoted the use of scientifically controlled experiments. This has reduced the long-noted distinction of economics from natural sciences because it allows direct tests of what were previously taken as axioms. In some cases these have found that the axioms are not entirely correct. In behavioural economics, psychologist Daniel Kahneman won the Nobel Prize in economics in 2002 for his and Amos Tversky's empirical discovery of several cognitive biases and heuristics. Similar empirical testing occurs in neuroeconomics. Another example is the assumption of narrowly selfish preferences versus a model that tests for selfish, altruistic, and cooperative preferences. These techniques have led some to argue that economics is a "genuine science". == Microeconomics == Microeconomics examines how entities, forming a market structure, interact within a market to create a market system. These entities include private and public players with various classifications, typically operating under scarcity of tradable units and regulation. The item traded may be a tangible product such as apples or a service such as repair services, legal counsel, or entertainment. Various market structures exist. In perfectly competitive markets, no participants are large enough to have the market power to set the price of a homogeneous product. In other words, every participant is a "price taker" as no participant influences the price of a product. In the real world, markets often experience imperfect competition. 
Forms of imperfect competition include monopoly (in which there is only one seller of a good), duopoly (in which there are only two sellers of a good), oligopoly (in which there are few sellers of a good), monopolistic competition (in which there are many sellers producing highly differentiated goods), monopsony (in which there is only one buyer of a good), and oligopsony (in which there are few buyers of a good). Firms under imperfect competition have the potential to be "price makers", which means that they can influence the prices of their products. In the partial equilibrium method of analysis, it is assumed that activity in the market being analysed does not affect other markets. This method aggregates (the sum of all activity) in only one market. General-equilibrium theory studies various markets and their behaviour. It aggregates (the sum of all activity) across all markets. This method studies both changes in markets and their interactions leading towards equilibrium. === Production, cost, and efficiency === In microeconomics, production is the conversion of inputs into outputs. It is an economic process that uses inputs to create a commodity or a service for exchange or direct use. Production is a flow and thus a rate of output per period of time. Distinctions include such production alternatives as for consumption (food, haircuts, etc.) vs. investment goods (new tractors, buildings, roads, etc.), public goods (national defence, smallpox vaccinations, etc.) or private goods, and "guns" vs "butter". Inputs used in the production process include such primary factors of production as labour services, capital (durable produced goods used in production, such as an existing factory), and land (including natural resources). Other inputs may include intermediate goods used in production of final goods, such as the steel in a new car. Economic efficiency measures how well a system generates desired output with a given set of inputs and available technology. 
Efficiency is improved if more output is generated without changing inputs. A widely accepted general standard is Pareto efficiency, which is reached when no further change can make someone better off without making someone else worse off. The production–possibility frontier (PPF) is an expository figure for representing scarcity, cost, and efficiency. In the simplest case, an economy can produce just two goods (say "guns" and "butter"). The PPF is a table or graph (as at the right) that shows the different quantity combinations of the two goods producible with a given technology and total factor inputs, which limit feasible total output. Each point on the curve shows potential total output for the economy, which is the maximum feasible output of one good, given a feasible output quantity of the other good. Scarcity is represented in the figure by people being willing but unable in the aggregate to consume beyond the PPF (such as at X) and by the negative slope of the curve. If production of one good increases along the curve, production of the other good decreases, an inverse relationship. This is because increasing output of one good requires transferring inputs to it from production of the other good, decreasing the latter. The slope of the curve at a point on it gives the trade-off between the two goods. It measures what an additional unit of one good costs in units forgone of the other good, an example of a real opportunity cost. Thus, if one more Gun costs 100 units of butter, the opportunity cost of one Gun is 100 Butter. Along the PPF, scarcity implies that choosing more of one good in the aggregate entails doing with less of the other good. Still, in a market economy, movement along the curve may indicate that the choice of the increased output is anticipated to be worth the cost to the agents. By construction, each point on the curve shows productive efficiency in maximizing output for given total inputs. 
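The guns-and-butter trade-off described above can be sketched numerically. The schedule below is hypothetical (only the first trade-off, 100 butter for the first gun, echoes the example in the text); its concave shape shows up as an opportunity cost that rises as more guns are produced:

```python
# Hypothetical guns-and-butter production possibilities (illustrative
# numbers): each entry maps a guns output to the maximum butter output
# still attainable with the economy's fixed inputs and technology.
ppf = {0: 1000, 1: 900, 2: 750, 3: 550, 4: 300, 5: 0}

def opportunity_cost(guns):
    """Butter forgone to produce one more gun, starting from `guns` guns."""
    return ppf[guns] - ppf[guns + 1]

# The first gun costs 100 butter; each further gun costs more, which is
# the increasing opportunity cost implied by a concave (bowed-out) PPF.
costs = [opportunity_cost(g) for g in range(5)]  # [100, 150, 200, 250, 300]
```

A straight-line PPF would instead give a constant opportunity cost, which is why the curvature of the frontier, not just its position, carries economic meaning.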
A point inside the curve (as at A), is feasible but represents production inefficiency (wasteful use of inputs), in that output of one or both goods could increase by moving in a northeast direction to a point on the curve. Examples cited of such inefficiency include high unemployment during a business-cycle recession or economic organisation of a country that discourages full use of resources. Being on the curve might still not fully satisfy allocative efficiency (also called Pareto efficiency) if it does not produce a mix of goods that consumers prefer over other points. Much applied economics in public policy is concerned with determining how the efficiency of an economy can be improved. Recognizing the reality of scarcity and then figuring out how to organise society for the most efficient use of resources has been described as the "essence of economics", where the subject "makes its unique contribution." === Specialisation === Specialisation is considered key to economic efficiency based on theoretical and empirical considerations. Different individuals or nations may have different real opportunity costs of production, say from differences in stocks of human capital per worker or capital/labour ratios. According to theory, this may give a comparative advantage in production of goods that make more intensive use of the relatively more abundant, thus relatively cheaper, input. Even if one region has an absolute advantage as to the ratio of its outputs to inputs in every type of output, it may still specialise in the output in which it has a comparative advantage and thereby gain from trading with a region that lacks any absolute advantage but has a comparative advantage in producing something else. It has been observed that a high volume of trade occurs among regions even with access to a similar technology and mix of factor inputs, including high-income countries. 
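The logic of comparative advantage above can be made concrete with Ricardo's classic cloth-and-wine numbers (labour hours per unit of output). Portugal holds an absolute advantage in both goods, yet world output of both goods rises when each country specialises where its opportunity cost is lower:

```python
# Ricardo's classic numbers: labour hours needed per unit of output.
hours = {"England": {"cloth": 100, "wine": 120},
         "Portugal": {"cloth": 90,  "wine": 80}}
labour = {"England": 220, "Portugal": 170}  # total hours available

# Opportunity cost of one unit of cloth, measured in wine forgone.
opp_cost = {c: hours[c]["cloth"] / hours[c]["wine"] for c in hours}
# England: ~0.83 wine per cloth; Portugal: 1.125 -> England makes cloth.

# Autarky: each country produces one unit of each good for itself.
autarky = {"cloth": 2.0, "wine": 2.0}

# Full specialisation by comparative advantage.
specialised = {"cloth": labour["England"] / hours["England"]["cloth"],
               "wine":  labour["Portugal"] / hours["Portugal"]["wine"]}
# World output rises in both goods: 2.2 cloth and 2.125 wine.
```

The gain arises even though Portugal is absolutely more productive in both goods; only the ratios of labour costs matter for the direction of trade.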
This has led to investigation of economies of scale and agglomeration to explain specialisation in similar but differentiated product lines, to the overall benefit of respective trading parties or regions. The general theory of specialisation applies to trade among individuals, farms, manufacturers, service providers, and economies. Among each of these production systems, there may be a corresponding division of labour with different work groups specializing, or correspondingly different types of capital equipment and differentiated land uses. An example that combines features above is a country that specialises in the production of high-tech knowledge products, as developed countries do, and trades with developing nations for goods produced in factories where labour is relatively cheap and plentiful, resulting in differences in opportunity costs of production. More total output and utility thereby results from specializing in production and trading than if each country produced its own high-tech and low-tech products. Theory and observation set out the conditions such that market prices of outputs and productive inputs select an allocation of factor inputs by comparative advantage, so that (relatively) low-cost inputs go to producing low-cost outputs. In the process, aggregate output may increase as a by-product or by design. Such specialisation of production creates opportunities for gains from trade whereby resource owners benefit from trade in the sale of one type of output for other, more highly valued goods. A measure of gains from trade is the increased income levels that trade may facilitate. === Supply and demand === Prices and quantities have been described as the most directly observable attributes of goods produced and exchanged in a market economy. The theory of supply and demand is an organizing principle for explaining how prices coordinate the amounts produced and consumed. 
In microeconomics, it applies to price and output determination for a market with perfect competition, which includes the condition of no buyers or sellers large enough to have price-setting power. For a given market of a commodity, demand is the relation of the quantity that all buyers would be prepared to purchase at each unit price of the good. Demand is often represented by a table or a graph showing price and quantity demanded (as in the figure). Demand theory describes individual consumers as rationally choosing the most preferred quantity of each good, given income, prices, tastes, etc. A term for this is "constrained utility maximisation" (with income and wealth as the constraints on demand). Here, utility refers to the hypothesised relation of each individual consumer for ranking different commodity bundles as more or less preferred. The law of demand states that, in general, price and quantity demanded in a given market are inversely related. That is, the higher the price of a product, the less of it people would be prepared to buy (other things unchanged). As the price of a commodity falls, consumers move toward it from relatively more expensive goods (the substitution effect). In addition, purchasing power from the price decline increases ability to buy (the income effect). Other factors can change demand; for example an increase in income will shift the demand curve for a normal good outward relative to the origin, as in the figure. All determinants are predominantly taken as constant factors of demand and supply. Supply is the relation between the price of a good and the quantity available for sale at that price. It may be represented as a table or graph relating price and quantity supplied. Producers, for example business firms, are hypothesised to be profit maximisers, meaning that they attempt to produce and supply the amount of goods that will bring them the highest profit. 
Supply is typically represented as a function relating price and quantity, if other factors are unchanged. That is, the higher the price at which the good can be sold, the more of it producers will supply, as in the figure. The higher price makes it profitable to increase production. Just as on the demand side, the position of the supply curve can shift, say from a change in the price of a productive input or a technical improvement. The "Law of Supply" states that, in general, a rise in price leads to an expansion in supply and a fall in price leads to a contraction in supply. Here as well, the determinants of supply, such as price of substitutes, cost of production, technology applied and various factor inputs of production are all taken to be constant for a specific time period of evaluation of supply. Market equilibrium occurs where quantity supplied equals quantity demanded, the intersection of the supply and demand curves in the figure above. At a price below equilibrium, there is a shortage of quantity supplied compared to quantity demanded. This is posited to bid the price up. At a price above equilibrium, there is a surplus of quantity supplied compared to quantity demanded. This pushes the price down. The model of supply and demand predicts that for given supply and demand curves, price and quantity will stabilise at the price that makes quantity supplied equal to quantity demanded. Similarly, demand-and-supply theory predicts a new price-quantity combination from a shift in demand (as to the figure), or in supply. === Firms === People frequently do not trade directly on markets. Instead, on the supply side, they may work in and produce through firms. The most obvious kinds of firms are corporations, partnerships and trusts. According to Ronald Coase, people begin to organise their production in firms when the costs of doing business become lower than doing it on the market. 
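The market-clearing condition described in the supply-and-demand discussion above can be sketched with hypothetical linear schedules; the curve parameters below are illustrative, not taken from the figure:

```python
# Hypothetical linear schedules (illustrative parameters):
# quantity demanded falls with price, quantity supplied rises with it.
def demand(p):
    return 100 - 2 * p

def supply(p):
    return -20 + 4 * p

# Market clearing: 100 - 2P = -20 + 4P  ->  6P = 120  ->  P* = 20.
p_star = 120 / 6
q_star = demand(p_star)  # equals supply(p_star): 60 units

# Below P*, quantity demanded exceeds quantity supplied (a shortage that
# bids the price up); above P*, the reverse surplus pushes it down.
shortage_at_15 = demand(15) - supply(15)  # 70 - 40 = 30
surplus_at_25 = supply(25) - demand(25)   # 80 - 50 = 30
```

A demand or supply shift is just a change in the intercepts of these functions, which moves the crossing point, the new price-quantity combination the text refers to.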
Firms combine labour and capital, and can achieve far greater economies of scale (when the average cost per unit declines as more units are produced) than individual market trading. In perfectly competitive markets studied in the theory of supply and demand, there are many producers, none of which significantly influence price. Industrial organisation generalises from that special case to study the strategic behaviour of firms that do have significant control of price. It considers the structure of such markets and their interactions. Common market structures studied besides perfect competition include monopolistic competition, various forms of oligopoly, and monopoly. Managerial economics applies microeconomic analysis to specific decisions in business firms or other management units. It draws heavily from quantitative methods such as operations research and programming and from statistical methods such as regression analysis in the absence of certainty and perfect knowledge. A unifying theme is the attempt to optimise business decisions, including unit-cost minimisation and profit maximisation, given the firm's objectives and constraints imposed by technology and market conditions. === Uncertainty and game theory === Uncertainty in economics is an unknown prospect of gain or loss, whether quantifiable as risk or not. Without it, household behaviour would be unaffected by uncertain employment and income prospects, financial and capital markets would reduce to exchange of a single instrument in each market period, and there would be no communications industry. Given its different forms, there are various ways of representing uncertainty and modelling economic agents' responses to it. Game theory is a branch of applied mathematics that considers strategic interactions between agents, one kind of uncertainty. 
It provides a mathematical foundation of industrial organisation, discussed above, to model different types of firm behaviour, for example in an oligopolistic industry (few sellers), but equally applicable to wage negotiations, bargaining, contract design, and any situation where individual agents are few enough to have perceptible effects on each other. In behavioural economics, it has been used to model the strategies agents choose when interacting with others whose interests are at least partially adverse to their own. In this, it generalises maximisation approaches developed to analyse market actors such as in the supply and demand model and allows for incomplete information of actors. The field dates from the 1944 classic Theory of Games and Economic Behavior by John von Neumann and Oskar Morgenstern. It has significant applications seemingly outside of economics in such diverse subjects as the formulation of nuclear strategies, ethics, political science, and evolutionary biology. Risk aversion may stimulate activity that in well-functioning markets smooths out risk and communicates information about risk, as in markets for insurance, commodity futures contracts, and financial instruments. Financial economics or simply finance describes the allocation of financial resources. It also analyses the pricing of financial instruments, the financial structure of companies, the efficiency and fragility of financial markets, financial crises, and related government policy or regulation. Some market organisations may give rise to inefficiencies associated with uncertainty. Based on George Akerlof's "Market for Lemons" article, the paradigm example is of a dodgy second-hand car market. Customers without knowledge of whether a car is a "lemon" depress its price below what a quality second-hand car would be worth. Information asymmetry arises here, if the seller has more relevant information than the buyer but no incentive to disclose it. 
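The unravelling logic of the lemons market can be sketched as a simple fixed-point iteration. The numbers below are illustrative assumptions, not Akerlof's own: seller valuations (car quality) are uniform on [0, 2000], and buyers value any car at 1.5 times its quality but observe only the average quality of the cars actually offered.

```python
# Illustrative lemons-market sketch: because buyers cannot tell good
# cars from bad, they pay for average quality, which drives the best
# cars out of the market round after round.
def mean_offered_quality(price):
    """Only sellers whose car is worth no more than the price will sell;
    with uniform quality, the mean quality offered is half that cutoff."""
    return min(price, 2000) / 2

price = 1.5 * 1000  # first bid: 1.5x the unconditional mean quality
for _ in range(20):
    price = 1.5 * mean_offered_quality(price)  # = 0.75 * price each round
# Each round the best remaining cars withdraw, so the price buyers are
# willing to offer shrinks toward zero and the market unravels.
```

Even though every car would trade at a mutually beneficial price under full information, the asymmetry alone is enough to eliminate the market for good cars in this sketch.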
Related problems in insurance are adverse selection, such that those at most risk are most likely to insure (say reckless drivers), and moral hazard, such that insurance results in riskier behaviour (say more reckless driving). Both problems may raise insurance costs and reduce efficiency by driving otherwise willing transactors from the market ("incomplete markets"). Moreover, attempting to reduce one problem, say adverse selection by mandating insurance, may add to another, say moral hazard. Information economics, which studies such problems, has relevance in subjects such as insurance, contract law, mechanism design, monetary economics, and health care. Applied subjects include market and legal remedies to spread or reduce risk, such as warranties, government-mandated partial insurance, restructuring or bankruptcy law, inspection, and regulation for quality and information disclosure. === Market failure === The term "market failure" encompasses several problems which may undermine standard economic assumptions. Although economists categorise market failures differently, the following categories emerge in the main texts. Information asymmetries and incomplete markets may result in economic inefficiency but also a possibility of improving efficiency through market, legal, and regulatory remedies, as discussed above. Natural monopoly, or the overlapping concepts of "practical" and "technical" monopoly, is an extreme case of failure of competition as a restraint on producers. Extreme economies of scale are one possible cause. Public goods are goods which are under-supplied in a typical market. The defining features are that people can consume public goods without having to pay for them and that more than one person can consume the good at the same time. Externalities occur where there are significant social costs or benefits from production or consumption that are not reflected in market prices. 
For example, air pollution may generate a negative externality, and education may generate a positive externality (less crime, etc.). Governments often tax and otherwise restrict the sale of goods that have negative externalities and subsidise or otherwise promote the purchase of goods that have positive externalities in an effort to correct the price distortions caused by these externalities. Elementary demand-and-supply theory predicts equilibrium but not the speed of adjustment for changes of equilibrium due to a shift in demand or supply. In many areas, some form of price stickiness is postulated to account for quantities, rather than prices, adjusting in the short run to changes on the demand side or the supply side. This includes standard analysis of the business cycle in macroeconomics. Analysis often revolves around causes of such price stickiness and their implications for reaching a hypothesised long-run equilibrium. Examples of such price stickiness in particular markets include wage rates in labour markets and posted prices in markets deviating from perfect competition. Some specialised fields of economics deal in market failure more than others. The economics of the public sector is one example. Much environmental economics concerns externalities or "public bads". Policy options include regulations that reflect cost–benefit analysis or market solutions that change incentives, such as emission fees or redefinition of property rights. === Welfare === Welfare economics uses microeconomics techniques to evaluate well-being from allocation of productive factors as to desirability and economic efficiency within an economy, often relative to competitive general equilibrium. It analyses social welfare, however measured, in terms of economic activities of the individuals that compose the theoretical society considered. 
Accordingly, individuals, with associated economic activities, are the basic units for aggregating to social welfare, whether of a group, a community, or a society, and there is no "social welfare" apart from the "welfare" associated with its individual units. == Macroeconomics == Macroeconomics, another branch of economics, examines the economy as a whole to explain broad aggregates and their interactions "top down", that is, using a simplified form of general-equilibrium theory. Such aggregates include national income and output, the unemployment rate, and price inflation and subaggregates like total consumption and investment spending and their components. It also studies effects of monetary policy and fiscal policy. Since at least the 1960s, macroeconomics has been characterised by further integration as to micro-based modelling of sectors, including rationality of players, efficient use of market information, and imperfect competition. This has addressed a long-standing concern about inconsistent developments of the same subject. Macroeconomic analysis also considers factors affecting the long-term level and growth of national income. Such factors include capital accumulation, technological change and labour force growth. === Growth === Growth economics studies factors that explain economic growth – the increase in output per capita of a country over a long period of time. The same factors are used to explain differences in the level of output per capita between countries, in particular why some countries grow faster than others, and whether countries converge at the same rates of growth. Much-studied factors include the rate of investment, population growth, and technological change. These are represented in theoretical and empirical forms (as in the neoclassical and endogenous growth models) and in growth accounting. === Business cycle === The economics of a depression were the spur for the creation of "macroeconomics" as a separate discipline. 
During the Great Depression of the 1930s, John Maynard Keynes authored a book entitled The General Theory of Employment, Interest and Money outlining the key theories of Keynesian economics. Keynes contended that aggregate demand for goods might be insufficient during economic downturns, leading to unnecessarily high unemployment and losses of potential output. He therefore advocated active policy responses by the public sector, including monetary policy actions by the central bank and fiscal policy actions by the government to stabilise output over the business cycle. Thus, a central conclusion of Keynesian economics is that, in some situations, no strong automatic mechanism moves output and employment towards full employment levels. John Hicks' IS/LM model has been the most influential interpretation of The General Theory. Over the years, understanding of the business cycle has branched into various research programmes, mostly related to or distinct from Keynesianism. The neoclassical synthesis refers to the reconciliation of Keynesian economics with classical economics, stating that Keynesianism is correct in the short run but qualified by classical-like considerations in the intermediate and long run. New classical macroeconomics, as distinct from the Keynesian view of the business cycle, posits market clearing with imperfect information. It includes Friedman's permanent income hypothesis on consumption and "rational expectations" theory, led by Robert Lucas, and real business cycle theory. In contrast, the new Keynesian approach retains the rational expectations assumption; however, it assumes a variety of market failures. In particular, New Keynesians assume prices and wages are "sticky", which means they do not adjust instantaneously to changes in economic conditions. 
Thus, the new classicals assume that prices and wages adjust automatically to attain full employment, whereas the new Keynesians see full employment as being automatically achieved only in the long run, and hence government and central-bank policies are needed because the "long run" may be very long. === Unemployment === The amount of unemployment in an economy is measured by the unemployment rate, the percentage of workers without jobs in the labour force. The labour force only includes workers actively looking for jobs. People who are retired, pursuing education, or discouraged from seeking work by a lack of job prospects are excluded from the labour force. Unemployment can be generally broken down into several types that are related to different causes. Classical unemployment occurs when wages are too high for employers to be willing to hire more workers. Consistent with classical unemployment, frictional unemployment occurs when appropriate job vacancies exist for a worker, but the length of time needed to search for and find the job leads to a period of unemployment. Structural unemployment covers a variety of possible causes of unemployment, including a mismatch between workers' skills and the skills required for open jobs. Large amounts of structural unemployment can occur when an economy is transitioning industries and workers find their previous set of skills are no longer in demand. Structural unemployment is similar to frictional unemployment since both reflect the problem of matching workers with job vacancies, but structural unemployment covers the time needed to acquire new skills, not just the short-term search process. While some types of unemployment may occur regardless of the condition of the economy, cyclical unemployment occurs when growth stagnates. Okun's law represents the empirical relationship between unemployment and economic growth. 
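Okun's law, in its original 3-to-1 form (roughly a 3% rise in output for each 1-point fall in the unemployment rate), can be sketched numerically. This is a hedged illustration of the rule of thumb, not an econometric estimate:

```python
def okun_unemployment_change(output_growth_pct: float) -> float:
    """Okun's original rule of thumb: roughly a 3% rise in output
    is associated with a 1-point fall in the unemployment rate."""
    return -output_growth_pct / 3.0

# A 3% rise in output implies about a 1-point fall in unemployment;
# a 6% rise implies about a 2-point fall.
print(okun_unemployment_change(3.0))  # -1.0
print(okun_unemployment_change(6.0))  # -2.0
```

Modern estimates of the coefficient differ by country and period; the 3:1 ratio is simply the version stated in Okun's original formulation.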
The original version of Okun's law states that a 3% increase in output would lead to a 1% decrease in unemployment. === Money and monetary policy === Money is a means of final payment for goods in most price system economies, and is the unit of account in which prices are typically stated. Money has general acceptability, relative consistency in value, divisibility, durability, portability, elasticity in supply, and longevity with mass public confidence. It includes currency held by the nonbank public and checkable deposits. It has been described as a social convention, like language, useful to one largely because it is useful to others. In the words of Francis Amasa Walker, a well-known 19th-century economist, "Money is what money does" ("Money is that money does" in the original). As a medium of exchange, money facilitates trade. It is essentially a measure of value and, more importantly, a store of value, being a basis for credit creation. Its economic function can be contrasted with barter (non-monetary exchange). Given a diverse array of produced goods and specialised producers, barter may entail a hard-to-locate double coincidence of wants as to what is exchanged, say apples and a book. Money can reduce the transaction cost of exchange because of its ready acceptability. It is then less costly for the seller to accept money in exchange, rather than what the buyer produces. Monetary policy is the policy that central banks conduct to accomplish their broader objectives. Most central banks in developed countries follow inflation targeting, whereas the main objective for many central banks in developing countries is to uphold a fixed exchange rate system. The primary monetary tool is normally the adjustment of interest rates, either directly via administratively changing the central bank's own interest rates or indirectly via open market operations. 
Via the monetary transmission mechanism, interest rate changes affect investment, consumption and net export, and hence aggregate demand, output and employment, and ultimately the development of wages and inflation. === Fiscal policy === Governments implement fiscal policy to influence macroeconomic conditions by adjusting spending and taxation policies to alter aggregate demand. When aggregate demand falls below the potential output of the economy, there is an output gap where some productive capacity is left unemployed. Governments increase spending and cut taxes to boost aggregate demand. Resources that have been idled can be used by the government. For example, unemployed home builders can be hired to expand highways. Tax cuts allow consumers to increase their spending, which boosts aggregate demand. Both tax cuts and spending have multiplier effects where the initial increase in demand from the policy percolates through the economy and generates additional economic activity. The effects of fiscal policy can be limited by crowding out. When there is no output gap, the economy is producing at full capacity and there are no excess productive resources. If the government increases spending in this situation, the government uses resources that otherwise would have been used by the private sector, so there is no increase in overall output. Some economists think that crowding out is always an issue while others do not think it is a major issue when output is depressed. Sceptics of fiscal policy also make the argument of Ricardian equivalence. They argue that an increase in debt will have to be paid for with future tax increases, which will cause people to reduce their consumption and save money to pay for the future tax increase. Under Ricardian equivalence, any boost in demand from tax cuts will be offset by the increased saving intended to pay for future higher taxes. 
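The multiplier effect described above is commonly illustrated with the textbook geometric-series formula 1/(1 − MPC). A minimal sketch, where the marginal propensity to consume of 0.8 is an assumed illustrative value, not a figure from the text:

```python
def simple_spending_multiplier(mpc: float) -> float:
    """Textbook simple Keynesian multiplier: each spending round
    passes on a fraction mpc of the previous round, so the total
    effect is the geometric series 1 + mpc + mpc**2 + ... = 1/(1 - mpc)."""
    return 1.0 / (1.0 - mpc)

# With an assumed marginal propensity to consume of 0.8, an initial
# $1bn of extra spending raises aggregate demand by about $5bn
# in this simple model (before any crowding out).
extra_demand = 1.0 * simple_spending_multiplier(0.8)
print(round(extra_demand, 2))
```

The crowding-out and Ricardian-equivalence arguments in the text are precisely claims that this naive multiplier overstates the real-world effect.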
=== Inequality === Economic inequality includes income inequality, measured using the distribution of income (the amount of money people receive), and wealth inequality, measured using the distribution of wealth (the amount of wealth people own), as well as other measures such as consumption, land ownership, and human capital. Inequality exists at different extents between countries or states, groups of people, and individuals. There are many methods for measuring inequality, the Gini coefficient being widely used for income differences among individuals. An example measure of inequality between countries is the Inequality-adjusted Human Development Index, a composite index that takes inequality into account. Important concepts of equality include equity, equality of outcome, and equality of opportunity. Research has linked economic inequality to political and social instability, including revolution, democratic breakdown and civil conflict. Research suggests that greater inequality hinders economic growth and macroeconomic stability, and that land and human capital inequality reduce growth more than inequality of income. Inequality is at centre stage of economic policy debate across the globe, as government tax and spending policies have significant effects on income distribution. In advanced economies, taxes and transfers decrease income inequality by one-third, with most of this being achieved via public social spending (such as pensions and family benefits). == Other branches of economics == === Public economics === Public economics is the field of economics that deals with economic activities of a public sector, usually government. The subject addresses such matters as tax incidence (who really pays a particular tax), cost–benefit analysis of government programmes, effects on economic efficiency and income distribution of different kinds of spending and taxes, and fiscal politics. 
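The Gini coefficient mentioned in the inequality discussion above can be computed directly from its mean-absolute-difference definition. A minimal sketch with made-up incomes (the data are illustrative, not from any survey):

```python
def gini(incomes):
    """Gini coefficient via the mean-absolute-difference formula:
    G = sum over all pairs |x_i - x_j| / (2 * n**2 * mean)."""
    n = len(incomes)
    mean = sum(incomes) / n
    mad = sum(abs(xi - xj) for xi in incomes for xj in incomes)
    return mad / (2 * n * n * mean)

print(gini([1, 1, 1, 1]))  # perfect equality: 0.0
print(gini([0, 0, 0, 4]))  # one person holds everything: 0.75
```

With a finite population of n people, the extreme "one person holds everything" value is (n − 1)/n rather than exactly 1, which is why the four-person example yields 0.75.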
The latter, an aspect of public choice theory, models public-sector behaviour analogously to microeconomics, involving interactions of self-interested voters, politicians, and bureaucrats. Much of economics is positive, seeking to describe and predict economic phenomena. Normative economics seeks to identify what economies ought to be like. Welfare economics is a normative branch of economics that uses microeconomic techniques to simultaneously determine the allocative efficiency within an economy and the income distribution associated with it. It attempts to measure social welfare by examining the economic activities of the individuals that comprise society. === International economics === International trade studies determinants of goods-and-services flows across international boundaries. It also concerns the size and distribution of gains from trade. Policy applications include estimating the effects of changing tariff rates and trade quotas. International finance is a macroeconomic field which examines the flow of capital across international borders, and the effects of these movements on exchange rates. Increased trade in goods, services and capital between countries is a major effect of contemporary globalisation. === Labour economics === Labour economics seeks to understand the functioning and dynamics of the markets for wage labour. Labour markets function through the interaction of workers and employers. Labour economics looks at the suppliers of labour services (workers) and the demanders of labour services (employers), and attempts to understand the resulting pattern of wages, employment, and income. In economics, labour is a measure of the work done by human beings. It is conventionally contrasted with such other factors of production as land and capital. 
There are theories which have developed a concept called human capital (referring to the skills that workers possess, not necessarily their actual work), although there are also counterposing macro-economic system theories that think human capital is a contradiction in terms. === Development economics === Development economics examines economic aspects of the economic development process in relatively low-income countries, focusing on structural change, poverty, and economic growth. Approaches in development economics frequently incorporate social and political factors. == Related subjects == Economics is one social science among several and has fields bordering on other areas, including economic geography, economic history, public choice, energy economics, cultural economics, family economics and institutional economics. Law and economics, or economic analysis of law, is an approach to legal theory that applies methods of economics to law. It includes the use of economic concepts to explain the effects of legal rules, to assess which legal rules are economically efficient, and to predict what the legal rules will be. A seminal article by Ronald Coase published in 1961 suggested that well-defined property rights could overcome the problems of externalities. Political economy is the interdisciplinary study that combines economics, law, and political science in explaining how political institutions, the political environment, and the economic system (capitalist, socialist, mixed) influence each other. It studies questions such as how monopoly, rent-seeking behaviour, and externalities should impact government policy. Historians have employed political economy to explore the ways in the past that persons and groups with common economic interests have used politics to effect changes beneficial to their interests. Energy economics is a broad scientific subject area which includes topics related to energy supply and energy demand. 
Georgescu-Roegen reintroduced the concept of entropy in relation to economics and energy from thermodynamics, as distinguished from what he viewed as the mechanistic foundation of neoclassical economics drawn from Newtonian physics. His work contributed significantly to thermoeconomics and to ecological economics. He also did foundational work which later developed into evolutionary economics. The sociological subfield of economic sociology arose, primarily through the work of Émile Durkheim, Max Weber and Georg Simmel, as an approach to analysing the effects of economic phenomena in relation to the overarching social paradigm (i.e. modernity). Classic works include Max Weber's The Protestant Ethic and the Spirit of Capitalism (1905) and Georg Simmel's The Philosophy of Money (1900). More recently, the works of James S. Coleman, Mark Granovetter, Peter Hedstrom and Richard Swedberg have been influential in this field. Gary Becker in 1974 presented an economic theory of social interactions, whose applications included the family, charity, merit goods and multiperson interactions, and envy and hatred. He and Kevin Murphy authored a book in 2001 that analysed market behaviour in a social environment. == Profession == The professionalisation of economics, reflected in the growth of graduate programmes on the subject, has been described as "the main change in economics since around 1900". Most major universities and many colleges have a major, school, or department in which academic degrees are awarded in the subject, whether in the liberal arts, business, or for professional study. See Bachelor of Economics and Master of Economics. In the private sector, professional economists are employed as consultants and in industry, including banking and finance. Economists also work for various government departments and agencies, for example, the national treasury, central bank or National Bureau of Statistics. See Economic analyst. 
There are dozens of prizes awarded to economists each year for outstanding intellectual contributions to the field, the most prominent of which is the Nobel Memorial Prize in Economic Sciences, though it is not a Nobel Prize. Contemporary economics uses mathematics. Economists draw on the tools of calculus, linear algebra, statistics, game theory, and computer science. Professional economists are expected to be familiar with these tools, while a minority specialise in econometrics and mathematical methods. === Women in economics === Harriet Martineau (1802–1876) was a widely read populariser of classical economic thought. Mary Paley Marshall (1850–1944), the first woman lecturer at a British economics faculty, wrote The Economics of Industry with her husband Alfred Marshall. Joan Robinson (1903–1983) was an important post-Keynesian economist. The economic historian Anna Schwartz (1915–2012) coauthored A Monetary History of the United States, 1867–1960 with Milton Friedman. Three women have received the Nobel Prize in Economics: Elinor Ostrom (2009), Esther Duflo (2019) and Claudia Goldin (2023). Five have received the John Bates Clark Medal: Susan Athey (2007), Esther Duflo (2010), Amy Finkelstein (2012), Emi Nakamura (2019) and Melissa Dell (2020). Women's authorship share in prominent economic journals declined from the 1940s to the 1970s, but has subsequently risen, with different patterns of gendered coauthorship. Women remain globally under-represented in the profession (19% of authors in the RePEc database in 2018), with national variation. 
== Further reading == Anderson, David A. (2019). Survey of Economics. New York: Worth. ISBN 978-1-4292-5956-9. Blanchard, Olivier; Amighini, Alessia; Giavazzi, Francesco (2017). Macroeconomics: a European perspective (3rd ed.). Pearson. ISBN 978-1-292-08567-8. Blaug, Mark (1985). Economic Theory in Retrospect (4th ed.). Cambridge: Cambridge University Press. ISBN 978-0521316446. McCann, Charles Robert Jr. (2003). The Elgar Dictionary of Economic Quotations. Edward Elgar. ISBN 978-1840648201. Post, Louis F. (1927). The Basic Facts of Economics: A Common-Sense Primer for Advanced Students. United States: Columbian Printing Company, Incorporated.
Wikipedia/Economic_theory
In economics, supply is the amount of a resource that firms, producers, labourers, providers of financial assets, or other economic agents are willing and able to provide to the marketplace or to an individual. Supply can be in produced goods, labour time, raw materials, or any other scarce or valuable object. Supply is often plotted graphically as a supply curve, with the price per unit on the vertical axis and quantity supplied as a function of price on the horizontal axis. This reversal of the usual position of the dependent variable and the independent variable is an unfortunate but standard convention. The supply curve can be either for an individual seller or for the market as a whole, adding up the quantity supplied by all sellers. The quantity supplied is for a particular time period (e.g., the tons of steel a firm would supply in a year), but the units and time are often omitted in theoretical presentations. In the goods market, supply is the amount of a product per unit of time that producers are willing to sell at various given prices when all other factors are held constant. In the labor market, the supply of labor is the amount of time per week, month, or year that individuals are willing to spend working, as a function of the wage rate. In the economic and financial field, the money supply is the amount of highly liquid assets available in the money market, which is either determined or influenced by a country's monetary authority. This can vary based on which type of money supply one is discussing. M1 for example is commonly used to refer to narrow money, coins, cash, and other money equivalents that can be converted to currency nearly instantly. M2 by contrast includes all of M1 but also includes short-term deposits and certain types of market funds. == Supply schedule == A supply schedule is a table which shows how much one or more firms will be willing to supply at particular prices under the existing circumstances. 
Some of the more important factors affecting supply are the good's own price, the prices of related goods, production costs, technology, the production function, and expectations of sellers. === Factors affecting supply === Innumerable factors and circumstances could affect a seller's willingness or ability to produce and sell a good. Some of the more common factors are: Good's own price: The basic supply relationship is between the price of a good and the quantity supplied. According to the law of supply, keeping other factors constant, an increase in price results in an increase in quantity supplied. Prices of related goods: For purposes of supply analysis, related goods refer to goods from which inputs are derived to be used in the production of the primary good. For example, Spam is made from pork shoulders and ham, both of which are derived from pigs, so pigs would be considered a related good to Spam. In this case the relationship is negative or inverse: if the price of pigs goes up, the supply of Spam decreases (the supply curve shifts left) because the cost of production has increased. A related good may also be a good that can be produced with the firm's existing factors of production. For example, suppose that a firm produces leather belts, and that the firm's managers learn that leather pouches for smartphones are more profitable than belts. The firm might reduce its production of belts and begin production of cell phone pouches based on this information. Finally, a change in the price of a joint product will affect supply. For example, beef products and leather are joint products: if a company runs both a beef processing operation and a tannery, an increase in the price of steaks means that more cattle are processed, which increases the supply of leather. Conditions of production: The most significant factor here is the state of technology. If there is a technological advancement in one good's production, the supply increases. 
Other variables may also affect production conditions. For instance, for agricultural goods, weather is crucial, for it may affect production outputs. Economies of scale can also affect conditions of production. Expectations: Sellers' concern for future market conditions can directly affect supply. If the seller believes that the demand for his product will sharply increase in the foreseeable future, the firm owner may immediately increase production in anticipation of future price increases; the supply curve would shift out. Price of inputs: Inputs include land, labor, energy and raw materials. If the price of inputs increases, the supply curve will shift left as sellers are less willing or able to sell goods at any given price. For example, if the price of electricity increased, a seller might reduce the supply of his product because of the increased costs of production. Fixed inputs can affect the price of inputs, and the scale of production can affect how much the fixed costs translate into the end price of the good. Number of suppliers: The market supply curve is the horizontal summation of the individual supply curves. As more firms enter the industry, the market supply curve will shift out, driving down prices. Government policies and regulations: Government intervention can have a significant effect on supply, and can take many forms, including environmental and health regulations, hour and wage laws, taxes, electrical and natural gas rates, and zoning and land use regulations. This list is not exhaustive. All facts and circumstances that are relevant to a seller's willingness or ability to produce and sell goods can affect supply. For example, if the forecast is for snow, retail sellers will respond by increasing their stocks of snow sleds, skis, winter clothing, or bread and milk. 
=== Exceptions to the law of supply === Agricultural products / perishable goods: Because of their short shelf life, these goods are offered for sale in large quantities immediately after harvest, when prices are usually low; during the dry season or planting season, the opposite holds. Commodities produced in fixed amounts: Some commodities depend on a fixed machine set-up, so the same quantity may be offered in the market at different prices. Supply of labour in the market: Senior management and executive positions command high wages but involve relatively few working hours, compared with staff members who earn middle-level wages but work the longest hours. == Supply function and equation == Supply functions may be classified according to the source from which they come: consumers or firms. Each type of supply function is considered in turn, using the following notational conventions. There are I produced goods, each defining a single industry, and J factors. The indices i = 1,…, I and j = 1,…, J run, respectively, over produced goods (industries) and factors. Let n index all goods by first listing produced goods and then factors, so that n = 1,…, I, I + 1,…, I + J. The number of firms in industry i is written L_i, and these firms are indexed by l = 1,…, L_i. There are K consumers, enumerated as k = 1,…, K. The variable y_{I+j,k} represents the quantity of factor j consumed by consumer k. This person can have endowments of the factors, written ȳ_{I+1,k},…, ȳ_{I+J,k}. If y_{I+j,k} < ȳ_{I+j,k}, then person k is a net supplier of factor j; if the opposite is true, they are a net consumer of it. 
The supply function is the mathematical expression of the relationship between supply and those factors that affect the willingness and ability of a supplier to offer goods for sale. An example would be the curve implied by Q_s = f(P; P_rg), where P is the price of the good and P_rg is the price of a related good. The semicolon means that the variables to its right are held constant when quantity supplied is plotted against the good's own price. The supply equation is the explicit mathematical expression of the functional relationship. A linear example is Q_s = 325 + P − 30·P_rg. Here 325 is the repository of all non-specified factors that affect supply of the product. The coefficient of P is positive, following the general rule that price and quantity supplied are directly related. The coefficient of P_rg, the price of a related good, is typically negative because the related good is an input or a source of inputs. == Movements versus shifts == Movements along the curve occur only when a change in the good's own price changes the quantity supplied. A shift in the supply curve, referred to as a change in supply, occurs only if a non-price determinant of supply changes. For example, if the price of an ingredient used to produce the good (a related good) were to increase, the supply curve would shift left. == Inverse supply equation == By convention in the context of supply and demand graphs, economists graph the dependent variable (quantity) on the horizontal axis and the independent variable (price) on the vertical axis. The inverse supply equation is the equation written with the vertical-axis variable isolated on the left side: P = f(Q). 
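The linear supply equation Q_s = 325 + P − 30·P_rg can be evaluated directly, which also makes the movement-versus-shift distinction concrete. A short sketch using illustrative prices:

```python
def quantity_supplied(p, p_rg):
    """Linear supply equation from the text: Q_s = 325 + P - 30*P_rg.
    The +1 coefficient on P follows the law of supply; the -30 on
    P_rg reflects a related good that is an input."""
    return 325 + p - 30 * p_rg

# Movement along the curve: own price rises, input price held fixed.
print(quantity_supplied(10, 5))  # 325 + 10 - 150 = 185
print(quantity_supplied(20, 5))  # 325 + 20 - 150 = 195
# Shift of the curve: the input price rises at the same own price.
print(quantity_supplied(10, 6))  # 325 + 10 - 180 = 155
```

The first two calls trace points on one curve; the third shows the whole curve shifting left (less supplied at every own price) when the related good's price increases.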
As an example, if the supply equation is Q = 40P − 2·P_rg, then the inverse supply equation is P = Q/40 + P_rg/20. == Marginal costs and short-run supply curve == A firm's short-run supply curve is the marginal cost curve above the shutdown point: the short-run marginal cost curve (SRMC) above the minimum average variable cost. The portion of the SRMC below the shutdown point is not part of the supply curve because the firm is not producing any output there. The firm's long-run supply curve is the portion of the long-run marginal cost curve above the minimum of the long-run average cost curve. == Shape of the short-run supply curve == The law of diminishing marginal returns (LDMR) shapes the SRMC curve. The LDMR states that as production increases, a point (the point of diminishing marginal returns) is eventually reached after which additional units of output resulting from fixed increments of the labor input become successively smaller. That is, beyond the point of diminishing marginal returns the marginal product of labor continually decreases, so a continually higher selling price is necessary to induce the firm to produce more and more output. == From firm to market supply curve == The market supply curve is the horizontal summation of firm supply curves, and it can be written as an equation. For a factor j, for example, the market supply function is S_j = S^j(p, r), where S_j = Σ_{k=1}^{K} S_jk and S^j(p, r) = Σ_{k=1}^{K} S^jk(p, r) for all p > 0 and r > 0. Note that not all assumptions that can be made for individual supply functions carry over directly to market supply functions. 
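The inverse supply equation above can be checked by round-tripping, and the horizontal summation that builds the market curve can be sketched with identical firms. The three-firm count is an assumption for illustration only:

```python
def firm_supply(p, p_rg):
    # Supply equation from the text: Q = 40*P - 2*P_rg
    return 40 * p - 2 * p_rg

def inverse_supply(q, p_rg):
    # Inverse form from the text: P = Q/40 + P_rg/20
    return q / 40 + p_rg / 20

# Round trip: the inverse equation recovers the original price.
p, p_rg = 3.0, 10.0
q = firm_supply(p, p_rg)        # 40*3 - 2*10 = 100.0
print(inverse_supply(q, p_rg))  # 100/40 + 10/20 = 3.0

# Market supply: a horizontal (quantity-wise) sum over firms
# at each price, here assuming three identical firms.
def market_supply(p, p_rg, n_firms=3):
    return sum(firm_supply(p, p_rg) for _ in range(n_firms))

print(market_supply(3.0, 10.0))  # 3 * 100 = 300.0
```

Summing quantities at each price (rather than summing prices) is exactly what "horizontal summation" means on a graph with quantity on the horizontal axis.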
== The shape of the market supply curve == The law of supply dictates that, all other things remaining equal, an increase in the price of the good in question results in an increase in quantity supplied; in other words, the supply curve slopes upwards. However, there are exceptions: not all supply curves slope upwards. Some heterodox economists, such as Steve Keen and Dirk Ehnts, dispute this theory of the supply curve, arguing that the supply curve for mass-produced goods is often downward-sloping: as production increases, unit prices go down, and conversely, if demand is very low, unit prices go up. This corresponds to economies of scale. == Elasticity == The price elasticity of supply (PES) measures the responsiveness of quantity supplied to changes in price, as the percentage change in quantity supplied induced by a one percent change in price. It is calculated for discrete changes as (ΔQ/ΔP) × (P/Q) and, for smooth changes of differentiable supply functions, as (∂Q/∂P) × (P/Q). Since supply is usually increasing in price, the price elasticity of supply is usually positive. For example, if the PES for a good is 0.67, a 1% rise in price will induce a two-thirds of one percent rise in quantity supplied. Significant determinants include: Complexity of production: Much depends on the complexity of the production process. Textile production is relatively simple: the labor is largely unskilled, and production facilities are little more than buildings with no special structures needed. Thus the PES for textiles is elastic. On the other hand, the PES for specific types of motor vehicles is relatively inelastic: auto manufacture is a multi-stage process that requires specialized equipment, skilled labor, a large supplier network and large R&D costs. 
Time to respond: The more time a producer has to respond to price changes, the more elastic the supply. For example, a cotton farmer cannot immediately respond to an increase in the price of soybeans. Excess capacity: A producer who has unused capacity can quickly respond to price changes in his market assuming that variable factors are readily available. Inventories: A producer who has a supply of goods or available storage capacity can quickly respond to price changes. Other elasticities can be calculated for non-price determinants of supply. For example, the percentage change in the amount of the good supplied caused by a one percent increase in the price of a related good is an input elasticity of supply if the related good is an input in the production process. An example would be the change in the supply of cookies caused by a one percent increase in the price of sugar. === Elasticity along linear supply curves === The slope of a linear supply curve is constant; the elasticity is not. If the linear supply curve intersects the price axis, PES will be infinitely elastic at the point of intersection. The coefficient of elasticity decreases as one moves "up" the curve. However, all points on the supply curve will have a coefficient of elasticity greater than one. If the linear supply curve intersects the quantity axis, PES will equal zero at the point of intersection and will increase as one moves up the curve; however, all points on the curve will have a coefficient of elasticity less than 1. If the linear supply curve passes through the origin, PES equals one at every point along the curve. 
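The three intercept cases for a linear supply curve can be verified directly. A minimal sketch, assuming a supply curve Q = a + bP: the sign of the intercept a determines whether PES is above, below, or equal to one.

```python
# Sketch: PES along a linear supply curve Q = a + bP is b*P/(a + b*P).
# a < 0 (curve cuts the price axis): PES > 1 wherever Q > 0.
# a > 0 (curve cuts the quantity axis): PES < 1.
# a = 0 (curve through the origin): PES = 1 at every point.

def pes_linear(a, b, p):
    q = a + b * p
    return b * p / q

assert pes_linear(-10, 2, 10.0) > 1    # intersects price axis: elastic
assert pes_linear(10, 2, 10.0) < 1     # intersects quantity axis: inelastic
assert pes_linear(0, 2, 10.0) == 1.0   # through the origin: unit elastic
```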
In a perfectly competitive market the price is given by the marketplace from the point of view of the supplier; a manager of a competitive firm can state what quantity of goods will be supplied for any price by simply referring to the firm's marginal cost curve. To generate the supply function, the seller could hypothetically set the price equal to zero and then increase it incrementally; at each price he could calculate the hypothetical quantity supplied using the marginal cost curve. Following this process the manager could trace out the complete supply function. A monopolist cannot replicate this process because price is not imposed by the marketplace and hence is not an independent variable from the point of view of the firm; instead, the firm simultaneously chooses both the price and the quantity subject to the stipulation that together they form a point on the customers' demand curve. A change in demand can result in "changes in price with no changes in output, changes in output with no changes in price or both". There is simply not a one-to-one relationship between price and quantity supplied. There is no single function that relates price to quantity supplied. == See also == Aggregate Demand AD-AS model Demand curve Law of supply Profit maximization Supply and Demand Price elasticity of supply == References ==
Wikipedia/Supply_function
In mathematics, a function from a set X to a set Y assigns to each element of X exactly one element of Y. The set X is called the domain of the function and the set Y is called the codomain of the function. Functions were originally the idealization of how a varying quantity depends on another quantity. For example, the position of a planet is a function of time. Historically, the concept was elaborated with the infinitesimal calculus at the end of the 17th century, and, until the 19th century, the functions that were considered were differentiable (that is, they had a high degree of regularity). The concept of a function was formalized at the end of the 19th century in terms of set theory, and this greatly increased the possible applications of the concept. A function is often denoted by a letter such as f, g or h. The value of a function f at an element x of its domain (that is, the element of the codomain that is associated with x) is denoted by f(x); for example, the value of f at x = 4 is denoted by f(4). Commonly, a specific function is defined by means of an expression depending on x, such as f ( x ) = x 2 + 1 ; {\displaystyle f(x)=x^{2}+1;} in this case, some computation, called function evaluation, may be needed for deducing the value of the function at a particular value; for example, if f ( x ) = x 2 + 1 , {\displaystyle f(x)=x^{2}+1,} then f ( 4 ) = 4 2 + 1 = 17. {\displaystyle f(4)=4^{2}+1=17.} Given its domain and its codomain, a function is uniquely represented by the set of all pairs (x, f (x)), called the graph of the function, a popular means of illustrating the function. When the domain and the codomain are sets of real numbers, each such pair may be thought of as the Cartesian coordinates of a point in the plane. Functions are widely used in science, engineering, and in most fields of mathematics. It has been said that functions are "the central objects of investigation" in most fields of mathematics. 
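The running example above translates directly into code. This sketch evaluates f(x) = x² + 1 at x = 4 and builds a small piece of the function's graph, the set of pairs (x, f(x)).

```python
# Sketch: the article's example f(x) = x^2 + 1 and function evaluation.

def f(x):
    return x ** 2 + 1

print(f(4))  # 17, matching f(4) = 4^2 + 1 = 17

# The graph of f on a small finite domain: the set of pairs (x, f(x)).
graph = {(x, f(x)) for x in range(-2, 3)}
assert (2, 5) in graph
```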
The concept of a function has evolved significantly over centuries, from its informal origins in ancient mathematics to its formalization in the 19th century. See History of the function concept for details. == Definition == A function f from a set X to a set Y is an assignment of one element of Y to each element of X. The set X is called the domain of the function and the set Y is called the codomain of the function. If the element y in Y is assigned to x in X by the function f, one says that f maps x to y, and this is commonly written y = f ( x ) . {\displaystyle y=f(x).} In this notation, x is the argument or variable of the function. A specific element x of X is a value of the variable, and the corresponding element of Y is the value of the function at x, or the image of x under the function. The image of a function, sometimes called its range, is the set of the images of all elements in the domain. A function f, its domain X, and its codomain Y are often specified by the notation f : X → Y . {\displaystyle f:X\to Y.} One may write x ↦ y {\displaystyle x\mapsto y} instead of y = f ( x ) {\displaystyle y=f(x)} , where the symbol ↦ {\displaystyle \mapsto } (read 'maps to') is used to specify where a particular element x in the domain is mapped to by f. This allows the definition of a function without naming. For example, the square function is the function x ↦ x 2 . {\displaystyle x\mapsto x^{2}.} The domain and codomain are not always explicitly given when a function is defined. In particular, it is common that one might only know, without some (possibly difficult) computation, that the domain of a specific function is contained in a larger set. For example, if f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } is a real function, the determination of the domain of the function x ↦ 1 / f ( x ) {\displaystyle x\mapsto 1/f(x)} requires knowing the zeros of f. 
This is one of the reasons for which, in mathematical analysis, "a function from X to Y " may refer to a function having a proper subset of X as a domain. For example, a "function from the reals to the reals" may refer to a real-valued function of a real variable whose domain is a proper subset of the real numbers, typically a subset that contains a non-empty open interval. Such a function is then called a partial function. A function f on a set S means a function from the domain S, without specifying a codomain. However, some authors use it as shorthand for saying that the function is f : S → S. === Formal definition === The above definition of a function is essentially that of the founders of calculus, Leibniz, Newton and Euler. However, it cannot be formalized, since there is no mathematical definition of an "assignment". It is only at the end of the 19th century that the first formal definition of a function could be provided, in terms of set theory. This set-theoretic definition is based on the fact that a function establishes a relation between the elements of the domain and some (possibly all) elements of the codomain. Mathematically, a binary relation between two sets X and Y is a subset of the set of all ordered pairs ( x , y ) {\displaystyle (x,y)} such that x ∈ X {\displaystyle x\in X} and y ∈ Y . {\displaystyle y\in Y.} The set of all these pairs is called the Cartesian product of X and Y and denoted X × Y . {\displaystyle X\times Y.} Thus, the above definition may be formalized as follows. A function with domain X and codomain Y is a binary relation R between X and Y that satisfies the two following conditions: For every x {\displaystyle x} in X {\displaystyle X} there exists y {\displaystyle y} in Y {\displaystyle Y} such that ( x , y ) ∈ R . {\displaystyle (x,y)\in R.} If ( x , y ) ∈ R {\displaystyle (x,y)\in R} and ( x , z ) ∈ R , {\displaystyle (x,z)\in R,} then y = z . 
{\displaystyle y=z.} This definition may be rewritten more formally, without referring explicitly to the concept of a relation, but using more notation (including set-builder notation): A function is formed by three sets, the domain X , {\displaystyle X,} the codomain Y , {\displaystyle Y,} and the graph R {\displaystyle R} that satisfy the three following conditions. R ⊆ { ( x , y ) ∣ x ∈ X , y ∈ Y } {\displaystyle R\subseteq \{(x,y)\mid x\in X,y\in Y\}} ∀ x ∈ X , ∃ y ∈ Y , ( x , y ) ∈ R {\displaystyle \forall x\in X,\exists y\in Y,\left(x,y\right)\in R\qquad } ( x , y ) ∈ R ∧ ( x , z ) ∈ R ⟹ y = z {\displaystyle (x,y)\in R\land (x,z)\in R\implies y=z\qquad } === Partial functions === Partial functions are defined similarly to ordinary functions, with the "total" condition removed. That is, a partial function from X to Y is a binary relation R between X and Y such that, for every x ∈ X , {\displaystyle x\in X,} there is at most one y in Y such that ( x , y ) ∈ R . {\displaystyle (x,y)\in R.} Using functional notation, this means that, given x ∈ X , {\displaystyle x\in X,} either f ( x ) {\displaystyle f(x)} is in Y, or it is undefined. The set of the elements of X such that f ( x ) {\displaystyle f(x)} is defined and belongs to Y is called the domain of definition of the function. A partial function from X to Y is thus an ordinary function that has as its domain a subset of X called the domain of definition of the function. If the domain of definition equals X, one often says that the partial function is a total function. In several areas of mathematics, the term "function" refers to partial functions rather than to ordinary (total) functions. This is typically the case when functions may be specified in a way that makes it difficult or even impossible to determine their domain. In calculus, a real-valued function of a real variable or real function is a partial function from the set R {\displaystyle \mathbb {R} } of the real numbers to itself. 
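The set-theoretic definition above is easy to test mechanically on finite sets. This sketch represents a relation as a set of pairs and checks the two conditions: every x in X has an image, and no x has two.

```python
# Sketch: a relation R between X and Y (a set of pairs) is the graph of a
# function with domain X iff each x in X appears in exactly one pair of R.

def is_function(R, X):
    return all(sum(1 for (a, _) in R if a == x) == 1 for x in X)

X = {1, 2, 3}
assert is_function({(1, 'a'), (2, 'b'), (3, 'a')}, X)      # a function
assert not is_function({(1, 'a'), (1, 'b'), (2, 'b')}, X)  # 1 has two images, 3 has none
```

The same check with "exactly one" relaxed to "at most one" gives the definition of a partial function from the next subsection.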
Given a real function f : x ↦ f ( x ) {\displaystyle f:x\mapsto f(x)} , its multiplicative inverse x ↦ 1 / f ( x ) {\displaystyle x\mapsto 1/f(x)} is also a real function. The determination of the domain of definition of a multiplicative inverse of a (partial) function amounts to computing the zeros of the function, the values where the function is defined but not its multiplicative inverse. Similarly, a function of a complex variable is generally a partial function whose domain of definition is a subset of the complex numbers C {\displaystyle \mathbb {C} } . The difficulty of determining the domain of definition of a complex function is illustrated by the multiplicative inverse of the Riemann zeta function: the determination of the domain of definition of the function z ↦ 1 / ζ ( z ) {\displaystyle z\mapsto 1/\zeta (z)} is more or less equivalent to the proof or disproof of one of the major open problems in mathematics, the Riemann hypothesis. In computability theory, a general recursive function is a partial function from the integers to the integers whose values can be computed by an algorithm (roughly speaking). The domain of definition of such a function is the set of inputs for which the algorithm does not run forever. A fundamental theorem of computability theory is that there cannot exist an algorithm that takes an arbitrary general recursive function as input and tests whether 0 belongs to its domain of definition (see Halting problem). === Multivariate functions === A multivariate function, multivariable function, or function of several variables is a function that depends on several arguments. Such functions are commonly encountered. For example, the position of a car on a road is a function of the time travelled and its average speed. Formally, a function of n variables is a function whose domain is a set of n-tuples. 
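The multiplicative inverse as a partial function can be sketched concretely. In this illustration (the `None` convention is my own modelling choice, not standard notation), points outside the domain of definition, i.e. the zeros of f, return `None`.

```python
# Sketch: x -> 1/f(x) as a partial function; its domain of definition
# excludes the zeros of f. Undefined points are modelled by None.

def reciprocal_of(f):
    def g(x):
        y = f(x)
        return None if y == 0 else 1 / y
    return g

f = lambda x: x ** 2 - 1          # zeros at x = 1 and x = -1
g = reciprocal_of(f)
assert g(2) == 1 / 3
assert g(1) is None               # outside the domain of definition of 1/f
```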
For example, multiplication of integers is a function of two variables, or bivariate function, whose domain is the set of all ordered pairs (2-tuples) of integers, and whose codomain is the set of integers. The same is true for every binary operation. The graph of a bivariate surface over a two-dimensional real domain may be interpreted as defining a parametric surface, as used in, e.g., bivariate interpolation. Commonly, an n-tuple is denoted enclosed between parentheses, such as in ( 1 , 2 , … , n ) . {\displaystyle (1,2,\ldots ,n).} When using functional notation, one usually omits the parentheses surrounding tuples, writing f ( x 1 , … , x n ) {\displaystyle f(x_{1},\ldots ,x_{n})} instead of f ( ( x 1 , … , x n ) ) . {\displaystyle f((x_{1},\ldots ,x_{n})).} Given n sets X 1 , … , X n , {\displaystyle X_{1},\ldots ,X_{n},} the set of all n-tuples ( x 1 , … , x n ) {\displaystyle (x_{1},\ldots ,x_{n})} such that x 1 ∈ X 1 , … , x n ∈ X n {\displaystyle x_{1}\in X_{1},\ldots ,x_{n}\in X_{n}} is called the Cartesian product of X 1 , … , X n , {\displaystyle X_{1},\ldots ,X_{n},} and denoted X 1 × ⋯ × X n . {\displaystyle X_{1}\times \cdots \times X_{n}.} Therefore, a multivariate function is a function that has a Cartesian product or a proper subset of a Cartesian product as a domain. f : U → Y , {\displaystyle f:U\to Y,} where the domain U has the form U ⊆ X 1 × ⋯ × X n . {\displaystyle U\subseteq X_{1}\times \cdots \times X_{n}.} If all the X i {\displaystyle X_{i}} are equal to the set R {\displaystyle \mathbb {R} } of the real numbers or to the set C {\displaystyle \mathbb {C} } of the complex numbers, one talks respectively of a function of several real variables or of a function of several complex variables. == Notation == There are various standard ways for denoting functions. The most commonly used notation is functional notation, which is the first notation described below. 
=== Functional notation === The functional notation requires that a name is given to the function, which, in the case of an unspecified function, is often the letter f. Then, the application of the function to an argument is denoted by its name followed by its argument (or, in the case of a multivariate function, its arguments) enclosed between parentheses, such as in f ( x ) , sin ⁡ ( 3 ) , or f ( x 2 + 1 ) . {\displaystyle f(x),\quad \sin(3),\quad {\text{or}}\quad f(x^{2}+1).} The argument between the parentheses may be a variable, often x, that represents an arbitrary element of the domain of the function, a specific element of the domain (3 in the above example), or an expression that can be evaluated to an element of the domain ( x 2 + 1 {\displaystyle x^{2}+1} in the above example). The use of an unspecified variable between parentheses is useful for defining a function explicitly such as in "let f ( x ) = sin ⁡ ( x 2 + 1 ) {\displaystyle f(x)=\sin(x^{2}+1)} ". When the symbol denoting the function consists of several characters and no ambiguity may arise, the parentheses of functional notation might be omitted. For example, it is common to write sin x instead of sin(x). Functional notation was first used by Leonhard Euler in 1734. Some widely used functions are represented by a symbol consisting of several letters (usually two or three, generally an abbreviation of their name). In this case, a roman type is customarily used instead, such as "sin" for the sine function, in contrast to italic font for single-letter symbols. The functional notation is often used colloquially for referring to a function and simultaneously naming its argument, such as in "let f ( x ) {\displaystyle f(x)} be a function". This is an abuse of notation that is useful for a simpler formulation. === Arrow notation === Arrow notation defines the rule of a function inline, without requiring a name to be given to the function. It uses the ↦ arrow symbol, pronounced "maps to". 
For example, x ↦ x + 1 {\displaystyle x\mapsto x+1} is the function which takes a real number as input and outputs that number plus 1. Again, a domain and codomain of R {\displaystyle \mathbb {R} } is implied. The domain and codomain can also be explicitly stated, for example: sqr : Z → Z x ↦ x 2 . {\displaystyle {\begin{aligned}\operatorname {sqr} \colon \mathbb {Z} &\to \mathbb {Z} \\x&\mapsto x^{2}.\end{aligned}}} This defines a function sqr from the integers to the integers that returns the square of its input. As a common application of the arrow notation, suppose f : X × X → Y ; ( x , t ) ↦ f ( x , t ) {\displaystyle f:X\times X\to Y;\;(x,t)\mapsto f(x,t)} is a function in two variables, and we want to refer to a partially applied function X → Y {\displaystyle X\to Y} produced by fixing the second argument to the value t0 without introducing a new function name. The map in question could be denoted x ↦ f ( x , t 0 ) {\displaystyle x\mapsto f(x,t_{0})} using the arrow notation. The expression x ↦ f ( x , t 0 ) {\displaystyle x\mapsto f(x,t_{0})} (read: "the map taking x to f of x comma t nought") represents this new function with just one argument, whereas the expression f(x0, t0) refers to the value of the function f at the point (x0, t0). === Index notation === Index notation may be used instead of functional notation. That is, instead of writing f (x), one writes f x . {\displaystyle f_{x}.} This is typically the case for functions whose domain is the set of the natural numbers. Such a function is called a sequence, and, in this case the element f n {\displaystyle f_{n}} is called the nth element of the sequence. The index notation can also be used for distinguishing some variables called parameters from the "true variables". In fact, parameters are specific variables that are considered as being fixed during the study of a problem. 
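Arrow notation and partial application have direct analogues in programming. A sketch: an anonymous `lambda` plays the role of x ↦ x², and fixing the second argument t₀ of a two-variable function yields the one-argument map x ↦ f(x, t₀).

```python
# Sketch: arrow notation as anonymous functions, and the partially applied
# map x -> f(x, t0) obtained by fixing the second argument.

from functools import partial

sqr = lambda x: x ** 2            # the map x -> x^2, defined without a prior name

def f(x, t):
    return x + 10 * t

t0 = 3
g = lambda x: f(x, t0)            # x -> f(x, t0), written directly
h = partial(f, t=t0)              # the same map via functools.partial

assert sqr(5) == 25
assert g(2) == h(2) == 32         # the value f(2, t0), with t0 = 3
```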
For example, the map x ↦ f ( x , t ) {\displaystyle x\mapsto f(x,t)} (see above) would be denoted f t {\displaystyle f_{t}} using index notation, if we define the collection of maps f t {\displaystyle f_{t}} by the formula f t ( x ) = f ( x , t ) {\displaystyle f_{t}(x)=f(x,t)} for all x , t ∈ X {\displaystyle x,t\in X} . === Dot notation === In the notation x ↦ f ( x ) , {\displaystyle x\mapsto f(x),} the symbol x does not represent any value; it is simply a placeholder, meaning that, if x is replaced by any value on the left of the arrow, it should be replaced by the same value on the right of the arrow. Therefore, x may be replaced by any symbol, often an interpunct " ⋅ ". This may be useful for distinguishing the function f (⋅) from its value f (x) at x. For example, a ( ⋅ ) 2 {\displaystyle a(\cdot )^{2}} may stand for the function x ↦ a x 2 {\displaystyle x\mapsto ax^{2}} , and ∫ a ( ⋅ ) f ( u ) d u {\textstyle \int _{a}^{\,(\cdot )}f(u)\,du} may stand for a function defined by an integral with variable upper bound: x ↦ ∫ a x f ( u ) d u {\textstyle x\mapsto \int _{a}^{x}f(u)\,du} . === Specialized notations === There are other, specialized notations for functions in sub-disciplines of mathematics. For example, in linear algebra and functional analysis, linear forms and the vectors they act upon are denoted using a dual pair to show the underlying duality. This is similar to the use of bra–ket notation in quantum mechanics. In logic and the theory of computation, the function notation of lambda calculus is used to explicitly express the basic notions of function abstraction and application. In category theory and homological algebra, networks of functions are described in terms of how they and their compositions commute with each other using commutative diagrams that extend and generalize the arrow notation for functions described above. 
=== Functions of more than one variable === In some cases the argument of a function may be an ordered pair of elements taken from some set or sets. For example, a function f can be defined as mapping any pair of real numbers ( x , y ) {\displaystyle (x,y)} to the sum of their squares, x 2 + y 2 {\displaystyle x^{2}+y^{2}} . Such a function is commonly written as f ( x , y ) = x 2 + y 2 {\displaystyle f(x,y)=x^{2}+y^{2}} and referred to as "a function of two variables". Likewise one can have a function of three or more variables, with notations such as f ( w , x , y ) {\displaystyle f(w,x,y)} , f ( w , x , y , z ) {\displaystyle f(w,x,y,z)} . == Other terms == A function may also be called a map or a mapping, but some authors make a distinction between the term "map" and "function". For example, the term "map" is often reserved for a "function" with some sort of special structure (e.g. maps of manifolds). In particular map may be used in place of homomorphism for the sake of succinctness (e.g., linear map or map from G to H instead of group homomorphism from G to H). Some authors reserve the word mapping for the case where the structure of the codomain belongs explicitly to the definition of the function. Some authors, such as Serge Lang, use "function" only to refer to maps for which the codomain is a subset of the real or complex numbers, and use the term mapping for more general functions. In the theory of dynamical systems, a map denotes an evolution function used to create discrete dynamical systems. See also Poincaré map. Whichever definition of map is used, related terms like domain, codomain, injective, continuous have the same meaning as for a function. == Specifying a function == Given a function f {\displaystyle f} , by definition, to each element x {\displaystyle x} of the domain of the function f {\displaystyle f} , there is a unique element associated to it, the value f ( x ) {\displaystyle f(x)} of f {\displaystyle f} at x {\displaystyle x} . 
There are several ways to specify or describe how x {\displaystyle x} is related to f ( x ) {\displaystyle f(x)} , both explicitly and implicitly. Sometimes, a theorem or an axiom asserts the existence of a function having some properties, without describing it more precisely. Often, the specification or description is referred to as the definition of the function f {\displaystyle f} . === By listing function values === On a finite set a function may be defined by listing the elements of the codomain that are associated to the elements of the domain. For example, if A = { 1 , 2 , 3 } {\displaystyle A=\{1,2,3\}} , then one can define a function f : A → R {\displaystyle f:A\to \mathbb {R} } by f ( 1 ) = 2 , f ( 2 ) = 3 , f ( 3 ) = 4. {\displaystyle f(1)=2,f(2)=3,f(3)=4.} === By a formula === Functions are often defined by an expression that describes a combination of arithmetic operations and previously defined functions; such a formula allows computing the value of the function from the value of any element of the domain. For example, in the above example, f {\displaystyle f} can be defined by the formula f ( n ) = n + 1 {\displaystyle f(n)=n+1} , for n ∈ { 1 , 2 , 3 } {\displaystyle n\in \{1,2,3\}} . When a function is defined this way, the determination of its domain is sometimes difficult. If the formula that defines the function contains divisions, the values of the variable for which a denominator is zero must be excluded from the domain; thus, for a complicated function, the determination of the domain passes through the computation of the zeros of auxiliary functions. Similarly, if square roots occur in the definition of a function from R {\displaystyle \mathbb {R} } to R , {\displaystyle \mathbb {R} ,} the domain is included in the set of the values of the variable for which the arguments of the square roots are nonnegative. 
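The two ways of specifying a function described above, by listing values and by a formula, can be compared in code. This sketch uses the text's example on the domain {1, 2, 3}.

```python
# Sketch: a function on a finite domain specified by listing its values
# (a dict), and the same function specified by the formula f(n) = n + 1.

f_by_listing = {1: 2, 2: 3, 3: 4}

def f_by_formula(n):
    assert n in {1, 2, 3}, "argument outside the domain"
    return n + 1

# The two specifications agree on every element of the domain.
assert all(f_by_listing[n] == f_by_formula(n) for n in (1, 2, 3))
```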
For example, f ( x ) = 1 + x 2 {\displaystyle f(x)={\sqrt {1+x^{2}}}} defines a function f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } whose domain is R , {\displaystyle \mathbb {R} ,} because 1 + x 2 {\displaystyle 1+x^{2}} is always positive if x is a real number. On the other hand, f ( x ) = 1 − x 2 {\displaystyle f(x)={\sqrt {1-x^{2}}}} defines a function from the reals to the reals whose domain is reduced to the interval [−1, 1]. (In old texts, such a domain was called the domain of definition of the function.) Functions can be classified by the nature of formulas that define them: A quadratic function is a function that may be written f ( x ) = a x 2 + b x + c , {\displaystyle f(x)=ax^{2}+bx+c,} where a, b, c are constants. More generally, a polynomial function is a function that can be defined by a formula involving only additions, subtractions, multiplications, and exponentiation to nonnegative integer powers. For example, f ( x ) = x 3 − 3 x − 1 {\displaystyle f(x)=x^{3}-3x-1} and f ( x ) = ( x − 1 ) ( x 3 + 1 ) + 2 x 2 − 1 {\displaystyle f(x)=(x-1)(x^{3}+1)+2x^{2}-1} are polynomial functions of x {\displaystyle x} . A rational function is the same, with divisions also allowed, such as f ( x ) = x − 1 x + 1 , {\displaystyle f(x)={\frac {x-1}{x+1}},} and f ( x ) = 1 x + 1 + 3 x − 2 x − 1 . {\displaystyle f(x)={\frac {1}{x+1}}+{\frac {3}{x}}-{\frac {2}{x-1}}.} An algebraic function is the same, with nth roots and roots of polynomials also allowed. An elementary function is the same, with logarithms and exponential functions allowed. === Inverse and implicit functions === A function f : X → Y , {\displaystyle f:X\to Y,} with domain X and codomain Y, is bijective, if for every y in Y, there is one and only one element x in X such that y = f(x). In this case, the inverse function of f is the function f − 1 : Y → X {\displaystyle f^{-1}:Y\to X} that maps y ∈ Y {\displaystyle y\in Y} to the element x ∈ X {\displaystyle x\in X} such that y = f(x). 
For example, the natural logarithm is a bijective function from the positive real numbers to the real numbers. It thus has an inverse, called the exponential function, that maps the real numbers onto the positive numbers. If a function f : X → Y {\displaystyle f:X\to Y} is not bijective, it may occur that one can select subsets E ⊆ X {\displaystyle E\subseteq X} and F ⊆ Y {\displaystyle F\subseteq Y} such that the restriction of f to E is a bijection from E to F, and has thus an inverse. The inverse trigonometric functions are defined this way. For example, the cosine function induces, by restriction, a bijection from the interval [0, π] onto the interval [−1, 1], and its inverse function, called arccosine, maps [−1, 1] onto [0, π]. The other inverse trigonometric functions are defined similarly. More generally, given a binary relation R between two sets X and Y, let E be a subset of X such that, for every x ∈ E , {\displaystyle x\in E,} there is some y ∈ Y {\displaystyle y\in Y} such that x R y. If one has a criterion allowing selecting such a y for every x ∈ E , {\displaystyle x\in E,} this defines a function f : E → Y , {\displaystyle f:E\to Y,} called an implicit function, because it is implicitly defined by the relation R. For example, the equation of the unit circle x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} defines a relation on real numbers. If −1 < x < 1 there are two possible values of y, one positive and one negative. For x = ± 1, these two values become both equal to 0. Otherwise, there is no possible value of y. This means that the equation defines two implicit functions with domain [−1, 1] and respective codomains [0, +∞) and (−∞, 0]. In this example, the equation can be solved in y, giving y = ± 1 − x 2 , {\displaystyle y=\pm {\sqrt {1-x^{2}}},} but, in more complicated examples, this is impossible. 
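The unit-circle example can be checked numerically. A sketch of the two implicit functions defined by x² + y² = 1 on [−1, 1]: the upper branch with codomain [0, +∞) and the lower branch with codomain (−∞, 0].

```python
import math

# Sketch: the relation x^2 + y^2 = 1 defines two implicit functions on [-1, 1].

def upper(x):
    return math.sqrt(1 - x ** 2)     # the branch y = +sqrt(1 - x^2)

def lower(x):
    return -math.sqrt(1 - x ** 2)    # the branch y = -sqrt(1 - x^2)

for x in (-1.0, -0.5, 0.0, 0.5, 1.0):
    for y in (upper(x), lower(x)):
        assert abs(x ** 2 + y ** 2 - 1) < 1e-12  # both branches satisfy the relation
assert upper(1.0) == lower(1.0) == 0.0           # the branches meet at x = +/- 1
```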
For example, the relation y 5 + y + x = 0 {\displaystyle y^{5}+y+x=0} defines y as an implicit function of x, called the Bring radical, which has R {\displaystyle \mathbb {R} } as domain and range. The Bring radical cannot be expressed in terms of the four arithmetic operations and nth roots. The implicit function theorem provides mild differentiability conditions for existence and uniqueness of an implicit function in the neighborhood of a point. === Using differential calculus === Many functions can be defined as the antiderivative of another function. This is the case of the natural logarithm, which is the antiderivative of 1/x that is 0 for x = 1. Another common example is the error function. More generally, many functions, including most special functions, can be defined as solutions of differential equations. The simplest example is probably the exponential function, which can be defined as the unique function that is equal to its derivative and takes the value 1 for x = 0. Power series can be used to define functions on the domain in which they converge. For example, the exponential function is given by e x = ∑ n = 0 ∞ x n n ! {\textstyle e^{x}=\sum _{n=0}^{\infty }{x^{n} \over n!}} . However, as the coefficients of a series are quite arbitrary, a function that is the sum of a convergent series is generally defined otherwise, and the sequence of the coefficients is the result of some computation based on another definition. Then, the power series can be used to enlarge the domain of the function. Typically, if a function for a real variable is the sum of its Taylor series in some interval, this power series allows immediately enlarging the domain to a subset of the complex numbers, the disc of convergence of the series. Then analytic continuation allows enlarging further the domain for including almost the whole complex plane. 
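Defining a function by a power series, as described above for the exponential, can be sketched by truncating the series. The partial sums of Σ xⁿ/n! converge quickly on the reals, so a modest number of terms already matches the library exponential closely.

```python
import math

# Sketch: exp(x) defined by its power series, truncated after a fixed
# number of terms; the partial sums converge to math.exp on the reals.

def exp_series(x, terms=30):
    return sum(x ** n / math.factorial(n) for n in range(terms))

for x in (-2.0, 0.0, 1.0, 3.0):
    assert abs(exp_series(x) - math.exp(x)) < 1e-9
```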
This process is the method that is generally used for defining the logarithm, the exponential and the trigonometric functions of a complex number. === By recurrence === Functions whose domain are the nonnegative integers, known as sequences, are sometimes defined by recurrence relations. The factorial function on the nonnegative integers ( n ↦ n ! {\displaystyle n\mapsto n!} ) is a basic example, as it can be defined by the recurrence relation n ! = n ( n − 1 ) ! for n > 0 , {\displaystyle n!=n(n-1)!\quad {\text{for}}\quad n>0,} and the initial condition 0 ! = 1. {\displaystyle 0!=1.} == Representing a function == A graph is commonly used to give an intuitive picture of a function. As an example of how a graph helps to understand a function, it is easy to see from its graph whether a function is increasing or decreasing. Some functions may also be represented by bar charts. === Graphs and plots === Given a function f : X → Y , {\displaystyle f:X\to Y,} its graph is, formally, the set G = { ( x , f ( x ) ) ∣ x ∈ X } . {\displaystyle G=\{(x,f(x))\mid x\in X\}.} In the frequent case where X and Y are subsets of the real numbers (or may be identified with such subsets, e.g. intervals), an element ( x , y ) ∈ G {\displaystyle (x,y)\in G} may be identified with a point having coordinates x, y in a 2-dimensional coordinate system, e.g. the Cartesian plane. Parts of this may create a plot that represents (parts of) the function. The use of plots is so ubiquitous that they too are called the graph of the function. Graphic representations of functions are also possible in other coordinate systems. For example, the graph of the square function x ↦ x 2 , {\displaystyle x\mapsto x^{2},} consisting of all points with coordinates ( x , x 2 ) {\displaystyle (x,x^{2})} for x ∈ R , {\displaystyle x\in \mathbb {R} ,} yields, when depicted in Cartesian coordinates, the well known parabola. 
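The recurrence above maps directly onto a recursive definition in code. A sketch of the factorial defined by n! = n·(n−1)! with the initial condition 0! = 1:

```python
# Sketch: the factorial defined by the recurrence n! = n * (n-1)!, 0! = 1.

def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

assert [factorial(n) for n in range(6)] == [1, 1, 2, 6, 24, 120]
```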
If the same quadratic function x ↦ x 2 , {\displaystyle x\mapsto x^{2},} with the same formal graph, consisting of pairs of numbers, is plotted instead in polar coordinates ( r , θ ) = ( x , x 2 ) , {\displaystyle (r,\theta )=(x,x^{2}),} the plot obtained is Fermat's spiral. === Tables === A function can be represented as a table of values. If the domain of a function is finite, then the function can be completely specified in this way. For example, the multiplication function f : { 1 , … , 5 } 2 → R {\displaystyle f:\{1,\ldots ,5\}^{2}\to \mathbb {R} } defined as f ( x , y ) = x y {\displaystyle f(x,y)=xy} can be represented by the familiar multiplication table. On the other hand, if a function's domain is continuous, a table can give the values of the function at specific values of the domain. If an intermediate value is needed, interpolation can be used to estimate the value of the function. For example, a portion of a table for the sine function might give values rounded to 6 decimal places. Before the advent of handheld calculators and personal computers, such tables were often compiled and published for functions such as logarithms and trigonometric functions. === Bar chart === A bar chart can represent a function whose domain is a finite set, the natural numbers, or the integers. In this case, an element x of the domain is represented by an interval of the x-axis, and the corresponding value of the function, f(x), is represented by a rectangle whose base is the interval corresponding to x and whose height is f(x) (possibly negative, in which case the bar extends below the x-axis). 
The graph of an empty function is the empty set. The existence of empty functions is needed both for the coherency of the theory and for avoiding exceptions concerning the empty set in many statements. Under the usual set-theoretic definition of a function as an ordered triplet (or equivalent ones), there is exactly one empty function for each set, thus the empty function ∅ → X {\displaystyle \varnothing \to X} is not equal to ∅ → Y {\displaystyle \varnothing \to Y} if and only if X ≠ Y {\displaystyle X\neq Y} , although their graphs are both the empty set. For every set X and every singleton set {s}, there is a unique function from X to {s}, which maps every element of X to s. This is a surjection (see below) unless X is the empty set. Given a function f : X → Y , {\displaystyle f:X\to Y,} the canonical surjection of f onto its image f ( X ) = { f ( x ) ∣ x ∈ X } {\displaystyle f(X)=\{f(x)\mid x\in X\}} is the function from X to f(X) that maps x to f(x). For every subset A of a set X, the inclusion map of A into X is the injective (see below) function that maps every element of A to itself. The identity function on a set X, often denoted by idX, is the inclusion of X into itself. === Function composition === Given two functions f : X → Y {\displaystyle f:X\to Y} and g : Y → Z {\displaystyle g:Y\to Z} such that the domain of g is the codomain of f, their composition is the function g ∘ f : X → Z {\displaystyle g\circ f:X\rightarrow Z} defined by ( g ∘ f ) ( x ) = g ( f ( x ) ) . {\displaystyle (g\circ f)(x)=g(f(x)).} That is, the value of g ∘ f {\displaystyle g\circ f} is obtained by first applying f to x to obtain y = f(x) and then applying g to the result y to obtain g(y) = g(f(x)). In this notation, the function that is applied first is always written on the right. The composition g ∘ f {\displaystyle g\circ f} is an operation on functions that is defined only if the codomain of the first function is the domain of the second one. 
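The composition g ∘ f defined above can be modelled as a higher-order function; `compose` is a hypothetical helper name, not a standard one (Python's standard library has no built-in composition operator):

```python
def compose(g, f):
    """Return the composition g ∘ f, that is, the function x ↦ g(f(x)).

    f is applied first, then g, matching the right-to-left notation."""
    return lambda x: g(f(x))

f = lambda x: x ** 2      # f(x) = x²
g = lambda x: x + 1       # g(x) = x + 1

g_after_f = compose(g, f)   # (g ∘ f)(x) = x² + 1
f_after_g = compose(f, g)   # (f ∘ g)(x) = (x + 1)²
```

Evaluating both at the same argument shows that composition need not commute: (g ∘ f)(3) = 10 while (f ∘ g)(3) = 16.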
Even when both g ∘ f {\displaystyle g\circ f} and f ∘ g {\displaystyle f\circ g} satisfy these conditions, the composition is not necessarily commutative, that is, the functions g ∘ f {\displaystyle g\circ f} and f ∘ g {\displaystyle f\circ g} need not be equal, but may deliver different values for the same argument. For example, let f(x) = x2 and g(x) = x + 1, then g ( f ( x ) ) = x 2 + 1 {\displaystyle g(f(x))=x^{2}+1} and f ( g ( x ) ) = ( x + 1 ) 2 {\displaystyle f(g(x))=(x+1)^{2}} agree just for x = 0. {\displaystyle x=0.} The function composition is associative in the sense that, if one of ( h ∘ g ) ∘ f {\displaystyle (h\circ g)\circ f} and h ∘ ( g ∘ f ) {\displaystyle h\circ (g\circ f)} is defined, then the other is also defined, and they are equal, that is, ( h ∘ g ) ∘ f = h ∘ ( g ∘ f ) . {\displaystyle (h\circ g)\circ f=h\circ (g\circ f).} Therefore, it is usual to just write h ∘ g ∘ f . {\displaystyle h\circ g\circ f.} The identity functions id X {\displaystyle \operatorname {id} _{X}} and id Y {\displaystyle \operatorname {id} _{Y}} are respectively a right identity and a left identity for functions from X to Y. That is, if f is a function with domain X, and codomain Y, one has f ∘ id X = id Y ∘ f = f . {\displaystyle f\circ \operatorname {id} _{X}=\operatorname {id} _{Y}\circ f=f.} === Image and preimage === Let f : X → Y . {\displaystyle f:X\to Y.} The image under f of an element x of the domain X is f(x). If A is any subset of X, then the image of A under f, denoted f(A), is the subset of the codomain Y consisting of all images of elements of A, that is, f ( A ) = { f ( x ) ∣ x ∈ A } . {\displaystyle f(A)=\{f(x)\mid x\in A\}.} The image of f is the image of the whole domain, that is, f(X). It is also called the range of f, although the term range may also refer to the codomain. On the other hand, the inverse image or preimage under f of an element y of the codomain Y is the set of all elements of the domain X whose images under f equal y. 
In symbols, the preimage of y is denoted by f − 1 ( y ) {\displaystyle f^{-1}(y)} and is given by the equation f − 1 ( y ) = { x ∈ X ∣ f ( x ) = y } . {\displaystyle f^{-1}(y)=\{x\in X\mid f(x)=y\}.} Likewise, the preimage of a subset B of the codomain Y is the set of the preimages of the elements of B, that is, it is the subset of the domain X consisting of all elements of X whose images belong to B. It is denoted by f − 1 ( B ) {\displaystyle f^{-1}(B)} and is given by the equation f − 1 ( B ) = { x ∈ X ∣ f ( x ) ∈ B } . {\displaystyle f^{-1}(B)=\{x\in X\mid f(x)\in B\}.} For example, the preimage of { 4 , 9 } {\displaystyle \{4,9\}} under the square function is the set { − 3 , − 2 , 2 , 3 } {\displaystyle \{-3,-2,2,3\}} . By definition of a function, the image of an element x of the domain is always a single element of the codomain. However, the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} of an element y of the codomain may be empty or contain any number of elements. For example, if f is the function from the integers to themselves that maps every integer to 0, then f − 1 ( 0 ) = Z {\displaystyle f^{-1}(0)=\mathbb {Z} } . If f : X → Y {\displaystyle f:X\to Y} is a function, A and B are subsets of X, and C and D are subsets of Y, then one has the following properties: A ⊆ B ⟹ f ( A ) ⊆ f ( B ) {\displaystyle A\subseteq B\Longrightarrow f(A)\subseteq f(B)} C ⊆ D ⟹ f − 1 ( C ) ⊆ f − 1 ( D ) {\displaystyle C\subseteq D\Longrightarrow f^{-1}(C)\subseteq f^{-1}(D)} A ⊆ f − 1 ( f ( A ) ) {\displaystyle A\subseteq f^{-1}(f(A))} C ⊇ f ( f − 1 ( C ) ) {\displaystyle C\supseteq f(f^{-1}(C))} f ( f − 1 ( f ( A ) ) ) = f ( A ) {\displaystyle f(f^{-1}(f(A)))=f(A)} f − 1 ( f ( f − 1 ( C ) ) ) = f − 1 ( C ) {\displaystyle f^{-1}(f(f^{-1}(C)))=f^{-1}(C)} The preimage by f of an element y of the codomain is sometimes called, in some contexts, the fiber of y under f. If a function f has an inverse (see below), this inverse is denoted f − 1 . 
{\displaystyle f^{-1}.} In this case f − 1 ( C ) {\displaystyle f^{-1}(C)} may denote either the image by f − 1 {\displaystyle f^{-1}} or the preimage by f of C. This is not a problem, as these sets are equal. The notation f ( A ) {\displaystyle f(A)} and f − 1 ( C ) {\displaystyle f^{-1}(C)} may be ambiguous in the case of sets that contain some subsets as elements, such as { x , { x } } . {\displaystyle \{x,\{x\}\}.} In this case, some care may be needed, for example, by using square brackets f [ A ] , f − 1 [ C ] {\displaystyle f[A],f^{-1}[C]} for images and preimages of subsets and ordinary parentheses for images and preimages of elements. === Injective, surjective and bijective functions === Let f : X → Y {\displaystyle f:X\to Y} be a function. The function f is injective (or one-to-one, or is an injection) if f(a) ≠ f(b) for every two different elements a and b of X. Equivalently, f is injective if and only if, for every y ∈ Y , {\displaystyle y\in Y,} the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} contains at most one element. An empty function is always injective. If X is not the empty set, then f is injective if and only if there exists a function g : Y → X {\displaystyle g:Y\to X} such that g ∘ f = id X , {\displaystyle g\circ f=\operatorname {id} _{X},} that is, if f has a left inverse. Proof: If f is injective, for defining g, one chooses an element x 0 {\displaystyle x_{0}} in X (which exists as X is supposed to be nonempty), and one defines g by g ( y ) = x {\displaystyle g(y)=x} if y = f ( x ) {\displaystyle y=f(x)} and g ( y ) = x 0 {\displaystyle g(y)=x_{0}} if y ∉ f ( X ) . {\displaystyle y\not \in f(X).} Conversely, if g ∘ f = id X , {\displaystyle g\circ f=\operatorname {id} _{X},} and y = f ( x ) , {\displaystyle y=f(x),} then x = g ( y ) , {\displaystyle x=g(y),} and thus f − 1 ( y ) = { x } . 
{\displaystyle f^{-1}(y)=\{x\}.} The function f is surjective (or onto, or is a surjection) if its range f ( X ) {\displaystyle f(X)} equals its codomain Y {\displaystyle Y} , that is, if, for each element y {\displaystyle y} of the codomain, there exists some element x {\displaystyle x} of the domain such that f ( x ) = y {\displaystyle f(x)=y} (in other words, the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} of every y ∈ Y {\displaystyle y\in Y} is nonempty). If, as usual in modern mathematics, the axiom of choice is assumed, then f is surjective if and only if there exists a function g : Y → X {\displaystyle g:Y\to X} such that f ∘ g = id Y , {\displaystyle f\circ g=\operatorname {id} _{Y},} that is, if f has a right inverse. The axiom of choice is needed, because, if f is surjective, one defines g by g ( y ) = x , {\displaystyle g(y)=x,} where x {\displaystyle x} is an arbitrarily chosen element of f − 1 ( y ) . {\displaystyle f^{-1}(y).} The function f is bijective (or is a bijection or a one-to-one correspondence) if it is both injective and surjective. That is, f is bijective if, for every y ∈ Y , {\displaystyle y\in Y,} the preimage f − 1 ( y ) {\displaystyle f^{-1}(y)} contains exactly one element. The function f is bijective if and only if it admits an inverse function, that is, a function g : Y → X {\displaystyle g:Y\to X} such that g ∘ f = id X {\displaystyle g\circ f=\operatorname {id} _{X}} and f ∘ g = id Y . {\displaystyle f\circ g=\operatorname {id} _{Y}.} (Contrary to the case of surjections, this does not require the axiom of choice; the proof is straightforward.) Every function f : X → Y {\displaystyle f:X\to Y} may be factorized as the composition i ∘ s {\displaystyle i\circ s} of a surjection followed by an injection, where s is the canonical surjection of X onto f(X) and i is the canonical injection of f(X) into Y. This is the canonical factorization of f.
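For functions between small finite sets, the definitions of injectivity and surjectivity above can be checked by brute force. In this sketch a function is represented as a Python dict mapping each domain element to its image; the helper names are illustrative:

```python
def is_injective(f: dict) -> bool:
    # Injective: no two distinct domain elements share an image.
    return len(set(f.values())) == len(f)

def is_surjective(f: dict, codomain: set) -> bool:
    # Surjective: every element of the codomain is an image.
    return set(f.values()) == codomain

def is_bijective(f: dict, codomain: set) -> bool:
    return is_injective(f) and is_surjective(f, codomain)

# x ↦ x² on the domain {-2, -1, 0, 1, 2}
square = {-2: 4, -1: 1, 0: 0, 1: 1, 2: 4}
```

Here `is_injective(square)` is False (both −1 and 1 map to 1), while `is_surjective(square, {0, 1, 4})` is True, since that codomain was chosen to equal the image.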
"One-to-one" and "onto" are terms that were more common in the older English-language literature; "injective", "surjective", and "bijective" were originally coined as French words in the second quarter of the 20th century by the Bourbaki group and imported into English. As a word of caution, "a one-to-one function" is one that is injective, while a "one-to-one correspondence" refers to a bijective function. Also, the statement "f maps X onto Y" differs from "f maps X into Y", in that the former implies that f is surjective, while the latter makes no assertion about the nature of f. In a complicated argument, the one-letter difference can easily be missed. Due to the confusing nature of this older terminology, these terms have declined in popularity relative to the Bourbakian terms, which also have the advantage of being more symmetrical. === Restriction and extension === If f : X → Y {\displaystyle f:X\to Y} is a function and S is a subset of X, then the restriction of f {\displaystyle f} to S, denoted f | S {\displaystyle f|_{S}} , is the function from S to Y defined by f | S ( x ) = f ( x ) {\displaystyle f|_{S}(x)=f(x)} for all x in S. Restrictions can be used to define partial inverse functions: if there is a subset S of the domain of a function f {\displaystyle f} such that f | S {\displaystyle f|_{S}} is injective, then the canonical surjection of f | S {\displaystyle f|_{S}} onto its image f | S ( S ) = f ( S ) {\displaystyle f|_{S}(S)=f(S)} is a bijection, and thus has an inverse function from f ( S ) {\displaystyle f(S)} to S. One application is the definition of inverse trigonometric functions. For example, the cosine function is injective when restricted to the interval [0, π]. The image of this restriction is the interval [−1, 1], and thus the restriction has an inverse function from [−1, 1] to [0, π], which is called arccosine and is denoted arccos. Function restriction may also be used for "gluing" functions together.
Let X = ⋃ i ∈ I U i {\textstyle X=\bigcup _{i\in I}U_{i}} be the decomposition of X as a union of subsets, and suppose that a function f i : U i → Y {\displaystyle f_{i}:U_{i}\to Y} is defined on each U i {\displaystyle U_{i}} such that for each pair i , j {\displaystyle i,j} of indices, the restrictions of f i {\displaystyle f_{i}} and f j {\displaystyle f_{j}} to U i ∩ U j {\displaystyle U_{i}\cap U_{j}} are equal. Then this defines a unique function f : X → Y {\displaystyle f:X\to Y} such that f | U i = f i {\displaystyle f|_{U_{i}}=f_{i}} for all i. This is the way that functions on manifolds are defined. An extension of a function f is a function g such that f is a restriction of g. A typical use of this concept is the process of analytic continuation, that allows extending functions whose domain is a small part of the complex plane to functions whose domain is almost the whole complex plane. Here is another classical example of a function extension that is encountered when studying homographies of the real line. A homography is a function h ( x ) = a x + b c x + d {\displaystyle h(x)={\frac {ax+b}{cx+d}}} such that ad − bc ≠ 0. Its domain is the set of all real numbers different from − d / c , {\displaystyle -d/c,} and its image is the set of all real numbers different from a / c . {\displaystyle a/c.} If one extends the real line to the projectively extended real line by including ∞, one may extend h to a bijection from the extended real line to itself by setting h ( ∞ ) = a / c {\displaystyle h(\infty )=a/c} and h ( − d / c ) = ∞ {\displaystyle h(-d/c)=\infty } . == In calculus == The idea of function, starting in the 17th century, was fundamental to the new infinitesimal calculus. At that time, only real-valued functions of a real variable were considered, and all functions were assumed to be smooth. But the definition was soon extended to functions of several variables and to functions of a complex variable. 
In the second half of the 19th century, the mathematically rigorous definition of a function was introduced, and functions with arbitrary domains and codomains were defined. Functions are now used throughout all areas of mathematics. In introductory calculus, when the word function is used without qualification, it means a real-valued function of a single real variable. The more general definition of a function is usually introduced to second- or third-year college students with STEM majors, and in their senior year they are introduced to calculus in a larger, more rigorous setting in courses such as real analysis and complex analysis. === Real function === A real function is a real-valued function of a real variable, that is, a function whose codomain is the field of real numbers and whose domain is a set of real numbers that contains an interval. In this section, these functions are simply called functions. The functions that are most commonly considered in mathematics and its applications have some regularity, that is, they are continuous, differentiable, and even analytic. This regularity ensures that these functions can be visualized by their graphs. In this section, all functions are differentiable in some interval. Functions enjoy pointwise operations, that is, if f and g are functions, their sum, difference and product are functions defined by ( f + g ) ( x ) = f ( x ) + g ( x ) ( f − g ) ( x ) = f ( x ) − g ( x ) ( f ⋅ g ) ( x ) = f ( x ) ⋅ g ( x ) . {\displaystyle {\begin{aligned}(f+g)(x)&=f(x)+g(x)\\(f-g)(x)&=f(x)-g(x)\\(f\cdot g)(x)&=f(x)\cdot g(x)\\\end{aligned}}.} The domains of the resulting functions are the intersection of the domains of f and g. The quotient of two functions is defined similarly by f g ( x ) = f ( x ) g ( x ) , {\displaystyle {\frac {f}{g}}(x)={\frac {f(x)}{g(x)}},} but the domain of the resulting function is obtained by removing the zeros of g from the intersection of the domains of f and g.
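The pointwise operations just defined can be sketched as higher-order functions (the helper names below are illustrative):

```python
def pointwise_add(f, g):
    """(f + g)(x) = f(x) + g(x)"""
    return lambda x: f(x) + g(x)

def pointwise_mul(f, g):
    """(f · g)(x) = f(x) · g(x)"""
    return lambda x: f(x) * g(x)

sq = lambda x: x ** 2     # x ↦ x²
ident = lambda x: x       # x ↦ x

h = pointwise_add(sq, ident)   # h(x) = x² + x
p = pointwise_mul(sq, ident)   # p(x) = x³
```

For example, h(3) = 9 + 3 = 12 and p(2) = 4 · 2 = 8. (In this numeric sketch the domain question does not arise; for partial functions one would intersect the domains as described above.)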
The polynomial functions are defined by polynomials, and their domain is the whole set of real numbers. They include constant functions, linear functions and quadratic functions. Rational functions are quotients of two polynomial functions, and their domain is the real numbers with a finite number of them removed to avoid division by zero. The simplest rational function is the function x ↦ 1 x , {\displaystyle x\mapsto {\frac {1}{x}},} whose graph is a hyperbola, and whose domain is the whole real line except for 0. The derivative of a real differentiable function is a real function. An antiderivative of a continuous real function is a real function that has the original function as a derivative. For example, the function x ↦ 1 x {\textstyle x\mapsto {\frac {1}{x}}} is continuous, and even differentiable, on the positive real numbers. Thus one antiderivative, which takes the value zero for x = 1, is a differentiable function called the natural logarithm. A real function f is monotonic in an interval if the sign of f ( x ) − f ( y ) x − y {\displaystyle {\frac {f(x)-f(y)}{x-y}}} does not depend on the choice of x and y in the interval. If the function is differentiable in the interval, it is monotonic if the sign of the derivative is constant in the interval. If a real function f is monotonic in an interval I, it has an inverse function, which is a real function with domain f(I) and image I. This is how inverse trigonometric functions are defined in terms of trigonometric functions, restricted to intervals on which the trigonometric functions are monotonic. Another example: the natural logarithm is monotonic on the positive real numbers, and its image is the whole real line; therefore it has an inverse function that is a bijection between the real numbers and the positive real numbers. This inverse is the exponential function. Many other real functions are defined either by the implicit function theorem (the inverse function is a particular instance) or as solutions of differential equations.
For example, the sine and the cosine functions are the solutions of the linear differential equation y ″ + y = 0 {\displaystyle y''+y=0} such that sin ⁡ 0 = 0 , cos ⁡ 0 = 1 , ∂ sin ⁡ x ∂ x ( 0 ) = 1 , ∂ cos ⁡ x ∂ x ( 0 ) = 0. {\displaystyle \sin 0=0,\quad \cos 0=1,\quad {\frac {\partial \sin x}{\partial x}}(0)=1,\quad {\frac {\partial \cos x}{\partial x}}(0)=0.} === Vector-valued function === When the elements of the codomain of a function are vectors, the function is said to be a vector-valued function. These functions are particularly useful in applications, for example modeling physical properties. For example, the function that associates to each point of a fluid its velocity vector is a vector-valued function. Some vector-valued functions are defined on a subset of R n {\displaystyle \mathbb {R} ^{n}} or other spaces that share geometric or topological properties of R n {\displaystyle \mathbb {R} ^{n}} , such as manifolds. These vector-valued functions are given the name vector fields. == Function space == In mathematical analysis, and more specifically in functional analysis, a function space is a set of scalar-valued or vector-valued functions, which share a specific property and form a topological vector space. For example, the real smooth functions with a compact support (that is, they are zero outside some compact set) form a function space that is at the basis of the theory of distributions. Function spaces play a fundamental role in advanced mathematical analysis, by allowing the use of their algebraic and topological properties for studying properties of functions. For example, all theorems of existence and uniqueness of solutions of ordinary or partial differential equations result of the study of function spaces. 
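A vector-valued function such as the velocity field mentioned above can be sketched as a function returning a tuple of components; the rigid-rotation field below is a standard illustration, not one taken from this article:

```python
import math

def velocity(x: float, y: float) -> tuple:
    """A planar vector field: rigid rotation about the origin, (x, y) ↦ (−y, x)."""
    return (-y, x)

def speed(x: float, y: float) -> float:
    """Magnitude of the velocity vector at (x, y)."""
    vx, vy = velocity(x, y)
    return math.hypot(vx, vy)
```

For this field the speed at each point equals the distance of that point from the origin, e.g. speed(3, 4) = 5.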
== Multi-valued functions == Several methods for specifying functions of real or complex variables start from a local definition of the function at a point or on a neighbourhood of a point, and then extend by continuity the function to a much larger domain. Frequently, for a starting point x 0 , {\displaystyle x_{0},} there are several possible starting values for the function. For example, in defining the square root as the inverse function of the square function, for any positive real number x 0 , {\displaystyle x_{0},} there are two choices for the value of the square root, one of which is positive and denoted x 0 , {\displaystyle {\sqrt {x_{0}}},} and another which is negative and denoted − x 0 . {\displaystyle -{\sqrt {x_{0}}}.} These choices define two continuous functions, both having the nonnegative real numbers as a domain, and having either the nonnegative or the nonpositive real numbers as images. When looking at the graphs of these functions, one can see that, together, they form a single smooth curve. It is therefore often useful to consider these two square root functions as a single function that has two values for positive x, one value for 0 and no value for negative x. In the preceding example, one choice, the positive square root, is more natural than the other. This is not the case in general. For example, let us consider the implicit function that maps y to a root x of x 3 − 3 x − y = 0 {\displaystyle x^{3}-3x-y=0} (see the figure on the right). For y = 0 one may choose either 0 , 3 , or − 3 {\displaystyle 0,{\sqrt {3}},{\text{ or }}-{\sqrt {3}}} for x. By the implicit function theorem, each choice defines a function; for the first one, the (maximal) domain is the interval [−2, 2] and the image is [−1, 1]; for the second one, the domain is [−2, ∞) and the image is [1, ∞); for the last one, the domain is (−∞, 2] and the image is (−∞, −1].
As the three graphs together form a smooth curve, and there is no reason for preferring one choice, these three functions are often considered as a single multi-valued function of y that has three values for −2 < y < 2, and only one value for y ≤ −2 and y ≥ 2. The usefulness of the concept of multi-valued functions is clearer when considering complex functions, typically analytic functions. The domain to which a complex function may be extended by analytic continuation generally consists of almost the whole complex plane. However, when extending the domain through two different paths, one often gets different values. For example, when extending the domain of the square root function, along a path of complex numbers with positive imaginary parts, one gets i for the square root of −1; while, when extending through complex numbers with negative imaginary parts, one gets −i. There are generally two ways of solving the problem. One may define a function that is not continuous along some curve, called a branch cut. Such a function is called the principal value of the function. The other way is to consider that one has a multi-valued function, which is analytic everywhere except for isolated singularities, but whose value may "jump" if one follows a closed loop around a singularity. This jump is called the monodromy. == In the foundations of mathematics == The definition of a function that is given in this article requires the concept of set, since the domain and the codomain of a function must be a set. This is not a problem in usual mathematics, as it is generally not difficult to consider only functions whose domain and codomain are sets, which are well defined, even if the domain is not explicitly defined. However, it is sometimes useful to consider more general functions. For example, the singleton set may be considered as a function x ↦ { x } . {\displaystyle x\mapsto \{x\}.} Its domain would include all sets, and therefore would not be a set.
In usual mathematics, one avoids this kind of problem by specifying a domain, which means that one has many singleton functions. However, when establishing foundations of mathematics, one may have to use functions whose domain, codomain or both are not specified, and some authors, often logicians, give precise definitions for these weakly specified functions. These generalized functions may be critical in the development of a formalization of the foundations of mathematics. For example, Von Neumann–Bernays–Gödel set theory is an extension of set theory in which the collection of all sets is a class. This theory includes the replacement axiom, which may be stated as: If X is a set and F is a function, then F[X] is a set. In alternative formulations of the foundations of mathematics using type theory rather than set theory, functions are taken as primitive notions rather than defined from other kinds of object. They are the inhabitants of function types, and may be constructed using expressions in the lambda calculus. == In computer science == In computer programming, a function is, in general, a subroutine which implements the abstract concept of function. That is, it is a program unit that produces an output for each input. Functional programming is the programming paradigm consisting of building programs by using only subroutines that behave like mathematical functions, meaning that they have no side effects and depend only on their arguments: they are referentially transparent. For example, if_then_else is a function that takes three (nullary) functions as arguments, and, depending on the value of the first argument (true or false), returns the value of either the second or the third argument. An important advantage of functional programming is that it makes program proofs easier, as it is based on a well-founded theory, the lambda calculus (see below). However, side effects are generally necessary for practical programs, such as those that perform input/output.
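The if_then_else function described above can be sketched in Python by passing the condition and both branches as nullary functions (thunks), so that only the selected branch is ever evaluated:

```python
def if_then_else(cond, then_branch, else_branch):
    """cond, then_branch and else_branch are nullary functions;
    exactly one of the two branches is called."""
    return then_branch() if cond() else else_branch()

result = if_then_else(lambda: 2 > 1,
                      lambda: "yes",
                      lambda: "no")
```

Here `result` is "yes"; because the branches are thunks, the unselected branch could even contain a side effect or a nonterminating computation without being executed.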
There is a class of purely functional languages, such as Haskell, which encapsulate the possibility of side effects in the type of a function. Others, such as the ML family, simply allow side effects. In many programming languages, every subroutine is called a function, even when there is no output but only side effects, and when the functionality consists simply of modifying some data in the computer memory. Outside the context of programming languages, "function" has the usual mathematical meaning in computer science. In this area, a property of major interest is the computability of a function. To give a precise meaning to this concept, and to the related concept of algorithm, several models of computation have been introduced, the oldest being general recursive functions, lambda calculus, and Turing machines. The fundamental theorem of computability theory is that these three models of computation define the same set of computable functions, and that all the other models of computation that have ever been proposed define the same set of computable functions or a smaller one. The Church–Turing thesis is the claim that every philosophically acceptable definition of a computable function also defines the same functions. General recursive functions are partial functions from integers to integers that can be defined from constant functions, successor, and projection functions via the operators composition, primitive recursion, and minimization. Although defined only for functions from integers to integers, they can model any computable function as a consequence of the following properties: a computation is the manipulation of finite sequences of symbols (digits of numbers, formulas, etc.), every sequence of symbols may be coded as a sequence of bits, a bit sequence can be interpreted as the binary representation of an integer.
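As a toy illustration of the recursive-function viewpoint above, addition can be built from the successor function by primitive recursion; the construction is standard, and the Python names are illustrative:

```python
def successor(n: int) -> int:
    """The successor function S(n) = n + 1, one of the basic functions."""
    return n + 1

def add(m: int, n: int) -> int:
    """Addition by primitive recursion on n:
       add(m, 0) = m
       add(m, n + 1) = successor(add(m, n))"""
    if n == 0:
        return m
    return successor(add(m, n - 1))
```

For example, add(3, 4) unwinds to four applications of successor to 3, giving 7.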
Lambda calculus is a theory that defines computable functions without using set theory, and is the theoretical background of functional programming. It consists of terms that are either variables, function definitions (𝜆-terms), or applications of functions to terms. Terms are manipulated by interpreting its axioms (the α-equivalence, the β-reduction, and the η-conversion) as rewriting rules, which can be used for computation. In its original form, lambda calculus does not include the concepts of domain and codomain of a function. Roughly speaking, they have been introduced in the theory under the name of type in typed lambda calculus. Most kinds of typed lambda calculi can define fewer functions than untyped lambda calculus. == External links == The Wolfram Functions – website giving formulae and visualizations of many mathematical functions NIST Digital Library of Mathematical Functions
In economics, the marginal rate of substitution (MRS) is the rate at which a consumer can give up some amount of one good in exchange for another good while maintaining the same level of utility. At equilibrium consumption levels (assuming no externalities), marginal rates of substitution are identical. The marginal rate of substitution is one of the three factors from marginal productivity, the others being marginal rates of transformation and marginal productivity of a factor. == As the slope of indifference curve == Under the standard assumption of neoclassical economics that goods and services are continuously divisible, the marginal rates of substitution will be the same regardless of the direction of exchange, and will correspond to the slope of an indifference curve (more precisely, to the slope multiplied by −1) passing through the consumption bundle in question, at that point: mathematically, it is the implicit derivative. MRS of X for Y is the amount of Y which a consumer can exchange for one unit of X locally. The MRS is different at each point along the indifference curve, so it is important to keep the locus in the definition. Further on this assumption, or otherwise on the assumption that utility is quantified, the marginal rate of substitution of good or service X for good or service Y (MRSxy) is also equivalent to the marginal utility of X over the marginal utility of Y. Formally, M R S x y = − m i n d i f = − ( d y / d x ) {\displaystyle MRS_{xy}=-m_{\mathrm {indif} }=-(dy/dx)\,} M R S x y = M U x / M U y {\displaystyle MRS_{xy}=MU_{x}/MU_{y}\,} It is important to note that when comparing bundles of goods X and Y that give a constant utility (points along an indifference curve), the marginal utility of X is measured in terms of the units of Y that are being given up. For example, if the MRSxy = 2, the consumer will give up 2 units of Y to obtain 1 additional unit of X.
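The identity MRS_xy = MU_x / MU_y can be checked numerically by approximating the marginal utilities with finite differences. The Cobb–Douglas utility below is a hypothetical example chosen for illustration, not one used in this article:

```python
def utility(x: float, y: float) -> float:
    """Hypothetical Cobb-Douglas utility U(x, y) = x^0.5 * y^0.5."""
    return x ** 0.5 * y ** 0.5

def mrs(x: float, y: float, h: float = 1e-6) -> float:
    """MRS_xy = MU_x / MU_y, with the marginal utilities taken as
    finite-difference approximations of the partial derivatives."""
    mu_x = (utility(x + h, y) - utility(x, y)) / h   # ∂U/∂x
    mu_y = (utility(x, y + h) - utility(x, y)) / h   # ∂U/∂y
    return mu_x / mu_y
```

For this utility the MRS works out analytically to y/x, so mrs(2, 4) is approximately 2: at that bundle the consumer would give up about 2 units of Y for one more unit of X.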
As one moves down a (standardly convex) indifference curve, the marginal rate of substitution decreases (as measured by the absolute value of the slope of the indifference curve, which decreases). This is known as the law of diminishing marginal rate of substitution. Since the indifference curve is convex with respect to the origin and we have defined the MRS as the negative slope of the indifference curve, M R S x y ≥ 0 {\displaystyle \ MRS_{xy}\geq 0} == Simple mathematical analysis == Assume the consumer utility function is defined by U ( x , y ) {\displaystyle U(x,y)} , where U is consumer utility, x and y are goods. Then the marginal rate of substitution can be computed via partial differentiation, as follows. Also, note that: M U x = ∂ U / ∂ x {\displaystyle \ MU_{x}=\partial U/\partial x} M U y = ∂ U / ∂ y {\displaystyle \ MU_{y}=\partial U/\partial y} where M U x {\displaystyle \ MU_{x}} is the marginal utility with respect to good x and M U y {\displaystyle \ MU_{y}} is the marginal utility with respect to good y. By taking the total differential of the utility function equation, we obtain the following results: d U = ( ∂ U / ∂ x ) d x + ( ∂ U / ∂ y ) d y {\displaystyle \ dU=(\partial U/\partial x)dx+(\partial U/\partial y)dy} , or substituting from above, d U = M U x d x + M U y d y {\displaystyle \ dU=MU_{x}dx+MU_{y}dy} , or, without loss of generality, the total derivative of the utility function with respect to good x, d U d x = M U x d x d x + M U y d y d x {\displaystyle {\frac {dU}{dx}}=MU_{x}{\frac {dx}{dx}}+MU_{y}{\frac {dy}{dx}}} , that is, d U d x = M U x + M U y d y d x {\displaystyle {\frac {dU}{dx}}=MU_{x}+MU_{y}{\frac {dy}{dx}}} . Through any point on the indifference curve, dU/dx = 0, because U = c, where c is a constant. 
It follows from the above equation that: 0 = M U x + M U y d y d x {\displaystyle 0=MU_{x}+MU_{y}{\frac {dy}{dx}}} , or rearranging − d y d x = M U x M U y {\displaystyle -{\frac {dy}{dx}}={\frac {MU_{x}}{MU_{y}}}} The marginal rate of substitution is defined as the absolute value of the slope of the indifference curve at whichever commodity bundle quantities are of interest. That turns out to equal the ratio of the marginal utilities: M R S x y = M U x / M U y {\displaystyle \ MRS_{xy}=MU_{x}/MU_{y}\,} . When consumers maximize utility with respect to a budget constraint, the indifference curve is tangent to the budget line, therefore, with m representing slope: m i n d i f = m b u d g e t {\displaystyle \ m_{\mathrm {indif} }=m_{\mathrm {budget} }} − ( M R S x y ) = − ( P x / P y ) {\displaystyle \ -(MRS_{xy})=-(P_{x}/P_{y})} M R S x y = P x / P y {\displaystyle \ MRS_{xy}=P_{x}/P_{y}} Therefore, when the consumer is choosing his utility maximized market basket on his budget line, M U x / M U y = P x / P y {\displaystyle \ MU_{x}/MU_{y}=P_{x}/P_{y}} M U x / P x = M U y / P y {\displaystyle \ MU_{x}/P_{x}=MU_{y}/P_{y}} This important result tells us that utility is maximized when the consumer's budget is allocated so that the marginal utility per unit of money spent is equal for each good. If this equality did not hold, the consumer could increase his/her utility by cutting spending on the good with lower marginal utility per unit of money and increase spending on the other good. To decrease the marginal rate of substitution, the consumer must buy more of the good for which he/she wishes the marginal utility to fall for (due to the law of diminishing marginal utility). == Diminishing Marginal rate of Substitution == An important principle of economic theory is that marginal rate of substitution of X for Y diminishes as more and more of good X is substituted for good Y. 
In other words, as the consumer has more and more of good X, he is prepared to forego less and less of good Y: as his stock of X increases and his stock of Y decreases, he is willing to give up less and less of Y for a given increment in X. That the marginal rate of substitution of X for Y diminishes can also be seen by drawing tangents at different points on an indifference curve. == Using MRS to determine convexity == The derivative of the MRS can be used to determine whether a consumer's preferences over two goods are convex. For the case of two goods, a quick derivative test (take the derivative of the MRS with respect to x) settles the question. If the derivative of the MRS is negative, the MRS diminishes along the indifference curve, so the curve flattens as x increases and is convex toward the origin, and preferences are strictly convex. If the derivative of the MRS is equal to 0, the MRS is constant, the indifference curve is a straight line, and preferences are only weakly convex. If the derivative of the MRS is positive, the MRS grows as x increases, so the indifference curve is concave toward the origin (an n-like shape), and preferences are not convex. These statements are shown mathematically below. d M R S x y d x < 0 Strict Convexity of Utility Function {\displaystyle \ {\frac {dMRS_{xy}}{dx}}<0{\text{ Strict Convexity of Utility Function}}} d M R S x y d x = 0 Weak Convexity of Utility Function {\displaystyle \ {\frac {dMRS_{xy}}{dx}}=0{\text{ Weak Convexity of Utility Function}}} d M R S x y d x > 0 Non Convexity of Utility Function {\displaystyle \ {\frac {dMRS_{xy}}{dx}}>0{\text{ Non Convexity of Utility Function}}} For more than two variables, the use of the Hessian matrix is required. 
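As a concrete check of the relations above, the MRS can be computed numerically from finite-difference marginal utilities. The Cobb-Douglas utility U(x, y) = √(xy) used here is an illustrative assumption, not part of the text; for it, MRS = y/x, which falls along an indifference curve, consistent with strictly convex preferences.

```python
# Numerical illustration of MRS_xy = MU_x / MU_y and of diminishing MRS,
# using a hypothetical Cobb-Douglas utility U(x, y) = sqrt(x * y).
import math

def U(x, y):
    return math.sqrt(x * y)

def marginal_utility(f, x, y, wrt, h=1e-6):
    """Central-difference partial derivative of f at (x, y)."""
    if wrt == "x":
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

def mrs(x, y):
    """MRS_xy = MU_x / MU_y; for sqrt(x*y) this equals y / x."""
    return marginal_utility(U, x, y, "x") / marginal_utility(U, x, y, "y")

# Check MRS against the closed form y / x at a sample bundle.
print(mrs(2.0, 8.0))            # close to 8 / 2 = 4

# Along the indifference curve U = c we have y = c**2 / x, so MRS = c**2 / x**2,
# which falls as x rises: diminishing MRS.
c = 4.0
for x in (1.0, 2.0, 4.0):
    y = c**2 / x
    print(round(mrs(x, y), 4))  # 16.0, 4.0, 1.0 (decreasing)
```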
== See also == Marginal concepts Marginal rate of technical substitution (the same concept on production side) Indifference curves Consumer theory Convex preferences Implicit differentiation Labour economics
Wikipedia/Marginal_rate_of_substitution
In microeconomic theory, the marginal rate of technical substitution (MRTS)—or technical rate of substitution (TRS)—is the amount by which the quantity of one input has to be reduced ( − Δ x 2 {\displaystyle -\Delta x_{2}} ) when one extra unit of another input is used ( Δ x 1 = 1 {\displaystyle \Delta x_{1}=1} ), so that output remains constant ( y = y ¯ {\displaystyle y={\bar {y}}} ). M R T S ( x 1 , x 2 ) = − Δ x 2 Δ x 1 = M P 1 M P 2 {\displaystyle MRTS(x_{1},x_{2})=-{\frac {\Delta x_{2}}{\Delta x_{1}}}={\frac {MP_{1}}{MP_{2}}}} where M P 1 {\displaystyle MP_{1}} and M P 2 {\displaystyle MP_{2}} are the marginal products of input 1 and input 2, respectively. Along an isoquant, the MRTS shows the rate at which one input (e.g. capital or labor) may be substituted for another, while maintaining the same level of output. Thus the MRTS is the absolute value of the slope of an isoquant at the point in question. When relative input usages are optimal, the marginal rate of technical substitution is equal to the relative unit costs of the inputs, and the slope of the isoquant at the chosen point equals the slope of the isocost curve (see conditional factor demands). == See also == Marginal rate of substitution (the same concept on consumption side) Marginal rate of transformation (slope of the production-possibility frontier)
Wikipedia/Marginal_rate_of_technical_substitution
In mathematics, a quadratic equation (from Latin quadratus 'square') is an equation that can be rearranged in standard form as a x 2 + b x + c = 0 , {\displaystyle ax^{2}+bx+c=0\,,} where the variable x represents an unknown number, and a, b, and c represent known numbers, where a ≠ 0. (If a = 0 and b ≠ 0 then the equation is linear, not quadratic.) The numbers a, b, and c are the coefficients of the equation and may be distinguished by calling them, respectively, the quadratic coefficient, the linear coefficient and the constant coefficient or free term. The values of x that satisfy the equation are called solutions of the equation, and roots or zeros of the quadratic function on its left-hand side. A quadratic equation has at most two solutions. If there is only one solution, one says that it is a double root. If all the coefficients are real numbers, there are either two real solutions, or a single real double root, or two complex solutions that are complex conjugates of each other. A quadratic equation always has two roots, if complex roots are included and a double root is counted for two. A quadratic equation can be factored into an equivalent equation a x 2 + b x + c = a ( x − r ) ( x − s ) = 0 {\displaystyle ax^{2}+bx+c=a(x-r)(x-s)=0} where r and s are the solutions for x. The quadratic formula x = − b ± b 2 − 4 a c 2 a {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}} expresses the solutions in terms of a, b, and c. Completing the square is one of several ways for deriving the formula. Solutions to problems that can be expressed in terms of quadratic equations were known as early as 2000 BC. Because the quadratic equation involves only one unknown, it is called "univariate". The quadratic equation contains only powers of x that are non-negative integers, and therefore it is a polynomial equation. In particular, it is a second-degree polynomial equation, since the greatest power is two. 
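The quadratic formula and the factorization a(x − r)(x − s) described above can be sketched in a few lines of code; the helper name `quadratic_roots` is invented for illustration.

```python
# A sketch of the quadratic formula x = (-b ± sqrt(b^2 - 4ac)) / (2a).
# cmath.sqrt is used so that complex-conjugate roots come out automatically.
import cmath

def quadratic_roots(a, b, c):
    """Return the two roots of a*x**2 + b*x + c = 0 (requires a != 0)."""
    d = cmath.sqrt(b * b - 4 * a * c)   # square root of the discriminant
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

r, s = quadratic_roots(1, -3, 2)    # x^2 - 3x + 2 = (x - 2)(x - 1)
print(r, s)                         # (2+0j) (1+0j)

# The factored form a*(x - r)*(x - s) reproduces the original polynomial:
x = 5
print(1 * (x - r) * (x - s))        # equals 5**2 - 3*5 + 2 = 12, i.e. (12+0j)
```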
== Solving the quadratic equation == A quadratic equation whose coefficients are real numbers can have either zero, one, or two distinct real-valued solutions, also called roots. When there is only one distinct root, it can be interpreted as two roots with the same value, called a double root. When there are no real roots, the coefficients can be considered as complex numbers with zero imaginary part, and the quadratic equation still has two complex-valued roots, complex conjugates of each other with a non-zero imaginary part. A quadratic equation whose coefficients are arbitrary complex numbers always has two complex-valued roots which may or may not be distinct. The solutions of a quadratic equation can be found by several alternative methods. === Factoring by inspection === It may be possible to express a quadratic equation ax2 + bx + c = 0 as a product (px + q)(rx + s) = 0. In some cases, it is possible, by simple inspection, to determine values of p, q, r, and s that make the two forms equivalent to one another. If the quadratic equation is written in the second form, then the "Zero Factor Property" states that the quadratic equation is satisfied if px + q = 0 or rx + s = 0. Solving these two linear equations provides the roots of the quadratic. For most students, factoring by inspection is the first method of solving quadratic equations to which they are exposed.: 202–207  If one is given a quadratic equation in the form x2 + bx + c = 0, the sought factorization has the form (x + q)(x + s), and one has to find two numbers q and s that add up to b and whose product is c (this is sometimes called "Vieta's rule" and is related to Vieta's formulas). As an example, x2 + 5x + 6 factors as (x + 3)(x + 2). The more general case where a does not equal 1 can require a considerable effort in trial-and-error guess-and-check, assuming that it can be factored at all by inspection. 
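The search for "two numbers q and s that add up to b and whose product is c" can be sketched as a brute-force routine for a monic quadratic; this is an illustrative helper that assumes integer coefficients and an integer factorization.

```python
# "Factoring by inspection" for a monic quadratic x**2 + b*x + c with
# integer coefficients: find integers q and s with q + s = b and q * s = c,
# so that x**2 + b*x + c = (x + q)(x + s).

def factor_by_inspection(b, c):
    """Return (q, s) with x^2 + b x + c = (x + q)(x + s), or None."""
    # Any integer divisor q of c satisfies |q| <= |c| (or q = 0 when c = 0),
    # so a bounded search suffices.
    for q in range(-abs(c) - 1, abs(c) + 2):
        s = b - q                     # forced by q + s = b
        if q * s == c:
            return q, s
    return None                       # no integer factorization exists

print(factor_by_inspection(5, 6))     # (2, 3): x^2 + 5x + 6 = (x + 2)(x + 3)
print(factor_by_inspection(1, 1))     # None: x^2 + x + 1 has no rational roots
```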
Except for special cases such as where b = 0 or c = 0, factoring by inspection only works for quadratic equations that have rational roots. This means that the great majority of quadratic equations that arise in practical applications cannot be solved by factoring by inspection.: 207  === Completing the square === The process of completing the square makes use of the algebraic identity x 2 + 2 h x + h 2 = ( x + h ) 2 , {\displaystyle x^{2}+2hx+h^{2}=(x+h)^{2},} which represents a well-defined algorithm that can be used to solve any quadratic equation.: 207  Starting with a quadratic equation in standard form, ax2 + bx + c = 0 Divide each side by a, the coefficient of the squared term. Subtract the constant term c/a from both sides. Add the square of one-half of b/a, the coefficient of x, to both sides. This "completes the square", converting the left side into a perfect square. Write the left side as a square and simplify the right side if necessary. Produce two linear equations by equating the square root of the left side with the positive and negative square roots of the right side. Solve each of the two linear equations. We illustrate use of this algorithm by solving 2x2 + 4x − 4 = 0 2 x 2 + 4 x − 4 = 0 {\displaystyle 2x^{2}+4x-4=0} x 2 + 2 x − 2 = 0 {\displaystyle \ x^{2}+2x-2=0} x 2 + 2 x = 2 {\displaystyle \ x^{2}+2x=2} x 2 + 2 x + 1 = 2 + 1 {\displaystyle \ x^{2}+2x+1=2+1} ( x + 1 ) 2 = 3 {\displaystyle \left(x+1\right)^{2}=3} x + 1 = ± 3 {\displaystyle \ x+1=\pm {\sqrt {3}}} x = − 1 ± 3 {\displaystyle \ x=-1\pm {\sqrt {3}}} The plus–minus symbol "±" indicates that both x = − 1 + 3 {\textstyle x=-1+{\sqrt {3}}} and x = − 1 − 3 {\textstyle x=-1-{\sqrt {3}}} are solutions of the quadratic equation. === Quadratic formula and its derivation === Completing the square can be used to derive a general formula for solving quadratic equations, called the quadratic formula. The mathematical proof will now be briefly summarized. 
It can easily be seen, by polynomial expansion, that the following equation is equivalent to the quadratic equation: ( x + b 2 a ) 2 = b 2 − 4 a c 4 a 2 . {\displaystyle \left(x+{\frac {b}{2a}}\right)^{2}={\frac {b^{2}-4ac}{4a^{2}}}.} Taking the square root of both sides, and isolating x, gives: x = − b ± b 2 − 4 a c 2 a . {\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}.} Some sources, particularly older ones, use alternative parameterizations of the quadratic equation such as ax2 + 2bx + c = 0 or ax2 − 2bx + c = 0 , where b has a magnitude one half of the more common one, possibly with opposite sign. These result in slightly different forms for the solution, but are otherwise equivalent. A number of alternative derivations can be found in the literature. These proofs are simpler than the standard completing the square method, represent interesting applications of other frequently used techniques in algebra, or offer insight into other areas of mathematics. A lesser known quadratic formula, as used in Muller's method, provides the same roots via the equation x = 2 c − b ± b 2 − 4 a c . {\displaystyle x={\frac {2c}{-b\pm {\sqrt {b^{2}-4ac}}}}.} This can be deduced from the standard quadratic formula by Vieta's formulas, which assert that the product of the roots is c/a. It also follows from dividing the quadratic equation by x 2 {\displaystyle x^{2}} giving c x − 2 + b x − 1 + a = 0 , {\displaystyle cx^{-2}+bx^{-1}+a=0,} solving this for x − 1 , {\displaystyle x^{-1},} and then inverting. One property of this form is that it yields one valid root when a = 0, while the other root contains division by zero, because when a = 0, the quadratic equation becomes a linear equation, which has one root. By contrast, in this case, the more common formula has a division by zero for one root and an indeterminate form 0/0 for the other root. 
On the other hand, when c = 0, the more common formula yields two correct roots whereas this form yields the zero root and an indeterminate form 0/0. When neither a nor c is zero, the equality between the standard quadratic formula and Muller's method, 2 c − b − b 2 − 4 a c = − b + b 2 − 4 a c 2 a , {\displaystyle {\frac {2c}{-b-{\sqrt {b^{2}-4ac}}}}={\frac {-b+{\sqrt {b^{2}-4ac}}}{2a}}\,,} can be verified by cross multiplication, and similarly for the other choice of signs. === Reduced quadratic equation === It is sometimes convenient to reduce a quadratic equation so that its leading coefficient is one. This is done by dividing both sides by a, which is always possible since a is non-zero. This produces the reduced quadratic equation: x 2 + p x + q = 0 , {\displaystyle x^{2}+px+q=0,} where p = b/a and q = c/a. This monic polynomial equation has the same solutions as the original. The quadratic formula for the solutions of the reduced quadratic equation, written in terms of its coefficients, is x = − p 2 ± ( p 2 ) 2 − q . {\displaystyle x=-{\frac {p}{2}}\pm {\sqrt {\left({\frac {p}{2}}\right)^{2}-q}}\,.} === Discriminant === In the quadratic formula, the expression underneath the square root sign is called the discriminant of the quadratic equation, and is often represented using an upper case D or an upper case Greek delta: Δ = b 2 − 4 a c . {\displaystyle \Delta =b^{2}-4ac.} A quadratic equation with real coefficients can have either one or two distinct real roots, or two distinct complex roots. In this case the discriminant determines the number and nature of the roots. There are three cases: If the discriminant is positive, then there are two distinct roots − b + Δ 2 a and − b − Δ 2 a , {\displaystyle {\frac {-b+{\sqrt {\Delta }}}{2a}}\quad {\text{and}}\quad {\frac {-b-{\sqrt {\Delta }}}{2a}},} both of which are real numbers. 
For quadratic equations with rational coefficients, if the discriminant is a square number, then the roots are rational—in other cases they may be quadratic irrationals. If the discriminant is zero, then there is exactly one real root − b 2 a , {\displaystyle -{\frac {b}{2a}},} sometimes called a repeated or double root or two equal roots. If the discriminant is negative, then there are no real roots. Rather, there are two distinct (non-real) complex roots − b 2 a + i − Δ 2 a and − b 2 a − i − Δ 2 a , {\displaystyle -{\frac {b}{2a}}+i{\frac {\sqrt {-\Delta }}{2a}}\quad {\text{and}}\quad -{\frac {b}{2a}}-i{\frac {\sqrt {-\Delta }}{2a}},} which are complex conjugates of each other. In these expressions i is the imaginary unit. Thus the roots are distinct if and only if the discriminant is non-zero, and the roots are real if and only if the discriminant is non-negative. === Geometric interpretation === The function f(x) = ax2 + bx + c is a quadratic function. The graph of any quadratic function has the same general shape, which is called a parabola. The location and size of the parabola, and how it opens, depend on the values of a, b, and c. If a > 0, the parabola has a minimum point and opens upward. If a < 0, the parabola has a maximum point and opens downward. The extreme point of the parabola, whether minimum or maximum, corresponds to its vertex. The x-coordinate of the vertex will be located at x = − b 2 a {\displaystyle \scriptstyle x={\tfrac {-b}{2a}}} , and the y-coordinate of the vertex may be found by substituting this x-value into the function. The y-intercept is located at the point (0, c). The solutions of the quadratic equation ax2 + bx + c = 0 correspond to the roots of the function f(x) = ax2 + bx + c, since they are the values of x for which f(x) = 0. If a, b, and c are real numbers and the domain of f is the set of real numbers, then the roots of f are exactly the x-coordinates of the points where the graph touches the x-axis. 
If the discriminant is positive, the graph touches the x-axis at two points; if zero, the graph touches at one point; and if negative, the graph does not touch the x-axis. === Quadratic factorization === The term x − r {\displaystyle x-r} is a factor of the polynomial a x 2 + b x + c {\displaystyle ax^{2}+bx+c} if and only if r is a root of the quadratic equation a x 2 + b x + c = 0. {\displaystyle ax^{2}+bx+c=0.} It follows from the quadratic formula that a x 2 + b x + c = a ( x − − b + b 2 − 4 a c 2 a ) ( x − − b − b 2 − 4 a c 2 a ) . {\displaystyle ax^{2}+bx+c=a\left(x-{\frac {-b+{\sqrt {b^{2}-4ac}}}{2a}}\right)\left(x-{\frac {-b-{\sqrt {b^{2}-4ac}}}{2a}}\right).} In the special case b2 = 4ac where the quadratic has only one distinct root (i.e. the discriminant is zero), the quadratic polynomial can be factored as a x 2 + b x + c = a ( x + b 2 a ) 2 . {\displaystyle ax^{2}+bx+c=a\left(x+{\frac {b}{2a}}\right)^{2}.} === Graphical solution === The solutions of the quadratic equation a x 2 + b x + c = 0 {\displaystyle ax^{2}+bx+c=0} may be deduced from the graph of the quadratic function f ( x ) = a x 2 + b x + c , {\displaystyle f(x)=ax^{2}+bx+c,} which is a parabola. If the parabola intersects the x-axis in two points, there are two real roots, which are the x-coordinates of these two points (also called x-intercept). If the parabola is tangent to the x-axis, there is a double root, which is the x-coordinate of the contact point between the parabola and the x-axis. If the parabola does not intersect the x-axis, there are two complex conjugate roots. Although these roots cannot be visualized on the graph, their real and imaginary parts can be. Let h and k be respectively the x-coordinate and the y-coordinate of the vertex of the parabola (that is, the point with maximal or minimal y-coordinate). The quadratic function may be rewritten y = a ( x − h ) 2 + k . 
{\displaystyle y=a(x-h)^{2}+k.} Let d be the distance between the point of y-coordinate 2k on the axis of the parabola, and a point on the parabola with the same y-coordinate (see the figure; there are two such points, which give the same distance, because of the symmetry of the parabola). Then the real part of the roots is h, and their imaginary part are ±d. That is, the roots are h + i d and h − i d , {\displaystyle h+id\quad {\text{and}}\quad h-id,} or in the case of the example of the figure 5 + 3 i and 5 − 3 i . {\displaystyle 5+3i\quad {\text{and}}\quad 5-3i.} === Avoiding loss of significance === Although the quadratic formula provides an exact solution, the result is not exact if real numbers are approximated during the computation, as usual in numerical analysis, where real numbers are approximated by floating point numbers (called "reals" in many programming languages). In this context, the quadratic formula is not completely stable. This occurs when the roots have different order of magnitude, or, equivalently, when b2 and b2 − 4ac are close in magnitude. In this case, the subtraction of two nearly equal numbers will cause loss of significance or catastrophic cancellation in the smaller root. To avoid this, the root that is smaller in magnitude, r, can be computed as ( c / a ) / R {\displaystyle (c/a)/R} where R is the root that is bigger in magnitude. This is equivalent to using the formula x = − 2 c b ± b 2 − 4 a c {\displaystyle x={\frac {-2c}{b\pm {\sqrt {b^{2}-4ac}}}}} using the plus sign if b > 0 {\displaystyle b>0} and the minus sign if b < 0. {\displaystyle b<0.} A second form of cancellation can occur between the terms b2 and 4ac of the discriminant, that is when the two roots are very close. This can lead to loss of up to half of correct significant figures in the roots. == Examples and applications == The golden ratio is found as the positive solution of the quadratic equation x 2 − x − 1 = 0. 
{\displaystyle x^{2}-x-1=0.} The equations of the circle and the other conic sections—ellipses, parabolas, and hyperbolas—are quadratic equations in two variables. Given the cosine or sine of an angle, finding the cosine or sine of the angle that is half as large involves solving a quadratic equation. The process of simplifying expressions involving the square root of an expression involving the square root of another expression involves finding the two solutions of a quadratic equation. Descartes' theorem states that for every four kissing (mutually tangent) circles, their radii satisfy a particular quadratic equation. The equation given by Fuss' theorem, giving the relation among the radius of a bicentric quadrilateral's inscribed circle, the radius of its circumscribed circle, and the distance between the centers of those circles, can be expressed as a quadratic equation for which the distance between the two circles' centers in terms of their radii is one of the solutions. The other solution of the same equation in terms of the relevant radii gives the distance between the circumscribed circle's center and the center of the excircle of an ex-tangential quadrilateral. Critical points of a cubic function and inflection points of a quartic function are found by solving a quadratic equation. In physics, for motion with constant acceleration a {\displaystyle a} , the displacement or position x {\displaystyle x} of a moving body can be expressed as a quadratic function of time t {\displaystyle t} given the initial position x 0 {\displaystyle x_{0}} and initial velocity v 0 {\displaystyle v_{0}} : x = x 0 + v 0 t + 1 2 a t 2 {\textstyle x=x_{0}+v_{0}t+{\frac {1}{2}}at^{2}} . In chemistry, the pH of a solution of weak acid can be calculated from the negative base-10 logarithm of the positive root of a quadratic equation in terms of the acidity constant and the analytical concentration of the acid. 
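The cancellation-avoiding evaluation described under "Avoiding loss of significance" can be sketched as follows; the function name and the test equation are illustrative choices, not from the text.

```python
# Numerically stable real roots of a*x^2 + b*x + c = 0: compute the
# larger-magnitude root with the sign choice that avoids cancellation,
# then recover the smaller root from the product of the roots (c/a).
import math

def stable_real_roots(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0, avoiding catastrophic cancellation."""
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("complex roots")
    # copysign picks the sign of b so b and sqrt(disc) never nearly cancel
    q = -0.5 * (b + math.copysign(math.sqrt(disc), b))
    r1 = q / a                        # larger-magnitude root
    r2 = c / q if q != 0 else 0.0     # smaller root, since r1 * r2 = c/a
    return r1, r2

# x^2 + 1e8 x + 1 = 0 has roots near -1e8 and -1e-8; the naive formula
# loses the small root to cancellation, this form does not.
r1, r2 = stable_real_roots(1.0, 1e8, 1.0)
print(r1, r2)                         # approximately -1e8 and -1e-8
```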
== History == Babylonian mathematicians, as early as 2000 BC (displayed on Old Babylonian clay tablets) could solve problems relating the areas and sides of rectangles. There is evidence dating this algorithm as far back as the Third Dynasty of Ur. In modern notation, the problems typically involved solving a pair of simultaneous equations of the form: x + y = p , x y = q , {\displaystyle x+y=p,\ \ xy=q,} which is equivalent to the statement that x and y are the roots of the equation:: 86  z 2 + q = p z . {\displaystyle z^{2}+q=pz.} The steps given by Babylonian scribes for solving the above rectangle problem, in terms of x and y, were as follows: Compute half of p. Square the result. Subtract q. Find the (positive) square root using a table of squares. Add together the results of steps (1) and (4) to give x. In modern notation this means calculating x = p 2 + ( p 2 ) 2 − q {\displaystyle x={\frac {p}{2}}+{\sqrt {\left({\frac {p}{2}}\right)^{2}-q}}} , which is equivalent to the modern day quadratic formula for the larger real root (if any) x = − b + b 2 − 4 a c 2 a {\displaystyle x={\frac {-b+{\sqrt {b^{2}-4ac}}}{2a}}} with a = 1, b = −p, and c = q. Geometric methods were used to solve quadratic equations in Babylonia, Egypt, Greece, China, and India. The Egyptian Berlin Papyrus, dating back to the Middle Kingdom (2050 BC to 1650 BC), contains the solution to a two-term quadratic equation. Babylonian mathematicians from circa 400 BC and Chinese mathematicians from circa 200 BC used geometric methods of dissection to solve quadratic equations with positive roots. Rules for quadratic equations were given in The Nine Chapters on the Mathematical Art, a Chinese treatise on mathematics. These early geometric methods do not appear to have had a general formula. Euclid, the Greek mathematician, produced a more abstract geometrical method around 300 BC. 
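The Babylonian recipe quoted above translates directly into code; the worked numbers p = 10, q = 21 are an invented illustration.

```python
# The Babylonian scribes' procedure for the system x + y = p, x*y = q,
# transcribed step by step.
import math

def babylonian_solve(p, q):
    """Follow the scribes' recipe; returns the larger value x, then y = p - x."""
    half = p / 2             # (1) compute half of p
    sq = half * half         # (2) square the result
    diff = sq - q            # (3) subtract q
    root = math.sqrt(diff)   # (4) find the (positive) square root
    x = half + root          # (5) add the results of steps (1) and (4)
    return x, p - x

# A pair with sum 10 and product 21 is 7 and 3.
print(babylonian_solve(10, 21))       # (7.0, 3.0)
```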
With a purely geometric approach, Pythagoras and Euclid created a general procedure to find solutions of the quadratic equation. In his work Arithmetica, the Greek mathematician Diophantus solved the quadratic equation, but gave only one root, even when both roots were positive. In 628 AD, Brahmagupta, an Indian mathematician, gave in his book Brāhmasphuṭasiddhānta the first explicit (although still not completely general) solution of the quadratic equation ax2 + bx = c as follows: "To the absolute number multiplied by four times the [coefficient of the] square, add the square of the [coefficient of the] middle term; the square root of the same, less the [coefficient of the] middle term, being divided by twice the [coefficient of the] square is the value." This is equivalent to x = 4 a c + b 2 − b 2 a . {\displaystyle x={\frac {{\sqrt {4ac+b^{2}}}-b}{2a}}.} The Bakhshali Manuscript written in India in the 7th century AD contained an algebraic formula for solving quadratic equations, as well as linear indeterminate equations (originally of type ax/c = y). Muhammad ibn Musa al-Khwarizmi (9th century) developed a set of formulas that worked for positive solutions. Al-Khwarizmi goes further in providing a full solution to the general quadratic equation, accepting one or two numerical answers for every quadratic equation, while providing geometric proofs in the process. He also described the method of completing the square and recognized that the discriminant must be positive,: 230  which was proven by his contemporary 'Abd al-Hamīd ibn Turk (Central Asia, 9th century) who gave geometric figures to prove that if the discriminant is negative, a quadratic equation has no solution.: 234  While al-Khwarizmi himself did not accept negative solutions, later Islamic mathematicians who succeeded him accepted negative solutions,: 191  as well as irrational numbers as solutions. 
Abū Kāmil Shujā ibn Aslam (Egypt, 10th century) in particular was the first to accept irrational numbers (often in the form of a square root, cube root or fourth root) as solutions to quadratic equations or as coefficients in an equation. The 9th century Indian mathematician Sridhara wrote down rules for solving quadratic equations. The Jewish mathematician Abraham bar Hiyya Ha-Nasi (12th century, Spain) authored the first European book to include the full solution to the general quadratic equation. His solution was largely based on Al-Khwarizmi's work. The writing of the Chinese mathematician Yang Hui (1238–1298 AD) is the first known one in which quadratic equations with negative coefficients of 'x' appear, although he attributes this to the earlier Liu Yi. By 1545 Gerolamo Cardano compiled the works related to the quadratic equations. The quadratic formula covering all cases was first obtained by Simon Stevin in 1594. In 1637 René Descartes published La Géométrie containing the quadratic formula in the form we know today. == Advanced topics == === Alternative methods of root calculation === ==== Vieta's formulas ==== Vieta's formulas (named after François Viète) are the relations x 1 + x 2 = − b a , x 1 x 2 = c a {\displaystyle x_{1}+x_{2}=-{\frac {b}{a}},\quad x_{1}x_{2}={\frac {c}{a}}} between the roots of a quadratic polynomial and its coefficients. They result from comparing term by term the relation ( x − x 1 ) ( x − x 2 ) = x 2 − ( x 1 + x 2 ) x + x 1 x 2 = 0 {\displaystyle \left(x-x_{1}\right)\left(x-x_{2}\right)=x^{2}-\left(x_{1}+x_{2}\right)x+x_{1}x_{2}=0} with the equation x 2 + b a x + c a = 0. {\displaystyle x^{2}+{\frac {b}{a}}x+{\frac {c}{a}}=0.} The first Vieta's formula is useful for graphing a quadratic function. Since the graph is symmetric with respect to a vertical line through the vertex, the vertex's x-coordinate is located at the average of the roots (or intercepts). Thus the x-coordinate of the vertex is x V = x 1 + x 2 2 = − b 2 a . 
{\displaystyle x_{V}={\frac {x_{1}+x_{2}}{2}}=-{\frac {b}{2a}}.} The y-coordinate can be obtained by substituting the above result into the given quadratic equation, giving y V = − b 2 4 a + c = − b 2 − 4 a c 4 a . {\displaystyle y_{V}=-{\frac {b^{2}}{4a}}+c=-{\frac {b^{2}-4ac}{4a}}.} Also, these formulas for the vertex can be deduced directly from the formula (see Completing the square) a x 2 + b x + c = a ( x + b 2 a ) 2 − b 2 − 4 a c 4 a . {\displaystyle ax^{2}+bx+c=a\left(x+{\frac {b}{2a}}\right)^{2}-{\frac {b^{2}-4ac}{4a}}.} For numerical computation, Vieta's formulas provide a useful method for finding the roots of a quadratic equation in the case where one root is much smaller than the other. If |x2| << |x1|, then x1 + x2 ≈ x1, and we have the estimate: x 1 ≈ − b a . {\displaystyle x_{1}\approx -{\frac {b}{a}}.} The second Vieta's formula then provides: x 2 = c a x 1 ≈ − c b . {\displaystyle x_{2}={\frac {c}{ax_{1}}}\approx -{\frac {c}{b}}.} These formulas are much easier to evaluate than the quadratic formula under the condition of one large and one small root, because the quadratic formula evaluates the small root as the difference of two very nearly equal numbers (the case of large b), which causes round-off error in a numerical evaluation. The figure shows the difference between (i) a direct evaluation using the quadratic formula (accurate when the roots are near each other in value) and (ii) an evaluation based upon the above approximation of Vieta's formulas (accurate when the roots are widely spaced). As the linear coefficient b increases, initially the quadratic formula is accurate, and the approximate formula improves in accuracy, leading to a smaller difference between the methods as b increases. However, at some point the quadratic formula begins to lose accuracy because of round off error, while the approximate method continues to improve. 
Consequently, the difference between the methods begins to increase as the quadratic formula becomes worse and worse. This situation arises commonly in amplifier design, where widely separated roots are desired to ensure a stable operation (see Step response). ==== Trigonometric solution ==== In the days before calculators, people would use mathematical tables—lists of numbers showing the results of calculation with varying arguments—to simplify and speed up computation. Tables of logarithms and trigonometric functions were common in math and science textbooks. Specialized tables were published for applications such as astronomy, celestial navigation and statistics. Methods of numerical approximation existed, called prosthaphaeresis, that offered shortcuts around time-consuming operations such as multiplication and taking powers and roots. Astronomers, especially, were concerned with methods that could speed up the long series of computations involved in celestial mechanics calculations. It is within this context that we may understand the development of means of solving quadratic equations by the aid of trigonometric substitution. Consider the following alternate form of the quadratic equation, where the sign of the ± symbol is chosen so that a and c may both be positive. By substituting and then multiplying through by cos2(θ) / c, we obtain Introducing functions of 2θ and rearranging, we obtain where the subscripts n and p correspond, respectively, to the use of a negative or positive sign in equation [1]. Substituting the two values of θn or θp found from equations [4] or [5] into [2] gives the required roots of [1]. Complex roots occur in the solution based on equation [5] if the absolute value of sin 2θp exceeds unity. The amount of effort involved in solving quadratic equations using this mixed trigonometric and logarithmic table look-up strategy was two-thirds the effort using logarithmic tables alone. 
Calculating complex roots would require using a different trigonometric form. To illustrate, let us assume we had available seven-place logarithm and trigonometric tables, and wished to solve the following to six-significant-figure accuracy: 4.16130 x 2 + 9.15933 x − 11.4207 = 0 {\displaystyle 4.16130x^{2}+9.15933x-11.4207=0} A seven-place lookup table might have only 100,000 entries, and computing intermediate results to seven places would generally require interpolation between adjacent entries. log ⁡ a = 0.6192290 , log ⁡ b = 0.9618637 , log ⁡ c = 1.0576927 {\displaystyle \log a=0.6192290,\log b=0.9618637,\log c=1.0576927} 2 a c / b = 2 × 10 ( 0.6192290 + 1.0576927 ) / 2 − 0.9618637 = 1.505314 {\displaystyle 2{\sqrt {ac}}/b=2\times 10^{(0.6192290+1.0576927)/2-0.9618637}=1.505314} θ = ( tan − 1 ⁡ 1.505314 ) / 2 = 28.20169 ∘ or − 61.79831 ∘ {\displaystyle \theta =(\tan ^{-1}1.505314)/2=28.20169^{\circ }{\text{ or }}-61.79831^{\circ }} log ⁡ | tan ⁡ θ | = − 0.2706462 or 0.2706462 {\displaystyle \log |\tan \theta |=-0.2706462{\text{ or }}0.2706462} log ⁡ c / a = ( 1.0576927 − 0.6192290 ) / 2 = 0.2192318 {\displaystyle \log {\textstyle {\sqrt {c/a}}}=(1.0576927-0.6192290)/2=0.2192318} x 1 = 10 0.2192318 − 0.2706462 = 0.888353 {\displaystyle x_{1}=10^{0.2192318-0.2706462}=0.888353} (rounded to six significant figures) x 2 = − 10 0.2192318 + 0.2706462 = − 3.08943 {\displaystyle x_{2}=-10^{0.2192318+0.2706462}=-3.08943} ==== Solution for complex roots in polar coordinates ==== If the quadratic equation a x 2 + b x + c = 0 {\displaystyle ax^{2}+bx+c=0} with real coefficients has two complex roots—the case where b 2 − 4 a c < 0 , {\displaystyle b^{2}-4ac<0,} requiring a and c to have the same sign as each other—then the solutions for the roots can be expressed in polar form as x 1 , x 2 = r ( cos ⁡ θ ± i sin ⁡ θ ) , {\displaystyle x_{1},\,x_{2}=r(\cos \theta \pm i\sin \theta ),} where r = c a {\displaystyle r={\sqrt {\tfrac {c}{a}}}} and θ = cos − 1 ⁡ ( − b 2 a c ) . 
{\displaystyle \theta =\cos ^{-1}\left({\tfrac {-b}{2{\sqrt {ac}}}}\right).} ==== Geometric solution ==== The quadratic equation may be solved geometrically in a number of ways. One way is via Lill's method. The three coefficients a, b, c are drawn with right angles between them as in SA, AB, and BC in Figure 6. A circle is drawn with the start and end point SC as a diameter. If this cuts the middle line AB of the three, then the equation has a solution, and the solutions are given by the negative of the distance along this line from A divided by the first coefficient a or SA. If a is 1, the coefficients may be read off directly. Thus the solutions in the diagram are −AX1/SA and −AX2/SA. The Carlyle circle, named after Thomas Carlyle, has the property that the solutions of the quadratic equation are the horizontal coordinates of the intersections of the circle with the horizontal axis. Carlyle circles have been used to develop ruler-and-compass constructions of regular polygons. === Generalization of quadratic equation === The formula and its derivation remain correct if the coefficients a, b and c are complex numbers, or more generally members of any field whose characteristic is not 2. (In a field of characteristic 2, the element 2a is zero and it is impossible to divide by it.) The symbol ± b 2 − 4 a c {\displaystyle \pm {\sqrt {b^{2}-4ac}}} in the formula should be understood as "either of the two elements whose square is b2 − 4ac, if such elements exist". In some fields, some elements have no square roots and some have two; only zero has just one square root, except in fields of characteristic 2. Even if a field does not contain a square root of some number, there is always a quadratic extension field which does, so the quadratic formula will always make sense as a formula in that extension field. ==== Characteristic 2 ==== In a field of characteristic 2, the quadratic formula, which relies on 2 being a unit, does not hold.
Consider the monic quadratic polynomial x 2 + b x + c {\displaystyle x^{2}+bx+c} over a field of characteristic 2. If b = 0, then the solution reduces to extracting a square root, so the solution is x = c {\displaystyle x={\sqrt {c}}} and there is only one root since − c = − c + 2 c = c . {\displaystyle -{\sqrt {c}}=-{\sqrt {c}}+2{\sqrt {c}}={\sqrt {c}}.} In summary, x 2 + c = ( x + c ) 2 . {\displaystyle \displaystyle x^{2}+c=(x+{\sqrt {c}})^{2}.} See quadratic residue for more information about extracting square roots in finite fields. In the case that b ≠ 0, there are two distinct roots, but if the polynomial is irreducible, they cannot be expressed in terms of square roots of numbers in the coefficient field. Instead, define the 2-root R(c) of c to be a root of the polynomial x2 + x + c, an element of the splitting field of that polynomial. One verifies that R(c) + 1 is also a root. In terms of the 2-root operation, the two roots of the (non-monic) quadratic ax2 + bx + c are b a R ( a c b 2 ) {\displaystyle {\frac {b}{a}}R\left({\frac {ac}{b^{2}}}\right)} and b a ( R ( a c b 2 ) + 1 ) . {\displaystyle {\frac {b}{a}}\left(R\left({\frac {ac}{b^{2}}}\right)+1\right).} For example, let a denote a multiplicative generator of the group of units of F4, the Galois field of order four (thus a and a + 1 are roots of x2 + x + 1 over F4). Because (a + 1)2 = a, a + 1 is the unique solution of the quadratic equation x2 + a = 0. On the other hand, the polynomial x2 + ax + 1 is irreducible over F4, but it splits over F16, where it has the two roots ab and ab + a, where b is a root of x2 + x + a in F16. This is a special case of Artin–Schreier theory. == See also == Solving quadratic equations with continued fractions Linear equation Cubic function Quartic equation Quintic equation Fundamental theorem of algebra == References == == External links == "Quadratic equation", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Weisstein, Eric W. "Quadratic equations". MathWorld.
101 uses of a quadratic equation Archived 2007-11-10 at the Wayback Machine 101 uses of a quadratic equation: Part II Archived 2007-10-22 at the Wayback Machine
Wikipedia/Quadratic_equations
In economics, an inverse demand function is the mathematical relationship that expresses price as a function of quantity demanded (it is therefore also known as a price function). Historically, economists first expressed the price of a good as a function of demand (holding the other economic variables, like income, constant), and plotted the price-demand relationship with demand on the x (horizontal) axis (the demand curve). Later, additional variables, such as the prices of other goods, entered the analysis, and it became more convenient to express the demand as a multivariate function (the demand function): d e m a n d = f ( p r i c e , i n c o m e , . . . ) {\displaystyle {demand}=f({price},{income},...)} , so the original demand curve now depicts the inverse demand function p r i c e = f − 1 ( d e m a n d ) {\displaystyle {price}=f^{-1}({demand})} with extra variables fixed. == Definition == In mathematical terms, if the demand function is d e m a n d = f ( p r i c e ) {\displaystyle {demand}=f({price})} , then the inverse demand function is p r i c e = f − 1 ( d e m a n d ) {\displaystyle {price}=f^{-1}({demand})} . The value of the inverse demand function is the highest price that could be charged and still generate the quantity demanded. This is useful because economists typically place price (P) on the vertical axis and quantity (demand, Q) on the horizontal axis in supply-and-demand diagrams, so it is the inverse demand function that depicts the graphed demand curve in the way the reader expects to see. The inverse demand function is the same as the average revenue function, since P = AR. To compute the inverse demand function, simply solve for P from the demand function. For example, if the demand function has the form Q = 240 − 2 P {\displaystyle Q=240-2P} then the inverse demand function would be P = 120 − 1 2 Q {\displaystyle P=120-{\frac {1}{2}}Q} .
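The inversion in this example can be sketched in a few lines of plain Python; the function names are illustrative, not standard terminology:

```python
# Demand and inverse demand for the running example Q = 240 - 2P.
# Solving Q = 240 - 2P for P gives the inverse demand P = 120 - Q/2.

def demand(price):
    return 240 - 2 * price

def inverse_demand(quantity):
    return 120 - quantity / 2

# Round trip: at a price of 50, 140 units are demanded, and 50 is the
# highest price at which 140 units would still be demanded.
q = demand(50)
p = inverse_demand(q)
```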
Note that although price is the dependent variable in the inverse demand function, it is still the case that the equation represents how the price determines the quantity demanded, not the reverse. == Relation to marginal revenue == There is a close relationship between any inverse demand function for a linear demand equation and the marginal revenue function. For any linear demand function with an inverse demand equation of the form P = a - bQ, the marginal revenue function has the form MR = a - 2bQ. The inverse linear demand function and the marginal revenue function derived from it have the following characteristics: Both functions are linear. The marginal revenue function and inverse demand function have the same y intercept. The x intercept of the marginal revenue function is one-half the x intercept of the inverse demand function. The marginal revenue function has twice the slope of the inverse demand function. The marginal revenue function is below the inverse demand function at every positive quantity. The inverse demand function can be used to derive the total and marginal revenue functions. Total revenue equals price, P, times quantity, Q, or TR = P×Q. Multiply the inverse demand function by Q to derive the total revenue function: T R = ( 120 − 1 2 Q ) ⋅ Q = 120 Q − 1 2 Q 2 {\displaystyle TR=(120-{\frac {1}{2}}Q)\cdot Q=120Q-{\frac {1}{2}}Q^{2}} . The marginal revenue function is the first derivative of the total revenue function or MR = 120 - Q. Note that in this linear example the MR function has the same y-intercept as the inverse demand function, the x-intercept of the MR function is one-half the value of the demand function, and the slope of the MR function is twice that of the inverse demand function. This relationship holds true for all linear demand equations. 
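These properties of the linear example can be verified with a short sketch, using the running example P = 120 − Q/2 (so TR = 120Q − Q²/2 and MR = 120 − Q, as derived above):

```python
# Inverse demand, total revenue, and marginal revenue for P = 120 - Q/2.

def inverse_demand(q):
    return 120 - 0.5 * q

def total_revenue(q):
    return inverse_demand(q) * q          # TR = P * Q = 120Q - Q^2/2

def marginal_revenue(q):
    return 120 - 1.0 * q                  # d(TR)/dQ: same intercept, twice the slope

# Same y-intercept; MR's x-intercept (Q = 120) is half of demand's (Q = 240).
same_intercept = marginal_revenue(0) == inverse_demand(0) == 120
half_x_intercept = marginal_revenue(120) == 0 and inverse_demand(240) == 0
```

A central-difference derivative of `total_revenue` reproduces `marginal_revenue` at any quantity, which is a quick numerical check of the claimed slope relationship.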
The importance of being able to quickly calculate MR is that the profit-maximizing condition for firms, regardless of market structure, is to produce where marginal revenue equals marginal cost (MC). To derive MC, the first derivative of the total cost function is taken. For example, assume cost, C, equals 420 + 60Q + Q2; then MC = 60 + 2Q. Equating MR to MC and solving for Q gives Q = 20. So 20 is the profit-maximizing quantity: to find the profit-maximizing price, simply plug the value of Q into the inverse demand equation and solve for P. == See also == Hicksian demand function Marshallian demand function Excess demand function Supply and demand Demand Law of demand Profit (economics) == References == == Further reading == Ryan, W. J. L.; Pearce, D. W. (1977). "Demand Functions". Price Theory. London: Macmillan Education UK. pp. 31–69. doi:10.1007/978-1-349-17334-1_2. ISBN 978-0-333-17913-0.
Wikipedia/Demand_function
In mathematics, the domain of a function is the set of inputs accepted by the function. It is sometimes denoted by dom ⁡ ( f ) {\displaystyle \operatorname {dom} (f)} or dom ⁡ f {\displaystyle \operatorname {dom} f} , where f is the function. In layman's terms, the domain of a function can generally be thought of as "what x can be". More precisely, given a function f : X → Y {\displaystyle f\colon X\to Y} , the domain of f is X. In modern mathematical language, the domain is part of the definition of a function rather than a property of it. In the special case that X and Y are both sets of real numbers, the function f can be graphed in the Cartesian coordinate system. In this case, the domain is represented on the x-axis of the graph, as the projection of the graph of the function onto the x-axis. For a function f : X → Y {\displaystyle f\colon X\to Y} , the set Y is called the codomain: the set to which all outputs must belong. The set of specific outputs the function assigns to elements of X is called its range or image. The image of f is a subset of Y, shown as the yellow oval in the accompanying diagram. Any function can be restricted to a subset of its domain. The restriction of f : X → Y {\displaystyle f\colon X\to Y} to A {\displaystyle A} , where A ⊆ X {\displaystyle A\subseteq X} , is written as f | A : A → Y {\displaystyle \left.f\right|_{A}\colon A\to Y} . == Natural domain == If a real function f is given by a formula, it may be not defined for some values of the variable. In this case, it is a partial function, and the set of real numbers on which the formula can be evaluated to a real number is called the natural domain or domain of definition of f. In many contexts, a partial function is called simply a function, and its natural domain is called simply its domain. === Examples === The function f {\displaystyle f} defined by f ( x ) = 1 x {\displaystyle f(x)={\frac {1}{x}}} cannot be evaluated at 0. 
Therefore, the natural domain of f {\displaystyle f} is the set of real numbers excluding 0, which can be denoted by R ∖ { 0 } {\displaystyle \mathbb {R} \setminus \{0\}} or { x ∈ R : x ≠ 0 } {\displaystyle \{x\in \mathbb {R} :x\neq 0\}} . The piecewise function f {\displaystyle f} defined by f ( x ) = { 1 / x x ≠ 0 0 x = 0 , {\displaystyle f(x)={\begin{cases}1/x&x\not =0\\0&x=0\end{cases}},} has as its natural domain the set R {\displaystyle \mathbb {R} } of real numbers. The square root function f ( x ) = x {\displaystyle f(x)={\sqrt {x}}} has as its natural domain the set of non-negative real numbers, which can be denoted by R ≥ 0 {\displaystyle \mathbb {R} _{\geq 0}} , the interval [ 0 , ∞ ) {\displaystyle [0,\infty )} , or { x ∈ R : x ≥ 0 } {\displaystyle \{x\in \mathbb {R} :x\geq 0\}} . The tangent function, denoted tan {\displaystyle \tan } , has as its natural domain the set of all real numbers which are not of the form π 2 + k π {\displaystyle {\tfrac {\pi }{2}}+k\pi } for some integer k {\displaystyle k} , which can be written as R ∖ { π 2 + k π : k ∈ Z } {\displaystyle \mathbb {R} \setminus \{{\tfrac {\pi }{2}}+k\pi :k\in \mathbb {Z} \}} . == Other uses == The term domain is also commonly used in a different sense in mathematical analysis: a domain is a non-empty connected open set in a topological space. In particular, in real and complex analysis, a domain is a non-empty connected open subset of the real coordinate space R n {\displaystyle \mathbb {R} ^{n}} or the complex coordinate space C n . {\displaystyle \mathbb {C} ^{n}.} Sometimes such a domain is used as the domain of a function, although functions may be defined on more general sets. 
The two concepts are sometimes conflated as in, for example, the study of partial differential equations: in that case, a domain is the open connected subset of R n {\displaystyle \mathbb {R} ^{n}} where a problem is posed, making it both an analysis-style domain and also the domain of the unknown function(s) sought. == Set theoretical notions == For example, it is sometimes convenient in set theory to permit the domain of a function to be a proper class X, in which case there is formally no such thing as a triple (X, Y, G). With such a definition, functions do not have a domain, although some authors still use it informally after introducing a function in the form f: X → Y. == See also == Argument of a function Attribute domain Bijection, injection and surjection Codomain Domain decomposition Effective domain Endofunction Image (mathematics) Lipschitz domain Naive set theory Range of a function Support (mathematics) == Notes == == References == Bourbaki, Nicolas (1970). Théorie des ensembles. Éléments de mathématique. Springer. ISBN 9783540340348. Eccles, Peter J. (11 December 1997). An Introduction to Mathematical Reasoning: Numbers, Sets and Functions. Cambridge University Press. ISBN 978-0-521-59718-0. Mac Lane, Saunders (25 September 1998). Categories for the Working Mathematician. Springer Science & Business Media. ISBN 978-0-387-98403-2. Scott, Dana S.; Jech, Thomas J. (31 December 1971). Axiomatic Set Theory, Part 1. American Mathematical Soc. ISBN 978-0-8218-0245-8. Sharma, A. K. (2010). Introduction To Set Theory. Discovery Publishing House. ISBN 978-81-7141-877-0. Stewart, Ian; Tall, David (1977). The Foundations of Mathematics. Oxford University Press. ISBN 978-0-19-853165-4.
Wikipedia/Function_domain
In multivariable calculus, an iterated integral is the result of applying integrals to a function of more than one variable (for example f ( x , y ) {\displaystyle f(x,y)} or f ( x , y , z ) {\displaystyle f(x,y,z)} ) in such a way that each of the integrals considers some of the variables as given constants. For example, the function f ( x , y ) {\displaystyle f(x,y)} , if y {\displaystyle y} is considered a given parameter, can be integrated with respect to x {\displaystyle x} , ∫ f ( x , y ) d x {\textstyle \int f(x,y)\,dx} . The result is a function of y {\displaystyle y} and therefore its integral can be considered. If this is done, the result is the iterated integral ∫ ( ∫ f ( x , y ) d x ) d y . {\displaystyle \int \left(\int f(x,y)\,dx\right)\,dy.} It is key for the notion of iterated integrals that this is different, in principle, from the multiple integral ∬ f ( x , y ) d x d y . {\displaystyle \iint f(x,y)\,dx\,dy.} In general, although these two can be different, Fubini's theorem states that under specific conditions, they are equivalent. The alternative notation for iterated integrals ∫ d y ∫ d x f ( x , y ) {\displaystyle \int dy\int dx\,f(x,y)} is also used. In the notation that uses parentheses, iterated integrals are computed following the operational order indicated by the parentheses, starting from the innermost integral and working outward. In the alternative notation, writing ∫ d y ∫ d x f ( x , y ) {\textstyle \int dy\,\int dx\,f(x,y)} , the innermost integral is computed first. == Examples == === A simple computation === For the iterated integral ∫ ( ∫ ( x + y ) d x ) d y {\displaystyle \int \left(\int (x+y)\,dx\right)\,dy} the integral ∫ ( x + y ) d x = x 2 2 + y x {\displaystyle \int (x+y)\,dx={\frac {x^{2}}{2}}+yx} is computed first and then the result is used to compute the integral with respect to y.
∫ ( x 2 2 + y x ) d y = y x 2 2 + x y 2 2 {\displaystyle \int \left({\frac {x^{2}}{2}}+yx\right)\,dy={\frac {yx^{2}}{2}}+{\frac {xy^{2}}{2}}} This example omits the constants of integration. After the first integration with respect to x, we would rigorously need to introduce a "constant" function of y. That is, ∫ ( x + y ) d x = x 2 2 + y x + g ( y ) . {\displaystyle \int (x+y)\,dx={\frac {x^{2}}{2}}+yx+g(y).} If we were to differentiate this function with respect to x, any terms containing only y would vanish, leaving the original integrand. Similarly for the second integral, we would introduce a "constant" function of x, because we have integrated with respect to y. In this way, indefinite integration does not make very much sense for functions of several variables. === Lack of commutativity === The order in which the integrals are computed is important in iterated integrals, particularly when the integrand is not continuous on the domain of integration. Examples in which the different orders lead to different results are usually for complicated functions, such as the one that follows. Define the sequence a 0 = 0 < a 1 < a 2 < ⋯ {\displaystyle a_{0}=0<a_{1}<a_{2}<\cdots } such that a n → 1 {\displaystyle a_{n}\to 1} . Let g n {\displaystyle g_{n}} be a sequence of continuous functions not vanishing in the interval ( a n , a n + 1 ) {\displaystyle (a_{n},a_{n+1})} and zero elsewhere, such that ∫ 0 1 g n = 1 {\textstyle \int _{0}^{1}g_{n}=1} for every n {\displaystyle n} . Define f ( x , y ) = ∑ n = 0 ∞ ( g n ( x ) − g n + 1 ( x ) ) g n ( y ) . {\displaystyle f(x,y)=\sum _{n=0}^{\infty }\left(g_{n}(x)-g_{n+1}(x)\right)g_{n}(y).} In the previous sum, at each specific ( x , y ) {\displaystyle (x,y)} , at most one term is different from zero.
For this function it happens that ∫ 0 1 ( ∫ 0 1 f ( x , y ) d y ) d x = ∫ 0 a 1 ( ∫ 0 a 1 g 0 ( x ) g 0 ( y ) d y ) d x = 1 ≠ 0 = ∫ 0 1 0 d y = ∫ 0 1 ( ∫ 0 1 f ( x , y ) d x ) d y {\displaystyle \int _{0}^{1}\left(\int _{0}^{1}f(x,y)\,dy\right)\,dx=\int _{0}^{a_{1}}\left(\int _{0}^{a_{1}}g_{0}(x)g_{0}(y)\,dy\right)\,dx=1\neq 0=\int _{0}^{1}0\,dy=\int _{0}^{1}\left(\int _{0}^{1}f(x,y)\,dx\right)\,dy} == See also == Fubini's theorem – Conditions for switching order of integration in calculus == References ==
Wikipedia/Iterated_integral
The conjugate residual method is an iterative numerical method used for solving systems of linear equations. It is a Krylov subspace method very similar to the much more popular conjugate gradient method, with similar construction and convergence properties. This method is used to solve linear equations of the form A x = b {\displaystyle \mathbf {A} \mathbf {x} =\mathbf {b} } where A is an invertible and Hermitian matrix, and b is nonzero. The conjugate residual method differs from the closely related conjugate gradient method: it involves more numerical operations and requires more storage. Given an (arbitrary) initial estimate of the solution x 0 {\displaystyle \mathbf {x} _{0}} , the method is outlined below: x 0 := Some initial guess r 0 := b − A x 0 p 0 := r 0 Iterate, with k starting at 0 : α k := r k T A r k ( A p k ) T A p k x k + 1 := x k + α k p k r k + 1 := r k − α k A p k β k := r k + 1 T A r k + 1 r k T A r k p k + 1 := r k + 1 + β k p k A p k + 1 := A r k + 1 + β k A p k k := k + 1 {\displaystyle {\begin{aligned}&\mathbf {x} _{0}:={\text{Some initial guess}}\\&\mathbf {r} _{0}:=\mathbf {b} -\mathbf {Ax} _{0}\\&\mathbf {p} _{0}:=\mathbf {r} _{0}\\&{\text{Iterate, with }}k{\text{ starting at }}0:\\&\qquad \alpha _{k}:={\frac {\mathbf {r} _{k}^{\mathrm {T} }\mathbf {Ar} _{k}}{(\mathbf {Ap} _{k})^{\mathrm {T} }\mathbf {Ap} _{k}}}\\&\qquad \mathbf {x} _{k+1}:=\mathbf {x} _{k}+\alpha _{k}\mathbf {p} _{k}\\&\qquad \mathbf {r} _{k+1}:=\mathbf {r} _{k}-\alpha _{k}\mathbf {Ap} _{k}\\&\qquad \beta _{k}:={\frac {\mathbf {r} _{k+1}^{\mathrm {T} }\mathbf {Ar} _{k+1}}{\mathbf {r} _{k}^{\mathrm {T} }\mathbf {Ar} _{k}}}\\&\qquad \mathbf {p} _{k+1}:=\mathbf {r} _{k+1}+\beta _{k}\mathbf {p} _{k}\\&\qquad \mathbf {Ap} _{k+1}:=\mathbf {Ar} _{k+1}+\beta _{k}\mathbf {Ap} _{k}\\&\qquad k:=k+1\end{aligned}}} the iteration may be stopped once x k {\displaystyle \mathbf {x} _{k}} has been deemed converged.
The only difference between this and the conjugate gradient method is the calculation of α k {\displaystyle \alpha _{k}} and β k {\displaystyle \beta _{k}} (plus the optional incremental calculation of A p k {\displaystyle \mathbf {Ap} _{k}} at the end). Note: the above algorithm can be transformed so as to make only one symmetric matrix-vector multiplication in each iteration. == Preconditioning == By making a few substitutions and variable changes, a preconditioned conjugate residual method may be derived in the same way as done for the conjugate gradient method: x 0 := Some initial guess r 0 := M − 1 ( b − A x 0 ) p 0 := r 0 Iterate, with k starting at 0 : α k := r k T A r k ( A p k ) T M − 1 A p k x k + 1 := x k + α k p k r k + 1 := r k − α k M − 1 A p k β k := r k + 1 T A r k + 1 r k T A r k p k + 1 := r k + 1 + β k p k A p k + 1 := A r k + 1 + β k A p k k := k + 1 {\displaystyle {\begin{aligned}&\mathbf {x} _{0}:={\text{Some initial guess}}\\&\mathbf {r} _{0}:=\mathbf {M} ^{-1}(\mathbf {b} -\mathbf {Ax} _{0})\\&\mathbf {p} _{0}:=\mathbf {r} _{0}\\&{\text{Iterate, with }}k{\text{ starting at }}0:\\&\qquad \alpha _{k}:={\frac {\mathbf {r} _{k}^{\mathrm {T} }\mathbf {A} \mathbf {r} _{k}}{(\mathbf {Ap} _{k})^{\mathrm {T} }\mathbf {M} ^{-1}\mathbf {Ap} _{k}}}\\&\qquad \mathbf {x} _{k+1}:=\mathbf {x} _{k}+\alpha _{k}\mathbf {p} _{k}\\&\qquad \mathbf {r} _{k+1}:=\mathbf {r} _{k}-\alpha _{k}\mathbf {M} ^{-1}\mathbf {Ap} _{k}\\&\qquad \beta _{k}:={\frac {\mathbf {r} _{k+1}^{\mathrm {T} }\mathbf {A} \mathbf {r} _{k+1}}{\mathbf {r} _{k}^{\mathrm {T} }\mathbf {A} \mathbf {r} _{k}}}\\&\qquad \mathbf {p} _{k+1}:=\mathbf {r} _{k+1}+\beta _{k}\mathbf {p} _{k}\\&\qquad \mathbf {Ap} _{k+1}:=\mathbf {A} \mathbf {r} _{k+1}+\beta _{k}\mathbf {Ap} _{k}\\&\qquad k:=k+1\\\end{aligned}}} The preconditioner M − 1 {\displaystyle \mathbf {M} ^{-1}} must be symmetric positive definite. Note that the residual vector here is different from the residual vector without preconditioning.
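As an illustrative sketch (not taken from the reference), the unpreconditioned iteration above translates directly into NumPy; the small symmetric test system at the bottom is an arbitrary example:

```python
import numpy as np

def conjugate_residual(A, b, x0, tol=1e-10, max_iter=1000):
    # Sketch of the unpreconditioned conjugate residual iteration;
    # variable names follow the pseudocode.  A must be symmetric.
    x = x0.astype(float).copy()
    r = b - A @ x
    p = r.copy()
    Ar = A @ r
    Ap = Ar.copy()
    rAr = r @ Ar
    for _ in range(max_iter):
        alpha = rAr / (Ap @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r
        rAr_new = r @ Ar
        beta = rAr_new / rAr
        p = r + beta * p
        Ap = Ar + beta * Ap   # incremental update avoids a second matvec
        rAr = rAr_new
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_residual(A, b, np.zeros(2))
```

Note the incremental update of `Ap`, which is the "optional incremental calculation" mentioned above: each iteration then needs only the single matrix-vector product `A @ r`.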
== References == Yousef Saad, Iterative methods for sparse linear systems (2nd ed.), page 194, SIAM. ISBN 978-0-89871-534-7.
Wikipedia/Conjugate_residual_method
In machine learning and statistics, the learning rate is a tuning parameter in an optimization algorithm that determines the step size at each iteration while moving toward a minimum of a loss function. Since it influences to what extent newly acquired information overrides old information, it metaphorically represents the speed at which a machine learning model "learns". In the adaptive control literature, the learning rate is commonly referred to as gain. In setting a learning rate, there is a trade-off between the rate of convergence and overshooting. While the descent direction is usually determined from the gradient of the loss function, the learning rate determines how big a step is taken in that direction. Too high a learning rate will make the learning jump over minima, while too low a learning rate will either take too long to converge or get stuck in an undesirable local minimum. To achieve faster convergence and to prevent oscillations and getting stuck in undesirable local minima, the learning rate is often varied during training, either in accordance with a learning rate schedule or by using an adaptive learning rate. The learning rate and its adjustments may also differ per parameter, in which case it is a diagonal matrix that can be interpreted as an approximation to the inverse of the Hessian matrix in Newton's method. The learning rate is related to the step length determined by inexact line search in quasi-Newton methods and related optimization algorithms. == Learning rate schedule == The initial rate can be left as a system default or can be selected using a range of techniques. A learning rate schedule changes the learning rate during learning and is most often changed between epochs/iterations. This is mainly done with two parameters: decay and momentum. There are many different learning rate schedules, but the most common are time-based, step-based and exponential.
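The three common schedule families can be sketched as simple functions of the iteration step (the exact formulas are defined in the paragraphs that follow; the parameter names eta0, d, r, n follow the text, and the function names are illustrative):

```python
import math

# Sketches of the three common learning rate schedules.  eta0 = initial
# rate, d = decay parameter, n = iteration step, r = drop interval.

def time_based_update(eta_n, d, n):
    # One update of the time-based schedule: eta_{n+1} = eta_n / (1 + d*n).
    return eta_n / (1 + d * n)

def step_based(eta0, d, r, n):
    # Drop the rate by a factor d every r iterations.
    return eta0 * d ** math.floor((1 + n) / r)

def exponential(eta0, d, n):
    # Smooth exponential decay instead of discrete drops.
    return eta0 * math.exp(-d * n)
```

For example, with eta0 = 0.1, d = 0.5 and r = 10, the step-based schedule halves the rate every 10 iterations.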
Decay serves to settle the learning in a nice place and avoid oscillations, a situation that may arise when a too high constant learning rate makes the learning jump back and forth over a minimum, and is controlled by a hyperparameter. Momentum is analogous to a ball rolling down a hill; we want the ball to settle at the lowest point of the hill (corresponding to the lowest error). Momentum both speeds up the learning (increasing the learning rate) when the error cost gradient is heading in the same direction for a long time and also avoids local minima by 'rolling over' small bumps. Momentum is controlled by a hyperparameter analogous to a ball's mass which must be chosen manually—too high and the ball will roll over minima which we wish to find, too low and it will not fulfil its purpose. The formula for factoring in the momentum is more complex than for decay but is most often built in with deep learning libraries such as Keras. Time-based learning schedules alter the learning rate depending on the learning rate of the previous time iteration. Factoring in the decay, the mathematical formula for the learning rate is: η n + 1 = η n 1 + d n {\displaystyle \eta _{n+1}={\frac {\eta _{n}}{1+dn}}} where η {\displaystyle \eta } is the learning rate, d {\displaystyle d} is a decay parameter and n {\displaystyle n} is the iteration step. Step-based learning schedules change the learning rate according to some predefined steps.
The decay application formula is here defined as: η n = η 0 d ⌊ 1 + n r ⌋ {\displaystyle \eta _{n}=\eta _{0}d^{\left\lfloor {\frac {1+n}{r}}\right\rfloor }} where η n {\displaystyle \eta _{n}} is the learning rate at iteration n {\displaystyle n} , η 0 {\displaystyle \eta _{0}} is the initial learning rate, d {\displaystyle d} is how much the learning rate should change at each drop (0.5 corresponds to a halving) and r {\displaystyle r} corresponds to the drop rate, or how often the rate should be dropped (10 corresponds to a drop every 10 iterations). The floor function ( ⌊ … ⌋ {\displaystyle \lfloor \dots \rfloor } ) here drops the value of its input to 0 for all values smaller than 1. Exponential learning schedules are similar to step-based, but instead of steps, a decreasing exponential function is used. The mathematical formula for factoring in the decay is: η n = η 0 e − d n {\displaystyle \eta _{n}=\eta _{0}e^{-dn}} where d {\displaystyle d} is a decay parameter. == Adaptive learning rate == The issue with learning rate schedules is that they all depend on hyperparameters that must be manually chosen for each given learning session and may vary greatly depending on the problem at hand or the model used. To combat this, there are many different types of adaptive gradient descent algorithms such as Adagrad, Adadelta, RMSprop, and Adam which are generally built into deep learning libraries such as Keras. == See also == == References == == Further reading == Géron, Aurélien (2017). "Gradient Descent". Hands-On Machine Learning with Scikit-Learn and TensorFlow. O'Reilly. pp. 113–124. ISBN 978-1-4919-6229-9. Plagianakos, V. P.; Magoulas, G. D.; Vrahatis, M. N. (2001). "Learning Rate Adaptation in Stochastic Gradient Descent". Advances in Convex Analysis and Global Optimization. Kluwer. pp. 433–444. ISBN 0-7923-6942-4. == External links == de Freitas, Nando (February 12, 2015). "Optimization". Deep Learning Lecture 6. University of Oxford – via YouTube.
Wikipedia/Learning_rate
In computational mathematics, an iterative method is a mathematical procedure that uses an initial value to generate a sequence of improving approximate solutions for a class of problems, in which the i-th approximation (called an "iterate") is derived from the previous ones. A specific implementation with termination criteria for a given iterative method like gradient descent, hill climbing, Newton's method, or quasi-Newton methods like BFGS, is an algorithm of an iterative method or a method of successive approximation. An iterative method is called convergent if the corresponding sequence converges for given initial approximations. A mathematically rigorous convergence analysis of an iterative method is usually performed; however, heuristic-based iterative methods are also common. In contrast, direct methods attempt to solve the problem by a finite sequence of operations. In the absence of rounding errors, direct methods would deliver an exact solution (for example, solving a linear system of equations A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } by Gaussian elimination). Iterative methods are often the only choice for nonlinear equations. However, iterative methods are often useful even for linear problems involving many variables (sometimes on the order of millions), where direct methods would be prohibitively expensive (and in some cases impossible) even with the best available computing power. == Attractive fixed points == If an equation can be put into the form f(x) = x, and a solution x is an attractive fixed point of the function f, then one may begin with a point x1 in the basin of attraction of x, and let xn+1 = f(xn) for n ≥ 1, and the sequence {xn}n ≥ 1 will converge to the solution x. Here xn is the nth approximation or iteration of x and xn+1 is the next or n + 1 iteration of x. Alternately, superscripts in parentheses are often used in numerical methods, so as not to interfere with subscripts with other meanings. 
(For example, x(n+1) = f(x(n)).) If the function f is continuously differentiable, a sufficient condition for convergence is that the spectral radius of the derivative is strictly bounded by one in a neighborhood of the fixed point. If this condition holds at the fixed point, then a sufficiently small neighborhood (basin of attraction) must exist. == Linear systems == In the case of a system of linear equations, the two main classes of iterative methods are the stationary iterative methods, and the more general Krylov subspace methods. === Stationary iterative methods === ==== Introduction ==== Stationary iterative methods solve a linear system with an operator approximating the original one; and based on a measurement of the error in the result (the residual), form a "correction equation" for which this process is repeated. While these methods are simple to derive, implement, and analyze, convergence is only guaranteed for a limited class of matrices. ==== Definition ==== An iterative method is defined by x k + 1 := Ψ ( x k ) , k ≥ 0 {\displaystyle \mathbf {x} ^{k+1}:=\Psi (\mathbf {x} ^{k}),\quad k\geq 0} and for a given linear system A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } with exact solution x ∗ {\displaystyle \mathbf {x} ^{*}} the error by e k := x k − x ∗ , k ≥ 0. {\displaystyle \mathbf {e} ^{k}:=\mathbf {x} ^{k}-\mathbf {x} ^{*},\quad k\geq 0.} An iterative method is called linear if there exists a matrix C ∈ R n × n {\displaystyle C\in \mathbb {R} ^{n\times n}} such that e k + 1 = C e k ∀ k ≥ 0 {\displaystyle \mathbf {e} ^{k+1}=C\mathbf {e} ^{k}\quad \forall k\geq 0} and this matrix is called the iteration matrix. An iterative method with a given iteration matrix C {\displaystyle C} is called convergent if the following holds lim k → ∞ C k = 0. 
{\displaystyle \lim _{k\rightarrow \infty }C^{k}=0.} An important theorem states that for a given iterative method and its iteration matrix C {\displaystyle C} it is convergent if and only if its spectral radius ρ ( C ) {\displaystyle \rho (C)} is smaller than unity, that is, ρ ( C ) < 1. {\displaystyle \rho (C)<1.} The basic iterative methods work by splitting the matrix A {\displaystyle A} into A = M − N {\displaystyle A=M-N} and here the matrix M {\displaystyle M} should be easily invertible. The iterative methods are now defined as M x k + 1 = N x k + b , k ≥ 0 , {\displaystyle M\mathbf {x} ^{k+1}=N\mathbf {x} ^{k}+b,\quad k\geq 0,} or, equivalently, x k + 1 = x k + M − 1 ( b − A x k ) , k ≥ 0. {\displaystyle \mathbf {x} ^{k+1}=\mathbf {x} ^{k}+M^{-1}(b-A\mathbf {x} ^{k}),\quad k\geq 0.} From this follows that the iteration matrix is given by C = I − M − 1 A = M − 1 N . {\displaystyle C=I-M^{-1}A=M^{-1}N.} ==== Examples ==== Basic examples of stationary iterative methods use a splitting of the matrix A {\displaystyle A} such as A = D + L + U , D := diag ( ( a i i ) i ) {\displaystyle A=D+L+U\,,\quad D:={\text{diag}}((a_{ii})_{i})} where D {\displaystyle D} is only the diagonal part of A {\displaystyle A} , and L {\displaystyle L} is the strict lower triangular part of A {\displaystyle A} . Respectively, U {\displaystyle U} is the strict upper triangular part of A {\displaystyle A} . 
Richardson method: M := 1 ω I ( ω ≠ 0 ) {\displaystyle M:={\frac {1}{\omega }}I\quad (\omega \neq 0)} Jacobi method: M := D {\displaystyle M:=D} Damped Jacobi method: M := 1 ω D ( ω ≠ 0 ) {\displaystyle M:={\frac {1}{\omega }}D\quad (\omega \neq 0)} Gauss–Seidel method: M := D + L {\displaystyle M:=D+L} Successive over-relaxation method (SOR): M := 1 ω D + L ( ω ≠ 0 ) {\displaystyle M:={\frac {1}{\omega }}D+L\quad (\omega \neq 0)} Symmetric successive over-relaxation (SSOR): M := 1 ω ( 2 − ω ) ( D + ω L ) D − 1 ( D + ω U ) ( ω ∉ { 0 , 2 } ) {\displaystyle M:={\frac {1}{\omega (2-\omega )}}(D+\omega L)D^{-1}(D+\omega U)\quad (\omega \not \in \{0,2\})} Linear stationary iterative methods are also called relaxation methods. === Krylov subspace methods === Krylov subspace methods work by forming a basis of the sequence of successive matrix powers times the initial residual (the Krylov sequence). The approximations to the solution are then formed by minimizing the residual over the subspace formed. The prototypical method in this class is the conjugate gradient method (CG), which assumes that the system matrix A {\displaystyle A} is symmetric positive-definite. For symmetric (and possibly indefinite) A {\displaystyle A} one works with the minimal residual method (MINRES). In the case of non-symmetric matrices, methods such as the generalized minimal residual method (GMRES) and the biconjugate gradient method (BiCG) have been derived. ==== Convergence of Krylov subspace methods ==== Since these methods form a basis, it is evident that the method converges in N iterations, where N is the system size. However, in the presence of rounding errors this statement does not hold; moreover, in practice N can be very large, and the iterative process often reaches sufficient accuracy far earlier. The analysis of these methods is hard, as convergence depends on a complicated function of the spectrum of the operator.
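As a concrete sketch of the splitting framework above (an illustrative toy example, not from the source), the Jacobi choice M := D can be implemented in a few lines; for a strictly diagonally dominant matrix the iteration matrix satisfies ρ(C) < 1, so the iteration is guaranteed to converge:

```python
# Minimal sketch of the Jacobi method (M := D in the splitting A = M - N),
# applied to a strictly diagonally dominant 3x3 system with solution (1, 2, -1).

def jacobi(A, b, iterations=100):
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        x_new = [0.0] * n
        for i in range(n):
            # x_i^{k+1} = (b_i - sum_{j != i} a_ij x_j^k) / a_ii
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i][i]
        x = x_new
    return x

A = [[10.0, 1.0, 2.0],
     [1.0, 8.0, 1.0],
     [2.0, 1.0, 12.0]]
b = [10.0, 16.0, -8.0]   # = A @ (1, 2, -1)
x = jacobi(A, b)
```

Swapping the update to use `x_new` entries already computed within the sweep would turn this into Gauss–Seidel (M := D + L).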
=== Preconditioners === The approximating operator that appears in stationary iterative methods can also be incorporated in Krylov subspace methods such as GMRES (alternatively, preconditioned Krylov methods can be considered as accelerations of stationary iterative methods), where they become transformations of the original operator to a presumably better conditioned one. The construction of preconditioners is a large research area. == Methods of successive approximation == Mathematical methods relating to successive approximation include: Babylonian method, for finding square roots of numbers Fixed-point iteration Means of finding zeros of functions: Halley's method Newton's method Differential-equation matters: Picard–Lindelöf theorem, on existence of solutions of differential equations Runge–Kutta methods, for numerical solution of differential equations === History === Jamshīd al-Kāshī used iterative methods to calculate the sine of 1° and π in The Treatise of Chord and Sine to high precision. An early iterative method for solving a linear system appeared in a letter of Gauss to a student of his. He proposed solving a 4-by-4 system of equations by repeatedly solving the component in which the residual was the largest. The theory of stationary iterative methods was solidly established with the work of D.M. Young starting in the 1950s. The conjugate gradient method was also invented in the 1950s, with independent developments by Cornelius Lanczos, Magnus Hestenes and Eduard Stiefel, but its nature and applicability were misunderstood at the time. Only in the 1970s was it realized that conjugacy-based methods work very well for partial differential equations, especially those of elliptic type. == See also == Closed-form expression Iterative refinement Kaczmarz method Non-linear least squares Numerical analysis Root-finding algorithm == References == == External links == Templates for the Solution of Linear Systems Y.
Saad: Iterative Methods for Sparse Linear Systems, 1st edition, PWS 1996
Wikipedia/Iterative_methods
A self-concordant function is a function satisfying a certain differential inequality, which makes it particularly amenable to optimization using Newton's method: Sub.6.2.4.2  A self-concordant barrier is a particular self-concordant function, that is also a barrier function for a particular convex set. Self-concordant barriers are important ingredients in interior point methods for optimization. == Self-concordant functions == === Multivariate self-concordant function === Here is the general definition of a self-concordant function.: Def.2.0.1  Let C be a convex nonempty open set in Rn. Let f be a function that is three-times continuously differentiable defined on C. We say that f is self-concordant on C if it satisfies the following properties: 1. Barrier property: on any sequence of points in C that converges to a boundary point of C, f converges to ∞. 2. Differential inequality: for every point x in C, and any direction h in Rn, let gh be the function f restricted to the direction h, that is: gh(t) = f(x+t*h). Then the one-dimensional function gh should satisfy the following differential inequality: | g h ‴ ( t ) | ≤ 2 g h ″ ( t ) 3 / 2 {\displaystyle |g_{h}'''(t)|\leq 2g_{h}''(t)^{3/2}} . Equivalently: d d α ∇ 2 f ( x + α y ) | α = 0 ⪯ 2 y T ∇ 2 f ( x ) y ∇ 2 f ( x ) {\displaystyle \left.{\frac {d}{d\alpha }}\nabla ^{2}f(x+\alpha y)\right|_{\alpha =0}\preceq 2{\sqrt {y^{T}\nabla ^{2}f(x)\,y}}\,\nabla ^{2}f(x)} === Univariate self-concordant function === A function f : R → R {\displaystyle f:\mathbb {R} \rightarrow \mathbb {R} } is self-concordant on R {\displaystyle \mathbb {R} } if: | f ‴ ( x ) | ≤ 2 f ″ ( x ) 3 / 2 {\displaystyle |f'''(x)|\leq 2f''(x)^{3/2}} Equivalently: if wherever f ″ ( x ) > 0 {\displaystyle f''(x)>0} it satisfies: | d d x 1 f ″ ( x ) | ≤ 1 {\displaystyle \left|{\frac {d}{dx}}{\frac {1}{\sqrt {f''(x)}}}\right|\leq 1} and satisfies f ‴ ( x ) = 0 {\displaystyle f'''(x)=0} elsewhere.
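The univariate inequality can be checked numerically; in the sketch below (with exact derivatives coded by hand, an illustration of our own choosing), f(x) = −log x satisfies it with equality on x > 0, since |f'''| = 2/x³ = 2(1/x²)^(3/2), while f(x) = eˣ violates it for x < −2 ln 2, where eˣ > 2e^(3x/2):

```python
import math

# Sketch: check |f'''(x)| <= 2 f''(x)^(3/2) using exact derivatives of two
# illustrative test functions.

def satisfies_at(d2, d3, x, tol=1e-12):
    return abs(d3(x)) <= 2.0 * d2(x) ** 1.5 + tol

# f(x) = -log(x): f''(x) = 1/x^2, f'''(x) = -2/x^3 -> equality for every x > 0
neg_log_ok = all(
    satisfies_at(lambda x: 1.0 / x ** 2, lambda x: -2.0 / x ** 3, x)
    for x in (0.1, 1.0, 10.0, 100.0)
)

# f(x) = exp(x): f'' = f''' = e^x -> fails once e^x > 2 e^(3x/2), i.e. x < -2 ln 2
exp_fails = not satisfies_at(math.exp, math.exp, -5.0)
```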
=== Examples === Linear and convex quadratic functions are self-concordant, since their third derivative is zero. Any function f ( x ) = − log ⁡ ( − g ( x ) ) − log ⁡ x {\displaystyle f(x)=-\log(-g(x))-\log x} where g ( x ) {\displaystyle g(x)} is defined and convex for all x > 0 {\displaystyle x>0} and verifies | g ‴ ( x ) | ≤ 3 g ″ ( x ) / x {\displaystyle |g'''(x)|\leq 3g''(x)/x} , is self-concordant on its domain, which is { x ∣ x > 0 , g ( x ) < 0 } {\displaystyle \{x\mid x>0,g(x)<0\}} . Some examples are g ( x ) = − x p {\displaystyle g(x)=-x^{p}} for 0 < p ≤ 1 {\displaystyle 0<p\leq 1} g ( x ) = − log ⁡ x {\displaystyle g(x)=-\log x} g ( x ) = x p {\displaystyle g(x)=x^{p}} for − 1 ≤ p ≤ 0 {\displaystyle -1\leq p\leq 0} g ( x ) = ( a x + b ) 2 / x {\displaystyle g(x)=(ax+b)^{2}/x} for any function g {\displaystyle g} satisfying the conditions, the function g ( x ) + a x 2 + b x + c {\displaystyle g(x)+ax^{2}+bx+c} with a ≥ 0 {\displaystyle a\geq 0} also satisfies the conditions. Some functions that are not self-concordant: f ( x ) = e x {\displaystyle f(x)=e^{x}} f ( x ) = 1 x p , x > 0 , p > 0 {\displaystyle f(x)={\frac {1}{x^{p}}},x>0,p>0} f ( x ) = | x p | , p > 2 {\displaystyle f(x)=|x^{p}|,p>2} == Self-concordant barriers == Here is the general definition of a self-concordant barrier (SCB).: Def.3.1.1  Let C be a convex closed set in Rn with a non-empty interior. Let f be a function from interior(C) to R. Let M>0 be a real parameter. We say that f is an M-self-concordant barrier for C if it satisfies the following: 1. f is a self-concordant function on interior(C). 2. For every point x in interior(C), and any direction h in Rn, let gh be the function f restricted to the direction h, that is: gh(t) = f(x+t*h). Then the one-dimensional function gh should satisfy the following differential inequality: | g h ′ ( t ) | ≤ M 1 / 2 ⋅ g h ″ ( t ) 1 / 2 {\displaystyle |g_{h}'(t)|\leq M^{1/2}\cdot g_{h}''(t)^{1/2}} .
=== Constructing SCBs === Due to the importance of SCBs in interior-point methods, it is important to know how to construct SCBs for various domains. In theory, it can be proved that every closed convex domain in Rn has a self-concordant barrier with parameter O(n). But this “universal barrier” is given by some multivariate integrals, and it is too complicated for actual computations. Hence, the main goal is to construct SCBs that are efficiently computable.: Sec.9.2.3.3  SCBs can be constructed from some basic SCBs, which are combined to produce SCBs for more complex domains, using several combination rules. === Basic SCBs === Every constant is a self-concordant barrier for all Rn, with parameter M=0. It is the only self-concordant barrier for the entire space, and the only self-concordant barrier with M < 1.: Example 3.1.1  [Note that linear and quadratic functions are self-concordant functions, but they are not self-concordant barriers]. For the positive half-line R + {\displaystyle \mathbb {R} _{+}} ( x > 0 {\displaystyle x>0} ), f ( x ) = − ln ⁡ x {\displaystyle f(x)=-\ln x} is a self-concordant barrier with parameter M = 1 {\displaystyle M=1} . This can be proved directly from the definition. === Substitution rule === Let G be a closed convex domain in Rn, and g an M-SCB for G. Let x = Ay+b be an affine mapping from Rk to Rn with its image intersecting the interior of G. Let H be the inverse image of G under the mapping: H = {y in Rk | Ay+b in G}. Let h be the composite function h(y) := g(Ay+b). Then, h is an M-SCB for H.: Prop.3.1.1  For example, take n=1, G the positive half-line, and g ( x ) = − ln ⁡ x {\displaystyle g(x)=-\ln x} . For any k, let a be a k-element vector and b a scalar. Let H = {y in Rk | aTy+b ≥ 0}, a half-space in Rk. By the substitution rule, h ( y ) = − ln ⁡ ( a T y + b ) {\displaystyle h(y)=-\ln(a^{T}y+b)} is a 1-SCB for H.
A more common format is H = {x in Rk | aTx ≤ b}, for which the SCB is h ( y ) = − ln ⁡ ( b − a T y ) {\displaystyle h(y)=-\ln(b-a^{T}y)} . The substitution rule can be extended from affine mappings to a certain class of "appropriate" mappings,: Thm.9.1.1  and to quadratic mappings.: Sub.9.3  === Cartesian product rule === For all i in 1,...,m, let Gi be a closed convex domain in Rni, and let gi be an Mi-SCB for Gi. Let G be the cartesian product of all Gi. Let g(x1,...,xm) := sumi gi(xi). Then, g is a SCB for G, with parameter sumi Mi.: Prop.3.1.1  For example, take all Gi to be the positive half-line, so that G is the positive orthant R + m {\displaystyle \mathbb {R} _{+}^{m}} . Then g ( x ) = − ∑ i = 1 m ln ⁡ x i {\displaystyle g(x)=-\sum _{i=1}^{m}\ln x_{i}} is an m-SCB for G. We can now apply the substitution rule. We get that, for the polytope defined by the linear inequalities ajTx ≤ bj for j in 1,...,m, if it satisfies Slater's condition, then f ( x ) = − ∑ j = 1 m ln ⁡ ( b j − a j T x ) {\displaystyle f(x)=-\sum _{j=1}^{m}\ln(b_{j}-a_{j}^{T}x)} is an m-SCB. The linear functions b j − a j T x {\displaystyle b_{j}-a_{j}^{T}x} can be replaced by quadratic functions. === Intersection rule === Let G1,...,Gm be closed convex domains in Rn. For each i in 1,...,m, let gi be an Mi-SCB for Gi, and ri a real number. Let G be the intersection of all Gi, and suppose its interior is nonempty. Let g := sumi ri*gi. Then, g is a SCB for G, with parameter sumi ri*Mi.: Prop.3.1.1  Therefore, if G is defined by a list of constraints, we can find a SCB for each constraint separately, and then simply sum them to get a SCB for G. For example, suppose the domain is defined by m linear constraints of the form ajTx ≤ bj, for j in 1,...,m. Then we can use the Intersection rule to construct the m-SCB f ( x ) = − ∑ j = 1 m ln ⁡ ( b j − a j T x ) {\displaystyle f(x)=-\sum _{j=1}^{m}\ln(b_{j}-a_{j}^{T}x)} (the same one that we previously computed using the Cartesian product rule).
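The polytope barrier just derived is straightforward to evaluate in code. The sketch below (with a hypothetical unit-box domain chosen purely for illustration) computes f(x) = −∑ⱼ ln(bⱼ − aⱼᵀx) and exhibits the barrier property: finite in the interior, blowing up near a facet:

```python
import math

# Sketch: the m-SCB f(x) = -sum_j ln(b_j - a_j^T x) for the polytope
# {x : a_j^T x <= b_j}. The unit box [0,1]^2 is an illustrative toy domain.

def log_barrier(A_rows, b, x):
    total = 0.0
    for a, bj in zip(A_rows, b):
        slack = bj - sum(ai * xi for ai, xi in zip(a, x))
        if slack <= 0:
            return math.inf  # outside (or on the boundary of) the polytope
        total -= math.log(slack)
    return total

# Unit box: x1 <= 1, -x1 <= 0, x2 <= 1, -x2 <= 0  (m = 4 constraints)
A_rows = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
b = [1.0, 0.0, 1.0, 0.0]

center = log_barrier(A_rows, b, (0.5, 0.5))            # 4 ln 2, finite
near_edge = log_barrier(A_rows, b, (1.0 - 1e-9, 0.5))  # very large
```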
=== SCBs for epigraphs === The epigraph of a function f(x) is the area above the graph of the function, that is, { ( x , t ) ∈ R 2 : t ≥ f ( x ) } {\displaystyle \{(x,t)\in \mathbb {R} ^{2}:t\geq f(x)\}} . The epigraph of f is a convex set if and only if f is a convex function. The following theorems present some functions f for which the epigraph has an SCB. Let g(t) be a 3-times continuously-differentiable concave function on t>0, such that t ⋅ | g ‴ ( t ) | / | g ″ ( t ) | {\displaystyle t\cdot |g'''(t)|/|g''(t)|} is bounded by a constant (denoted 3*b) for all t>0. Let G be the 2-dimensional convex domain: G = closure ( { ( x , t ) ∈ R 2 : t > 0 , x ≤ g ( t ) } ) . {\displaystyle G={\text{closure}}(\{(x,t)\in \mathbb {R} ^{2}:t>0,x\leq g(t)\}).} Then, the function f(x,t) = -ln(g(t)-x) - max[1,b2]*ln(t) is a self-concordant barrier for G, with parameter (1+max[1,b2]).: Prop.9.2.1  Examples: Let g(t) = t1/p, for some p≥1, and b=(2p-1)/(3p). Then G 1 = { ( x , t ) ∈ R 2 : ( x + ) p ≤ t } {\displaystyle G_{1}=\{(x,t)\in \mathbb {R} ^{2}:(x_{+})^{p}\leq t\}} has a 2-SCB. Similarly, G 2 = { ( x , t ) ∈ R 2 : ( [ − x ] + ) p ≤ t } {\displaystyle G_{2}=\{(x,t)\in \mathbb {R} ^{2}:([-x]_{+})^{p}\leq t\}} has a 2-SCB. Using the Intersection rule, we get that G = G 1 ∩ G 2 = { ( x , t ) ∈ R 2 : | x | p ≤ t } {\displaystyle G=G_{1}\cap G_{2}=\{(x,t)\in \mathbb {R} ^{2}:|x|^{p}\leq t\}} has a 4-SCB. Let g(t)=ln(t) and b=2/3. Then G = { ( x , t ) ∈ R 2 : e x ≤ t } {\displaystyle G=\{(x,t)\in \mathbb {R} ^{2}:e^{x}\leq t\}} has a 2-SCB. We can now construct a SCB for the problem of minimizing the p-norm: min x ∑ j = 1 n | v j − x T u j | p {\displaystyle \min _{x}\sum _{j=1}^{n}|v_{j}-x^{T}u_{j}|^{p}} , where vj are constant scalars, uj are constant vectors, and p>0 is a constant.
We first convert it into minimization of a linear objective: min x ∑ j = 1 n t j {\displaystyle \min _{x}\sum _{j=1}^{n}t_{j}} , with the constraints: t j ≥ | v j − x T u j | p {\displaystyle t_{j}\geq |v_{j}-x^{T}u_{j}|^{p}} for all j in [n]. For each constraint, we have a 4-SCB by the affine substitution rule. Using the Intersection rule, we get a (4n)-SCB for the entire feasible domain. Similarly, let g be a 3-times continuously-differentiable convex function on the ray x>0, such that: x ⋅ | g ‴ ( x ) | / | g ″ ( x ) | ≤ 3 b {\displaystyle x\cdot |g'''(x)|/|g''(x)|\leq 3b} for all x>0. Let G be the 2-dimensional convex domain: closure({ (t,x) in R2: x>0, t ≥ g(x) }). Then, the function f(x,t) = -ln(t-g(x)) - max[1,b2]*ln(x) is a self-concordant barrier for G, with parameter (1+max[1,b2]).: Prop.9.2.2  Examples: Let g(x) = x−p, for some p>0, and b=(2+p)/3. Then G 1 = { ( x , t ) ∈ R 2 : x − p ≤ t , x ≥ 0 } {\displaystyle G_{1}=\{(x,t)\in \mathbb {R} ^{2}:x^{-p}\leq t,x\geq 0\}} has a 2-SCB. Let g(x)=x ln(x) and b=1/3. Then G = { ( x , t ) ∈ R 2 : x ln ⁡ x ≤ t , x ≥ 0 } {\displaystyle G=\{(x,t)\in \mathbb {R} ^{2}:x\ln x\leq t,x\geq 0\}} has a 2-SCB. === SCBs for cones === For the second order cone { ( x , y ) ∈ R n − 1 × R ∣ ‖ x ‖ ≤ y } {\displaystyle \{(x,y)\in \mathbb {R} ^{n-1}\times \mathbb {R} \mid \|x\|\leq y\}} , the function f ( x , y ) = − log ⁡ ( y 2 − x T x ) {\displaystyle f(x,y)=-\log(y^{2}-x^{T}x)} is a self-concordant barrier. For the cone of positive semidefinite m*m symmetric matrices, the function f ( A ) = − log ⁡ det A {\displaystyle f(A)=-\log \det A} is a self-concordant barrier.
For the quadratic region defined by ϕ ( x ) > 0 {\displaystyle \phi (x)>0} where ϕ ( x ) = α + ⟨ a , x ⟩ − 1 2 ⟨ A x , x ⟩ {\displaystyle \phi (x)=\alpha +\langle a,x\rangle -{\frac {1}{2}}\langle Ax,x\rangle } where A = A T ≥ 0 {\displaystyle A=A^{T}\geq 0} is a positive semi-definite symmetric matrix, the logarithmic barrier f ( x ) = − log ⁡ ϕ ( x ) {\displaystyle f(x)=-\log \phi (x)} is self-concordant with M = 2 {\displaystyle M=2} . For the exponential cone { ( x , y , z ) ∈ R 3 ∣ y e x / y ≤ z , y > 0 } {\displaystyle \{(x,y,z)\in \mathbb {R} ^{3}\mid ye^{x/y}\leq z,y>0\}} , the function f ( x , y , z ) = − log ⁡ ( y log ⁡ ( z / y ) − x ) − log ⁡ z − log ⁡ y {\displaystyle f(x,y,z)=-\log(y\log(z/y)-x)-\log z-\log y} is a self-concordant barrier. For the power cone { ( x 1 , x 2 , y ) ∈ R + 2 × R ∣ | y | ≤ x 1 α x 2 1 − α } {\displaystyle \{(x_{1},x_{2},y)\in \mathbb {R} _{+}^{2}\times \mathbb {R} \mid |y|\leq x_{1}^{\alpha }x_{2}^{1-\alpha }\}} , the function f ( x 1 , x 2 , y ) = − log ⁡ ( x 1 2 α x 2 2 ( 1 − α ) − y 2 ) − log ⁡ x 1 − log ⁡ x 2 {\displaystyle f(x_{1},x_{2},y)=-\log(x_{1}^{2\alpha }x_{2}^{2(1-\alpha )}-y^{2})-\log x_{1}-\log x_{2}} is a self-concordant barrier. == History == As mentioned in the "Bibliography Comments" of their 1994 book, self-concordant functions were introduced in 1988 by Yurii Nesterov and further developed with Arkadi Nemirovski.
As they explain, their basic observation was that the Newton method is affine invariant, in the sense that if for a function f ( x ) {\displaystyle f(x)} we have Newton steps x k + 1 = x k − [ f ″ ( x k ) ] − 1 f ′ ( x k ) {\displaystyle x_{k+1}=x_{k}-[f''(x_{k})]^{-1}f'(x_{k})} then for a function ϕ ( y ) = f ( A y ) {\displaystyle \phi (y)=f(Ay)} where A {\displaystyle A} is a non-degenerate linear transformation, starting from y 0 = A − 1 x 0 {\displaystyle y_{0}=A^{-1}x_{0}} we have the Newton steps y k = A − 1 x k {\displaystyle y_{k}=A^{-1}x_{k}} which can be shown recursively y k + 1 = y k − [ ϕ ″ ( y k ) ] − 1 ϕ ′ ( y k ) = y k − [ A T f ″ ( A y k ) A ] − 1 A T f ′ ( A y k ) = A − 1 x k − A − 1 [ f ″ ( x k ) ] − 1 f ′ ( x k ) = A − 1 x k + 1 {\displaystyle y_{k+1}=y_{k}-[\phi ''(y_{k})]^{-1}\phi '(y_{k})=y_{k}-[A^{T}f''(Ay_{k})A]^{-1}A^{T}f'(Ay_{k})=A^{-1}x_{k}-A^{-1}[f''(x_{k})]^{-1}f'(x_{k})=A^{-1}x_{k+1}} . However, the standard analysis of the Newton method supposes that the Hessian of f {\displaystyle f} is Lipschitz continuous, that is ‖ f ″ ( x ) − f ″ ( y ) ‖ ≤ M ‖ x − y ‖ {\displaystyle \|f''(x)-f''(y)\|\leq M\|x-y\|} for some constant M {\displaystyle M} . If we suppose that f {\displaystyle f} is 3 times continuously differentiable, then this is equivalent to | ⟨ f ‴ ( x ) [ u ] v , v ⟩ | ≤ M ‖ u ‖ ‖ v ‖ 2 {\displaystyle |\langle f'''(x)[u]v,v\rangle |\leq M\|u\|\|v\|^{2}} for all u , v ∈ R n {\displaystyle u,v\in \mathbb {R} ^{n}} where f ‴ ( x ) [ u ] = lim α → 0 α − 1 [ f ″ ( x + α u ) − f ″ ( x ) ] {\displaystyle f'''(x)[u]=\lim _{\alpha \to 0}\alpha ^{-1}[f''(x+\alpha u)-f''(x)]} . Then the left hand side of the above inequality is invariant under the affine transformation f ( x ) → ϕ ( y ) = f ( A y ) , u → A − 1 u , v → A − 1 v {\displaystyle f(x)\to \phi (y)=f(Ay),u\to A^{-1}u,v\to A^{-1}v} , however the right hand side is not.
The authors note that the right hand side can be made also invariant if we replace the Euclidean metric by the scalar product defined by the Hessian of f {\displaystyle f} defined as ‖ w ‖ f ″ ( x ) = ⟨ f ″ ( x ) w , w ⟩ 1 / 2 {\displaystyle \|w\|_{f''(x)}=\langle f''(x)w,w\rangle ^{1/2}} for w ∈ R n {\displaystyle w\in \mathbb {R} ^{n}} . They then arrive at the definition of a self concordant function as | ⟨ f ‴ ( x ) [ u ] u , u ⟩ | ≤ M ⟨ f ″ ( x ) u , u ⟩ 3 / 2 {\displaystyle |\langle f'''(x)[u]u,u\rangle |\leq M\langle f''(x)u,u\rangle ^{3/2}} . == Properties == === Linear combination === If f 1 {\displaystyle f_{1}} and f 2 {\displaystyle f_{2}} are self-concordant with constants M 1 {\displaystyle M_{1}} and M 2 {\displaystyle M_{2}} and α , β > 0 {\displaystyle \alpha ,\beta >0} , then α f 1 + β f 2 {\displaystyle \alpha f_{1}+\beta f_{2}} is self-concordant with constant max ( α − 1 / 2 M 1 , β − 1 / 2 M 2 ) {\displaystyle \max(\alpha ^{-1/2}M_{1},\beta ^{-1/2}M_{2})} . === Affine transformation === If f {\displaystyle f} is self-concordant with constant M {\displaystyle M} and A x + b {\displaystyle Ax+b} is an affine transformation of R n {\displaystyle \mathbb {R} ^{n}} , then ϕ ( x ) = f ( A x + b ) {\displaystyle \phi (x)=f(Ax+b)} is also self-concordant with parameter M {\displaystyle M} . === Convex conjugate === If f {\displaystyle f} is self-concordant, then its convex conjugate f ∗ {\displaystyle f^{*}} is also self-concordant. === Non-singular Hessian === If f {\displaystyle f} is self-concordant and the domain of f {\displaystyle f} contains no straight line (infinite in both directions), then f ″ {\displaystyle f''} is non-singular. 
Conversely, if for some x {\displaystyle x} in the domain of f {\displaystyle f} and u ∈ R n , u ≠ 0 {\displaystyle u\in \mathbb {R} ^{n},u\neq 0} we have ⟨ f ″ ( x ) u , u ⟩ = 0 {\displaystyle \langle f''(x)u,u\rangle =0} , then ⟨ f ″ ( x + α u ) u , u ⟩ = 0 {\displaystyle \langle f''(x+\alpha u)u,u\rangle =0} for all α {\displaystyle \alpha } for which x + α u {\displaystyle x+\alpha u} is in the domain of f {\displaystyle f} and then f ( x + α u ) {\displaystyle f(x+\alpha u)} is linear and cannot have a maximum so all of x + α u , α ∈ R {\displaystyle x+\alpha u,\alpha \in \mathbb {R} } is in the domain of f {\displaystyle f} . We note also that f {\displaystyle f} cannot have a minimum inside its domain. == Applications == Among other things, self-concordant functions are useful in the analysis of Newton's method. Self-concordant barrier functions are used to develop the barrier functions used in interior point methods for convex and nonlinear optimization. The usual analysis of the Newton method would not work for barrier functions as their second derivative cannot be Lipschitz continuous, otherwise they would be bounded on any compact subset of R n {\displaystyle \mathbb {R} ^{n}} . Self-concordant barrier functions are a class of functions that can be used as barriers in constrained optimization methods. They can be minimized using the Newton algorithm with provable convergence properties analogous to the usual case (but these results are somewhat more difficult to derive). To obtain both of these properties, the usual constant bound on the third derivative of the function (required to get the usual convergence results for the Newton method) is replaced by a bound relative to the Hessian. === Minimizing a self-concordant function === A self-concordant function may be minimized with a modified Newton method where we have a bound on the number of steps required for convergence.
We suppose here that f {\displaystyle f} is a standard self-concordant function, that is it is self-concordant with parameter M = 2 {\displaystyle M=2} . We define the Newton decrement λ f ( x ) {\displaystyle \lambda _{f}(x)} of f {\displaystyle f} at x {\displaystyle x} as the size of the Newton step [ f ″ ( x ) ] − 1 f ′ ( x ) {\displaystyle [f''(x)]^{-1}f'(x)} in the local norm defined by the Hessian of f {\displaystyle f} at x {\displaystyle x} λ f ( x ) = ⟨ f ″ ( x ) [ f ″ ( x ) ] − 1 f ′ ( x ) , [ f ″ ( x ) ] − 1 f ′ ( x ) ⟩ 1 / 2 = ⟨ [ f ″ ( x ) ] − 1 f ′ ( x ) , f ′ ( x ) ⟩ 1 / 2 {\displaystyle \lambda _{f}(x)=\langle f''(x)[f''(x)]^{-1}f'(x),[f''(x)]^{-1}f'(x)\rangle ^{1/2}=\langle [f''(x)]^{-1}f'(x),f'(x)\rangle ^{1/2}} Then for x {\displaystyle x} in the domain of f {\displaystyle f} , if λ f ( x ) < 1 {\displaystyle \lambda _{f}(x)<1} then it is possible to prove that the Newton iterate x + = x − [ f ″ ( x ) ] − 1 f ′ ( x ) {\displaystyle x_{+}=x-[f''(x)]^{-1}f'(x)} will be also in the domain of f {\displaystyle f} . This is because, based on the self-concordance of f {\displaystyle f} , it is possible to give some finite bounds on the value of f ( x + ) {\displaystyle f(x_{+})} . We further have λ f ( x + ) ≤ ( λ f ( x ) 1 − λ f ( x ) ) 2 {\displaystyle \lambda _{f}(x_{+})\leq {\Bigg (}{\frac {\lambda _{f}(x)}{1-\lambda _{f}(x)}}{\Bigg )}^{2}} Then if we have λ f ( x ) < λ ¯ = 3 − 5 2 {\displaystyle \lambda _{f}(x)<{\bar {\lambda }}={\frac {3-{\sqrt {5}}}{2}}} then it is also guaranteed that λ f ( x + ) < λ f ( x ) {\displaystyle \lambda _{f}(x_{+})<\lambda _{f}(x)} , so that we can continue to use the Newton method until convergence. 
Note that for λ f ( x + ) < β {\displaystyle \lambda _{f}(x_{+})<\beta } for some β ∈ ( 0 , λ ¯ ) {\displaystyle \beta \in (0,{\bar {\lambda }})} we have quadratic convergence of λ f {\displaystyle \lambda _{f}} to 0 as λ f ( x + ) ≤ ( 1 − β ) − 2 λ f ( x ) 2 {\displaystyle \lambda _{f}(x_{+})\leq (1-\beta )^{-2}\lambda _{f}(x)^{2}} . This then gives quadratic convergence of f ( x k ) {\displaystyle f(x_{k})} to f ( x ∗ ) {\displaystyle f(x^{*})} and of x {\displaystyle x} to x ∗ {\displaystyle x^{*}} , where x ∗ = arg ⁡ min f ( x ) {\displaystyle x^{*}=\arg \min f(x)} , by the following theorem. If λ f ( x ) < 1 {\displaystyle \lambda _{f}(x)<1} then ω ( λ f ( x ) ) ≤ f ( x ) − f ( x ∗ ) ≤ ω ∗ ( λ f ( x ) ) {\displaystyle \omega (\lambda _{f}(x))\leq f(x)-f(x^{*})\leq \omega _{*}(\lambda _{f}(x))} ω ′ ( λ f ( x ) ) ≤ ‖ x − x ∗ ‖ x ≤ ω ∗ ′ ( λ f ( x ) ) {\displaystyle \omega '(\lambda _{f}(x))\leq \|x-x^{*}\|_{x}\leq \omega _{*}'(\lambda _{f}(x))} with the following definitions ω ( t ) = t − log ⁡ ( 1 + t ) {\displaystyle \omega (t)=t-\log(1+t)} ω ∗ ( t ) = − t − log ⁡ ( 1 − t ) {\displaystyle \omega _{*}(t)=-t-\log(1-t)} ‖ u ‖ x = ⟨ f ″ ( x ) u , u ⟩ 1 / 2 {\displaystyle \|u\|_{x}=\langle f''(x)u,u\rangle ^{1/2}} If we start the Newton method from some x 0 {\displaystyle x_{0}} with λ f ( x 0 ) ≥ λ ¯ {\displaystyle \lambda _{f}(x_{0})\geq {\bar {\lambda }}} then we have to start by using a damped Newton method defined by x k + 1 = x k − 1 1 + λ f ( x k ) [ f ″ ( x k ) ] − 1 f ′ ( x k ) {\displaystyle x_{k+1}=x_{k}-{\frac {1}{1+\lambda _{f}(x_{k})}}[f''(x_{k})]^{-1}f'(x_{k})} For this it can be shown that f ( x k + 1 ) ≤ f ( x k ) − ω ( λ f ( x k ) ) {\displaystyle f(x_{k+1})\leq f(x_{k})-\omega (\lambda _{f}(x_{k}))} with ω {\displaystyle \omega } as defined previously. 
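The two-phase scheme just described (damped steps while λ_f(x) ≥ λ̄, full Newton steps once λ_f(x) < λ̄) can be sketched on a concrete standard self-concordant function. The choice f(x) = x²/2 − log x on x > 0 is an illustrative assumption of ours (a convex quadratic plus −log x, hence self-concordant with M = 2), with minimizer x* = 1:

```python
import math

# Sketch (illustrative, not the source's algorithm verbatim): minimize the
# standard self-concordant f(x) = x^2/2 - log(x) on x > 0.
# f'(x) = x - 1/x, f''(x) = 1 + 1/x^2, minimizer x* = 1.

def minimize_sc(x0, tol=1e-12, max_iter=100):
    lam_bar = (3.0 - math.sqrt(5.0)) / 2.0  # threshold between the two phases
    x = x0
    for _ in range(max_iter):
        g = x - 1.0 / x              # f'(x)
        h = 1.0 + 1.0 / x ** 2       # f''(x)
        lam = abs(g) / math.sqrt(h)  # Newton decrement lambda_f(x)
        if lam < tol:
            break
        step = g / h                 # Newton step [f''(x)]^{-1} f'(x)
        if lam >= lam_bar:
            x -= step / (1.0 + lam)  # damped phase: guaranteed decrease of f
        else:
            x -= step                # quadratically convergent phase
    return x

x_star = minimize_sc(5.0)
```

The damped factor 1/(1 + λ_f(x)) keeps each iterate inside the domain x > 0, in line with the guarantees quoted above.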
Note that ω ( t ) {\displaystyle \omega (t)} is an increasing function for t > 0 {\displaystyle t>0} so that ω ( t ) ≥ ω ( λ ¯ ) {\displaystyle \omega (t)\geq \omega ({\bar {\lambda }})} for any t ≥ λ ¯ {\displaystyle t\geq {\bar {\lambda }}} , so the value of f {\displaystyle f} is guaranteed to decrease by a certain amount in each iteration, which also proves that x k + 1 {\displaystyle x_{k+1}} is in the domain of f {\displaystyle f} . == References ==
Wikipedia/Self-concordant_function
In mathematics and physics, a tensor field is a function assigning a tensor to each point of a region of a mathematical space (typically a Euclidean space or manifold) or of the physical space. Tensor fields are used in differential geometry, algebraic geometry, general relativity, in the analysis of stress and strain in material objects, and in numerous applications in the physical sciences. As a tensor is a generalization of a scalar (a pure number representing a value, for example speed) and a vector (a magnitude and a direction, like velocity), a tensor field is a generalization of a scalar field and a vector field that assigns, respectively, a scalar or vector to each point of space. If a tensor A is defined on the set of vector fields X(M) over a module M, we call A a tensor field on M. A tensor field, in common usage, is often referred to in the shorter form "tensor". For example, the Riemann curvature tensor refers to a tensor field, as it associates a tensor to each point of a Riemannian manifold, a topological space. == Definition == Let M {\displaystyle M} be a manifold, for instance the Euclidean space R n {\displaystyle \mathbb {R} ^{n}} . Definition. A tensor field of type ( p , q ) {\displaystyle (p,q)} is a section T ∈ Γ ( M , V ⊗ p ⊗ ( V ∗ ) ⊗ q ) {\displaystyle T\ \in \ \Gamma (M,V^{\otimes p}\otimes (V^{*})^{\otimes q})} where V = T M {\displaystyle V=TM} is the tangent bundle of M {\displaystyle M} (whose sections are called vector fields, or contravariant vector fields in physics) and V ∗ = T ∗ M {\displaystyle V^{*}=T^{*}M} is its dual bundle, the cotangent bundle (whose sections are called 1-forms, or covariant vector fields in physics), and ⊗ {\displaystyle \otimes } is the tensor product of vector bundles.
Equivalently, a tensor field is a collection of elements T x ∈ V x ⊗ p ⊗ ( V x ∗ ) ⊗ q {\displaystyle T_{x}\in V_{x}^{\otimes p}\otimes (V_{x}^{*})^{\otimes q}} for every point x ∈ M {\displaystyle x\in M} , where ⊗ {\displaystyle \otimes } now denotes the tensor product of vector spaces, such that it constitutes a smooth map T : M → V ⊗ p ⊗ ( V ∗ ) ⊗ q {\displaystyle T:M\rightarrow V^{\otimes p}\otimes (V^{*})^{\otimes q}} . The elements T x {\displaystyle T_{x}} are called tensors. Locally in a coordinate neighbourhood U {\displaystyle U} with coordinates x 1 , … x n {\displaystyle x^{1},\ldots x^{n}} we have a local basis (Vielbein) of vector fields ∂ 1 = ∂ ∂ x 1 … ∂ n = ∂ ∂ x n {\displaystyle \partial _{1}={\frac {\partial }{\partial x^{1}}}\ldots \partial _{n}={\frac {\partial }{\partial x^{n}}}} , and a dual basis of 1-forms d x 1 , … d x n {\displaystyle dx^{1},\ldots dx^{n}} so that d x i ( ∂ j ) = ∂ j x i = δ j i {\displaystyle dx^{i}(\partial _{j})=\partial _{j}x^{i}=\delta _{j}^{i}} . In the coordinate neighbourhood U {\displaystyle U} we then have T x = T j 1 , … , j q i 1 , … i p ( x 1 , … , x n ) ∂ i 1 ⊗ ⋯ ⊗ ∂ i p ⊗ d x j 1 ⊗ ⋯ ⊗ d x j q {\displaystyle T_{x}=T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots i_{p}}(x^{1},\ldots ,x^{n})\partial _{i_{1}}\otimes \cdots \otimes \partial _{i_{p}}\otimes dx^{j_{1}}\otimes \cdots \otimes dx^{j_{q}}} where here and below we use Einstein summation conventions.
Note that if we choose a different coordinate system y 1 … y n {\displaystyle y^{1}\ldots y^{n}} then ∂ ∂ x i = ∂ y k ∂ x i ∂ ∂ y k {\displaystyle {\frac {\partial }{\partial x^{i}}}={\frac {\partial y^{k}}{\partial x^{i}}}{\frac {\partial }{\partial y^{k}}}} and d x j = ∂ x j ∂ y ℓ d y ℓ {\displaystyle dx^{j}={\frac {\partial x^{j}}{\partial y^{\ell }}}dy^{\ell }} where the coordinates ( x 1 , … , x n ) {\displaystyle (x^{1},\ldots ,x^{n})} can be expressed in the coordinates ( y 1 , … y n ) {\displaystyle (y^{1},\ldots y^{n})} and vice versa, so that T x = T j 1 , … , j q i 1 , … i p ( x 1 , … , x n ) ∂ ∂ x i 1 ⊗ ⋯ ⊗ ∂ ∂ x i p ⊗ d x j 1 ⊗ ⋯ ⊗ d x j q = T j 1 , … , j q i 1 , … i p ( x 1 , … , x n ) ∂ y k 1 ∂ x i 1 ⋯ ∂ y k p ∂ x i p ∂ x j 1 ∂ y ℓ 1 ⋯ ∂ x j q ∂ y ℓ q ∂ ∂ y k 1 ⊗ ⋯ ⊗ ∂ ∂ y k p ⊗ d y ℓ 1 ⊗ ⋯ ⊗ d y ℓ q = T ℓ 1 , ⋯ ℓ q k 1 , … , k p ( y 1 , … y n ) ∂ ∂ y k 1 ⊗ ⋯ ⊗ ∂ ∂ y k p ⊗ d y ℓ 1 ⊗ ⋯ ⊗ d y ℓ q {\displaystyle {\begin{aligned}T_{x}&=T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots i_{p}}(x^{1},\ldots ,x^{n}){\frac {\partial }{\partial x^{i_{1}}}}\otimes \cdots \otimes {\frac {\partial }{\partial x^{i_{p}}}}\otimes dx^{j_{1}}\otimes \cdots \otimes dx^{j_{q}}\\&=T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots i_{p}}(x^{1},\ldots ,x^{n}){\frac {\partial y^{k_{1}}}{\partial x^{i_{1}}}}\cdots {\frac {\partial y^{k_{p}}}{\partial x^{i_{p}}}}{\frac {\partial x^{j_{1}}}{\partial y^{\ell _{1}}}}\cdots {\frac {\partial x^{j_{q}}}{\partial y^{\ell _{q}}}}{\frac {\partial }{\partial y^{k_{1}}}}\otimes \cdots \otimes {\frac {\partial }{\partial y^{k_{p}}}}\otimes dy^{\ell _{1}}\otimes \cdots \otimes dy^{\ell _{q}}\\&=T_{\ell _{1},\cdots \ell _{q}}^{k_{1},\ldots ,k_{p}}(y^{1},\ldots y^{n}){\frac {\partial }{\partial y^{k_{1}}}}\otimes \cdots \otimes {\frac {\partial }{\partial y^{k_{p}}}}\otimes dy^{\ell _{1}}\otimes \cdots \otimes dy^{\ell _{q}}\\\end{aligned}}} i.e.
T ℓ 1 , ⋯ ℓ q k 1 , … , k p ( y 1 , … y n ) = T j 1 , … , j q i 1 , … i p ( x 1 , … , x n ) ∂ y k 1 ∂ x i 1 ⋯ ∂ y k p ∂ x i p ∂ x j 1 ∂ y ℓ 1 ⋯ ∂ x j q ∂ y ℓ q {\displaystyle T_{\ell _{1},\cdots \ell _{q}}^{k_{1},\ldots ,k_{p}}(y^{1},\ldots y^{n})=T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots i_{p}}(x^{1},\ldots ,x^{n}){\frac {\partial y^{k_{1}}}{\partial x^{i_{1}}}}\cdots {\frac {\partial y^{k_{p}}}{\partial x^{i_{p}}}}{\frac {\partial x^{j_{1}}}{\partial y^{\ell _{1}}}}\cdots {\frac {\partial x^{j_{q}}}{\partial y^{\ell _{q}}}}} The system of indexed functions T j 1 , … , j q i 1 , … i p ( x 1 , … , x n ) {\displaystyle T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots i_{p}}(x^{1},\ldots ,x^{n})} (one system for each choice of coordinate system) connected by transformations as above are the tensors in the definitions below. Remark One can, more generally, take V {\displaystyle V} to be any vector bundle on M {\displaystyle M} , and V ∗ {\displaystyle V^{*}} its dual bundle. In that case M can be a more general topological space. These sections are called tensors of V {\displaystyle V} , or tensors for short if no confusion is possible. == Geometric introduction == Intuitively, a vector field is best visualized as an "arrow" attached to each point of a region, with variable length and direction. One example of a vector field on a curved space is a weather map showing horizontal wind velocity at each point of the Earth's surface. Now consider more complicated fields. For example, if the manifold is Riemannian, then it has a metric field g {\displaystyle g} , such that given any two vectors v , w {\displaystyle v,w} at point x {\displaystyle x} , their inner product is g x ( v , w ) {\displaystyle g_{x}(v,w)} . The field g {\displaystyle g} could be given in matrix form, but it depends on a choice of coordinates. It could instead be given as an ellipsoid of radius 1 at each point, which is coordinate-free. Applied to the Earth's surface, this is Tissot's indicatrix.
In general, we want to specify tensor fields in a coordinate-independent way: It should exist independently of latitude and longitude, or whatever particular "cartographic projection" we are using to introduce numerical coordinates. == Via coordinate transitions == Following Schouten (1951) and McConnell (1957), the concept of a tensor relies on a concept of a reference frame (or coordinate system), which may be fixed (relative to some background reference frame), but in general may be allowed to vary within some class of transformations of these coordinate systems. For example, coordinates belonging to the n-dimensional real coordinate space R n {\displaystyle \mathbb {R} ^{n}} may be subjected to arbitrary affine transformations: x k ↦ A j k x j + a k {\displaystyle x^{k}\mapsto A_{j}^{k}x^{j}+a^{k}} (with n-dimensional indices, summation implied). A covariant vector, or covector, is a system of functions v k {\displaystyle v_{k}} that transforms under this affine transformation by the rule v k ↦ v i A k i . {\displaystyle v_{k}\mapsto v_{i}A_{k}^{i}.} The list of Cartesian coordinate basis vectors e k {\displaystyle \mathbf {e} _{k}} transforms as a covector, since under the affine transformation e k ↦ A k i e i {\displaystyle \mathbf {e} _{k}\mapsto A_{k}^{i}\mathbf {e} _{i}} . A contravariant vector is a system of functions v k {\displaystyle v^{k}} of the coordinates that, under such an affine transformation undergoes a transformation v k ↦ ( A − 1 ) j k v j . {\displaystyle v^{k}\mapsto (A^{-1})_{j}^{k}v^{j}.} This is precisely the requirement needed to ensure that the quantity v k e k {\displaystyle v^{k}\mathbf {e} _{k}} is an invariant object that does not depend on the coordinate system chosen. More generally, the coordinates of a tensor of valence (p,q) have p upper indices and q lower indices, with the transformation law being T i 1 ⋯ i p j 1 ⋯ j q ↦ A i 1 ′ i 1 ⋯ A i p ′ i p T i 1 ′ ⋯ i p ′ j 1 ′ ⋯ j q ′ ( A − 1 ) j 1 j 1 ′ ⋯ ( A − 1 ) j q j q ′ . 
{\displaystyle {T^{i_{1}\cdots i_{p}}}_{j_{1}\cdots j_{q}}\mapsto A_{i'_{1}}^{i_{1}}\cdots A_{i'_{p}}^{i_{p}}{T^{i'_{1}\cdots i'_{p}}}_{j'_{1}\cdots j'_{q}}(A^{-1})_{j_{1}}^{j'_{1}}\cdots (A^{-1})_{j_{q}}^{j'_{q}}.} The concept of a tensor field may be obtained by specializing the allowed coordinate transformations to be smooth (or differentiable, analytic, etc.). A covector field is a function v k {\displaystyle v_{k}} of the coordinates that transforms by the Jacobian of the transition functions (in the given class). Likewise, a contravariant vector field v k {\displaystyle v^{k}} transforms by the inverse Jacobian. == Tensor bundles == A tensor bundle is a fiber bundle where the fiber is a tensor product of any number of copies of the tangent space and/or cotangent space of the base space, which is a manifold. As such, the fiber is a vector space and the tensor bundle is a special kind of vector bundle. The vector bundle is a natural idea of "vector space depending continuously (or smoothly) on parameters" – the parameters being the points of a manifold M. For example, a vector space of one dimension depending on an angle could look like a Möbius strip or alternatively like a cylinder. Given a vector bundle V over M, the corresponding field concept is called a section of the bundle: for m varying over M, a choice of vector vm in Vm, where Vm is the vector space "at" m. Since the tensor product concept is independent of any choice of basis, taking the tensor product of two vector bundles on M is routine. Starting with the tangent bundle (the bundle of tangent spaces) the whole apparatus explained at component-free treatment of tensors carries over in a routine way – again independently of coordinates, as mentioned in the introduction. We therefore can give a definition of tensor field, namely as a section of some tensor bundle. (There are vector bundles that are not tensor bundles: the Möbius band for instance.) 
This is then guaranteed geometric content, since everything has been done in an intrinsic way. More precisely, a tensor field assigns to any given point of the manifold a tensor in the space V ⊗ ⋯ ⊗ V ⊗ V ∗ ⊗ ⋯ ⊗ V ∗ , {\displaystyle V\otimes \cdots \otimes V\otimes V^{*}\otimes \cdots \otimes V^{*},} where V is the tangent space at that point and V∗ is the cotangent space. See also tangent bundle and cotangent bundle. Given two tensor bundles E → M and F → M, a linear map A: Γ(E) → Γ(F) from the space of sections of E to sections of F can be considered itself as a tensor section of E ∗ ⊗ F {\displaystyle \scriptstyle E^{*}\otimes F} if and only if it satisfies A(fs) = fA(s), for each section s in Γ(E) and each smooth function f on M. Thus a tensor section is not only a linear map on the vector space of sections, but a C∞(M)-linear map on the module of sections. This property is used to check, for example, that even though the Lie derivative and covariant derivative are not tensors, the torsion and curvature tensors built from them are. == Notation == The notation for tensor fields can sometimes be confusingly similar to the notation for tensor spaces. Thus, the tangent bundle TM = T(M) might sometimes be written as T 0 1 ( M ) = T ( M ) = T M {\displaystyle T_{0}^{1}(M)=T(M)=TM} to emphasize that the tangent bundle is the range space of the (1,0) tensor fields (i.e., vector fields) on the manifold M. This should not be confused with the very similar looking notation T 0 1 ( V ) {\displaystyle T_{0}^{1}(V)} ; in the latter case, we just have one tensor space, whereas in the former, we have a tensor space defined for each point in the manifold M. Curly (script) letters are sometimes used to denote the set of infinitely-differentiable tensor fields on M. Thus, T n m ( M ) {\displaystyle {\mathcal {T}}_{n}^{m}(M)} are the sections of the (m,n) tensor bundle on M that are infinitely-differentiable. A tensor field is an element of this set. 
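Returning to the coordinate picture of the section "Via coordinate transitions": the complementary transformation rules for covariant and contravariant components can be checked numerically. The sketch below uses an arbitrary invertible matrix standing in for the affine part A, and verifies that the scalar pairing of a covector with a vector is coordinate-independent.

```python
import numpy as np

rng = np.random.default_rng(0)

A = rng.normal(size=(3, 3))          # an invertible matrix standing in for A^k_j
while abs(np.linalg.det(A)) < 1e-6:  # re-draw in the (unlikely) singular case
    A = rng.normal(size=(3, 3))

v_cov = rng.normal(size=3)   # covariant components v_k
w_con = rng.normal(size=3)   # contravariant components w^k

# Transformation rules from the text:
#   covariant:     v_k  ->  v_i A^i_k
#   contravariant: w^k  ->  (A^-1)^k_j w^j
v_cov_new = v_cov @ A
w_con_new = np.linalg.inv(A) @ w_con

# The scalar pairing v_k w^k is unchanged by the coordinate change:
print(np.allclose(v_cov @ w_con, v_cov_new @ w_con_new))   # True
```

The inverse on the contravariant side is exactly what cancels the matrix on the covariant side, which is the mechanism behind the invariance of v^k e_k noted earlier.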
== Tensor fields as multilinear forms == There is another more abstract (but often useful) way of characterizing tensor fields on a manifold M, which makes tensor fields into honest tensors (i.e. single multilinear mappings), though of a different type (although this is not usually why one often says "tensor" when one really means "tensor field"). First, we may consider the set of all smooth (C∞) vector fields on M, X ( M ) := T 0 1 ( M ) {\displaystyle {\mathfrak {X}}(M):={\mathcal {T}}_{0}^{1}(M)} (see the section on notation above) as a single space – a module over the ring of smooth functions, C∞(M), by pointwise scalar multiplication. The notions of multilinearity and tensor products extend easily to the case of modules over any commutative ring. As a motivating example, consider the space Ω 1 ( M ) = T 1 0 ( M ) {\displaystyle \Omega ^{1}(M)={\mathcal {T}}_{1}^{0}(M)} of smooth covector fields (1-forms), also a module over the smooth functions. These act on smooth vector fields to yield smooth functions by pointwise evaluation, namely, given a covector field ω and a vector field X, we define ω ~ ( X ) ( p ) := ω ( p ) ( X ( p ) ) . {\displaystyle {\tilde {\omega }}(X)(p):=\omega (p)(X(p)).} Because of the pointwise nature of everything involved, the action of ω ~ {\displaystyle {\tilde {\omega }}} on X is a C∞(M)-linear map, that is, ω ~ ( f X ) ( p ) = ω ( p ) ( ( f X ) ( p ) ) = ω ( p ) ( f ( p ) X ( p ) ) = f ( p ) ω ( p ) ( X ( p ) ) = ( f ω ) ( p ) ( X ( p ) ) = ( f ω ~ ) ( X ) ( p ) {\displaystyle {\tilde {\omega }}(fX)(p)=\omega (p)((fX)(p))=\omega (p)(f(p)X(p))=f(p)\omega (p)(X(p))=(f\omega )(p)(X(p))=(f{\tilde {\omega }})(X)(p)} for any p in M and smooth function f. Thus we can regard covector fields not just as sections of the cotangent bundle, but also linear mappings of vector fields into functions. 
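The pointwise C∞(M)-linearity computed above can be mimicked directly in code by representing functions, vector fields, and covector fields as Python callables of the point p. This is only an illustrative sketch on a chart of R², with arbitrarily chosen fields.

```python
import numpy as np

# A smooth function, a vector field, and a covector field on (a chart of)
# R^2, each represented pointwise as a callable of p = (p1, p2).
f = lambda p: np.sin(p[0]) + p[1] ** 2
X = lambda p: np.array([p[1], -p[0]])          # vector field components X^i(p)
omega = lambda p: np.array([p[0], 2 * p[1]])   # covector field components w_i(p)

def act(omega, X):
    """The action of a covector field on a vector field: the smooth
    function p -> omega(p)(X(p)), evaluated pointwise."""
    return lambda p: omega(p) @ X(p)

fX = lambda p: f(p) * X(p)   # the rescaled vector field fX

p = np.array([0.3, -1.2])
lhs = act(omega, fX)(p)          # omega-tilde(fX)(p)
rhs = f(p) * act(omega, X)(p)    # f(p) * omega-tilde(X)(p)
print(np.isclose(lhs, rhs))     # True: the action is C^inf(M)-linear
```

Because every operation happens point by point, the scalar f(p) can be pulled out of the evaluation, which is all that C∞(M)-linearity asserts.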
By the double-dual construction, vector fields can similarly be expressed as mappings of covector fields into functions (namely, we could start "natively" with covector fields and work up from there). In a complete parallel to the construction of ordinary single tensors (not tensor fields!) on M as multilinear maps on vectors and covectors, we can regard general (k,l) tensor fields on M as C∞(M)-multilinear maps defined on k copies of X ( M ) {\displaystyle {\mathfrak {X}}(M)} and l copies of Ω 1 ( M ) {\displaystyle \Omega ^{1}(M)} into C∞(M). Now, given any arbitrary mapping T from a product of k copies of X ( M ) {\displaystyle {\mathfrak {X}}(M)} and l copies of Ω 1 ( M ) {\displaystyle \Omega ^{1}(M)} into C∞(M), it turns out that it arises from a tensor field on M if and only if it is multilinear over C∞(M). Namely, the C∞(M)-module of tensor fields of type ( k , l ) {\displaystyle (k,l)} over M is canonically isomorphic to the C∞(M)-module of C∞(M)-multilinear forms Ω 1 ( M ) × … × Ω 1 ( M ) ⏟ l t i m e s × X ( M ) × … × X ( M ) ⏟ k t i m e s → C ∞ ( M ) . {\displaystyle \underbrace {\Omega ^{1}(M)\times \ldots \times \Omega ^{1}(M)} _{l\ \mathrm {times} }\times \underbrace {{\mathfrak {X}}(M)\times \ldots \times {\mathfrak {X}}(M)} _{k\ \mathrm {times} }\to C^{\infty }(M).} This kind of multilinearity implicitly expresses the fact that we're really dealing with a pointwise-defined object, i.e. a tensor field, as opposed to a function which, even when evaluated at a single point, depends on all the values of vector fields and 1-forms simultaneously. A frequent application of this general rule is showing that the Levi-Civita connection, which is a mapping of smooth vector fields ( X , Y ) ↦ ∇ X Y {\displaystyle (X,Y)\mapsto \nabla _{X}Y} taking a pair of vector fields to a vector field, does not define a tensor field on M.
This is because it is only R {\displaystyle \mathbb {R} } -linear in Y (in place of full C∞(M)-linearity, it satisfies the Leibniz rule, ∇ X ( f Y ) = ( X f ) Y + f ∇ X Y {\displaystyle \nabla _{X}(fY)=(Xf)Y+f\nabla _{X}Y} )). Nevertheless, it must be stressed that even though it is not a tensor field, it still qualifies as a geometric object with a component-free interpretation. == Applications == The curvature tensor is discussed in differential geometry and the stress–energy tensor is important in physics, and these two tensors are related by Einstein's theory of general relativity. In electromagnetism, the electric and magnetic fields are combined into an electromagnetic tensor field. Differential forms, used in defining integration on manifolds, are a type of tensor field. == Tensor calculus == In theoretical physics and other fields, differential equations posed in terms of tensor fields provide a very general way to express relationships that are both geometric in nature (guaranteed by the tensor nature) and conventionally linked to differential calculus. Even to formulate such equations requires a fresh notion, the covariant derivative. This handles the formulation of variation of a tensor field along a vector field. The original absolute differential calculus notion, which was later called tensor calculus, led to the isolation of the geometric concept of connection. == Twisting by a line bundle == An extension of the tensor field idea incorporates an extra line bundle L on M. If W is the tensor product bundle of V with L, then W is a bundle of vector spaces of just the same dimension as V. This allows one to define the concept of tensor density, a 'twisted' type of tensor field. A tensor density is the special case where L is the bundle of densities on a manifold, namely the determinant bundle of the cotangent bundle. 
(To be strictly accurate, one should also apply the absolute value to the transition functions – this makes little difference for an orientable manifold.) For a more traditional explanation see the tensor density article. One feature of the bundle of densities (again assuming orientability) L is that Ls is well-defined for real number values of s; this can be read from the transition functions, which take strictly positive real values. This means for example that we can take a half-density, the case where s = ⁠1/2⁠. In general we can take sections of W, the tensor product of V with Ls, and consider tensor density fields with weight s. Half-densities are applied in areas such as defining integral operators on manifolds, and geometric quantization. == Flat case == When M is a Euclidean space and all the fields are taken to be invariant by translations by the vectors of M, we get back to a situation where a tensor field is synonymous with a tensor 'sitting at the origin'. This does no great harm, and is often used in applications. As applied to tensor densities, it does make a difference. The bundle of densities cannot seriously be defined 'at a point'; and therefore a limitation of the contemporary mathematical treatment of tensors is that tensor densities are defined in a roundabout fashion. == Cocycles and chain rules == As an advanced explanation of the tensor concept, one can interpret the chain rule in the multivariable case, as applied to coordinate changes, also as the requirement for self-consistent concepts of tensor giving rise to tensor fields. Abstractly, we can identify the chain rule as a 1-cocycle. It gives the consistency required to define the tangent bundle in an intrinsic way. The other vector bundles of tensors have comparable cocycles, which come from applying functorial properties of tensor constructions to the chain rule itself; this is why they also are intrinsic (read, 'natural') concepts. 
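The 1-cocycle property of the chain rule admits a direct symbolic check: for coordinate changes x → y → z, the Jacobian of the composite is the product of the Jacobians. The sketch below uses a hypothetical pair of smooth transition maps on (an open subset of) R².

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# Two hypothetical smooth transition maps:
y = sp.Matrix([x1 + x2**2, sp.exp(x2)])                # y = phi(x)
z_of_y = lambda y1, y2: sp.Matrix([y1 * y2, y1 - y2])  # z = psi(y)
z = z_of_y(*y)                                         # z = psi(phi(x))

J_phi = y.jacobian([x1, x2])                 # J_{x->y}
y1, y2 = sp.symbols('y1 y2')
J_psi = z_of_y(y1, y2).jacobian([y1, y2])    # J_{y->z} in the y coordinates
J_psi_at_x = J_psi.subs({y1: y[0], y2: y[1]})
J_comp = z.jacobian([x1, x2])                # J_{x->z} computed directly

# The cocycle (chain rule) condition: J_{x->z} = J_{y->z} J_{x->y}
print(sp.simplify(J_comp - J_psi_at_x * J_phi))  # the zero matrix
```

This multiplicative consistency across overlapping charts is the "1-cocycle" the text refers to, and it is what makes the tangent bundle well defined.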
What is usually spoken of as the 'classical' approach to tensors tries to read this backwards – and is therefore a heuristic, post hoc approach rather than truly a foundational one. Implicit in defining tensors by how they transform under a coordinate change is the kind of self-consistency the cocycle expresses. The construction of tensor densities is a 'twisting' at the cocycle level. Geometers have not been in any doubt about the geometric nature of tensor quantities; this kind of descent argument justifies abstractly the whole theory. == Generalizations == === Tensor densities === The concept of a tensor field can be generalized by considering objects that transform differently. An object that transforms as an ordinary tensor field under coordinate transformations, except that it is also multiplied by the determinant of the Jacobian of the inverse coordinate transformation to the wth power, is called a tensor density with weight w. Invariantly, in the language of multilinear algebra, one can think of tensor densities as multilinear maps taking their values in a density bundle such as the (1-dimensional) space of n-forms (where n is the dimension of the space), as opposed to taking their values in just R. Higher "weights" then just correspond to taking additional tensor products with this space in the range. A special case is that of scalar densities. Scalar 1-densities are especially important because it makes sense to define their integral over a manifold. They appear, for instance, in the Einstein–Hilbert action in general relativity. The most common example of a scalar 1-density is the volume element, which in the presence of a metric tensor g is the square root of its determinant in coordinates, denoted det g {\displaystyle {\sqrt {\det g}}} .
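As a concrete instance of the volume element (a standard computation, sketched here symbolically): pulling the Euclidean metric on R² back to polar coordinates gives g = diag(1, r²), whose square-rooted determinant is the familiar area element r dr dθ.

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# Cartesian coordinates as functions of the polar coordinates.
x = r * sp.cos(theta)
y = r * sp.sin(theta)

# Jacobian of the map (r, theta) -> (x, y).
J = sp.Matrix([x, y]).jacobian([r, theta])

# Components of the Euclidean metric pulled back to polar coordinates:
# g_ij = J^T J, since the Cartesian metric is the identity matrix.
g = sp.simplify(J.T * J)
print(g)                    # Matrix([[1, 0], [0, r**2]])
print(sp.sqrt(g.det()))     # r  -- the volume element sqrt(det g)
```

Integrating a function against this density reproduces the usual change-of-variables factor r in polar integrals.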
The metric tensor is a covariant tensor of order 2, and so its determinant scales by the square of the coordinate transition: det ( g ′ ) = ( det ∂ x ∂ x ′ ) 2 det ( g ) , {\displaystyle \det(g')=\left(\det {\frac {\partial x}{\partial x'}}\right)^{2}\det(g),} which is the transformation law for a scalar density of weight +2. More generally, any tensor density is the product of an ordinary tensor with a scalar density of the appropriate weight. In the language of vector bundles, the determinant bundle of the tangent bundle is a line bundle that can be used to 'twist' other bundles w times. While locally the more general transformation law can indeed be used to recognise these tensors, there is a global question that arises, reflecting that in the transformation law one may write either the Jacobian determinant, or its absolute value. Non-integral powers of the (positive) transition functions of the bundle of densities make sense, so that the weight of a density, in that sense, is not restricted to integer values. Restricting to changes of coordinates with positive Jacobian determinant is possible on orientable manifolds, because there is a consistent global way to eliminate the minus signs; but otherwise the line bundle of densities and the line bundle of n-forms are distinct. For more on the intrinsic meaning, see Density on a manifold. == See also == Bitensor – Tensorial object depending on two points in a manifold Jet bundle – Construction in differential topology Ricci calculus – Tensor index notation for tensor-based calculations Spinor field – Geometric structurePages displaying short descriptions of redirect targets == Notes == == References == O'neill, Barrett (1983). Semi-Riemannian Geometry With Applications to Relativity. Elsevier Science. ISBN 9780080570570. Frankel, T. (2012), The Geometry of Physics (3rd edition), Cambridge University Press, ISBN 978-1-107-60260-1. Lambourne [Open University], R.J.A. 
(2010), Relativity, Gravitation, and Cosmology, Cambridge University Press, Bibcode:2010rgc..book.....L, ISBN 978-0-521-13138-4. Lerner, R.G.; Trigg, G.L. (1991), Encyclopaedia of Physics (2nd Edition), VHC Publishers. McConnell, A. J. (1957), Applications of Tensor Analysis, Dover Publications, ISBN 9780486145020. McMahon, D. (2006), Relativity DeMystified, McGraw Hill (USA), ISBN 0-07-145545-0. C. Misner, K. S. Thorne, J. A. Wheeler (1973), Gravitation, W.H. Freeman & Co, ISBN 0-7167-0344-0. Parker, C.B. (1994), McGraw Hill Encyclopaedia of Physics (2nd Edition), McGraw Hill, ISBN 0-07-051400-3. Schouten, Jan Arnoldus (1951), Tensor Analysis for Physicists, Oxford University Press. Steenrod, Norman (5 April 1999). The Topology of Fibre Bundles. Princeton Mathematical Series. Vol. 14. Princeton, N.J.: Princeton University Press. ISBN 978-0-691-00548-5. OCLC 40734875.
Wikipedia/Tensor_fields
Geometric transformations can be distinguished into two types: active or alibi transformations which change the physical position of a set of points relative to a fixed frame of reference or coordinate system (alibi meaning "being somewhere else at the same time"); and passive or alias transformations which leave points fixed but change the frame of reference or coordinate system relative to which they are described (alias meaning "going under a different name"). By transformation, mathematicians usually refer to active transformations, while physicists and engineers could mean either. For instance, active transformations are useful to describe successive positions of a rigid body. On the other hand, passive transformations may be useful in human motion analysis to observe the motion of the tibia relative to the femur, that is, its motion relative to a (local) coordinate system which moves together with the femur, rather than a (global) coordinate system which is fixed to the floor. In three-dimensional Euclidean space, any proper rigid transformation, whether active or passive, can be represented as a screw displacement, the composition of a translation along an axis and a rotation about that axis. The terms active transformation and passive transformation were first introduced in 1957 by Valentine Bargmann for describing Lorentz transformations in special relativity. == Example == As an example, let the vector v = ( v 1 , v 2 ) ∈ R 2 {\displaystyle \mathbf {v} =(v_{1},v_{2})\in \mathbb {R} ^{2}} , be a vector in the plane. A rotation of the vector through an angle θ in counterclockwise direction is given by the rotation matrix: R = ( cos ⁡ θ − sin ⁡ θ sin ⁡ θ cos ⁡ θ ) , {\displaystyle R={\begin{pmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \end{pmatrix}},} which can be viewed either as an active transformation or a passive transformation (where the above matrix will be inverted), as described below. 
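The two readings of the rotation matrix can be compared numerically (a small sketch with an arbitrarily chosen vector): as an active transformation, R moves the vector itself; as a passive transformation, the basis is rotated by R⁻¹ instead, and the unchanged vector acquires new coordinates.

```python
import numpy as np

theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([2.0, 1.0])

# Active reading: the vector itself is rotated.
v_active = R @ v

# Passive reading: v is unchanged, the basis vectors transform by R^-1,
# and v picks up new coordinates R @ v with respect to the new basis.
Rinv = np.linalg.inv(R)
e_X, e_Y = Rinv @ np.array([1.0, 0.0]), Rinv @ np.array([0.0, 1.0])
v_passive_coords = R @ v

# The new coordinates reassemble the *same* vector in the new basis ...
print(np.allclose(v_passive_coords[0] * e_X + v_passive_coords[1] * e_Y, v))  # True
# ... and coincide with the coordinates of the actively rotated vector.
print(np.allclose(v_passive_coords, v_active))  # True
```

The second check is the equivalence noted later in the article: the coordinates of the actively transformed point equal the new coordinates of the fixed point under the passive transformation.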
== Spatial transformations in the Euclidean space R3 == In general a spatial transformation T : R 3 → R 3 {\displaystyle T\colon \mathbb {R} ^{3}\to \mathbb {R} ^{3}} may consist of a translation and a linear transformation. In the following, the translation will be omitted, and the linear transformation will be represented by a 3×3 matrix T {\displaystyle T} . === Active transformation === As an active transformation, T {\displaystyle T} transforms the initial vector v = ( v x , v y , v z ) {\displaystyle \mathbf {v} =(v_{x},v_{y},v_{z})} into a new vector v ′ = ( v x ′ , v y ′ , v z ′ ) = T v = T ( v x , v y , v z ) {\displaystyle \mathbf {v} '=(v'_{x},v'_{y},v'_{z})=T\mathbf {v} =T(v_{x},v_{y},v_{z})} . If one views { e x ′ = T ( 1 , 0 , 0 ) , e y ′ = T ( 0 , 1 , 0 ) , e z ′ = T ( 0 , 0 , 1 ) } {\displaystyle \{\mathbf {e} '_{x}=T(1,0,0),\ \mathbf {e} '_{y}=T(0,1,0),\ \mathbf {e} '_{z}=T(0,0,1)\}} as a new basis, then the coordinates of the new vector v ′ = v x e x ′ + v y e y ′ + v z e z ′ {\displaystyle \mathbf {v} '=v_{x}\mathbf {e} '_{x}+v_{y}\mathbf {e} '_{y}+v_{z}\mathbf {e} '_{z}} in the new basis are the same as those of v = v x e x + v y e y + v z e z {\displaystyle \mathbf {v} =v_{x}\mathbf {e} _{x}+v_{y}\mathbf {e} _{y}+v_{z}\mathbf {e} _{z}} in the original basis. Note that active transformations make sense even as a linear transformation into a different vector space. It makes sense to write the new vector in the unprimed basis (as above) only when the transformation is from the space into itself. === Passive transformation === On the other hand, when one views T {\displaystyle T} as a passive transformation, the initial vector v = ( v x , v y , v z ) {\displaystyle \mathbf {v} =(v_{x},v_{y},v_{z})} is left unchanged, while the coordinate system and its basis vectors are transformed in the opposite direction, that is, with the inverse transformation T − 1 {\displaystyle T^{-1}} . 
This gives a new coordinate system XYZ with basis vectors: e X = T − 1 ( 1 , 0 , 0 ) , e Y = T − 1 ( 0 , 1 , 0 ) , e Z = T − 1 ( 0 , 0 , 1 ) {\displaystyle \mathbf {e} _{X}=T^{-1}(1,0,0),\ \mathbf {e} _{Y}=T^{-1}(0,1,0),\ \mathbf {e} _{Z}=T^{-1}(0,0,1)} The new coordinates ( v X , v Y , v Z ) {\displaystyle (v_{X},v_{Y},v_{Z})} of v {\displaystyle \mathbf {v} } with respect to the new coordinate system XYZ are given by: v = ( v x , v y , v z ) = v X e X + v Y e Y + v Z e Z = T − 1 ( v X , v Y , v Z ) . {\displaystyle \mathbf {v} =(v_{x},v_{y},v_{z})=v_{X}\mathbf {e} _{X}+v_{Y}\mathbf {e} _{Y}+v_{Z}\mathbf {e} _{Z}=T^{-1}(v_{X},v_{Y},v_{Z}).} From this equation one sees that the new coordinates are given by ( v X , v Y , v Z ) = T ( v x , v y , v z ) . {\displaystyle (v_{X},v_{Y},v_{Z})=T(v_{x},v_{y},v_{z}).} As a passive transformation T {\displaystyle T} transforms the old coordinates into the new ones. Note the equivalence between the two kinds of transformations: the coordinates of the new point in the active transformation and the new coordinates of the point in the passive transformation are the same, namely ( v X , v Y , v Z ) = ( v x ′ , v y ′ , v z ′ ) . {\displaystyle (v_{X},v_{Y},v_{Z})=(v'_{x},v'_{y},v'_{z}).} == In abstract vector spaces == The distinction between active and passive transformations can be seen mathematically by considering abstract vector spaces. Fix a finite-dimensional vector space V {\displaystyle V} over a field K {\displaystyle K} (thought of as R {\displaystyle \mathbb {R} } or C {\displaystyle \mathbb {C} } ), and a basis B = { e i } 1 ≤ i ≤ n {\displaystyle {\mathcal {B}}=\{e_{i}\}_{1\leq i\leq n}} of V {\displaystyle V} . This basis provides an isomorphism C : K n → V {\displaystyle C:K^{n}\rightarrow V} via the component map ( v i ) 1 ≤ i ≤ n = ( v 1 , ⋯ , v n ) ↦ ∑ i v i e i {\textstyle (v_{i})_{1\leq i\leq n}=(v_{1},\cdots ,v_{n})\mapsto \sum _{i}v_{i}e_{i}} . 
An active transformation is then an endomorphism on V {\displaystyle V} , that is, a linear map from V {\displaystyle V} to itself. Taking such a transformation τ ∈ End ( V ) {\displaystyle \tau \in {\text{End}}(V)} , a vector v ∈ V {\displaystyle v\in V} transforms as v ↦ τ v {\displaystyle v\mapsto \tau v} . The components of τ {\displaystyle \tau } with respect to the basis B {\displaystyle {\mathcal {B}}} are defined via the equation τ e i = ∑ j τ j i e j {\textstyle \tau e_{i}=\sum _{j}\tau _{ji}e_{j}} . Then, the components of v {\displaystyle v} transform as v i ↦ τ i j v j {\displaystyle v_{i}\mapsto \tau _{ij}v_{j}} . A passive transformation is instead an endomorphism on K n {\displaystyle K^{n}} . This is applied to the components: v i ↦ T i j v j =: v i ′ {\displaystyle v_{i}\mapsto T_{ij}v_{j}=:v'_{i}} . Provided that T {\displaystyle T} is invertible, the new basis B ′ = { e i ′ } {\displaystyle {\mathcal {B}}'=\{e'_{i}\}} is determined by asking that v i e i = v i ′ e i ′ {\displaystyle v_{i}e_{i}=v'_{i}e'_{i}} , from which the expression e i ′ = ( T − 1 ) j i e j {\displaystyle e'_{i}=(T^{-1})_{ji}e_{j}} can be derived. Although the spaces End ( V ) {\displaystyle {\text{End}}(V)} and End ( K n ) {\displaystyle {\text{End}}({K^{n}})} are isomorphic, they are not canonically isomorphic. Nevertheless a choice of basis B {\displaystyle {\mathcal {B}}} allows construction of an isomorphism. === As left- and right-actions === Often one restricts to the case where the maps are invertible, so that active transformations are the general linear group GL ( V ) {\displaystyle {\text{GL}}(V)} of transformations while passive transformations are the group GL ( n , K ) {\displaystyle {\text{GL}}(n,K)} . The transformations can then be understood as acting on the space of bases for V {\displaystyle V} . An active transformation τ ∈ GL ( V ) {\displaystyle \tau \in {\text{GL}}(V)} sends the basis { e i } ↦ { τ e i } {\displaystyle \{e_{i}\}\mapsto \{\tau e_{i}\}} . 
Meanwhile, a passive transformation T ∈ GL ( n , K ) {\displaystyle T\in {\text{GL}}(n,K)} sends the basis { e i } ↦ { ∑ j ( T − 1 ) j i e j } {\textstyle \{e_{i}\}\mapsto \left\{\sum _{j}(T^{-1})_{ji}e_{j}\right\}} . The inverse in the passive transformation ensures the components transform identically under τ {\displaystyle \tau } and T {\displaystyle T} . This then gives a sharp distinction between active and passive transformations: active transformations act from the left on bases, while passive transformations act from the right, due to the inverse. This observation is made more natural by viewing bases B {\displaystyle {\mathcal {B}}} as a choice of isomorphism Φ B : K n → V {\displaystyle \Phi _{\mathcal {B}}:K^{n}\rightarrow V} . The space of bases is equivalently the space of such isomorphisms, denoted Iso ( K n , V ) {\displaystyle {\text{Iso}}(K^{n},V)} . Active transformations, identified with GL ( V ) {\displaystyle {\text{GL}}(V)} , act on Iso ( K n , V ) {\displaystyle {\text{Iso}}(K^{n},V)} from the left by composition, that is, if τ {\displaystyle \tau } represents an active transformation, we have Φ B ′ = τ ∘ Φ B {\displaystyle \Phi _{\mathcal {B'}}=\tau \circ \Phi _{\mathcal {B}}} . Conversely, passive transformations, identified with GL ( n , K ) {\displaystyle {\text{GL}}(n,K)} , act on Iso ( K n , V ) {\displaystyle {\text{Iso}}(K^{n},V)} from the right by pre-composition, that is, if T {\displaystyle T} represents a passive transformation, we have Φ B ″ = Φ B ∘ T {\displaystyle \Phi _{\mathcal {B''}}=\Phi _{\mathcal {B}}\circ T} . This turns the space of bases into a left GL ( V ) {\displaystyle {\text{GL}}(V)} -torsor and a right GL ( n , K ) {\displaystyle {\text{GL}}(n,K)} -torsor. From a physical perspective, active transformations can be characterized as transformations of physical space, while passive transformations are characterized as redundancies in the description of physical space.
This plays an important role in mathematical gauge theory, where gauge transformations are described mathematically by transition maps which act from the right on fibers. == See also == Change of basis Covariance and contravariance of vectors Rotation of axes Translation of axes == References == Dirk Struik (1953) Lectures on Analytic and Projective Geometry, page 84, Addison-Wesley. == External links == UI ambiguity
Wikipedia/Passive_transformation
In general relativity, Regge calculus is a formalism for producing simplicial approximations of spacetimes that are solutions to the Einstein field equation. The calculus was introduced by the Italian theoretician Tullio Regge in 1961. == Overview == The starting point for Regge's work is the fact that every four dimensional time orientable Lorentzian manifold admits a triangulation into simplices. Furthermore, the spacetime curvature can be expressed in terms of deficit angles associated with 2-faces where arrangements of 4-simplices meet. These 2-faces play the same role as the vertices where arrangements of triangles meet in a triangulation of a 2-manifold, which is easier to visualize. Here a vertex with a positive angular deficit represents a concentration of positive Gaussian curvature, whereas a vertex with a negative angular deficit represents a concentration of negative Gaussian curvature. The deficit angles can be computed directly from the various edge lengths in the triangulation, which is equivalent to saying that the Riemann curvature tensor can be computed from the metric tensor of a Lorentzian manifold. Regge showed that the vacuum field equations can be reformulated as a restriction on these deficit angles. He then showed how this can be applied to evolve an initial spacelike hyperslice according to the vacuum field equation. The result is that, starting with a triangulation of some spacelike hyperslice (which must itself satisfy a certain constraint equation), one can eventually obtain a simplicial approximation to a vacuum solution. This can be applied to difficult problems in numerical relativity such as simulating the collision of two black holes. The elegant idea behind Regge calculus has motivated the construction of further generalizations of this idea. In particular, Regge calculus has been adapted to study quantum gravity. == See also == == Notes == == References == John Archibald Wheeler (1965). 
"Geometrodynamics and the Issue of the Final State, in "Relativity Groups and Topology"". Les Houches Lecture Notes 1963, Gordon and Breach. Misner, Charles W.; Thorne, Kip S. & Wheeler, John Archibald (1973). Gravitation. San Francisco: W. H. Freeman. ISBN 978-0-7167-0344-0. See chapter 42. Herbert W. Hamber (2009). Hamber, Herbert W (ed.). Quantum Gravitation - The Feynman Path Integral Approach. Springer Publishing. doi:10.1007/978-3-540-85293-3. ISBN 978-3-540-85292-6. Chapters 4 and 6. James B. Hartle (1985). "Simplicial MiniSuperSpace I. General Discussion". Journal of Mathematical Physics. 26 (4): 804–812. Bibcode:1985JMP....26..804H. doi:10.1063/1.526571. Ruth M. Williams & Philip A. Tuckey (1992). "Regge calculus: a brief review and bibliography". Class. Quantum Grav. 9 (5): 1409–1422. Bibcode:1992CQGra...9.1409W. doi:10.1088/0264-9381/9/5/021. S2CID 250776873. Available (subscribers only) at "Classical and Quantum Gravity". Tullio E. Regge and Ruth M. Williams (2000). "Discrete Structures in Gravity". Journal of Mathematical Physics. 41 (6): 3964–3984. arXiv:gr-qc/0012035. Bibcode:2000JMP....41.3964R. doi:10.1063/1.533333. S2CID 118957627. Herbert W. Hamber (1984). "Simplicial Quantum Gravity, in the Les Houches Summer School on Critical Phenomena, Random Systems and Gauge Theories, Session XLIII". North Holland Elsevier: 375–439. Adrian P. Gentle (2002). "Regge calculus: a unique tool for numerical relativity". Gen. Rel. Grav. 34 (10): 1701–1718. doi:10.1023/A:1020128425143. S2CID 119090423. Renate Loll (1998). "Discrete approaches to quantum gravity in four dimensions". Living Rev. Relativ. 1 (1): 13. arXiv:gr-qc/9805049. Bibcode:1998LRR.....1...13L. doi:10.12942/lrr-1998-13. PMC 5253799. PMID 28191826.
Available at "Living Reviews of Relativity". See section 3. J. W. Barrett (1987). "The geometry of classical Regge calculus". Class. Quantum Grav. 4 (6): 1565–1576. Bibcode:1987CQGra...4.1565B. doi:10.1088/0264-9381/4/6/015. S2CID 250783980. Available (subscribers only) at "Classical and Quantum Gravity". == External links == Regge calculus on ScienceWorld
TensorFlow is a software library for machine learning and artificial intelligence. It can be used across a range of tasks, but is used mainly for training and inference of neural networks. It is one of the most popular deep learning frameworks, alongside others such as PyTorch. It is free and open-source software released under the Apache License 2.0. It was developed by the Google Brain team for Google's internal use in research and production. The initial version was released under the Apache License 2.0 in 2015. Google released an updated version, TensorFlow 2.0, in September 2019. TensorFlow can be used in a wide variety of programming languages, including Python, JavaScript, C++, and Java, facilitating its use in a range of applications in many sectors. == History == === DistBelief === Starting in 2011, Google Brain built DistBelief as a proprietary machine learning system based on deep learning neural networks. Its use grew rapidly across diverse Alphabet companies in both research and commercial applications. Google assigned multiple computer scientists, including Jeff Dean, to simplify and refactor the codebase of DistBelief into a faster, more robust application-grade library, which became TensorFlow. In 2009, the team, led by Geoffrey Hinton, had implemented generalized backpropagation and other improvements, which allowed generation of neural networks with substantially higher accuracy, for instance a 25% reduction in errors in speech recognition. === TensorFlow === TensorFlow is Google Brain's second-generation system. Version 1.0.0 was released on February 11, 2017. While the reference implementation runs on single devices, TensorFlow can run on multiple CPUs and GPUs (with optional CUDA and SYCL extensions for general-purpose computing on graphics processing units). TensorFlow is available on 64-bit Linux, macOS, Windows, and mobile computing platforms including Android and iOS. 
Its flexible architecture allows for easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices. TensorFlow computations are expressed as stateful dataflow graphs. The name TensorFlow derives from the operations that such neural networks perform on multidimensional data arrays, which are referred to as tensors. During the Google I/O Conference in June 2016, Jeff Dean stated that 1,500 repositories on GitHub mentioned TensorFlow, of which only 5 were from Google. In March 2018, Google announced TensorFlow.js version 1.0 for machine learning in JavaScript. In January 2019, Google announced TensorFlow 2.0. It became officially available in September 2019. In May 2019, Google announced TensorFlow Graphics for deep learning in computer graphics. === Tensor processing unit (TPU) === In May 2016, Google announced its Tensor processing unit (TPU), an application-specific integrated circuit (ASIC, a hardware chip) built specifically for machine learning and tailored for TensorFlow. A TPU is a programmable AI accelerator designed to provide high throughput of low-precision arithmetic (e.g., 8-bit), and oriented toward using or running models rather than training them. Google announced they had been running TPUs inside their data centers for more than a year, and had found them to deliver an order of magnitude better-optimized performance per watt for machine learning. In May 2017, Google announced the second-generation TPU, as well as the availability of TPUs in Google Compute Engine. The second-generation TPUs deliver up to 180 teraflops of performance, and when organized into clusters of 64 TPUs, provide up to 11.5 petaflops. In May 2018, Google announced the third-generation TPUs delivering up to 420 teraflops of performance and 128 GB high bandwidth memory (HBM). Cloud TPU v3 Pods offer 100+ petaflops of performance and 32 TB HBM. 
In February 2018, Google announced that they were making TPUs available in beta on the Google Cloud Platform. === Edge TPU === In July 2018, the Edge TPU was announced. Edge TPU is Google's purpose-built ASIC chip designed to run TensorFlow Lite machine learning (ML) models on small client computing devices such as smartphones, an approach known as edge computing. === TensorFlow Lite === In May 2017, Google announced a software stack specifically for mobile development, TensorFlow Lite. In January 2019, the TensorFlow team released a developer preview of the mobile GPU inference engine with OpenGL ES 3.1 Compute Shaders on Android devices and Metal Compute Shaders on iOS devices. In May 2019, Google announced that their TensorFlow Lite Micro (also known as TensorFlow Lite for Microcontrollers) and ARM's uTensor would be merging. === TensorFlow 2.0 === As TensorFlow's market share among research papers was declining to the advantage of PyTorch, the TensorFlow Team announced the release of a new major version of the library in September 2019. TensorFlow 2.0 introduced many changes, the most significant being TensorFlow eager, which changed the automatic differentiation scheme from the static computational graph to the "Define-by-Run" scheme originally made popular by Chainer and later PyTorch. Other major changes included removal of old libraries, cross-compatibility between trained models on different versions of TensorFlow, and significant improvements to performance on GPU. == Features == === AutoDifferentiation === AutoDifferentiation is the process of automatically calculating the gradient vector of a model with respect to each of its parameters. With this feature, TensorFlow can automatically compute the gradients for the parameters in a model, which is useful to algorithms such as backpropagation which require gradients to optimize performance. 
To do so, the framework must keep track of the order of operations done to the input Tensors in a model, and then compute the gradients with respect to the appropriate parameters. === Eager execution === TensorFlow includes an "eager execution" mode, which means that operations are evaluated immediately as opposed to being added to a computational graph which is executed later. Code executed eagerly can be examined step by step through a debugger, since concrete values are produced at each line of code rather than later in a computational graph. This execution paradigm is considered to be easier to debug because of its step-by-step transparency. === Distribute === In both eager and graph executions, TensorFlow provides an API for distributing computation across multiple devices with various distribution strategies. This distributed computing can often speed up the execution of training and evaluation of TensorFlow models and is a common practice in the field of AI. === Losses === To train and assess models, TensorFlow provides a set of loss functions (also known as cost functions). Some popular examples include mean squared error (MSE) and binary cross entropy (BCE). === Metrics === In order to assess the performance of machine learning models, TensorFlow gives API access to commonly used metrics. Examples include various accuracy metrics (binary, categorical, sparse categorical) along with other metrics such as Precision, Recall, and Intersection-over-Union (IoU). === TF.nn === tf.nn is a module for executing primitive neural network operations on models. Some of these operations include variations of convolutions (1/2/3D, Atrous, depthwise), activation functions (Softmax, RELU, GELU, Sigmoid, etc.) and their variations, and other operations (max-pooling, bias-add, etc.). === Optimizers === TensorFlow offers a set of optimizers for training neural networks, including ADAM, ADAGRAD, and Stochastic Gradient Descent (SGD). 
When training a model, different optimizers offer different modes of parameter tuning, often affecting a model's convergence and performance. == Usage and extensions == === TensorFlow === TensorFlow serves as a core platform and library for machine learning. TensorFlow's APIs use Keras to allow users to make their own machine-learning models. In addition to building and training their model, TensorFlow can also help load the data to train the model, and deploy it using TensorFlow Serving. TensorFlow provides a stable Python Application Programming Interface (API), as well as APIs without backwards compatibility guarantees for JavaScript, C++, and Java. Third-party language binding packages are also available for C#, Haskell, Julia, MATLAB, Object Pascal, R, Scala, Rust, OCaml, and Crystal. Bindings that are now archived and unsupported include Go and Swift. === TensorFlow.js === TensorFlow also has a library for machine learning in JavaScript. Using the provided JavaScript APIs, TensorFlow.js allows users to use either TensorFlow.js models or converted models from TensorFlow or TFLite, retrain the given models, and run them on the web. === LiteRT === LiteRT, formerly known as TensorFlow Lite, has APIs for mobile apps or embedded devices to generate and deploy TensorFlow models. These models are compressed and optimized in order to be more efficient and have higher performance on smaller-capacity devices. LiteRT uses FlatBuffers as the data serialization format for network models, eschewing the Protocol Buffers format used by standard TensorFlow models. === TFX === TensorFlow Extended (abbrev. TFX) provides numerous components to perform all the operations needed for end-to-end production. Components include loading, validating, and transforming data, tuning, training, and evaluating the machine learning model, and pushing the model itself into production. 
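The AutoDifferentiation and Optimizers features described under Features can be illustrated with a toy sketch. The `Var` class below is a hypothetical, minimal reverse-mode automatic differentiation recorder, not TensorFlow's actual implementation: it records each operation as it executes eagerly, then walks the record backwards applying the chain rule, which is conceptually what TensorFlow does behind `tf.GradientTape`.

```python
# Toy reverse-mode autodiff: each Var remembers how it was produced,
# loosely mirroring what a gradient tape records while code runs eagerly.
class Var:
    def __init__(self, value, parents=()):
        self.value = value        # the eagerly computed result
        self.parents = parents    # pairs of (parent Var, local gradient)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self):
        # Topologically order the recorded graph, then apply the chain
        # rule in reverse, accumulating gradients into each parent.
        order, seen = [], set()

        def visit(node):
            if id(node) not in seen:
                seen.add(id(node))
                for parent, _ in node.parents:
                    visit(parent)
                order.append(node)

        visit(self)
        self.grad = 1.0
        for node in reversed(order):
            for parent, local_grad in node.parents:
                parent.grad += local_grad * node.grad

# A one-parameter "model" and loss: L = w^2, so dL/dw = 2w.
w = Var(3.0)
loss = w * w
loss.backward()          # w.grad is now 6.0

# The core of an SGD optimizer step: w <- w - lr * dL/dw
lr = 0.1
w_new = w.value - lr * w.grad
```

In real TensorFlow code the same two steps are the tape computing gradients and an optimizer (such as SGD) applying them to the model's variables; the toy version only shows the mechanism.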
=== Integrations === ==== NumPy ==== NumPy is one of the most popular Python data libraries, and TensorFlow offers integration and compatibility with its data structures. NumPy ndarrays, the library's native datatype, are automatically converted to TensorFlow Tensors in TF operations; the same is also true vice versa. This allows for the two libraries to work in unison without requiring the user to write explicit data conversions. Moreover, the integration extends to memory optimization by having TF Tensors share the underlying memory representations of NumPy ndarrays whenever possible. === Extensions === TensorFlow also offers a variety of libraries and extensions to advance and extend the models and methods used. For example, TensorFlow Recommenders and TensorFlow Graphics are libraries for their respective functionalities. Other add-ons, libraries, and frameworks include TensorFlow Model Optimization, TensorFlow Probability, TensorFlow Quantum, and TensorFlow Decision Forests. ==== Google Colab ==== Google also released Colaboratory, a TensorFlow Jupyter notebook environment that does not require any setup. It runs on Google Cloud and allows users free access to GPUs and the ability to store and share notebooks on Google Drive. ==== Google JAX ==== Google JAX is a machine learning framework for transforming numerical functions. It is described as bringing together a modified version of autograd (automatic obtaining of the gradient function through differentiation of a function) and TensorFlow's XLA (Accelerated Linear Algebra). It is designed to follow the structure and workflow of NumPy as closely as possible and works with TensorFlow as well as other frameworks such as PyTorch. The primary functions of JAX are: grad: automatic differentiation jit: compilation vmap: auto-vectorization pmap: SPMD programming == Applications == === Medical === GE Healthcare used TensorFlow to increase the speed and accuracy of MRIs in identifying specific body parts. 
Google used TensorFlow to create DermAssist, a free mobile application that allows users to take pictures of their skin and identify potential health complications. Sinovation Ventures used TensorFlow to identify and classify eye diseases from optical coherence tomography (OCT) scans. === Social media === Twitter implemented TensorFlow to rank tweets by importance for a given user, and changed their platform to show tweets in order of this ranking. Previously, tweets were simply shown in reverse chronological order. The photo sharing app VSCO used TensorFlow to help suggest custom filters for photos. === Search Engine === Google officially released RankBrain on October 26, 2015, backed by TensorFlow. === Education === InSpace, a virtual learning platform, used TensorFlow to filter out toxic chat messages in classrooms. Liulishuo, an online English learning platform, utilized TensorFlow to create an adaptive curriculum for each student. TensorFlow was used to accurately assess a student's current abilities, and also helped decide the best future content to show based on those capabilities. === Retail === The e-commerce platform Carousell used TensorFlow to provide personalized recommendations for customers. The cosmetics company ModiFace used TensorFlow to create an augmented reality experience for customers to test various shades of make-up on their face. === Research === TensorFlow is the foundation for the automated image-captioning software DeepDream. == See also == Comparison of deep learning software Differentiable programming Keras == References == == Further reading == == External links == Official website Learning TensorFlow.js Book (ENG)
In mathematics, Ricci calculus constitutes the rules of index notation and manipulation for tensors and tensor fields on a differentiable manifold, with or without a metric tensor or connection. It is also the modern name for what used to be called the absolute differential calculus (the foundation of tensor calculus), tensor calculus or tensor analysis developed by Gregorio Ricci-Curbastro in 1887–1896, and subsequently popularized in a paper written with his pupil Tullio Levi-Civita in 1900. Jan Arnoldus Schouten developed the modern notation and formalism for this mathematical framework, and made contributions to the theory, during its applications to general relativity and differential geometry in the early twentieth century. The basis of modern tensor analysis was developed by Bernhard Riemann in a paper from 1861. A component of a tensor is a real number that is used as a coefficient of a basis element for the tensor space. The tensor is the sum of its components multiplied by their corresponding basis elements. Tensors and tensor fields can be expressed in terms of their components, and operations on tensors and tensor fields can be expressed in terms of operations on their components. The description of tensor fields and operations on them in terms of their components is the focus of the Ricci calculus. This notation allows an efficient expression of such tensor fields and operations. While much of the notation may be applied with any tensors, operations relating to a differential structure are only applicable to tensor fields. Where needed, the notation extends to components of non-tensors, particularly multidimensional arrays. A tensor may be expressed as a linear sum of the tensor product of vector and covector basis elements. The resulting tensor components are labelled by indices of the basis. Each index has one possible value per dimension of the underlying vector space. The number of indices equals the degree (or order) of the tensor. 
For compactness and convenience, the Ricci calculus incorporates Einstein notation, which implies summation over indices repeated within a term and universal quantification over free indices. Expressions in the notation of the Ricci calculus may generally be interpreted as a set of simultaneous equations relating the components as functions over a manifold, usually more specifically as functions of the coordinates on the manifold. This allows intuitive manipulation of expressions with familiarity of only a limited set of rules. == Applications == Tensor calculus has many applications in physics, engineering and computer science including elasticity, continuum mechanics, electromagnetism (see mathematical descriptions of the electromagnetic field), general relativity (see mathematics of general relativity), quantum field theory, and machine learning. Working with Élie Cartan, a main proponent of the exterior calculus, the influential geometer Shiing-Shen Chern summarized the role of tensor calculus: In our subject of differential geometry, where you talk about manifolds, one difficulty is that the geometry is described by coordinates, but the coordinates do not have meaning. They are allowed to undergo transformation. And in order to handle this kind of situation, an important tool is the so-called tensor analysis, or Ricci calculus, which was new to mathematicians. In mathematics you have a function, you write down the function, you calculate, or you add, or you multiply, or you can differentiate. You have something very concrete. In geometry the geometric situation is described by numbers, but you can change your numbers arbitrarily. So to handle this, you need the Ricci calculus. 
== Notation for indices == === Basis-related distinctions === ==== Space and time coordinates ==== Where a distinction is to be made between the space-like basis elements and a time-like element in the four-dimensional spacetime of classical physics, this is conventionally done through indices as follows: Indices from the lowercase Latin alphabet a, b, c, ... indicate restriction to 3-dimensional Euclidean space; they take the values 1, 2, 3 for the spatial components, and the time-like element, indicated by 0, is shown separately. Indices from the lowercase Greek alphabet α, β, γ, ... are used for 4-dimensional spacetime and typically take the value 0 for the time component and 1, 2, 3 for the spatial components. Some sources use 4 instead of 0 as the index value corresponding to time; in this article, 0 is used. Otherwise, in general mathematical contexts, any symbols can be used for the indices, generally running over all dimensions of the vector space. ==== Coordinate and index notation ==== The author(s) will usually make it clear whether a subscript is intended as an index or as a label. For example, in 3-D Euclidean space and using Cartesian coordinates, the coordinate vector A = (A1, A2, A3) = (Ax, Ay, Az) shows a direct correspondence between the subscripts 1, 2, 3 and the labels x, y, z. In the expression Ai, i is interpreted as an index ranging over the values 1, 2, 3, while the x, y, z subscripts are only labels, not variables. In the context of spacetime, the index value 0 conventionally corresponds to the label t. ==== Reference to basis ==== Indices themselves may be labelled using diacritic-like symbols, such as a hat (ˆ), bar (¯), tilde (˜), or prime (′) as in: X ϕ ^ , Y λ ¯ , Z η ~ , T μ ′ {\displaystyle X_{\hat {\phi }}\,,Y_{\bar {\lambda }}\,,Z_{\tilde {\eta }}\,,T_{\mu '}} to denote a possibly different basis for that index. 
An example is in Lorentz transformations from one frame of reference to another, where one frame could be unprimed and the other primed, as in: v μ ′ = v ν L ν μ ′ . {\displaystyle v^{\mu '}=v^{\nu }L_{\nu }{}^{\mu '}.} This is not to be confused with van der Waerden notation for spinors, which uses hats and overdots on indices to reflect the chirality of a spinor. === Upper and lower indices === Ricci calculus, and index notation more generally, distinguishes between lower indices (subscripts) and upper indices (superscripts); the latter are not exponents, even though they may look as such to the reader only familiar with other parts of mathematics. In the special case that the metric tensor is everywhere equal to the identity matrix, it is possible to drop the distinction between upper and lower indices, and then all indices could be written in the lower position. Coordinate formulae in linear algebra such as a i j b j k {\displaystyle a_{ij}b_{jk}} for the product of matrices may be examples of this. But in general, the distinction between upper and lower indices should be maintained. ==== Covariant tensor components ==== A lower index (subscript) indicates covariance of the components with respect to that index: A α β γ ⋯ {\displaystyle A_{\alpha \beta \gamma \cdots }} ==== Contravariant tensor components ==== An upper index (superscript) indicates contravariance of the components with respect to that index: A α β γ ⋯ {\displaystyle A^{\alpha \beta \gamma \cdots }} ==== Mixed-variance tensor components ==== A tensor may have both upper and lower indices: A α β γ δ ⋯ . {\displaystyle A_{\alpha }{}^{\beta }{}_{\gamma }{}^{\delta \cdots }.} Ordering of indices is significant, even when of differing variance. However, when it is understood that no indices will be raised or lowered while retaining the base symbol, covariant indices are sometimes placed below contravariant indices for notational convenience (e.g. with the generalized Kronecker delta). 
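As a concrete sketch of this distinction, using NumPy arrays as plain component arrays (the numeric values and the choice of a Minkowski metric with signature (+, -, -, -) are assumptions for the example): with a non-trivial metric the covariant components v_μ = g_{μν} v^ν differ from the contravariant ones, while with the identity metric they coincide, which is why the distinction can then be dropped.

```python
import numpy as np

# Contravariant components v^mu of a vector (illustrative values).
v_up = np.array([2.0, 1.0, 0.0, 3.0])

# Minkowski metric g_{mu nu} with signature (+, -, -, -).
g = np.diag([1.0, -1.0, -1.0, -1.0])

# Covariant components: v_mu = g_{mu nu} v^nu (contraction over nu).
v_down = np.einsum('mn,n->m', g, v_up)   # -> [2., -1., -0., -3.]

# With the identity metric, upper and lower components coincide.
v_down_euclidean = np.einsum('mn,n->m', np.eye(4), v_up)
```

The `einsum` subscript string `'mn,n->m'` spells out exactly the index contraction written in the text: n is repeated and summed, m remains free.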
==== Tensor type and degree ==== The numbers of upper and lower indices of a tensor give its type: a tensor with p upper and q lower indices is said to be of type (p, q), or to be a type-(p, q) tensor. The number of indices of a tensor, regardless of variance, is called the degree of the tensor (alternatively, its valence, order or rank, although rank is ambiguous). Thus, a tensor of type (p, q) has degree p + q. ==== Summation convention ==== The same symbol occurring twice (one upper and one lower) within a term indicates a pair of indices that are summed over: A α B α ≡ ∑ α A α B α or A α B α ≡ ∑ α A α B α . {\displaystyle A_{\alpha }B^{\alpha }\equiv \sum _{\alpha }A_{\alpha }B^{\alpha }\quad {\text{or}}\quad A^{\alpha }B_{\alpha }\equiv \sum _{\alpha }A^{\alpha }B_{\alpha }\,.} The operation implied by such a summation is called tensor contraction: A α B β → A α B α ≡ ∑ α A α B α . {\displaystyle A_{\alpha }B^{\beta }\rightarrow A_{\alpha }B^{\alpha }\equiv \sum _{\alpha }A_{\alpha }B^{\alpha }\,.} This summation may occur more than once within a term with a distinct symbol per pair of indices, for example: A α γ B α C γ β ≡ ∑ α ∑ γ A α γ B α C γ β . {\displaystyle A_{\alpha }{}^{\gamma }B^{\alpha }C_{\gamma }{}^{\beta }\equiv \sum _{\alpha }\sum _{\gamma }A_{\alpha }{}^{\gamma }B^{\alpha }C_{\gamma }{}^{\beta }\,.} Other combinations of repeated indices within a term are considered to be ill-formed, such as an index repeated twice on the same level (for example A α B α with both indices lower), or an index occurring more than twice in a term. The reason for excluding such formulae is that although these quantities could be computed as arrays of numbers, they would not in general transform as tensors under a change of basis. ==== Multi-index notation ==== If a tensor has a list of all upper or lower indices, one shorthand is to use a capital letter for the list: A i 1 ⋯ i n B i 1 ⋯ i n j 1 ⋯ j m C j 1 ⋯ j m ≡ A I B I J C J , {\displaystyle A_{i_{1}\cdots i_{n}}B^{i_{1}\cdots i_{n}j_{1}\cdots j_{m}}C_{j_{1}\cdots j_{m}}\equiv A_{I}B^{IJ}C_{J},} where I = i1 i2 ⋅⋅⋅ in and J = j1 j2 ⋅⋅⋅ jm. 
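The summation convention maps directly onto NumPy's einsum function, whose subscript strings are modeled on this notation: any index letter repeated across operands is summed over, and the remaining letters are free indices (array shapes and values below are illustrative):

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
v = rng.standard_normal(n)       # components v_alpha
A = rng.standard_normal((n, n))  # components A_alpha^gamma (axis 0: alpha, axis 1: gamma)
B = rng.standard_normal(n)       # components B^alpha
C = rng.standard_normal((n, n))  # components C_gamma^beta (axis 0: gamma, axis 1: beta)

# v_alpha B^alpha: the repeated index alpha is summed; the result is a scalar.
s = np.einsum('a,a->', v, B)

# A_alpha^gamma B^alpha C_gamma^beta: alpha and gamma are each contracted,
# leaving beta as the single free index of the result.
T = np.einsum('ag,a,gb->b', A, B, C)
```

Note that `einsum` does not track upper versus lower positions; it implements only the summation rule, so the covariant/contravariant bookkeeping remains the programmer's responsibility.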
==== Sequential summation ==== A pair of vertical bars | ⋅ | around a set of all-upper indices or all-lower indices (but not both), associated with contraction with another set of indices when the expression is completely antisymmetric in each of the two sets of indices: A | α β γ | ⋯ B α β γ ⋯ = A α β γ ⋯ B | α β γ | ⋯ = ∑ α < β < γ A α β γ ⋯ B α β γ ⋯ {\displaystyle A_{|\alpha \beta \gamma |\cdots }B^{\alpha \beta \gamma \cdots }=A_{\alpha \beta \gamma \cdots }B^{|\alpha \beta \gamma |\cdots }=\sum _{\alpha <\beta <\gamma }A_{\alpha \beta \gamma \cdots }B^{\alpha \beta \gamma \cdots }} means a restricted sum over index values, where each index is constrained to being strictly less than the next. More than one group can be summed in this way, for example: A | α β γ | | δ ϵ ⋯ λ | B α β γ δ ϵ ⋯ λ | μ ν ⋯ ζ | C μ ν ⋯ ζ = ∑ α < β < γ ∑ δ < ϵ < ⋯ < λ ∑ μ < ν < ⋯ < ζ A α β γ δ ϵ ⋯ λ B α β γ δ ϵ ⋯ λ μ ν ⋯ ζ C μ ν ⋯ ζ {\displaystyle {\begin{aligned}&A_{|\alpha \beta \gamma |}{}^{|\delta \epsilon \cdots \lambda |}B^{\alpha \beta \gamma }{}_{\delta \epsilon \cdots \lambda |\mu \nu \cdots \zeta |}C^{\mu \nu \cdots \zeta }\\[3pt]={}&\sum _{\alpha <\beta <\gamma }~\sum _{\delta <\epsilon <\cdots <\lambda }~\sum _{\mu <\nu <\cdots <\zeta }A_{\alpha \beta \gamma }{}^{\delta \epsilon \cdots \lambda }B^{\alpha \beta \gamma }{}_{\delta \epsilon \cdots \lambda \mu \nu \cdots \zeta }C^{\mu \nu \cdots \zeta }\end{aligned}}} When using multi-index notation, an underarrow is placed underneath the block of indices: A P ⇁ Q ⇁ B P Q R ⇁ C R = ∑ P ⇁ ∑ Q ⇁ ∑ R ⇁ A P Q B P Q R C R {\displaystyle A_{\underset {\rightharpoondown }{P}}{}^{\underset {\rightharpoondown }{Q}}B^{P}{}_{Q{\underset {\rightharpoondown }{R}}}C^{R}=\sum _{\underset {\rightharpoondown }{P}}\sum _{\underset {\rightharpoondown }{Q}}\sum _{\underset {\rightharpoondown }{R}}A_{P}{}^{Q}B^{P}{}_{QR}C^{R}} where P ⇁ = | α β γ | , Q ⇁ = | δ ϵ ⋯ λ | , R ⇁ = | μ ν ⋯ ζ | {\displaystyle {\underset {\rightharpoondown }{P}}=|\alpha 
\beta \gamma |\,,\quad {\underset {\rightharpoondown }{Q}}=|\delta \epsilon \cdots \lambda |\,,\quad {\underset {\rightharpoondown }{R}}=|\mu \nu \cdots \zeta |} ==== Raising and lowering indices ==== By contracting an index with a non-singular metric tensor, the type of a tensor can be changed, converting a lower index to an upper index or vice versa: B γ β ⋯ = g γ α A α β ⋯ and A α β ⋯ = g α γ B γ β ⋯ {\displaystyle B^{\gamma }{}_{\beta \cdots }=g^{\gamma \alpha }A_{\alpha \beta \cdots }\quad {\text{and}}\quad A_{\alpha \beta \cdots }=g_{\alpha \gamma }B^{\gamma }{}_{\beta \cdots }} The base symbol in many cases is retained (e.g. using A where B appears here), and when there is no ambiguity, repositioning an index may be taken to imply this operation. === Correlations between index positions and invariance === This table summarizes how the manipulation of covariant and contravariant indices fit in with invariance under a passive transformation between bases, with the components of each basis set in terms of the other reflected in the first column. The barred indices refer to the final coordinate system after the transformation. The Kronecker delta is used, see also below. == General outlines for index notation and operations == Tensors are equal if and only if every corresponding component is equal; e.g., tensor A equals tensor B if and only if A α β γ = B α β γ {\displaystyle A^{\alpha }{}_{\beta \gamma }=B^{\alpha }{}_{\beta \gamma }} for all α, β, γ. Consequently, there are facets of the notation that are useful in checking that an equation makes sense (an analogous procedure to dimensional analysis). === Free and dummy indices === Indices not involved in contractions are called free indices. Indices used in contractions are termed dummy indices, or summation indices. === A tensor equation represents many ordinary (real-valued) equations === The components of tensors (like Aα, Bβγ etc.) are just real numbers. 
Since the indices take various integer values to select specific components of the tensors, a single tensor equation represents many ordinary equations. If a tensor equality has n free indices, and if the dimensionality of the underlying vector space is m, the equality represents m^n equations: each index takes on every value of a specific set of values. For instance, if A α B β γ C γ δ + D α β E δ = T α β δ {\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D^{\alpha }{}_{\beta }{}E_{\delta }=T^{\alpha }{}_{\beta }{}_{\delta }} is in four dimensions (that is, each index runs from 0 to 3 or from 1 to 4), then because there are three free indices (α, β, δ), there are 4^3 = 64 equations. Three of these are: A 0 B 1 0 C 00 + A 0 B 1 1 C 10 + A 0 B 1 2 C 20 + A 0 B 1 3 C 30 + D 0 1 E 0 = T 0 1 0 A 1 B 0 0 C 00 + A 1 B 0 1 C 10 + A 1 B 0 2 C 20 + A 1 B 0 3 C 30 + D 1 0 E 0 = T 1 0 0 A 1 B 2 0 C 02 + A 1 B 2 1 C 12 + A 1 B 2 2 C 22 + A 1 B 2 3 C 32 + D 1 2 E 2 = T 1 2 2 . {\displaystyle {\begin{aligned}A^{0}B_{1}{}^{0}C_{00}+A^{0}B_{1}{}^{1}C_{10}+A^{0}B_{1}{}^{2}C_{20}+A^{0}B_{1}{}^{3}C_{30}+D^{0}{}_{1}{}E_{0}&=T^{0}{}_{1}{}_{0}\\A^{1}B_{0}{}^{0}C_{00}+A^{1}B_{0}{}^{1}C_{10}+A^{1}B_{0}{}^{2}C_{20}+A^{1}B_{0}{}^{3}C_{30}+D^{1}{}_{0}{}E_{0}&=T^{1}{}_{0}{}_{0}\\A^{1}B_{2}{}^{0}C_{02}+A^{1}B_{2}{}^{1}C_{12}+A^{1}B_{2}{}^{2}C_{22}+A^{1}B_{2}{}^{3}C_{32}+D^{1}{}_{2}{}E_{2}&=T^{1}{}_{2}{}_{2}.\end{aligned}}} This illustrates the compactness and efficiency of using index notation: many equations which all share a similar structure can be collected into one simple tensor equation. === Indices are replaceable labels === Replacing any index symbol throughout by another leaves the tensor equation unchanged (provided there is no conflict with other symbols already used). This can be useful when manipulating indices, such as using index notation to verify vector calculus identities or identities of the Kronecker delta and Levi-Civita symbol (see also below). 
An example of a correct change is: A α B β γ C γ δ + D α β E δ → A λ B β μ C μ δ + D λ β E δ , {\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D^{\alpha }{}_{\beta }{}E_{\delta }\rightarrow A^{\lambda }B_{\beta }{}^{\mu }C_{\mu \delta }+D^{\lambda }{}_{\beta }{}E_{\delta }\,,} whereas an erroneous change is: A α B β γ C γ δ + D α β E δ ↛ A λ B β γ C μ δ + D α β E δ . {\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D^{\alpha }{}_{\beta }{}E_{\delta }\nrightarrow A^{\lambda }B_{\beta }{}^{\gamma }C_{\mu \delta }+D^{\alpha }{}_{\beta }{}E_{\delta }\,.} In the first replacement, λ replaced α and μ replaced γ everywhere, so the expression still has the same meaning. In the second, λ did not fully replace α, and μ did not fully replace γ (incidentally, the contraction on the γ index became a tensor product), which is entirely inconsistent for reasons shown next. === Indices are the same in every term === The free indices in a tensor expression always appear in the same (upper or lower) position throughout every term, and in a tensor equation the free indices are the same on each side. Dummy indices (which implies a summation over that index) need not be the same, for example: A α B β γ C γ δ + D α δ E β = T α β δ {\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D^{\alpha }{}_{\delta }E_{\beta }=T^{\alpha }{}_{\beta }{}_{\delta }} as for an erroneous expression: A α B β γ C γ δ + D α β γ E δ . {\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D_{\alpha }{}_{\beta }{}^{\gamma }E^{\delta }.} In other words, non-repeated indices must be of the same type in every term of the equation. In the above identity, α, β, δ line up throughout and γ occurs twice in one term due to a contraction (once as an upper index and once as a lower index), and thus it is a valid expression. 
In the invalid expression, while β lines up, α and δ do not, and γ appears twice in one term (contraction) and once in another term, which is inconsistent. === Brackets and punctuation used once where implied === When applying a rule to a number of indices (differentiation, symmetrization etc., shown next), the bracket or punctuation symbols denoting the rules are only shown on one group of the indices to which they apply. If the brackets enclose covariant indices – the rule applies only to all covariant indices enclosed in the brackets, not to any contravariant indices which happen to be placed intermediately between the brackets. Similarly if brackets enclose contravariant indices – the rule applies only to all enclosed contravariant indices, not to intermediately placed covariant indices. == Symmetric and antisymmetric parts == === Symmetric part of tensor === Parentheses, ( ), around multiple indices denotes the symmetrized part of the tensor. When symmetrizing p indices using σ to range over permutations of the numbers 1 to p, one takes a sum over the permutations of those indices ασ(i) for i = 1, 2, 3, ..., p, and then divides by the number of permutations: A ( α 1 α 2 ⋯ α p ) α p + 1 ⋯ α q = 1 p ! ∑ σ A α σ ( 1 ) ⋯ α σ ( p ) α p + 1 ⋯ α q . {\displaystyle A_{(\alpha _{1}\alpha _{2}\cdots \alpha _{p})\alpha _{p+1}\cdots \alpha _{q}}={\dfrac {1}{p!}}\sum _{\sigma }A_{\alpha _{\sigma (1)}\cdots \alpha _{\sigma (p)}\alpha _{p+1}\cdots \alpha _{q}}\,.} For example, two symmetrizing indices mean there are two indices to permute and sum over: A ( α β ) γ ⋯ = 1 2 ! ( A α β γ ⋯ + A β α γ ⋯ ) {\displaystyle A_{(\alpha \beta )\gamma \cdots }={\dfrac {1}{2!}}\left(A_{\alpha \beta \gamma \cdots }+A_{\beta \alpha \gamma \cdots }\right)} while for three symmetrizing indices, there are three indices to sum over and permute: A ( α β γ ) δ ⋯ = 1 3 ! 
( A α β γ δ ⋯ + A γ α β δ ⋯ + A β γ α δ ⋯ + A α γ β δ ⋯ + A γ β α δ ⋯ + A β α γ δ ⋯ ) {\displaystyle A_{(\alpha \beta \gamma )\delta \cdots }={\dfrac {1}{3!}}\left(A_{\alpha \beta \gamma \delta \cdots }+A_{\gamma \alpha \beta \delta \cdots }+A_{\beta \gamma \alpha \delta \cdots }+A_{\alpha \gamma \beta \delta \cdots }+A_{\gamma \beta \alpha \delta \cdots }+A_{\beta \alpha \gamma \delta \cdots }\right)} The symmetrization is distributive over addition; A ( α ( B β ) γ ⋯ + C β ) γ ⋯ ) = A ( α B β ) γ ⋯ + A ( α C β ) γ ⋯ {\displaystyle A_{(\alpha }\left(B_{\beta )\gamma \cdots }+C_{\beta )\gamma \cdots }\right)=A_{(\alpha }B_{\beta )\gamma \cdots }+A_{(\alpha }C_{\beta )\gamma \cdots }} Indices are not part of the symmetrization when they are: not on the same level, for example; A ( α B β γ ) = 1 2 ! ( A α B β γ + A γ B β α ) {\displaystyle A_{(\alpha }B^{\beta }{}_{\gamma )}={\dfrac {1}{2!}}\left(A_{\alpha }B^{\beta }{}_{\gamma }+A_{\gamma }B^{\beta }{}_{\alpha }\right)} within the parentheses and between vertical bars (i.e. |⋅⋅⋅|), modifying the previous example; A ( α B | β | γ ) = 1 2 ! ( A α B β γ + A γ B β α ) {\displaystyle A_{(\alpha }B_{|\beta |}{}_{\gamma )}={\dfrac {1}{2!}}\left(A_{\alpha }B_{\beta \gamma }+A_{\gamma }B_{\beta \alpha }\right)} Here the α and γ indices are symmetrized, β is not. === Antisymmetric or alternating part of tensor === Square brackets, [ ], around multiple indices denotes the antisymmetrized part of the tensor. For p antisymmetrizing indices – the sum over the permutations of those indices ασ(i) multiplied by the signature of the permutation sgn(σ) is taken, then divided by the number of permutations: A [ α 1 ⋯ α p ] α p + 1 ⋯ α q = 1 p ! 
∑ σ sgn ⁡ ( σ ) A α σ ( 1 ) ⋯ α σ ( p ) α p + 1 ⋯ α q = δ α 1 ⋯ α p β 1 … β p A β 1 ⋯ β p α p + 1 ⋯ α q {\displaystyle {\begin{aligned}&A_{[\alpha _{1}\cdots \alpha _{p}]\alpha _{p+1}\cdots \alpha _{q}}\\[3pt]={}&{\dfrac {1}{p!}}\sum _{\sigma }\operatorname {sgn}(\sigma )A_{\alpha _{\sigma (1)}\cdots \alpha _{\sigma (p)}\alpha _{p+1}\cdots \alpha _{q}}\\={}&\delta _{\alpha _{1}\cdots \alpha _{p}}^{\beta _{1}\dots \beta _{p}}A_{\beta _{1}\cdots \beta _{p}\alpha _{p+1}\cdots \alpha _{q}}\\\end{aligned}}} where δβ1⋅⋅⋅βpα1⋅⋅⋅αp is the generalized Kronecker delta of degree 2p, with scaling as defined below. For example, two antisymmetrizing indices imply: A [ α β ] γ ⋯ = 1 2 ! ( A α β γ ⋯ − A β α γ ⋯ ) {\displaystyle A_{[\alpha \beta ]\gamma \cdots }={\dfrac {1}{2!}}\left(A_{\alpha \beta \gamma \cdots }-A_{\beta \alpha \gamma \cdots }\right)} while three antisymmetrizing indices imply: A [ α β γ ] δ ⋯ = 1 3 ! ( A α β γ δ ⋯ + A γ α β δ ⋯ + A β γ α δ ⋯ − A α γ β δ ⋯ − A γ β α δ ⋯ − A β α γ δ ⋯ ) {\displaystyle A_{[\alpha \beta \gamma ]\delta \cdots }={\dfrac {1}{3!}}\left(A_{\alpha \beta \gamma \delta \cdots }+A_{\gamma \alpha \beta \delta \cdots }+A_{\beta \gamma \alpha \delta \cdots }-A_{\alpha \gamma \beta \delta \cdots }-A_{\gamma \beta \alpha \delta \cdots }-A_{\beta \alpha \gamma \delta \cdots }\right)} as for a more specific example, if F represents the electromagnetic tensor, then the equation 0 = F [ α β , γ ] = 1 3 ! ( F α β , γ + F γ α , β + F β γ , α − F β α , γ − F α γ , β − F γ β , α ) {\displaystyle 0=F_{[\alpha \beta ,\gamma ]}={\dfrac {1}{3!}}\left(F_{\alpha \beta ,\gamma }+F_{\gamma \alpha ,\beta }+F_{\beta \gamma ,\alpha }-F_{\beta \alpha ,\gamma }-F_{\alpha \gamma ,\beta }-F_{\gamma \beta ,\alpha }\right)\,} represents Gauss's law for magnetism and Faraday's law of induction. 
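These permutation sums are straightforward to check numerically. The following NumPy sketch (all function names here are ours, not standard API) symmetrizes or antisymmetrizes the first p indices of an array, reproducing the explicit two-index formulas above:

```python
# Illustrative sketch of (anti)symmetrization over the first p indices of a
# NumPy array; function names are our own, chosen only for this example.
import itertools
import math

import numpy as np

def _permute(A, perm):
    """Permute the first len(perm) axes of A, leaving the remaining axes fixed."""
    return np.transpose(A, tuple(perm) + tuple(range(len(perm), A.ndim)))

def _sign(perm):
    """Signature sgn(sigma) of a permutation, computed by counting inversions."""
    inversions = sum(
        1
        for i in range(len(perm))
        for j in range(i + 1, len(perm))
        if perm[i] > perm[j]
    )
    return -1 if inversions % 2 else 1

def symmetrize(A, p):
    """A_(a1...ap)...: average of A over all permutations of its first p indices."""
    total = sum(_permute(A, s) for s in itertools.permutations(range(p)))
    return total / math.factorial(p)

def antisymmetrize(A, p):
    """A_[a1...ap]...: signed average over permutations of the first p indices."""
    total = sum(_sign(s) * _permute(A, s) for s in itertools.permutations(range(p)))
    return total / math.factorial(p)
```

For a rank-3 array, `symmetrize(A, 2)` reproduces ½(A_{αβγ} + A_{βαγ}) and `antisymmetrize(A, 2)` reproduces ½(A_{αβγ} − A_{βαγ}); antisymmetrizing three indices of a fully symmetric array gives zero, as expected.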
As before, the antisymmetrization is distributive over addition; A [ α ( B β ] γ ⋯ + C β ] γ ⋯ ) = A [ α B β ] γ ⋯ + A [ α C β ] γ ⋯ {\displaystyle A_{[\alpha }\left(B_{\beta ]\gamma \cdots }+C_{\beta ]\gamma \cdots }\right)=A_{[\alpha }B_{\beta ]\gamma \cdots }+A_{[\alpha }C_{\beta ]\gamma \cdots }} As with symmetrization, indices are not antisymmetrized when they are: not on the same level, for example; A [ α B β γ ] = 1 2 ! ( A α B β γ − A γ B β α ) {\displaystyle A_{[\alpha }B^{\beta }{}_{\gamma ]}={\dfrac {1}{2!}}\left(A_{\alpha }B^{\beta }{}_{\gamma }-A_{\gamma }B^{\beta }{}_{\alpha }\right)} within the square brackets and between vertical bars (i.e. |⋅⋅⋅|), modifying the previous example; A [ α B | β | γ ] = 1 2 ! ( A α B β γ − A γ B β α ) {\displaystyle A_{[\alpha }B_{|\beta |}{}_{\gamma ]}={\dfrac {1}{2!}}\left(A_{\alpha }B_{\beta \gamma }-A_{\gamma }B_{\beta \alpha }\right)} Here the α and γ indices are antisymmetrized, β is not. === Sum of symmetric and antisymmetric parts === Any tensor can be written as the sum of its symmetric and antisymmetric parts on two indices: A α β γ ⋯ = A ( α β ) γ ⋯ + A [ α β ] γ ⋯ {\displaystyle A_{\alpha \beta \gamma \cdots }=A_{(\alpha \beta )\gamma \cdots }+A_{[\alpha \beta ]\gamma \cdots }} as can be seen by adding the above expressions for A(αβ)γ⋅⋅⋅ and A[αβ]γ⋅⋅⋅. This does not hold for other than two indices. == Differentiation == For compactness, derivatives may be indicated by adding indices after a comma or semicolon. === Partial derivative === While most of the expressions of the Ricci calculus are valid for arbitrary bases, the expressions involving partial derivatives of tensor components with respect to coordinates apply only with a coordinate basis: a basis that is defined through differentiation with respect to the coordinates. Coordinates are typically denoted by xμ, but do not in general form the components of a vector. 
In flat spacetime with linear coordinatization, a tuple of differences in coordinates, Δxμ, can be treated as a contravariant vector. With the same constraints on the space and on the choice of coordinate system, the partial derivatives with respect to the coordinates yield a result that is effectively covariant. Aside from use in this special case, the partial derivatives of components of tensors do not in general transform covariantly, but are useful in building expressions that are covariant, albeit still with a coordinate basis if the partial derivatives are explicitly used, as with the covariant, exterior and Lie derivatives below. To indicate partial differentiation of the components of a tensor field with respect to a coordinate variable xγ, a comma is placed before an appended lower index of the coordinate variable. A α β ⋯ , γ = ∂ ∂ x γ A α β ⋯ {\displaystyle A_{\alpha \beta \cdots ,\gamma }={\dfrac {\partial }{\partial x^{\gamma }}}A_{\alpha \beta \cdots }} This may be repeated (without adding further commas): A α 1 α 2 ⋯ α p , α p + 1 ⋯ α q = ∂ ∂ x α q ⋯ ∂ ∂ x α p + 2 ∂ ∂ x α p + 1 A α 1 α 2 ⋯ α p . {\displaystyle A_{\alpha _{1}\alpha _{2}\cdots \alpha _{p}\,,\,\alpha _{p+1}\cdots \alpha _{q}}={\dfrac {\partial }{\partial x^{\alpha _{q}}}}\cdots {\dfrac {\partial }{\partial x^{\alpha _{p+2}}}}{\dfrac {\partial }{\partial x^{\alpha _{p+1}}}}A_{\alpha _{1}\alpha _{2}\cdots \alpha _{p}}.} These components do not transform covariantly, unless the expression being differentiated is a scalar. This derivative is characterized by the product rule and the derivatives of the coordinates x α , γ = δ γ α , {\displaystyle x^{\alpha }{}_{,\gamma }=\delta _{\gamma }^{\alpha },} where δ is the Kronecker delta. === Covariant derivative === The covariant derivative is only defined if a connection is defined. For any tensor field, a semicolon ( ; ) placed before an appended lower (covariant) index indicates covariant differentiation. 
Less common alternatives to the semicolon include a forward slash ( / ) or in three-dimensional curved space a single vertical bar ( | ). The covariant derivative of a scalar function, a contravariant vector and a covariant vector are: f ; β = f , β {\displaystyle f_{;\beta }=f_{,\beta }} A α ; β = A α , β + Γ α γ β A γ {\displaystyle A^{\alpha }{}_{;\beta }=A^{\alpha }{}_{,\beta }+\Gamma ^{\alpha }{}_{\gamma \beta }A^{\gamma }} A α ; β = A α , β − Γ γ α β A γ , {\displaystyle A_{\alpha ;\beta }=A_{\alpha ,\beta }-\Gamma ^{\gamma }{}_{\alpha \beta }A_{\gamma }\,,} where Γαγβ are the connection coefficients. For an arbitrary tensor: T α 1 ⋯ α r β 1 ⋯ β s ; γ = T α 1 ⋯ α r β 1 ⋯ β s , γ + Γ α 1 δ γ T δ α 2 ⋯ α r β 1 ⋯ β s + ⋯ + Γ α r δ γ T α 1 ⋯ α r − 1 δ β 1 ⋯ β s − Γ δ β 1 γ T α 1 ⋯ α r δ β 2 ⋯ β s − ⋯ − Γ δ β s γ T α 1 ⋯ α r β 1 ⋯ β s − 1 δ . {\displaystyle {\begin{aligned}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s};\gamma }&\\=T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s},\gamma }&+\,\Gamma ^{\alpha _{1}}{}_{\delta \gamma }T^{\delta \alpha _{2}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}+\cdots +\Gamma ^{\alpha _{r}}{}_{\delta \gamma }T^{\alpha _{1}\cdots \alpha _{r-1}\delta }{}_{\beta _{1}\cdots \beta _{s}}\\&-\,\Gamma ^{\delta }{}_{\beta _{1}\gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\delta \beta _{2}\cdots \beta _{s}}-\cdots -\Gamma ^{\delta }{}_{\beta _{s}\gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s-1}\delta }\,.\end{aligned}}} An alternative notation for the covariant derivative of any tensor is the subscripted nabla symbol ∇β. For the case of a vector field Aα: ∇ β A α = A α ; β . {\displaystyle \nabla _{\beta }A^{\alpha }=A^{\alpha }{}_{;\beta }\,.} The covariant formulation of the directional derivative of any tensor field along a vector vγ may be expressed as its contraction with the covariant derivative, e.g.: v γ A α ; γ . 
{\displaystyle v^{\gamma }A_{\alpha ;\gamma }\,.} The components of this derivative of a tensor field transform covariantly, and hence form another tensor field, despite subexpressions (the partial derivative and the connection coefficients) separately not transforming covariantly. This derivative is characterized by the product rule: ( A α β ⋯ B γ δ ⋯ ) ; ϵ = A α β ⋯ ; ϵ B γ δ ⋯ + A α β ⋯ B γ δ ⋯ ; ϵ . {\displaystyle (A^{\alpha }{}_{\beta \cdots }B^{\gamma }{}_{\delta \cdots })_{;\epsilon }=A^{\alpha }{}_{\beta \cdots ;\epsilon }B^{\gamma }{}_{\delta \cdots }+A^{\alpha }{}_{\beta \cdots }B^{\gamma }{}_{\delta \cdots ;\epsilon }\,.} ==== Connection types ==== A Koszul connection on the tangent bundle of a differentiable manifold is called an affine connection. A connection is a metric connection when the covariant derivative of the metric tensor vanishes: g μ ν ; ξ = 0 . {\displaystyle g_{\mu \nu ;\xi }=0\,.} An affine connection that is also a metric connection is called a Riemannian connection. A Riemannian connection that is torsion-free (i.e., for which the torsion tensor vanishes: Tαβγ = 0) is a Levi-Civita connection. The Γαβγ for a Levi-Civita connection in a coordinate basis are called Christoffel symbols of the second kind. === Exterior derivative === The exterior derivative of a totally antisymmetric type (0, s) tensor field with components Aα1⋅⋅⋅αs (also called a differential form) is a derivative that is covariant under basis transformations. It does not depend on either a metric tensor or a connection: it requires only the structure of a differentiable manifold. In a coordinate basis, it may be expressed as the antisymmetrization of the partial derivatives of the tensor components:: 232–233  ( d A ) γ α 1 ⋯ α s = ∂ ∂ x [ γ A α 1 ⋯ α s ] = A [ α 1 ⋯ α s , γ ] . 
{\displaystyle (\mathrm {d} A)_{\gamma \alpha _{1}\cdots \alpha _{s}}={\frac {\partial }{\partial x^{[\gamma }}}A_{\alpha _{1}\cdots \alpha _{s}]}=A_{[\alpha _{1}\cdots \alpha _{s},\gamma ]}.} This derivative is not defined on any tensor field with contravariant indices or that is not totally antisymmetric. It is characterized by a graded product rule. === Lie derivative === The Lie derivative is another derivative that is covariant under basis transformations. Like the exterior derivative, it does not depend on either a metric tensor or a connection. The Lie derivative of a type (r, s) tensor field T along (the flow of) a contravariant vector field Xρ may be expressed using a coordinate basis as ( L X T ) α 1 ⋯ α r β 1 ⋯ β s = X γ T α 1 ⋯ α r β 1 ⋯ β s , γ − X α 1 , γ T γ α 2 ⋯ α r β 1 ⋯ β s − ⋯ − X α r , γ T α 1 ⋯ α r − 1 γ β 1 ⋯ β s + X γ , β 1 T α 1 ⋯ α r γ β 2 ⋯ β s + ⋯ + X γ , β s T α 1 ⋯ α r β 1 ⋯ β s − 1 γ . {\displaystyle {\begin{aligned}({\mathcal {L}}_{X}T)^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}&\\=X^{\gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s},\gamma }&-\,X^{\alpha _{1}}{}_{,\gamma }T^{\gamma \alpha _{2}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}-\cdots -X^{\alpha _{r}}{}_{,\gamma }T^{\alpha _{1}\cdots \alpha _{r-1}\gamma }{}_{\beta _{1}\cdots \beta _{s}}\\&+\,X^{\gamma }{}_{,\beta _{1}}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\gamma \beta _{2}\cdots \beta _{s}}+\cdots +X^{\gamma }{}_{,\beta _{s}}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s-1}\gamma }\,.\end{aligned}}} This derivative is characterized by the product rule and the fact that the Lie derivative of a contravariant vector field along itself is zero: ( L X X ) α = X γ X α , γ − X α , γ X γ = 0 . 
{\displaystyle ({\mathcal {L}}_{X}X)^{\alpha }=X^{\gamma }X^{\alpha }{}_{,\gamma }-X^{\alpha }{}_{,\gamma }X^{\gamma }=0\,.} == Notable tensors == === Kronecker delta === The Kronecker delta is like the identity matrix when multiplied and contracted: δ β α A β = A α δ ν μ B μ = B ν . {\displaystyle {\begin{aligned}\delta _{\beta }^{\alpha }\,A^{\beta }&=A^{\alpha }\\\delta _{\nu }^{\mu }\,B_{\mu }&=B_{\nu }.\end{aligned}}} The components δαβ are the same in any basis and form an invariant tensor of type (1, 1), i.e. the identity of the tangent bundle over the identity mapping of the base manifold, and so its trace is an invariant. Its trace is the dimensionality of the space; for example, in four-dimensional spacetime, δ ρ ρ = δ 0 0 + δ 1 1 + δ 2 2 + δ 3 3 = 4. {\displaystyle \delta _{\rho }^{\rho }=\delta _{0}^{0}+\delta _{1}^{1}+\delta _{2}^{2}+\delta _{3}^{3}=4.} The Kronecker delta is one of the family of generalized Kronecker deltas. The generalized Kronecker delta of degree 2p may be defined in terms of the Kronecker delta by (a common definition includes an additional multiplier of p! on the right): δ β 1 ⋯ β p α 1 ⋯ α p = δ β 1 [ α 1 ⋯ δ β p α p ] , {\displaystyle \delta _{\beta _{1}\cdots \beta _{p}}^{\alpha _{1}\cdots \alpha _{p}}=\delta _{\beta _{1}}^{[\alpha _{1}}\cdots \delta _{\beta _{p}}^{\alpha _{p}]},} and acts as an antisymmetrizer on p indices: δ β 1 ⋯ β p α 1 ⋯ α p A β 1 ⋯ β p = A [ α 1 ⋯ α p ] . {\displaystyle \delta _{\beta _{1}\cdots \beta _{p}}^{\alpha _{1}\cdots \alpha _{p}}\,A^{\beta _{1}\cdots \beta _{p}}=A^{[\alpha _{1}\cdots \alpha _{p}]}.} === Torsion tensor === An affine connection has a torsion tensor Tαβγ: T α β γ = Γ α β γ − Γ α γ β − γ α β γ , {\displaystyle T^{\alpha }{}_{\beta \gamma }=\Gamma ^{\alpha }{}_{\beta \gamma }-\Gamma ^{\alpha }{}_{\gamma \beta }-\gamma ^{\alpha }{}_{\beta \gamma },} where γαβγ are given by the components of the Lie bracket of the local basis, which vanish when it is a coordinate basis. 
For a Levi-Civita connection this tensor is defined to be zero, which for a coordinate basis gives the equations Γ α β γ = Γ α γ β . {\displaystyle \Gamma ^{\alpha }{}_{\beta \gamma }=\Gamma ^{\alpha }{}_{\gamma \beta }.} === Riemann curvature tensor === If this tensor is defined as R ρ σ μ ν = Γ ρ ν σ , μ − Γ ρ μ σ , ν + Γ ρ μ λ Γ λ ν σ − Γ ρ ν λ Γ λ μ σ , {\displaystyle R^{\rho }{}_{\sigma \mu \nu }=\Gamma ^{\rho }{}_{\nu \sigma ,\mu }-\Gamma ^{\rho }{}_{\mu \sigma ,\nu }+\Gamma ^{\rho }{}_{\mu \lambda }\Gamma ^{\lambda }{}_{\nu \sigma }-\Gamma ^{\rho }{}_{\nu \lambda }\Gamma ^{\lambda }{}_{\mu \sigma }\,,} then it is the commutator of the covariant derivative with itself: A ν ; ρ σ − A ν ; σ ρ = A β R β ν ρ σ , {\displaystyle A_{\nu ;\rho \sigma }-A_{\nu ;\sigma \rho }=A_{\beta }R^{\beta }{}_{\nu \rho \sigma }\,,} since the connection is torsionless, which means that the torsion tensor vanishes. This can be generalized to get the commutator for two covariant derivatives of an arbitrary tensor as follows: T α 1 ⋯ α r β 1 ⋯ β s ; γ δ − T α 1 ⋯ α r β 1 ⋯ β s ; δ γ = − R α 1 ρ γ δ T ρ α 2 ⋯ α r β 1 ⋯ β s − ⋯ − R α r ρ γ δ T α 1 ⋯ α r − 1 ρ β 1 ⋯ β s + R σ β 1 γ δ T α 1 ⋯ α r σ β 2 ⋯ β s + ⋯ + R σ β s γ δ T α 1 ⋯ α r β 1 ⋯ β s − 1 σ {\displaystyle {\begin{aligned}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s};\gamma \delta }&-T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s};\delta \gamma }\\&\!\!\!\!\!\!\!\!\!\!=-R^{\alpha _{1}}{}_{\rho \gamma \delta }T^{\rho \alpha _{2}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}-\cdots -R^{\alpha _{r}}{}_{\rho \gamma \delta }T^{\alpha _{1}\cdots \alpha _{r-1}\rho }{}_{\beta _{1}\cdots \beta _{s}}\\&+R^{\sigma }{}_{\beta _{1}\gamma \delta }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\sigma \beta _{2}\cdots \beta _{s}}+\cdots +R^{\sigma }{}_{\beta _{s}\gamma \delta }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s-1}\sigma }\,\end{aligned}}} which are often referred to as the Ricci 
identities. === Metric tensor === The metric tensor gαβ is used for lowering indices and gives the length of any space-like curve length = ∫ y 1 y 2 g α β d x α d γ d x β d γ d γ , {\displaystyle {\text{length}}=\int _{y_{1}}^{y_{2}}{\sqrt {g_{\alpha \beta }{\frac {dx^{\alpha }}{d\gamma }}{\frac {dx^{\beta }}{d\gamma }}}}\,d\gamma \,,} where γ is any smooth strictly monotone parameterization of the path. It also gives the duration of any time-like curve duration = ∫ t 1 t 2 − 1 c 2 g α β d x α d γ d x β d γ d γ , {\displaystyle {\text{duration}}=\int _{t_{1}}^{t_{2}}{\sqrt {{\frac {-1}{c^{2}}}g_{\alpha \beta }{\frac {dx^{\alpha }}{d\gamma }}{\frac {dx^{\beta }}{d\gamma }}}}\,d\gamma \,,} where γ is any smooth strictly monotone parameterization of the trajectory. See also Line element. The inverse matrix gαβ of the metric tensor is another important tensor, used for raising indices: g α β g β γ = δ γ α . {\displaystyle g^{\alpha \beta }g_{\beta \gamma }=\delta _{\gamma }^{\alpha }\,.} == See also == == Notes == == References == == Sources == Bishop, R.L.; Goldberg, S.I. (1968), Tensor Analysis on Manifolds (First Dover 1980 ed.), The Macmillan Company, ISBN 0-486-64039-6 Danielson, Donald A. (2003). Vectors and Tensors in Engineering and Physics (2/e ed.). Westview (Perseus). ISBN 978-0-8133-4080-7. Dimitrienko, Yuriy (2002). Tensor Analysis and Nonlinear Tensor Functions. Kluwer Academic Publishers (Springer). ISBN 1-4020-1015-X. Lovelock, David; Hanno Rund (1989) [1975]. Tensors, Differential Forms, and Variational Principles. Dover. ISBN 978-0-486-65840-7. C. Møller (1952), The Theory of Relativity (3rd ed.), Oxford University Press Synge J.L.; Schild A. (1949). Tensor Calculus. first Dover Publications 1978 edition. ISBN 978-0-486-63612-2. J.R. Tyldesley (1975), An introduction to Tensor Analysis: For Engineers and Applied Scientists, Longman, ISBN 0-582-44355-5 D.C.
Kay (1988), Tensor Calculus, Schaum's Outlines, McGraw Hill (USA), ISBN 0-07-033484-6 T. Frankel (2012), The Geometry of Physics (3rd ed.), Cambridge University Press, ISBN 978-1107-602601 == Further reading == Dimitrienko, Yuriy (2002). Tensor Analysis and Nonlinear Tensor Functions. Springer. ISBN 1-4020-1015-X. Sokolnikoff, Ivan S (1951). Tensor Analysis: Theory and Applications to Geometry and Mechanics of Continua. Wiley. ISBN 0471810525. Borisenko, A.I.; Tarapov, I.E. (1979). Vector and Tensor Analysis with Applications (2nd ed.). Dover. ISBN 0486638332. Itskov, Mikhail (2015). Tensor Algebra and Tensor Analysis for Engineers: With Applications to Continuum Mechanics (2nd ed.). Springer. ISBN 9783319163420. Tyldesley, J. R. (1973). An introduction to Tensor Analysis: For Engineers and Applied Scientists. Longman. ISBN 0-582-44355-5. Kay, D. C. (1988). Tensor Calculus. Schaum's Outlines. McGraw Hill. ISBN 0-07-033484-6. Grinfeld, P. (2014). Introduction to Tensor Analysis and the Calculus of Moving Surfaces. Springer. ISBN 978-1-4614-7866-9. == External links == Dullemond, Kees; Peeters, Kasper (1991–2010). "Introduction to Tensor Calculus" (PDF). Retrieved 17 May 2018.
Wikipedia/Tensor_index_notation
Graphcore Limited is a British semiconductor company that develops accelerators for AI and machine learning. It has introduced a massively parallel Intelligence Processing Unit (IPU) that holds the complete machine learning model inside the processor. == History == Graphcore was founded in 2016 by Simon Knowles and Nigel Toon. In the autumn of 2016, Graphcore secured a first funding round led by Robert Bosch Venture Capital. Other backers included Samsung, Amadeus Capital Partners, C4 Ventures, Draper Esprit, Foundation Capital, and Pitango. In July 2017, Graphcore secured series B funding led by Atomico, which was followed a few months later by $50 million in funding from Sequoia Capital. In December 2018, Graphcore closed its series D with $200 million raised at a $1.7 billion valuation, making the company a unicorn. Investors included Microsoft, Samsung and Dell Technologies. On 13 November 2019, Graphcore announced that its Graphcore C2 IPUs were available for preview on Microsoft Azure. Meta Platforms acquired the AI networking technology team from Graphcore in early 2023. In July 2024, Softbank Group agreed to acquire Graphcore for around $500 million. The deal is under review by the investment security unit of the UK's Business Department. == Products == In 2016, Graphcore announced the world's first graph toolchain designed for machine intelligence, called the Poplar Software Stack. In July 2017, Graphcore announced its first chip, called the Colossus GC2, a "16 nm massively parallel, mixed-precision floating point processor", which became available in 2018. Packaged with two chips on a single PCI Express card, called the Graphcore C2 IPU (an Intelligence Processing Unit), it is stated to perform the same role as a GPU in conjunction with standard machine learning frameworks such as TensorFlow. The device relies on scratchpad memory for its performance rather than traditional cache hierarchies.
In July 2020, Graphcore presented its second-generation processor, the GC200, built with TSMC's 7 nm FinFET manufacturing process. The GC200 is a 59-billion-transistor, 823 square-millimetre integrated circuit with 1,472 computational cores and 900 MB of local memory. In 2022, Graphcore and TSMC presented the Bow IPU, a 3D package of a GC200 die bonded face-to-face to a power-delivery die, which allows a higher clock rate at a lower core voltage. Graphcore aims at a "Good" machine, named after I. J. Good, that would enable AI models with more parameters than the human brain has synapses. Both the older and newer chips run 6 threads per tile (for totals of 7,296 and 8,832 threads, respectively), using "MIMD (Multiple Instruction, Multiple Data) parallelism" with "distributed, local memory as its only form of memory on the device" (except for registers). The older GC2 chip has 256 KiB per tile, while the newer GC200 chip has about 630 KiB per tile. Tiles are grouped into islands (4 tiles per island), which are in turn arranged into columns; memory latency is lowest within a tile. The IPU uses IEEE FP16 with stochastic rounding, and also single-precision FP32 at lower performance. Code and the data it works on locally must fit within a tile, but with message passing all on-chip or off-chip memory can be used; the AI software stack makes this transparent, e.g. through PyTorch support. == See also == AI accelerator Massively parallel processor array == References == == External links == Graphcore
Wikipedia/Graphcore
In geometric optics, the paraxial approximation is a small-angle approximation used in Gaussian optics and ray tracing of light through an optical system (such as a lens). A paraxial ray is a ray that makes a small angle (θ) to the optical axis of the system, and lies close to the axis throughout the system. Generally, this allows three important approximations (for θ in radians) for calculation of the ray's path, namely: sin ⁡ θ ≈ θ , tan ⁡ θ ≈ θ and cos ⁡ θ ≈ 1. {\displaystyle \sin \theta \approx \theta ,\quad \tan \theta \approx \theta \quad {\text{and}}\quad \cos \theta \approx 1.} The paraxial approximation is used in Gaussian optics and first-order ray tracing. Ray transfer matrix analysis is one method that uses the approximation. In some cases, the second-order approximation is also called "paraxial". The approximations above for sine and tangent do not change for the "second-order" paraxial approximation (the second term in their Taylor series expansion is zero), while for cosine the second-order approximation is cos ⁡ θ ≈ 1 − θ 2 2 . {\displaystyle \cos \theta \approx 1-{\theta ^{2} \over 2}\ .} The second-order approximation is accurate within 0.5% for angles under about 10°, but its inaccuracy grows significantly for larger angles. For larger angles it is often necessary to distinguish between meridional rays, which lie in a plane containing the optical axis, and sagittal rays, which do not. Use of the small-angle approximations replaces dimensionless trigonometric functions with angles in radians. In dimensional analysis of optics equations, radians are dimensionless and can therefore be ignored. A paraxial approximation is also commonly used in physical optics. It is used in the derivation of the paraxial wave equation from the homogeneous Maxwell's equations and, consequently, Gaussian beam optics. == References == == External links == Paraxial Approximation and the Mirror by David Schurig, The Wolfram Demonstrations Project.
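As a numerical illustration of the accuracy claims above, the short Python sketch below tabulates the relative error of each approximation at a few angles (the script and its names are ours, purely for illustration):

```python
# Relative errors of the paraxial approximations sin θ ≈ θ, tan θ ≈ θ,
# cos θ ≈ 1, and the second-order cos θ ≈ 1 − θ²/2, at several angles.
import math

def rel_err(approx, exact):
    """Relative error |approx − exact| / |exact|."""
    return abs(approx - exact) / abs(exact)

def paraxial_errors(deg):
    """Return the four relative errors at an angle given in degrees."""
    t = math.radians(deg)
    return (
        rel_err(t, math.sin(t)),             # sin θ ≈ θ
        rel_err(t, math.tan(t)),             # tan θ ≈ θ
        rel_err(1.0, math.cos(t)),           # cos θ ≈ 1 (first order)
        rel_err(1 - t**2 / 2, math.cos(t)),  # cos θ ≈ 1 − θ²/2 (second order)
    )

if __name__ == "__main__":
    for deg in (1, 5, 10, 20):
        errs = "  ".join(f"{e:.2e}" for e in paraxial_errors(deg))
        print(f"{deg:2d} deg: {errs}")
```

At 10° the sine and tangent errors are roughly 0.5% and 1% respectively, while the second-order cosine term is still accurate to a few parts in 10⁵; the errors grow quickly beyond that, which is why the approximation is usually restricted to angles of about 10° or less.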
Wikipedia/Paraxial_approximation