Source: https://en.wikipedia.org/wiki/Limit_of_a_function
In mathematics, the limit of a function is a fundamental concept in calculus and analysis concerning the behavior of that function near a particular input, which may or may not be in the domain of the function. Formal definitions, first devised in the early 19th century, are given below. Informally, a function $f$ assigns an output $f(x)$ to every input $x$. We say that the function has a limit $L$ at an input $p$ if $f(x)$ gets closer and closer to $L$ as $x$ moves closer and closer to $p$. More specifically, the output value can be made arbitrarily close to $L$ if the input to $f$ is taken sufficiently close to $p$. On the other hand, if some inputs very close to $p$ are taken to outputs that stay a fixed distance apart, then we say the limit does not exist.

The notion of a limit has many applications in modern calculus. In particular, the many definitions of continuity employ the concept of limit: roughly, a function is continuous if all of its limits agree with the values of the function. The concept of limit also appears in the definition of the derivative: in the calculus of one variable, this is the limiting value of the slope of secant lines to the graph of a function.

Imagine a person walking on a landscape represented by the graph $y = f(x)$. Their horizontal position is given by $x$, much like the position given by a map of the land or by a global positioning system. Their altitude is given by the coordinate $y$. Suppose they walk towards a position $x = p$; as they get closer and closer to this point, they will notice that their altitude approaches a specific value $L$. If asked about the altitude corresponding to $x = p$, they would reply by saying $y = L$. What, then, does it mean to say that their altitude is approaching $L$? It means that their altitude gets nearer and nearer to $L$, except for a possible small error in accuracy. For example, suppose we set a particular accuracy goal for our traveler: they must get within ten meters of $L$.
They report back that indeed, they can get within ten vertical meters of $L$, arguing that as long as they are within fifty horizontal meters of $p$, their altitude is always within ten meters of $L$. The accuracy goal is then changed: can they get within one vertical meter? Yes: supposing that they are able to move within five horizontal meters of $p$, their altitude will always remain within one meter of the target altitude $L$. Summarizing the aforementioned concept, we can say that
the traveler's altitude approaches $L$ as their horizontal position approaches $p$; that is, for every target accuracy goal, however small it may be, there is some neighbourhood of $p$ such that all altitudes corresponding to all the horizontal positions in that neighbourhood, except possibly the horizontal position $p$ itself, fulfill that accuracy goal. The initial informal statement can now be explicated: in fact, this explicit statement is quite close to the formal definition of the limit of a function with values in a topological space. More specifically, to say that
$$\lim_{x \to p} f(x) = L$$
is to say that $f(x)$ can be made as close to $L$ as desired by making $x$ close enough, but not equal, to $p$.

The following definitions, known as $(\varepsilon, \delta)$-definitions, are the generally accepted definitions for the limit of a function in various contexts. Suppose $f : \mathbb{R} \to \mathbb{R}$ is a function defined on the real line, and there are two real numbers $p$ and $L$. One would say that the limit of $f$ of $x$, as $x$ approaches $p$, exists and equals $L$, and write
$$\lim_{x \to p} f(x) = L,$$
or alternatively, say $f(x)$ tends to $L$ as $x$ tends to $p$, written
$$f(x) \to L \text{ as } x \to p,$$
if the following property holds: for every real $\varepsilon > 0$, there exists a real $\delta > 0$ such that for all real $x$, $0 < |x - p| < \delta$ implies $|f(x) - L| < \varepsilon$. Symbolically,
$$(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in \mathbb{R})\,(0 < |x - p| < \delta \implies |f(x) - L| < \varepsilon).$$
For example, we may say
$$\lim_{x \to 2} (4x + 1) = 9$$
because for every real $\varepsilon > 0$, we can take $\delta = \varepsilon/4$, so that for all real $x$, if $0 < |x - 2| < \delta$, then $|4x + 1 - 9| < \varepsilon$.

A more general definition applies to functions defined on subsets of the real line. Let $S$ be a subset of $\mathbb{R}$. Let $f : S \to \mathbb{R}$ be a real-valued function. Let $p$ be a point such that there exists some open interval $(a, b)$ containing $p$ with $(a, p) \cup (p, b) \subset S$. It is then said that the limit of $f$ as $x$ approaches $p$ is $L$ if: for every real $\varepsilon > 0$, there exists a real $\delta > 0$ such that for all $x \in (a, b)$, $0 < |x - p| < \delta$ implies $|f(x) - L| < \varepsilon$. Or, symbolically:
$$(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in (a, b))\,(0 < |x - p| < \delta \implies |f(x) - L| < \varepsilon).$$
For example, we may say
$$\lim_{x \to 1} \sqrt{x + 3} = 2$$
because for every real $\varepsilon > 0$, we can take $\delta = \varepsilon$, so that for all real $x \geq -3$, if $0 < |x - 1| < \delta$, then $|f(x) - 2| < \varepsilon$.
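The $\delta$ chosen in the first worked example can be spot-checked numerically. The sketch below is an illustration, not a proof: it only samples finitely many points in the punctured $\delta$-neighbourhood and confirms the $\varepsilon$ bound at each.

```python
import random

# Spot-check: for lim_{x->2} (4x + 1) = 9, the text's choice delta = epsilon/4.
def f(x):
    return 4 * x + 1

def check_limit(f, p, L, epsilon, delta, trials=10_000):
    """Sample points with 0 < |x - p| < delta and verify |f(x) - L| < epsilon.
    A finite random sample only illustrates the claim; it cannot prove it."""
    for _ in range(trials):
        x = p + random.uniform(-delta, delta)
        if not 0 < abs(x - p) < delta:
            continue  # skip the excluded point x = p (and rare endpoint hits)
        if not abs(f(x) - L) < epsilon:
            return False
    return True

for eps in (1.0, 0.25, 1e-3):
    assert check_limit(f, p=2, L=9, epsilon=eps, delta=eps / 4)
```

The same sampler can be pointed at any of the later worked examples by swapping in the relevant $f$, $p$, $L$, and $\delta(\varepsilon)$.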
In this example, $S = [-3, \infty)$ contains open intervals around the point 1 (for example, the interval $(0, 2)$).

Here, note that the value of the limit does not depend on $f$ being defined at $p$, nor on the value $f(p)$ if it is defined. For example, let
$$f : [0, 1) \cup (1, 2] \to \mathbb{R}, \qquad f(x) = \frac{2x^2 - x - 1}{x - 1}.$$
Then
$$\lim_{x \to 1} f(x) = 3$$
because for every $\varepsilon > 0$, we can take $\delta = \varepsilon/2$, so that for all real $x \neq 1$, if $0 < |x - 1| < \delta$, then $|f(x) - 3| < \varepsilon$. Note that here $f(1)$ is undefined.

In fact, a limit can exist on
$$\{p \in \mathbb{R} \mid \exists (a, b) \subset \mathbb{R} : p \in (a, b) \text{ and } (a, p) \cup (p, b) \subset S\},$$
which equals $\operatorname{int} S \cup \operatorname{iso} S^c$, where $\operatorname{int} S$ is the interior of $S$, and $\operatorname{iso} S^c$ is the set of isolated points of the complement of $S$. In our previous example, where $S = [0, 1) \cup (1, 2]$,
$\operatorname{int} S = (0, 1) \cup (1, 2)$ and $\operatorname{iso} S^c = \{1\}$. We see, specifically, that this definition of limit allows a limit to exist at 1, but not at 0 or 2.

The letters $\varepsilon$ and $\delta$ can be understood as "error" and "distance". In fact, Cauchy used $\varepsilon$ as an abbreviation for "error" in some of his work, though in his definition of continuity he used an infinitesimal $\alpha$ rather than either $\varepsilon$ or $\delta$ (see Cours d'Analyse). In these terms, the error ($\varepsilon$) in the measurement of the value at the limit can be made as small as desired by reducing the distance ($\delta$) to the limit point. As discussed below, this definition also works for functions in a more general context. The idea that $\delta$ and $\varepsilon$ represent distances helps suggest these generalizations.

Alternatively, $x$ may approach $p$ from above (right) or below (left), in which case the limits may be written as
$$\lim_{x \to p^+} f(x) = L \quad \text{or} \quad \lim_{x \to p^-} f(x) = L,$$
respectively. If both of these limits exist at $p$ and are equal there, then this common value can be referred to as the limit of $f(x)$ at $p$. If the one-sided limits exist at $p$ but are unequal, then there is no limit at $p$ (i.e., the limit at $p$ does not exist). If either one-sided limit does not exist at $p$, then the limit at $p$ also does not exist.

A formal definition is as follows. The limit of $f$ as $x$ approaches $p$ from above is $L$ if: for every $\varepsilon > 0$, there exists a $\delta > 0$ such that whenever $0 < x - p < \delta$, we have $|f(x) - L| < \varepsilon$.
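Unequal one-sided limits can be illustrated with a small numeric sketch. The function $f(x) = |x|/x$ used here is a hypothetical example not taken from the text: its right-hand limit at 0 is 1 and its left-hand limit is $-1$, so no two-sided limit exists there.

```python
# Hypothetical example (not from the text): f(x) = |x|/x near p = 0.
# Right-hand limit: 1. Left-hand limit: -1. Two-sided limit: does not exist.
def f(x):
    return abs(x) / x  # undefined at x = 0, which the (deleted) limit ignores

right_values = [f(10 ** -n) for n in range(1, 10)]     # x -> 0 from above
left_values = [f(-(10 ** -n)) for n in range(1, 10)]   # x -> 0 from below

assert all(v == 1.0 for v in right_values)
assert all(v == -1.0 for v in left_values)
```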
Symbolically:
$$(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in (a, b))\,(0 < x - p < \delta \implies |f(x) - L| < \varepsilon).$$
The limit of $f$ as $x$ approaches $p$ from below is $L$ if: for every $\varepsilon > 0$, there exists a $\delta > 0$ such that whenever $0 < p - x < \delta$, we have $|f(x) - L| < \varepsilon$. Symbolically:
$$(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in (a, b))\,(0 < p - x < \delta \implies |f(x) - L| < \varepsilon).$$
If the limit does not exist, then the oscillation of $f$ at $p$ is non-zero.

Limits can also be defined by approaching from subsets of the domain. In general: let $f : S \to \mathbb{R}$ be a real-valued function defined on some $S \subseteq \mathbb{R}$. Let $p$ be a limit point of some $T \subset S$, that is, $p$ is the limit of some sequence of elements of $T$ distinct from $p$. Then we say the limit of $f$, as $x$ approaches $p$ from values in $T$, is $L$, written
$$\lim_{\substack{x \to p \\ x \in T}} f(x) = L,$$
if the following holds:
$$(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in T)\,(0 < |x - p| < \delta \implies |f(x) - L| < \varepsilon).$$
Note that $T$ can be any subset of $S$, the domain of $f$, and the limit may depend on the selection of $T$. This generalization includes as special cases limits on an interval, as well as left-handed limits of real-valued functions (e.g., by taking $T$ to be an open interval of the form $(-\infty, a)$) and right-handed limits (e.g., by taking $T$ to be an open interval of the form $(a, \infty)$). It also extends the notion of one-sided limits to the included endpoints of (half-)closed intervals, so the square root function $f(x) = \sqrt{x}$ can have limit 0 as $x$ approaches 0 from above:
$$\lim_{\substack{x \to 0 \\ x \in [0, \infty)}} \sqrt{x} = 0,$$
since for every $\varepsilon > 0$, we may take $\delta = \varepsilon^2$ such that for all $x \geq 0$, if $0 < |x - 0| < \delta$, then $|f(x) - 0| < \varepsilon$.

This definition allows a limit to be defined at limit points of the domain $S$, if a suitable subset $T$ which has the same limit point is chosen. Notably, the previous two-sided definition works on $\operatorname{int} S \cup \operatorname{iso} S^c$,
which is a subset of the set of limit points of $S$. For example, let $S = [0, 1) \cup (1, 2]$. The previous two-sided definition would work at $1 \in \operatorname{iso} S^c = \{1\}$, but it wouldn't work at 0 or 2, which are limit points of $S$.

The definition of limit given here does not depend on how (or whether) $f$ is defined at $p$. Bartle refers to this as a deleted limit, because it excludes the value of $f$ at $p$. The corresponding non-deleted limit does depend on the value of $f$ at $p$, if $p$ is in the domain of $f$. Let $f : S \to \mathbb{R}$ be a real-valued function. The non-deleted limit of $f$, as $x$ approaches $p$, is $L$ if
$$(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in S)\,(|x - p| < \delta \implies |f(x) - L| < \varepsilon).$$
The definition is the same, except that the neighborhood $|x - p| < \delta$ now includes the point $p$, in contrast to the deleted neighborhood $0 < |x - p| < \delta$. This makes the definition of a non-deleted limit less general. One of the advantages of working with non-deleted limits is that they allow the theorem about limits of compositions to be stated without any constraints on the functions (other than the existence of their non-deleted limits). Bartle notes that although by "limit" some authors do mean this non-deleted limit, deleted limits are the most popular.
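The deleted/non-deleted distinction can be made concrete with a small sketch. The function below is a hypothetical example, not from the text: $f(0) = 0$ but $f(x) = 1$ elsewhere. Its deleted limit at 0 is 1, while the non-deleted condition fails for any $\varepsilon \leq 1$, because the neighbourhood now includes $x = 0$ itself.

```python
# Hypothetical example: f(x) = 1 for x != 0, but f(0) = 0.
def f(x):
    return 1.0 if x != 0 else 0.0

def deleted_ok(epsilon, delta, samples):
    # deleted limit: only points with 0 < |x| < delta are tested
    return all(abs(f(x) - 1.0) < epsilon
               for x in samples if 0 < abs(x) < delta)

def non_deleted_ok(epsilon, delta, samples):
    # non-deleted limit: x = 0 itself is also tested
    return all(abs(f(x) - 1.0) < epsilon
               for x in samples if abs(x) < delta)

samples = [k / 1000 for k in range(-1000, 1001)]  # grid including x = 0
assert deleted_ok(epsilon=0.5, delta=0.1, samples=samples)
assert not non_deleted_ok(epsilon=0.5, delta=0.1, samples=samples)  # fails at x = 0
```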
The function
$$f(x) = \begin{cases} \sin\dfrac{5}{x-1} & \text{for } x < 1 \\ 0 & \text{for } x = 1 \\ \dfrac{1}{10x - 10} & \text{for } x > 1 \end{cases}$$
has no limit at $x_0 = 1$ (the left-hand limit does not exist due to the oscillatory nature of the sine function, and the right-hand limit does not exist due to the asymptotic behaviour of the reciprocal function), but has a limit at every other $x$-coordinate.

The function
$$f(x) = \begin{cases} 1 & x \text{ rational} \\ 0 & x \text{ irrational} \end{cases}$$
(a.k.a. the Dirichlet function) has no limit at any $x$-coordinate.

The function
$$f(x) = \begin{cases} 1 & \text{for } x < 0 \\ 2 & \text{for } x \geq 0 \end{cases}$$
has a limit at every non-zero $x$-coordinate (the limit equals 1 for negative $x$ and equals 2 for positive $x$). The limit at $x = 0$ does not exist (the left-hand limit equals 1, whereas the right-hand limit equals 2).

The functions
$$f(x) = \begin{cases} x & x \text{ rational} \\ 0 & x \text{ irrational} \end{cases} \qquad \text{and} \qquad f(x) = \begin{cases} |x| & x \text{ rational} \\ 0 & x \text{ irrational} \end{cases}$$
both have a limit at $x = 0$, and it equals 0.

The function
$$f(x) = \begin{cases} \sin x & x \text{ irrational} \\ 1 & x \text{ rational} \end{cases}$$
has a limit at every $x$-coordinate of the form $\frac{\pi}{2} + 2n\pi$, where $n$ is any integer.

Let $f : S \to \mathbb{R}$ be a function defined on $S \subseteq \mathbb{R}$. The limit of $f$ as $x$ approaches infinity is $L$, denoted
$$\lim_{x \to \infty} f(x) = L,$$
means that:
$$(\forall \varepsilon > 0)\,(\exists c > 0)\,(\forall x \in S)\,(x > c \implies |f(x) - L| < \varepsilon).$$
Similarly, the limit of $f$ as $x$ approaches minus infinity is $L$, denoted
$$\lim_{x \to -\infty} f(x) = L,$$
means that:
$$(\forall \varepsilon > 0)\,(\exists c > 0)\,(\forall x \in S)\,(x < -c \implies |f(x) - L| < \varepsilon).$$
For example,
$$\lim_{x \to \infty} \left( -\frac{3\sin x}{x} + 4 \right) = 4$$
because for every $\varepsilon > 0$, we can take $c = 3/\varepsilon$ such that for all real $x$, if $x > c$, then $|f(x) - 4| < \varepsilon$. Another example is that
$$\lim_{x \to -\infty} e^x = 0$$
because for every $\varepsilon > 0$, we can take $c = \max\{1, -\ln(\varepsilon)\}$ such that for all real $x$, if $x < -c$, then $|f(x) - 0| < \varepsilon$.

For a function whose values grow without bound, the function diverges and the usual limit does not exist. However, in this case one may introduce limits with infinite values. Let $f : S \to \mathbb{R}$ be a function defined on $S \subseteq \mathbb{R}$. The statement "the limit of $f$ as $x$ approaches $p$ is infinity", denoted
$$\lim_{x \to p} f(x) = \infty,$$
means that:
$$(\forall N > 0)\,(\exists \delta > 0)\,(\forall x \in S)\,(0 < |x - p| < \delta \implies f(x) > N).$$
The statement "the limit of $f$ as $x$ approaches $p$ is minus infinity", denoted
$$\lim_{x \to p} f(x) = -\infty,$$
means that:
$$(\forall N > 0)\,(\exists \delta > 0)\,(\forall x \in S)\,(0 < |x - p| < \delta \implies f(x) < -N).$$
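The quantifier pattern of the infinite limit (every bound $N$ must be exceeded sufficiently near $p$) can also be sampled numerically. The function $g(x) = 1/|x|$ at $p = 0$ is a hypothetical example not from the text; there $\delta = 1/N$ works, since $0 < |x| < 1/N$ gives $1/|x| > N$. A finite sample is an illustration, not a proof.

```python
import random

# Hypothetical example (not from the text): g(x) = 1/|x| has limit +infinity
# at p = 0; for any bound N > 0, delta = 1/N suffices.
def g(x):
    return 1 / abs(x)

def check_infinite_limit(g, p, N, delta, trials=10_000):
    """Sample points with 0 < |x - p| < delta and verify g(x) > N at each."""
    for _ in range(trials):
        x = p + random.uniform(-delta, delta)
        if not 0 < abs(x - p) < delta:
            continue  # skip x = p and rare endpoint hits
        if not g(x) > N:
            return False
    return True

for N in (10, 1_000, 10**6):
    assert check_infinite_limit(g, p=0, N=N, delta=1 / N)
```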
For example,
$$\lim_{x \to 1} \frac{1}{(x-1)^2} = \infty$$
because for every $N > 0$, we can take $\delta = \frac{1}{\sqrt{N}}$ such that for all real $x \neq 1$, if $0 < |x - 1| < \delta$, then $f(x) > N$.

These ideas can be used together to produce definitions for different combinations, such as
$$\lim_{x \to \infty} f(x) = \infty \quad \text{or} \quad \lim_{x \to p^+} f(x) = -\infty.$$
For example,
$$\lim_{x \to 0^+} \ln x = -\infty$$
because for every $N > 0$, we can take $\delta = e^{-N}$ such that for all real $x > 0$, if $0 < x - 0 < \delta$, then $f(x) < -N$.

Limits involving infinity are connected with the concept of asymptotes. These notions of a limit attempt to provide a metric space interpretation to limits at infinity. In fact, they are consistent with the topological space definition of limit if a neighborhood of $-\infty$ is defined to contain an interval $[-\infty, c)$ for some $c \in \mathbb{R}$,
a neighborhood of $\infty$ is defined to contain an interval $(c, \infty]$ where $c \in \mathbb{R}$, and a neighborhood of $a \in \mathbb{R}$ is defined in the normal way of the metric space $\mathbb{R}$. In this case, $\overline{\mathbb{R}}$ is a topological space and any function of the form $f : X \to Y$ with $X, Y \subseteq \overline{\mathbb{R}}$ is subject to the topological definition of a limit. Note that with this topological definition, it is easy to define infinite limits at finite points, which have not been defined above in the metric sense.

Many authors allow for the projectively extended real line to be used as a way to include infinite values, as well as the extended real line. With this notation, the extended real line is given as $\mathbb{R} \cup \{-\infty, +\infty\}$ and the projectively extended real line is $\mathbb{R} \cup \{\infty\}$, where a neighborhood of $\infty$ is a set of the form $\{x : |x| > c\}$. The advantage is that one only needs three definitions for limits (left, right, and central) to cover all the cases. As presented above, for a completely rigorous account, we would need to consider 15 separate cases for each combination of infinities (five directions: $-\infty$, left, central, right, and $+\infty$; three bounds: $-\infty$, finite, or $+\infty$). There are also noteworthy pitfalls. For example, when working with the extended real line, $x^{-1}$ does not possess a central limit (which is normal):
$$\lim_{x \to 0^+} \frac{1}{x} = +\infty, \quad \lim_{x \to 0^-} \frac{1}{x} = -\infty.$$
In contrast, when working with the projective real line, infinities (much like 0) are unsigned, so the central limit does exist in that context:
$$\lim_{x \to 0^+} \frac{1}{x} = \lim_{x \to 0^-} \frac{1}{x} = \lim_{x \to 0} \frac{1}{x} = \infty.$$
In fact, there is a plethora of conflicting formal systems in use. In certain applications of numerical differentiation and integration, it is, for example, convenient to have signed zeroes. A simple reason has to do with the converse of $\lim_{x \to 0^-} x^{-1} = -\infty$, namely, it is convenient for $\lim_{x \to -\infty} x^{-1} = -0$ to be considered true. Such zeroes can be seen as an approximation to infinitesimals.

There are three basic rules for evaluating limits at infinity for a rational function $f(x) = \frac{p(x)}{q(x)}$ (where $p$ and $q$ are polynomials): if the degree of $p$ is greater than the degree of $q$, then the limit is positive or negative infinity depending on the signs of the leading coefficients; if the degrees of $p$ and $q$ are equal, the limit is the leading coefficient of $p$ divided by the leading coefficient of $q$; if the degree of $p$ is less than the degree of $q$, the limit is 0. If the limit at infinity exists, it represents a horizontal asymptote at $y = L$.
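The three degree rules transcribe directly into a small routine. This is a sketch under stated assumptions: polynomials are given as coefficient lists, highest power first, with nonzero leading coefficients, and only the limit as $x \to +\infty$ is handled.

```python
import math

def rational_limit_at_infinity(p, q):
    """Limit of p(x)/q(x) as x -> +infinity, following the three degree
    rules. p and q are coefficient lists, highest power first, with
    nonzero leading coefficients (an assumption of this sketch)."""
    deg_p, deg_q = len(p) - 1, len(q) - 1
    if deg_p > deg_q:
        # the sign of the limit is the sign of the ratio of leading coefficients
        return math.inf if p[0] / q[0] > 0 else -math.inf
    if deg_p == deg_q:
        return p[0] / q[0]
    return 0.0

assert rational_limit_at_infinity([2, 0, 1], [1, 5]) == math.inf  # deg 2 over deg 1
assert rational_limit_at_infinity([3, 1], [6, 0]) == 0.5          # equal degrees
assert rational_limit_at_infinity([1], [1, 0]) == 0.0             # deg 0 over deg 1
```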
Polynomials do not have horizontal asymptotes; such asymptotes may however occur with rational functions.

By noting that $|x - p|$ represents a distance, the definition of a limit can be extended to functions of more than one variable. In the case of a function $f : S \times T \to \mathbb{R}$ defined on $S \times T \subseteq \mathbb{R}^2$, we define the limit as follows: the limit of $f$ as $(x, y)$ approaches $(p, q)$ is $L$, written
$$\lim_{(x, y) \to (p, q)} f(x, y) = L,$$
if the following condition holds: for every $\varepsilon > 0$, there exists a $\delta > 0$ such that for all $x$ in $S$ and $y$ in $T$, whenever $0 < \sqrt{(x-p)^2 + (y-q)^2} < \delta$, we have $|f(x, y) - L| < \varepsilon$; or, formally:
$$(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in S)\,(\forall y \in T)\,\left(0 < \sqrt{(x-p)^2 + (y-q)^2} < \delta \implies |f(x, y) - L| < \varepsilon\right).$$
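This two-variable condition can be sampled the same way as the one-variable one. The sketch below is an illustration only; it uses a hypothetical example not from the text, $f(x, y) = x + y$ at $(0, 0)$, where $\delta = \varepsilon/2$ works because $|x + y| \leq 2\sqrt{x^2 + y^2}$.

```python
import math
import random

def check_limit_2d(f, p, q, L, epsilon, delta, trials=10_000):
    """Sample points in the punctured delta-disk around (p, q) and verify
    |f(x, y) - L| < epsilon at each; an illustration, not a proof."""
    for _ in range(trials):
        r = random.uniform(0, delta)
        theta = random.uniform(0, 2 * math.pi)
        x, y = p + r * math.cos(theta), q + r * math.sin(theta)
        dist = math.hypot(x - p, y - q)
        if not 0 < dist < delta:
            continue  # skip the centre point and rare boundary hits
        if not abs(f(x, y) - L) < epsilon:
            return False
    return True

# Hypothetical example: f(x, y) = x + y -> 0 as (x, y) -> (0, 0),
# with delta = epsilon / 2.
for eps in (1.0, 0.1, 1e-4):
    assert check_limit_2d(lambda x, y: x + y, 0, 0, 0, eps, eps / 2)
```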
Here $\sqrt{(x-p)^2 + (y-q)^2}$ is the Euclidean distance between $(x, y)$ and $(p, q)$. (This can in fact be replaced by any norm $\|(x, y) - (p, q)\|$, and be extended to any number of variables.)

For example, we may say
$$\lim_{(x, y) \to (0, 0)} \frac{x^4}{x^2 + y^2} = 0$$
because for every $\varepsilon > 0$, we can take $\delta = \sqrt{\varepsilon}$ such that for all real $x \neq 0$ and real $y \neq 0$, if $0 < \sqrt{(x-0)^2 + (y-0)^2} < \delta$, then $|f(x, y) - 0| < \varepsilon$.

Similar to the case in single variable, the value of $f$ at $(p, q)$ does not matter in this definition of limit. For such a multivariable limit to exist, this definition requires the value of $f$ to approach $L$ along every possible path approaching $(p, q)$. In the above example, the function
$$f(x, y) = \frac{x^4}{x^2 + y^2}$$
satisfies this condition. This can be seen by considering the polar coordinates $(x, y) = (r\cos\theta, r\sin\theta) \to (0, 0)$,
which gives
$$\lim_{r \to 0} f(r\cos\theta, r\sin\theta) = \lim_{r \to 0} \frac{r^4 \cos^4\theta}{r^2} = \lim_{r \to 0} r^2 \cos^4\theta.$$
Here $\theta = \theta(r)$ is a function of $r$ which controls the shape of the path along which $f$ is approaching $(p, q)$. Since $\cos\theta$ is bounded between $[-1, 1]$, by the sandwich theorem, this limit tends to 0.

In contrast, the function
$$f(x, y) = \frac{xy}{x^2 + y^2}$$
does not have a limit at $(0, 0)$. Taking the path $(x, y) = (t, 0) \to (0, 0)$, we obtain
$$\lim_{t \to 0} f(t, 0) = \lim_{t \to 0} \frac{0}{t^2} = 0,$$
while taking the path $(x, y) = (t, t) \to (0, 0)$, we obtain
$$\lim_{t \to 0} f(t, t) = \lim_{t \to 0} \frac{t^2}{t^2 + t^2} = \frac{1}{2}.$$
Since the two values do not agree, $f$ does not tend to a single value as $(x, y)$ approaches $(0, 0)$.
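The two paths used in this example can be traced numerically; the sampled values sit at 0 along one path and at $1/2$ along the other, illustrating (though of course not proving) the disagreement.

```python
# Evaluate f(x, y) = xy / (x^2 + y^2) along the two paths from the text.
def f(x, y):
    return x * y / (x**2 + y**2)

ts = [10 ** -n for n in range(1, 8)]
along_axis = [f(t, 0.0) for t in ts]      # path (t, 0): values are 0
along_diagonal = [f(t, t) for t in ts]    # path (t, t): values are 1/2

assert all(v == 0.0 for v in along_axis)
assert all(abs(v - 0.5) < 1e-12 for v in along_diagonal)
```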
Although less commonly used, there is another type of limit for a multivariable function, known as the multiple limit. For a two-variable function, this is the double limit. Let $f : S \times T \to \mathbb{R}$ be defined on $S \times T \subseteq \mathbb{R}^2$. We say the double limit of $f$ as $x$ approaches $p$ and $y$ approaches $q$ is $L$, written
$$\lim_{\substack{x \to p \\ y \to q}} f(x, y) = L,$$
if the following condition holds:
$$(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in S)\,(\forall y \in T)\,\big((0 < |x - p| < \delta) \land (0 < |y - q| < \delta) \implies |f(x, y) - L| < \varepsilon\big).$$
For such a double limit to exist, this definition requires the value of $f$ to approach $L$ along every possible path approaching $(p, q)$, excluding the two lines $x = p$ and $y = q$. As a result, the multiple limit is a weaker notion than the ordinary limit: if the ordinary limit exists and equals $L$, then the multiple limit exists and also equals $L$. The converse is not true: the existence of the multiple limit does not imply the existence of the ordinary limit. Consider the example
$$f(x, y) = \begin{cases} 1 & \text{for } xy \neq 0 \\ 0 & \text{for } xy = 0 \end{cases}$$
where
$$\lim_{\substack{x \to 0 \\ y \to 0}} f(x, y) = 1$$
but $\lim_{(x, y) \to (0, 0)} f(x, y)$ does not exist. If the domain of $f$ is restricted to $(S \setminus \{p\}) \times (T \setminus \{q\})$, then the two definitions of limits coincide.

The concept of multiple limit can extend to the limit at infinity, in a way similar to that of a single variable function. For $f : S \times T \to \mathbb{R}$, we say the double limit of $f$ as $x$ and $y$ approach infinity is $L$, written
$$\lim_{\substack{x \to \infty \\ y \to \infty}} f(x, y) = L,$$
if the following condition holds:
$$(\forall \varepsilon > 0)\,(\exists c > 0)\,(\forall x \in S)\,(\forall y \in T)\,\big((x > c) \land (y > c) \implies |f(x, y) - L| < \varepsilon\big).$$
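The gap between the double limit and the ordinary limit in the example above is visible numerically: any path that keeps both coordinates nonzero sees the value 1, while a path along an axis (which the double limit excludes but the ordinary limit does not) sees 0. A small sketch:

```python
# f from the text: 1 when xy != 0, and 0 when xy = 0.
def f(x, y):
    return 1.0 if x * y != 0 else 0.0

ts = [10 ** -n for n in range(1, 8)]

# Paths with both coordinates nonzero (all the double limit tests): value 1.
assert all(f(t, t) == 1.0 for t in ts)
assert all(f(t, 2 * t) == 1.0 for t in ts)

# The ordinary limit also tests paths along the axes, where the value is 0,
# so f has no ordinary limit at (0, 0).
assert all(f(t, 0.0) == 0.0 for t in ts)
```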
We say the double limit of $f$ as $x$ and $y$ approach minus infinity is $L$, written
$$\lim_{\substack{x \to -\infty \\ y \to -\infty}} f(x, y) = L,$$
if the following condition holds:
$$(\forall \varepsilon > 0)\,(\exists c > 0)\,(\forall x \in S)\,(\forall y \in T)\,\big((x < -c) \land (y < -c) \implies |f(x, y) - L| < \varepsilon\big).$$

Let $f : S \times T \to \mathbb{R}$. Instead of taking the limit as $(x, y) \to (p, q)$, we may consider taking the limit of just one variable, say $x \to p$, to obtain a single-variable function of $y$, namely $g : T \to \mathbb{R}$. In fact, this limiting process can be done in two distinct ways. The first one is called the pointwise limit. We say the pointwise limit of $f$ as $x$ approaches $p$ is $g$, denoted
$$\lim_{x \to p} f(x, y) = g(y),$$
or
$$\lim_{x \to p} f(x, y) = g(y) \;\; \text{pointwise}.$$
|
Alternatively, we may say f tends to g pointwise as x approaches p, denoted

f(x, y) \to g(y) \quad \text{as } x \to p,

or

f(x, y) \to g(y) \quad \text{pointwise as } x \to p.

This limit exists if the following holds:

(\forall \varepsilon > 0)\,(\forall y \in T)\,(\exists \delta > 0)\,(\forall x \in S)\,(0 < |x - p| < \delta \implies |f(x, y) - g(y)| < \varepsilon).

Here, \delta = \delta(\varepsilon, y) is a function of both \varepsilon and y. Each \delta is chosen for a specific point y. Hence we say the limit is pointwise in y. For example,

f(x, y) = \frac{x}{\cos y}

has the constant zero function as its pointwise limit,

\lim_{x \to 0} f(x, y) = 0 \quad \text{pointwise},

because for every fixed y, the limit is clearly 0.
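A quick numerical sketch (plain Python; the sample points are illustrative choices, not from the text) shows why this limit is only pointwise: for a fixed small x the value is tiny, but moving y toward π/2 makes x / cos y arbitrarily large, so a single δ cannot serve every y.

```python
import math

def f(x, y):
    # f(x, y) = x / cos(y); blows up as y approaches pi/2
    return x / math.cos(y)

# For a fixed y, f(x, y) -> 0 as x -> 0: the pointwise limit is 0.
print(f(1e-6, 1.0))                 # tiny, on the order of 1e-6

# But for the same small x, choosing y near pi/2 makes f huge,
# so delta must depend on y: the convergence is not uniform.
print(f(1e-6, math.pi / 2 - 1e-9))  # large, on the order of 1000
```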
This argument fails if y is not fixed: if y is very close to \pi/2, the value of the fraction may deviate from 0. This leads to another definition of limit, namely the uniform limit. We say the uniform limit of f on T as x approaches p is g, denoted

\operatorname*{unif\,lim}_{\substack{x \to p \\ y \in T}} f(x, y) = g(y),

or

\lim_{x \to p} f(x, y) = g(y) \quad \text{uniformly on } T.

Alternatively, we may say f tends to g uniformly on T as x approaches p, denoted

f(x, y) \rightrightarrows g(y) \text{ on } T \quad \text{as } x \to p,

or

f(x, y) \to g(y) \quad \text{uniformly on } T \text{ as } x \to p.

This limit exists if the following holds:

(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in S)\,(\forall y \in T)\,(0 < |x - p| < \delta \implies |f(x, y) - g(y)| < \varepsilon).
Here, \delta = \delta(\varepsilon) is a function of \varepsilon only, not of y. In other words, \delta is uniformly applicable to all y in T. Hence we say the limit is uniform in y. For example,

f(x, y) = x \cos y

has the constant zero function as its uniform limit,

\lim_{x \to 0} f(x, y) = 0 \quad \text{uniformly on } \mathbb{R},

because for all real y, \cos y is bounded between -1 and 1. Hence no matter how y behaves, we may use the sandwich theorem to show that the limit is 0.

Let f : S \times T \to \mathbb{R}. We may take the limit in just one variable, say x \to p, to obtain a single-variable function of y, namely g : T \to \mathbb{R}, and then take the limit in the other variable, namely y \to q, to get a number L. Symbolically,

\lim_{y \to q} \lim_{x \to p} f(x, y) = \lim_{y \to q} g(y) = L.

This limit is known as the iterated limit of the multivariable function. The order of taking limits may affect the result; that is, in general,

\lim_{y \to q} \lim_{x \to p} f(x, y) \neq \lim_{x \to p} \lim_{y \to q} f(x, y).
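The order dependence can be seen with a standard textbook counterexample (the function below is a common illustration, not taken from the text above): for f(x, y) = (x - y)/(x + y), the two iterated limits at (0, 0) disagree. The sketch approximates each inner limit by making one variable much smaller than the other.

```python
# Standard illustration: f(x, y) = (x - y) / (x + y) near (0, 0).
def f(x, y):
    return (x - y) / (x + y)

# Take y -> 0 first (y much smaller than x), then shrink x:
inner_y_first = f(1e-3, 1e-12)   # close to +1
# Take x -> 0 first (x much smaller than y), then shrink y:
inner_x_first = f(1e-12, 1e-3)   # close to -1

print(inner_y_first, inner_x_first)
```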
A sufficient condition of equality is given by the Moore-Osgood theorem, which requires the limit

\lim_{x \to p} f(x, y) = g(y)

to be uniform on T.

Suppose M and N are subsets of metric spaces A and B, respectively, and f : M \to N is defined between M and N, with x \in M, p a limit point of M, and L \in N. It is said that the limit of f as x approaches p is L, written

\lim_{x \to p} f(x) = L,

if the following property holds:

(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in M)\,(0 < d_A(x, p) < \delta \implies d_B(f(x), L) < \varepsilon).

Again, note that p need not be in the domain of f, nor does L need to be in the range of f, and even if f(p) is defined it need not be equal to L.

The limit in Euclidean space is a direct generalization of limits to vector-valued functions. For example, we may consider a function f : S \times T \to \mathbb{R}^3 such that

f(x, y) = (f_1(x, y), f_2(x, y), f_3(x, y)).
Then, under the usual Euclidean metric,

\lim_{(x, y) \to (p, q)} f(x, y) = (L_1, L_2, L_3)

if the following holds:

(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in S)\,(\forall y \in T)\,\left(0 < \sqrt{(x - p)^2 + (y - q)^2} < \delta \implies \sqrt{(f_1 - L_1)^2 + (f_2 - L_2)^2 + (f_3 - L_3)^2} < \varepsilon\right).

In this example, the function concerned is a finite-dimensional vector-valued function. In this case, the limit theorem for vector-valued functions states that if the limit of each component exists, then the limit of the vector-valued function equals the vector of the componentwise limits:

\lim_{(x, y) \to (p, q)} \bigl(f_1(x, y), f_2(x, y), f_3(x, y)\bigr) = \Bigl(\lim_{(x, y) \to (p, q)} f_1(x, y), \lim_{(x, y) \to (p, q)} f_2(x, y), \lim_{(x, y) \to (p, q)} f_3(x, y)\Bigr).
One might also want to consider spaces other than Euclidean space. An example would be the Manhattan space. Consider f : S \to \mathbb{R}^2 such that

f(x) = (f_1(x), f_2(x)).

Then, under the Manhattan metric,

\lim_{x \to p} f(x) = (L_1, L_2)

if the following holds:

(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in S)\,(0 < |x - p| < \delta \implies |f_1 - L_1| + |f_2 - L_2| < \varepsilon).
Since this is also a finite-dimensional vector-valued function, the limit theorem stated above also applies.

Finally, we will discuss the limit in function space, which has infinite dimensions. Consider a function f(x, y) in the function space S \times T \to \mathbb{R}. We want to find out how, as x approaches p, f(x, y) tends to another function g(y), which is in the function space T \to \mathbb{R}. The "closeness" in this function space may be measured under the uniform metric. Then, we say the uniform limit of f on T as x approaches p is g and write

\operatorname*{unif\,lim}_{\substack{x \to p \\ y \in T}} f(x, y) = g(y),

or

\lim_{x \to p} f(x, y) = g(y) \quad \text{uniformly on } T,

if the following holds:

(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in S)\,\Bigl(0 < |x - p| < \delta \implies \sup_{y \in T} |f(x, y) - g(y)| < \varepsilon\Bigr).

In fact, one can see that this definition is equivalent to that of the uniform limit of a multivariable function introduced in the previous section.
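The sup-distance in this definition can be approximated on a finite grid. The sketch below (plain Python; the grid and sample values of x are illustrative choices, not from the text) estimates sup_y |x cos y - 0| for the uniform-limit example f(x, y) = x cos y and shows it shrinking with x.

```python
import math

# Estimate the uniform distance sup_y |f(x, y) - g(y)| on a finite grid,
# for f(x, y) = x * cos(y) and g = 0 (the uniform-limit example above).
def sup_distance(x, ys):
    return max(abs(x * math.cos(y)) for y in ys)

ys = [k * 0.01 for k in range(-1000, 1001)]  # grid over [-10, 10]

for x in (0.1, 0.01, 0.001):
    # The sup is attained where |cos y| = 1 (y = 0 is on the grid),
    # so the grid estimate equals |x| exactly and tends to 0 with x.
    print(x, sup_distance(x, ys))
```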
Suppose X and Y are topological spaces, with Y a Hausdorff space. Let p be a limit point of \Omega \subseteq X, and L \in Y. For a function f : \Omega \to Y, it is said that the limit of f as x approaches p is L, written

\lim_{x \to p} f(x) = L,

if the following property holds: for every open neighborhood V of L, there exists an open neighborhood U of p such that f(U \cap \Omega - \{p\}) \subseteq V. This last part of the definition can also be phrased as "there exists an open punctured neighbourhood U of p such that f(U \cap \Omega) \subseteq V".

The domain of f does not need to contain p. If it does, then the value of f at p is irrelevant to the definition of the limit. In particular, if the domain of f is X \setminus \{p\} (or all of X), then the limit of f as x \to p exists and is equal to L if, for all subsets \Omega of X with limit point p, the limit of the restriction of f to \Omega exists and is equal to L. Sometimes this criterion is used to establish the non-existence of the two-sided limit of a function on \mathbb{R} by showing that the one-sided limits either fail to exist or do not agree. Such a view is fundamental in the field of general topology, where limits and continuity at a point are defined in terms of special families of subsets, called filters, or generalized sequences known as nets.
Alternatively, the requirement that Y be a Hausdorff space can be relaxed to the assumption that Y be a general topological space, but then the limit of a function may not be unique. In particular, one can no longer talk about the limit of a function at a point, but rather a limit or the set of limits at a point. A function is continuous at a limit point p of and in its domain if and only if f(p) is the (or, in the general case, a) limit of f(x) as x tends to p.

There is another type of limit of a function, namely the sequential limit. Let f : X \to Y be a mapping from a topological space X into a Hausdorff space Y, p \in X a limit point of X, and L \in Y. The sequential limit of f as x tends to p is L if, for every sequence (x_n) in X \setminus \{p\} that converges to p, the sequence f(x_n) converges to L. If L is the limit (in the sense above) of f as x approaches p, then it is a sequential limit as well; however, the converse need not hold in general. If in addition X is metrizable, then L is the sequential limit of f as x approaches p if and only if it is the limit (in the sense above) of f as x approaches p.

For functions on the real line, one way to define the limit of a function is in terms of the limit of sequences. (This definition is usually attributed to Eduard Heine.) In this setting:
\lim_{x \to a} f(x) = L

if, and only if, for all sequences x_n (with, for all n, x_n not equal to a) converging to a, the sequence f(x_n) converges to L. It was shown by Sierpiński in 1916 that proving the equivalence of this definition and the definition above requires, and is equivalent to, a weak form of the axiom of choice. Note that defining what it means for a sequence x_n to converge to a requires the epsilon-delta method.

Similarly to the case of Weierstrass's definition, a more general Heine definition applies to functions defined on subsets of the real line. Let f be a real-valued function with the domain Dm(f). Let a be the limit of a sequence of elements of Dm(f) \setminus \{a\}. Then the limit (in this sense) of f is L as x approaches a if, for every sequence x_n \in Dm(f) \setminus \{a\} (so that for all n, x_n is not equal to a) that converges to a, the sequence f(x_n) converges to L. This is the same as the definition of a sequential limit in the preceding section, obtained by regarding the subset Dm(f) of \mathbb{R} as a metric space with the induced metric.

In non-standard calculus the limit of a function is defined by:

\lim_{x \to a} f(x) = L

if and only if for all x \in \mathbb{R}^*, f^*(x) - L is infinitesimal whenever x - a is infinitesimal. Here \mathbb{R}^* are the hyperreal numbers and f^* is the natural extension of f to the non-standard real numbers. Keisler proved that such a hyperreal definition of limit reduces the quantifier complexity by two quantifiers.
On the other hand, Hrbacek writes that for the definitions to be valid for all hyperreal numbers they must implicitly be grounded in the \varepsilon-\delta method, and claims that, from the pedagogical point of view, the hope that non-standard calculus could be done without \varepsilon-\delta methods cannot be realized in full. Błaszczyk et al. detail the usefulness of microcontinuity in developing a transparent definition of uniform continuity, and characterize Hrbacek's criticism as a "dubious lament".

At the 1908 International Congress of Mathematics, F. Riesz introduced an alternative way of defining limits and continuity, using a concept called "nearness". A point x is defined to be near a set A \subseteq \mathbb{R} if for every r > 0 there is a point a \in A so that |x - a| < r. In this setting,

\lim_{x \to a} f(x) = L

if and only if for all A \subseteq \mathbb{R}, L is near f(A) whenever a is near A. Here f(A) is the set \{f(x) \mid x \in A\}. This definition can also be extended to metric and topological spaces.

The notion of the limit of a function is very closely related to the concept of continuity. A function f is said to be continuous at c if it is both defined at c and its value at c equals the limit of f as x approaches c:

\lim_{x \to c} f(x) = f(c).

We have here assumed that c is a limit point of the domain of f.

If a function f is real-valued, then the limit of f at p is L if and only if both the right-handed limit and left-handed limit of f at p exist and are equal to L. The function f is continuous at p if and only if the limit of f(x) as x approaches p exists and is equal to f(p). If f : M \to N is a function between metric spaces M and N, then it is equivalent that f transforms every sequence in M which converges towards p into a sequence in N which converges towards f(p).
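The sequential (Heine) characterization discussed above lends itself to a numeric spot check. The sketch below (plain Python; f(x) = sin(x)/x and the particular sequences are illustrative choices, not from the text) follows f along two different sequences converging to 0 and watches f(x_n) approach the limit 1 along both.

```python
import math

def f(x):
    # sin(x)/x, defined for x != 0; its limit at 0 is 1
    return math.sin(x) / x

# Two different sequences x_n -> 0 with x_n != 0.
seq_a = [1 / n for n in (10, 100, 1000, 10000)]
seq_b = [(-1) ** n / n ** 2 for n in (3, 10, 31, 100)]

for seq in (seq_a, seq_b):
    values = [f(x) for x in seq]
    print(values[-1])  # approaches 1 along every such sequence
```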
If N is a normed vector space, then the limit operation is linear in the following sense: if the limit of f(x) as x approaches p is L and the limit of g(x) as x approaches p is P, then the limit of f(x) + g(x) as x approaches p is L + P. If a is a scalar from the base field, then the limit of af(x) as x approaches p is aL.

If f and g are real-valued (or complex-valued) functions, then taking the limit of an operation on f(x) and g(x) (e.g., f + g, f - g, f \times g, f / g, f^g) under certain conditions is compatible with the operation of limits of f(x) and g(x). This fact is often called the algebraic limit theorem. The main condition needed to apply the following rules is that the limits on the right-hand sides of the equations exist (in other words, these limits are finite values including 0). Additionally, the identity for division requires that the denominator on the right-hand side is non-zero (division by 0 is not defined), and the identity for exponentiation requires that the base is positive, or zero while the exponent is positive (finite).

\begin{array}{lcl}
\lim_{x \to p} (f(x) + g(x)) & = & \lim_{x \to p} f(x) + \lim_{x \to p} g(x) \\
\lim_{x \to p} (f(x) - g(x)) & = & \lim_{x \to p} f(x) - \lim_{x \to p} g(x) \\
\lim_{x \to p} (f(x) \cdot g(x)) & = & \lim_{x \to p} f(x) \cdot \lim_{x \to p} g(x) \\
\lim_{x \to p} (f(x) / g(x)) & = & \lim_{x \to p} f(x) / \lim_{x \to p} g(x) \\
\lim_{x \to p} f(x)^{g(x)} & = & \lim_{x \to p} f(x)^{\lim_{x \to p} g(x)}
\end{array}
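A numeric spot check of the algebraic limit theorem (plain Python; the functions f, g and the point p are sample choices, not from the text): for f(x) = x² + 1 and g(x) = 2x near p = 3, the component limits are 10 and 6, and the combined limits behave as the rules predict.

```python
# Sample functions for checking the algebraic limit theorem near p = 3,
# where lim f = 10 and lim g = 6.
def f(x):
    return x ** 2 + 1

def g(x):
    return 2 * x

p, h = 3.0, 1e-7
x = p + h  # a point close to p (limits ignore the value at p itself)

print(f(x) + g(x))   # close to 10 + 6 = 16
print(f(x) * g(x))   # close to 10 * 6 = 60
print(f(x) / g(x))   # close to 10 / 6
```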
These rules are also valid for one-sided limits, including when p is \infty or -\infty. In each rule above, when one of the limits on the right is \infty or -\infty, the limit on the left may sometimes still be determined by the following rules:

\begin{array}{rcl}
q + \infty & = & \infty \quad \text{if } q \neq -\infty \\
q \times \infty & = & \begin{cases} \infty & \text{if } q > 0 \\ -\infty & \text{if } q < 0 \end{cases} \\
\dfrac{q}{\infty} & = & 0 \quad \text{if } q \neq \infty \text{ and } q \neq -\infty \\
\infty^q & = & \begin{cases} 0 & \text{if } q < 0 \\ \infty & \text{if } q > 0 \end{cases} \\
q^\infty & = & \begin{cases} 0 & \text{if } 0 < q < 1 \\ \infty & \text{if } q > 1 \end{cases} \\
q^{-\infty} & = & \begin{cases} \infty & \text{if } 0 < q < 1 \\ 0 & \text{if } q > 1 \end{cases}
\end{array}
(See also Extended real number line.) In other cases the limit on the left may still exist, although the right-hand side, called an indeterminate form, does not allow one to determine the result. This depends on the functions f and g. These indeterminate forms are:

\begin{array}{cc}
\dfrac{0}{0} & \dfrac{\pm\infty}{\pm\infty} \\
0 \times \pm\infty & \infty - \infty \\
0^0 & \infty^0 \\
1^{\pm\infty}
\end{array}
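IEEE-754 floating point mirrors several of these extended-real conventions, which allows a quick sketch (plain Python; the analogy between float arithmetic and the limit rules is an illustration, not a claim from the text): determinate combinations produce ±inf or 0, while the indeterminate forms come out as NaN.

```python
import math

inf = math.inf

# Determinate combinations follow the rules above.
print(5.0 + inf)    # inf   (q + oo = oo for q != -oo)
print(-2.0 * inf)   # -inf  (q x oo = -oo for q < 0)
print(7.0 / inf)    # 0.0   (q / oo = 0)

# Indeterminate forms have no single value; floats answer with NaN.
print(0.0 * inf)    # nan   (0 x oo)
print(inf - inf)    # nan   (oo - oo)
print(inf / inf)    # nan   (oo / oo)
```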
See further L'Hôpital's rule below and Indeterminate form.

In general, from knowing that

\lim_{y \to b} f(y) = c

and

\lim_{x \to a} g(x) = b,

it does not follow that

\lim_{x \to a} f(g(x)) = c.

However, this "chain rule" does hold if one of the following additional conditions holds: f(b) = c (that is, f is continuous at b), or g does not take the value b near a (that is, there exists a \delta > 0 such that if 0 < |x - a| < \delta then |g(x) - b| > 0).

As an example of this phenomenon, consider the following function that violates both additional restrictions:

f(x) = g(x) = \begin{cases} 0 & \text{if } x \neq 0 \\ 1 & \text{if } x = 0 \end{cases}

Since the value at f(0) is a removable discontinuity,

\lim_{x \to a} f(x) = 0

for all a. Thus, the naïve chain rule would suggest that the limit of f(f(x)) is 0. However, it is the case that

f(f(x)) = \begin{cases} 1 & \text{if } x \neq 0 \\ 0 & \text{if } x = 0 \end{cases}

and so

\lim_{x \to a} f(f(x)) = 1

for all a.
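The counterexample above is easy to run. This sketch (plain Python) defines the function and composes it with itself, making the failure of the naïve chain rule visible.

```python
def f(x):
    # 0 everywhere except at x = 0, where the value jumps to 1
    return 0 if x != 0 else 1

# Near 0 (but not at 0), f is identically 0, so lim_{x->0} f(x) = 0 ...
print([f(x) for x in (0.1, 0.01, 0.001)])     # [0, 0, 0]

# ... yet f(f(x)) = f(0) = 1 for every x != 0, so lim_{x->0} f(f(x)) = 1.
print([f(f(x)) for x in (0.1, 0.01, 0.001)])  # [1, 1, 1]
```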
For n a nonnegative integer and constants a_1, a_2, a_3, \ldots, a_n and b_1, b_2, b_3, \ldots, b_n,

\lim_{x \to \infty} \frac{a_1 x^n + a_2 x^{n-1} + a_3 x^{n-2} + \dots + a_n}{b_1 x^n + b_2 x^{n-1} + b_3 x^{n-2} + \dots + b_n} = \frac{a_1}{b_1}.

This can be proven by dividing both the numerator and denominator by x^n. If the numerator is a polynomial of higher degree, the limit does not exist. If the denominator is of higher degree, the limit is 0.

\begin{array}{lcl}
\lim_{x \to 0} \dfrac{\sin x}{x} & = & 1 \\
\lim_{x \to 0} \dfrac{1 - \cos x}{x} & = & 0
\end{array}
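The leading-coefficient rule above can be checked numerically. The sketch below (plain Python; the coefficients 3, 5, 1 and 4, 0, 2 are sample values, not from the text) evaluates a ratio of quadratics at increasingly large x and watches it approach a_1/b_1 = 3/4.

```python
# Leading-coefficient rule for rational functions: as x -> oo,
# (3x^2 + 5x + 1) / (4x^2 + 2) -> 3/4, the ratio of leading coefficients.
def r(x):
    return (3 * x ** 2 + 5 * x + 1) / (4 * x ** 2 + 2)

for x in (1e2, 1e4, 1e6):
    print(x, r(x))  # approaches 0.75
```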
\begin{array}{lcl}
\lim_{x \to 0} (1 + x)^{\frac{1}{x}} & = & \lim_{r \to \infty} \left(1 + \dfrac{1}{r}\right)^r = e \\
\lim_{x \to 0} \dfrac{e^x - 1}{x} & = & 1 \\
\lim_{x \to 0} \dfrac{e^{ax} - 1}{bx} & = & \dfrac{a}{b} \\
\lim_{x \to 0} \dfrac{c^{ax} - 1}{bx} & = & \dfrac{a}{b} \ln c \\
\lim_{x \to 0^+} x^x & = & 1
\end{array}

\begin{array}{lcl}
\lim_{x \to 0} \dfrac{\ln(1 + x)}{x} & = & 1 \\
\lim_{x \to 0} \dfrac{\ln(1 + ax)}{bx} & = & \dfrac{a}{b} \\
\lim_{x \to 0} \dfrac{\log_c(1 + ax)}{bx} & = & \dfrac{a}{b \ln c}
\end{array}
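These notable limits can be spot-checked numerically. The sketch below (plain Python; the step size is an illustrative choice) evaluates three of them near 0, using `expm1` and `log1p` to avoid cancellation for tiny x.

```python
import math

x = 1e-8

# (1 + x)^(1/x) -> e
print((1 + x) ** (1 / x))   # close to 2.718281828...

# (e^x - 1)/x -> 1  (expm1 keeps the numerator accurate for tiny x)
print(math.expm1(x) / x)    # close to 1.0

# ln(1 + x)/x -> 1  (log1p avoids cancellation in log(1 + x))
print(math.log1p(x) / x)    # close to 1.0
```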
This rule (L'Hôpital's rule) uses derivatives to find limits of indeterminate forms 0/0 or \pm\infty/\infty, and only applies to such cases. Other indeterminate forms may be manipulated into this form. Given two functions f(x) and g(x), defined over an open interval I containing the desired limit point c, if

\lim_{x \to c} f(x) = \lim_{x \to c} g(x) = 0 \quad \text{or} \quad \lim_{x \to c} f(x) = \pm \lim_{x \to c} g(x) = \pm\infty,

and f and g are differentiable over I \setminus \{c\}, and g'(x) \neq 0 for all x \in I \setminus \{c\}, and \lim_{x \to c} \frac{f'(x)}{g'(x)} exists, then:

\lim_{x \to c} \frac{f(x)}{g(x)} = \lim_{x \to c} \frac{f'(x)}{g'(x)}.
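As a numeric illustration of the rule (plain Python; the particular quotient is a standard 0/0 case): sin(2x)/sin(3x) is 0/0 at x = 0, and both the original quotient and the derivative ratio 2cos(2x)/(3cos(3x)) approach 2/3.

```python
import math

# L'Hopital check: sin(2x)/sin(3x) has the 0/0 form at x = 0;
# the rule predicts the limit equals lim 2cos(2x)/(3cos(3x)) = 2/3.
def ratio(x):
    return math.sin(2 * x) / math.sin(3 * x)

def derivative_ratio(x):
    return 2 * math.cos(2 * x) / (3 * math.cos(3 * x))

x = 1e-6
print(ratio(x))             # close to 0.6666...
print(derivative_ratio(x))  # close to 0.6666...
```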
Normally, the first condition is the most important one. For example:

\lim_{x \to 0} \frac{\sin(2x)}{\sin(3x)} = \lim_{x \to 0} \frac{2\cos(2x)}{3\cos(3x)} = \frac{2 \cdot 1}{3 \cdot 1} = \frac{2}{3}.

Specifying an infinite bound on a summation or integral is a common shorthand for specifying a limit. A short way to write the limit

\lim_{n \to \infty} \sum_{i=s}^{n} f(i)

is

\sum_{i=s}^{\infty} f(i).

An important example of limits of sums such as these are series. A short way to write the limit

\lim_{x \to \infty} \int_{a}^{x} f(t)\,dt

is

\int_{a}^{\infty} f(t)\,dt.

A short way to write the limit

\lim_{x \to -\infty} \int_{x}^{b} f(t)\,dt

is

\int_{-\infty}^{b} f(t)\,dt.
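Both shorthands stand for limits of finite truncations, which can be watched converging. The sketch below (plain Python; the geometric series and the integrand e^{-t} are sample choices, not from the text) computes partial sums of \sum_{i=0}^\infty 2^{-i} = 2 and truncated integrals of \int_0^\infty e^{-t}\,dt = 1.

```python
import math

# Partial sums of the geometric series sum_{i=0}^n (1/2)^i; the limit is 2.
def partial_sum(n):
    return sum(0.5 ** i for i in range(n + 1))

print(partial_sum(5), partial_sum(50))   # 1.96875, then essentially 2.0

# Truncated integrals int_0^x e^{-t} dt = 1 - e^{-x}; the limit is 1.
def truncated_integral(x, steps=100000):
    h = x / steps  # simple midpoint rule
    return sum(math.exp(-(i + 0.5) * h) * h for i in range(steps))

print(truncated_integral(5), truncated_integral(30))  # approaches 1.0
```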
https://en.wikipedia.org/wiki/Continuity
Continuity or continuous may refer to:

Continuity (mathematics), the opposing concept to discreteness; common examples include:
- Continuous probability distribution or random variable, in probability and statistics
- Continuous game, a generalization of games used in game theory
- Law of continuity, a heuristic principle of Gottfried Leibniz
- Continuous function, in particular:
  - Continuity (topology), a generalization to functions between topological spaces
  - Scott continuity, for functions between posets
  - Continuity (set theory), for functions between ordinals
  - Continuity (category theory), for functors
  - Graph continuity, for payoff functions in game theory

Continuity theorem may refer to one of two results:
- Lévy's continuity theorem, on random variables
- Kolmogorov continuity theorem, on stochastic processes

In geometry:
- Parametric continuity, for parametrised curves
- Geometric continuity, a concept primarily applied to the conic sections and related shapes

Other uses:
- In probability theory: continuous stochastic process
- Continuity equations, applicable to conservation of mass, energy, momentum, electric charge and other conserved quantities
- Continuity test, for an unbroken electrical path in an electronic circuit or connector
- In materials science: a colloidal system consists of a dispersed phase evenly intermixed with a continuous phase
- A continuous wave, an electromagnetic wave of constant amplitude and frequency
- Continuity (broadcasting), messages played by broadcasters between programs
- Continuity editing, a form of film editing that combines closely related shots into a sequence highlighting plot points or consistencies
- Continuity (fiction), consistency of plot elements, such as characterization, location, and costuming, within a work of fiction (this is a mass noun)
- Continuity (setting), one of several similar but distinct fictional universes in a broad franchise of related works (this is a count noun)
- "Continuity" or continuity script, the precursor to a film screenplay
- Continuity (Apple), a set of features introduced by Apple
- Continuity of operations (disambiguation)
- Continuous and progressive aspects, in linguistics
- Business continuity
- Health care continuity
- Continuity in architecture (part of complementary architecture)
https://en.wikipedia.org/wiki/Derivative
In mathematics, the derivative is a fundamental tool that quantifies the sensitivity to change of a function's output with respect to its input. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value. For this reason, the derivative is often described as the instantaneous rate of change, the ratio of the instantaneous change in the dependent variable to that of the independent variable. The process of finding a derivative is called differentiation.

There are multiple different notations for differentiation. Leibniz notation, named after Gottfried Wilhelm Leibniz, is represented as the ratio of two differentials, whereas prime notation is written by adding a prime mark. Higher-order notations represent repeated differentiation, and they are usually denoted in Leibniz notation by adding superscripts to the differentials, and in prime notation by adding additional prime marks. The higher-order derivatives can be applied in physics; for example, while the first derivative of the position of a moving object with respect to time is the object's velocity, how the position changes as time advances, the second derivative is the object's acceleration, how the velocity changes as time advances.

Derivatives can be generalized to functions of several real variables. In this case, the derivative is reinterpreted as a linear transformation whose graph is (after an appropriate translation) the best linear approximation to the graph of the original function. The Jacobian matrix is the matrix that represents this linear transformation with respect to the basis given by the choice of independent and dependent variables. It can be calculated in terms of the partial derivatives with respect to the independent variables.
For a real-valued function of several variables, the Jacobian matrix reduces to the gradient vector. A function of a real variable $f(x)$ is differentiable at a point $a$ of its domain if its domain contains an open interval containing $a$ and the limit
$$L = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}$$
exists.
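The limit above can be explored numerically: as $h$ shrinks, the difference quotient settles toward the derivative. A minimal Python sketch (the cubing function and the sample point are illustrative choices, not from the article):

```python
def difference_quotient(f, a, h):
    """The ratio (f(a+h) - f(a)) / h, whose limit as h -> 0 is f'(a)."""
    return (f(a + h) - f(a)) / h

# Example: f(x) = x**3 at a = 2, where the true derivative is 3*a**2 = 12.
f = lambda x: x ** 3
quotients = [difference_quotient(f, 2.0, 10 ** -k) for k in range(1, 6)]
# The quotients approach 12 as h shrinks: 12.61, 12.0601, 12.006..., ...
```

Shrinking $h$ further eventually runs into floating-point cancellation, which is why the limit itself, not any single quotient, defines the derivative.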
This means that, for every positive real number $\varepsilon$, there exists a positive real number $\delta$ such that, for every $h$ with $|h| < \delta$ and $h \neq 0$, $f(a+h)$ is defined and
$$\left| L - \frac{f(a+h) - f(a)}{h} \right| < \varepsilon,$$
where the vertical bars denote the absolute value. This is an example of the $(\varepsilon, \delta)$-definition of limit. If the function $f$ is differentiable at $a$, that is, if the limit $L$ exists, then this limit is called the derivative of $f$ at $a$. Multiple notations for the derivative exist. The derivative of $f$ at $a$ can be denoted $f'(a)$, read as "$f$ prime of $a$"; or it can be denoted $\frac{df}{dx}(a)$, read as "the derivative of $f$ with respect to $x$ at $a$" or "$df$ by (or over) $dx$ at $a$". See the notation discussion below. If $f$ is a function that has a derivative at every point in its domain, then a function can be defined by mapping every point $x$ to the value of the derivative of $f$ at $x$. This function is written $f'$ and is called the derivative function, or the derivative, of $f$. Sometimes $f$ has a derivative at most, but not all, points of its domain. The function whose value at $a$ equals $f'(a)$ whenever $f'(a)$ is defined, and is undefined elsewhere, is also called the derivative of $f$. It is still a function, but its domain may be smaller than the domain of $f$.
For example, let $f$ be the squaring function: $f(x) = x^2$. Then the quotient in the definition of the derivative is
$$\frac{f(a+h) - f(a)}{h} = \frac{(a+h)^2 - a^2}{h} = \frac{a^2 + 2ah + h^2 - a^2}{h} = 2a + h.$$
The division in the last step is valid as long as $h \neq 0$. The closer $h$ is to $0$, the closer this expression comes to the value $2a$. The limit exists, and for every input $a$ the limit is $2a$. So the derivative of the squaring function is the doubling function: $f'(x) = 2x$. The ratio in the definition of the derivative is the slope of the line through two points on the graph of the function $f$, specifically the points $(a, f(a))$ and $(a+h, f(a+h))$. As $h$ is made smaller, these points grow closer together, and the slope of this line approaches the limiting value, the slope of the tangent to the graph of $f$ at $a$. In other words, the derivative is the slope of the tangent. One way to think of the derivative $\frac{df}{dx}(a)$ is as the ratio of an infinitesimal change in the output of the function $f$ to an infinitesimal change in its input.
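The algebraic simplification of the squaring function's quotient to $2a + h$ can be checked directly: the computed quotient stays within $h$ of $2a$. A small Python sketch (the sample point is an arbitrary choice):

```python
def square_quotient(a, h):
    """Difference quotient of f(x) = x**2; algebraically it equals 2*a + h."""
    return ((a + h) ** 2 - a ** 2) / h

# As h -> 0 the quotient 2*a + h tends to 2*a, so f'(a) = 2*a.
q = square_quotient(3.0, 1e-6)  # close to 2*3 = 6
```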
To make this intuition rigorous, a system of rules for manipulating infinitesimal quantities is required. The system of hyperreal numbers is one way of treating infinite and infinitesimal quantities. The hyperreals are an extension of the real numbers that contains numbers greater than anything of the form $1 + 1 + \cdots + 1$ for any finite number of terms. Such numbers are infinite, and their reciprocals are infinitesimals. The application of hyperreal numbers to the foundations of calculus is called nonstandard analysis. It provides a way to define the basic concepts of calculus, such as the derivative and the integral, in terms of infinitesimals, thereby giving a precise meaning to the $d$ in the Leibniz notation. Thus, the derivative of $f(x)$ becomes
$$f'(x) = \operatorname{st}\!\left( \frac{f(x + dx) - f(x)}{dx} \right)$$
for an arbitrary infinitesimal $dx$, where $\operatorname{st}$ denotes the standard part function, which "rounds off" each finite hyperreal to the nearest real. Taking the squaring function $f(x) = x^2$ as an example again:
$$\begin{aligned} f'(x) &= \operatorname{st}\!\left( \frac{x^2 + 2x \cdot dx + (dx)^2 - x^2}{dx} \right) \\ &= \operatorname{st}\!\left( \frac{2x \cdot dx + (dx)^2}{dx} \right) \\ &= \operatorname{st}\!\left( \frac{2x \cdot dx}{dx} + \frac{(dx)^2}{dx} \right) \\ &= \operatorname{st}(2x + dx) \\ &= 2x. \end{aligned}$$
If $f$ is differentiable at $a$, then $f$ must also be continuous at $a$. As an example, choose a point $a$ and let $f$ be the step function that returns the value 1 for all $x$ less than $a$, and returns a different value 10 for all $x$ greater than or equal to $a$. The function $f$ cannot have a derivative at $a$. If $h$ is negative, then $a + h$ is on the low part of the step, so the secant line from $a$ to $a + h$ is very steep; as $h$ tends to zero, the slope tends to infinity. If $h$ is positive, then $a + h$ is on the high part of the step, so the secant line from $a$ to $a + h$ has slope zero. Consequently, the secant lines do not approach any single slope, so the limit of the difference quotient does not exist. However, even if a function is continuous at a point, it may not be differentiable there. For example, the absolute value function given by $f(x) = |x|$ is continuous at $x = 0$, but it is not differentiable there. If $h$ is positive, then the slope of the secant line from $0$ to $h$ is $1$; if $h$ is negative, then the slope of the secant line from $0$ to $h$ is $-1$. This can be seen graphically as a "kink" or a "cusp" in the graph at $x = 0$.
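The failure of differentiability for $|x|$ at $0$ shows up numerically as two one-sided secant slopes that never agree. A short Python sketch (the step size is an arbitrary small choice):

```python
def secant_slope(f, a, h):
    """Slope of the secant line through (a, f(a)) and (a+h, f(a+h))."""
    return (f(a + h) - f(a)) / h

# f(x) = |x| is continuous at 0 but has a "kink" there.
right = secant_slope(abs, 0.0, 1e-6)    # slope from the right: +1
left = secant_slope(abs, 0.0, -1e-6)    # slope from the left: -1
# The one-sided slopes disagree, so the limit defining f'(0) does not exist.
```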
Even a function with a smooth graph is not differentiable at a point where its tangent is vertical: for instance, the function given by $f(x) = x^{1/3}$ is not differentiable at $x = 0$. In summary, a function that has a derivative is continuous, but there are continuous functions that do not have a derivative. Most functions that occur in practice have derivatives at all points or at almost every point. Early in the history of calculus, many mathematicians assumed that a continuous function was differentiable at most points. Under mild conditions (for example, if the function is monotone or a Lipschitz function), this is true. However, in 1872 Weierstrass found the first example of a function that is continuous everywhere but differentiable nowhere. This example is now known as the Weierstrass function. In 1931, Stefan Banach proved that the set of functions that have a derivative at some point is a meager set in the space of all continuous functions. Informally, this means that hardly any random continuous function has a derivative at even one point. One common way of writing the derivative of a function is Leibniz notation, introduced by Gottfried Wilhelm Leibniz in 1675, which denotes a derivative as the quotient of two differentials, such as $dy$ and $dx$. It is still commonly used when the equation $y = f(x)$ is viewed as a functional relationship between dependent and independent variables. The first derivative is denoted by $\frac{dy}{dx}$, read as "the derivative of $y$ with respect to $x$". This derivative can alternately be treated as the application of a differential operator to a function, $\frac{dy}{dx} = \frac{d}{dx} f(x)$.
Higher derivatives are expressed using the notation $\frac{d^n y}{dx^n}$ for the $n$-th derivative of $y = f(x)$. These are abbreviations for multiple applications of the derivative operator; for example,
$$\frac{d^2 y}{dx^2} = \frac{d}{dx}\!\left( \frac{d}{dx} f(x) \right).$$
Unlike some alternatives, Leibniz notation involves explicit specification of the variable of differentiation, in the denominator, which removes ambiguity when working with multiple interrelated quantities. The derivative of a composed function can be expressed using the chain rule: if $u = g(x)$ and $y = f(g(x))$, then
$$\frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx}.$$
Another common notation for differentiation uses the prime mark in the symbol of a function $f(x)$. This notation, due to Joseph-Louis Lagrange, is now known as prime notation. The first derivative is written $f'(x)$, read as "$f$ prime of $x$", or $y'$, read as "$y$ prime". Similarly, the second and third derivatives are written $f''$ and $f'''$, respectively. For denoting the number of higher derivatives beyond this point, some authors use Roman numerals in superscript, whereas others place the number in parentheses, such as $f^{iv}$ or $f^{(4)}$. The latter notation generalizes to yield the notation $f^{(n)}$ for the $n$-th derivative of $f$.
In Newton's notation, or dot notation, a dot is placed over a symbol to represent a time derivative. If $y$ is a function of $t$, then the first and second derivatives are written $\dot{y}$ and $\ddot{y}$, respectively. This notation is used exclusively for derivatives with respect to time or arc length. It is typically used in differential equations in physics and differential geometry. However, the dot notation becomes unmanageable for high-order derivatives (of order 4 or more) and cannot deal with multiple independent variables. Another notation is D-notation, which represents the differential operator by the symbol $D$. The first derivative is written $Df(x)$ and higher derivatives are written with a superscript, so the $n$-th derivative is $D^n f(x)$. This notation is sometimes called Euler notation, although it seems that Leonhard Euler did not use it; the notation was introduced by Louis François Antoine Arbogast. To indicate a partial derivative, the variable of differentiation is indicated with a subscript: for example, given the function $u = f(x, y)$, its partial derivative with respect to $x$ can be written $D_x u$ or $D_x f(x, y)$. Higher partial derivatives can be indicated by superscripts or multiple subscripts, e.g.
$$D_{xy} f(x, y) = \frac{\partial}{\partial y}\!\left( \frac{\partial}{\partial x} f(x, y) \right) \quad \text{and} \quad D_x^2 f(x, y) = \frac{\partial}{\partial x}\!\left( \frac{\partial}{\partial x} f(x, y) \right).$$
In principle, the derivative of a function can be computed from the definition by considering the difference quotient and computing its limit. Once the derivatives of a few simple functions are known, the derivatives of other functions are more easily computed using rules for obtaining derivatives of more complicated functions from simpler ones. This process of finding a derivative is known as differentiation. The following are the rules for the derivatives of the most common basic functions. Here, $a$ is a real number, and $e$ is the base of the natural logarithm, approximately 2.71828.

Derivatives of powers:
$$\frac{d}{dx} x^a = a x^{a-1}$$

Exponential, natural logarithm, and logarithm with general base:
$$\frac{d}{dx} e^x = e^x$$
$$\frac{d}{dx} a^x = a^x \ln(a), \quad \text{for } a > 0$$
$$\frac{d}{dx} \ln(x) = \frac{1}{x}, \quad \text{for } x > 0$$
$$\frac{d}{dx} \log_a(x) = \frac{1}{x \ln(a)}, \quad \text{for } x, a > 0$$
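These basic rules can be spot-checked against a central-difference approximation of the derivative. A minimal Python sketch (the step size and sample points are arbitrary choices):

```python
import math

def numderiv(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# d/dx x**a = a*x**(a-1), checked for a = 3 at x = 2
power_ok = abs(numderiv(lambda x: x ** 3, 2.0) - 3 * 2.0 ** 2) < 1e-5
# d/dx e**x = e**x, checked at x = 1
exp_ok = abs(numderiv(math.exp, 1.0) - math.exp(1.0)) < 1e-5
# d/dx ln(x) = 1/x, checked at x = 2
log_ok = abs(numderiv(math.log, 2.0) - 0.5) < 1e-6
```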
Trigonometric functions:
$$\frac{d}{dx} \sin(x) = \cos(x)$$
$$\frac{d}{dx} \cos(x) = -\sin(x)$$
$$\frac{d}{dx} \tan(x) = \sec^2(x) = \frac{1}{\cos^2(x)} = 1 + \tan^2(x)$$

Inverse trigonometric functions:
$$\frac{d}{dx} \arcsin(x) = \frac{1}{\sqrt{1 - x^2}}, \quad \text{for } -1 < x < 1$$
$$\frac{d}{dx} \arccos(x) = -\frac{1}{\sqrt{1 - x^2}}, \quad \text{for } -1 < x < 1$$
$$\frac{d}{dx} \arctan(x) = \frac{1}{1 + x^2}$$

Given functions $f$ and $g$, the following are some of the most basic rules for deducing the derivative of combined functions from the derivatives of basic functions.

Constant rule: if $f$ is constant, then for all $x$, $f'(x) = 0$.

Sum rule: $(\alpha f + \beta g)' = \alpha f' + \beta g'$ for all functions $f$ and $g$ and all real numbers $\alpha$ and $\beta$.
Product rule: $(fg)' = f'g + fg'$ for all functions $f$ and $g$. As a special case, this rule includes the fact $(\alpha f)' = \alpha f'$ whenever $\alpha$ is a constant, because $\alpha' f = 0 \cdot f = 0$ by the constant rule.

Quotient rule: $\left( \frac{f}{g} \right)' = \frac{f'g - fg'}{g^2}$ for all functions $f$ and $g$ at all inputs where $g \neq 0$.

Chain rule for composite functions: if $f(x) = h(g(x))$, then $f'(x) = h'(g(x)) \cdot g'(x)$.

For example, the derivative of the function given by $f(x) = x^4 + \sin(x^2) - \ln(x) e^x + 7$ is
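The product rule can be verified numerically on a concrete pair of functions. A short Python sketch (the two polynomials and the sample point are illustrative choices):

```python
def numderiv(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x ** 2 + 1.0       # f'(x) = 2x
g = lambda x: 3.0 * x + 2.0      # g'(x) = 3

x0 = 1.5
# Product rule: (f*g)'(x0) = f'(x0)*g(x0) + f(x0)*g'(x0)
lhs = numderiv(lambda x: f(x) * g(x), x0)
rhs = 2 * x0 * g(x0) + f(x0) * 3.0
# lhs and rhs agree up to the discretization error of numderiv.
```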
$$\begin{aligned} f'(x) &= 4x^{4-1} + \frac{d(x^2)}{dx} \cos(x^2) - \frac{d(\ln x)}{dx} e^x - \ln(x) \frac{d(e^x)}{dx} + 0 \\ &= 4x^3 + 2x \cos(x^2) - \frac{1}{x} e^x - \ln(x) e^x. \end{aligned}$$
Here the second term was computed using the chain rule and the third term using the product rule. The known derivatives of the elementary functions $x^2$, $x^4$, $\sin(x)$, $\ln(x)$, and $\exp(x) = e^x$, as well as the constant $7$, were also used. Higher-order derivatives are the result of differentiating a function repeatedly. Given that $f$ is a differentiable function, the derivative of $f$ is the first derivative, denoted $f'$. The derivative of $f'$ is the second derivative, denoted $f''$, and the derivative of $f''$ is the third derivative, denoted $f'''$. Continuing this process, the $n$-th derivative, if it exists, is the derivative of the $(n-1)$-th derivative, also called the derivative of order $n$. As discussed above, the $n$-th derivative of a function $f$ may be denoted $f^{(n)}$. A function that has $k$ successive derivatives is called $k$ times differentiable. If the $k$-th derivative is continuous, then the function is said to be of differentiability class $C^k$.
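The closed-form derivative computed above can be cross-checked against a numerical derivative of the original function at a sample point. A Python sketch (the sample point $x = 1.3$ is an arbitrary choice):

```python
import math

def f(x):
    return x ** 4 + math.sin(x ** 2) - math.log(x) * math.exp(x) + 7

def fprime(x):
    # The derivative derived above with the chain and product rules.
    return (4 * x ** 3 + 2 * x * math.cos(x ** 2)
            - math.exp(x) / x - math.log(x) * math.exp(x))

def numderiv(fn, x, h=1e-6):
    """Central-difference approximation of fn'(x)."""
    return (fn(x + h) - fn(x - h)) / (2 * h)

# The closed form and the numerical estimate agree at x = 1.3.
diff = abs(fprime(1.3) - numderiv(f, 1.3))
```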
A function that has infinitely many derivatives is called infinitely differentiable or smooth. Any polynomial function is infinitely differentiable; taking derivatives repeatedly eventually results in a constant function, and all subsequent derivatives of that function are zero. One application of higher-order derivatives is in physics. Suppose that a function represents the position of an object at a given time. Then the first derivative of that function is the velocity of the object with respect to time, the second derivative is its acceleration, and the third derivative is the jerk. A vector-valued function $\mathbf{y}$ of a real variable sends real numbers to vectors in some vector space $\mathbb{R}^n$. A vector-valued function can be split up into its coordinate functions $y_1(t), y_2(t), \dots, y_n(t)$, meaning that $\mathbf{y} = (y_1(t), y_2(t), \dots, y_n(t))$. This includes, for example, parametric curves in $\mathbb{R}^2$ or $\mathbb{R}^3$. The coordinate functions are real-valued functions, so the above definition of derivative applies to them. The derivative of $\mathbf{y}(t)$ is defined to be the vector, called the tangent vector, whose coordinates are the derivatives of the coordinate functions. That is,
$$\mathbf{y}'(t) = \lim_{h \to 0} \frac{\mathbf{y}(t+h) - \mathbf{y}(t)}{h},$$
if the limit exists. The subtraction in the numerator is the subtraction of vectors, not scalars.
If the derivative of $\mathbf{y}$ exists for every value of $t$, then $\mathbf{y}'$ is another vector-valued function. Functions can depend on more than one variable. A partial derivative of a function of several variables is its derivative with respect to one of those variables, with the others held constant. Partial derivatives are used in vector calculus and differential geometry. As with ordinary derivatives, multiple notations exist: the partial derivative of a function $f(x, y, \dots)$ with respect to the variable $x$ is variously denoted $\frac{\partial f}{\partial x}$ or $f_x$, among other possibilities. It can be thought of as the rate of change of the function in the $x$-direction. Here $\partial$ is a rounded d called the partial derivative symbol. To distinguish it from the letter d, $\partial$ is sometimes pronounced "der", "del", or "partial" instead of "dee". For example, let $f(x, y) = x^2 + xy + y^2$; then the partial derivatives of $f$ with respect to the variables $x$ and $y$ are, respectively:
$$\frac{\partial f}{\partial x} = 2x + y, \qquad \frac{\partial f}{\partial y} = x + 2y.$$
In general, the partial derivative of a function $f(x_1, \dots, x_n)$ in the direction $x_i$ at the point $(a_1, \dots, a_n)$ is defined to be:
$$\frac{\partial f}{\partial x_i}(a_1, \ldots, a_n) = \lim_{h \to 0} \frac{f(a_1, \ldots, a_i + h, \ldots, a_n) - f(a_1, \ldots, a_i, \ldots, a_n)}{h}.$$
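The example $f(x, y) = x^2 + xy + y^2$ lends itself to a numerical check of "vary one variable, hold the other constant". A minimal Python sketch (the evaluation point is an arbitrary choice):

```python
def f(x, y):
    return x ** 2 + x * y + y ** 2

def partial_x(f, x, y, h=1e-6):
    """Partial derivative in x: perturb x only, hold y constant."""
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def partial_y(f, x, y, h=1e-6):
    """Partial derivative in y: perturb y only, hold x constant."""
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

# At (1, 2) the exact partials are 2x + y = 4 and x + 2y = 5.
px = partial_x(f, 1.0, 2.0)
py = partial_y(f, 1.0, 2.0)
```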
This is fundamental for the study of functions of several real variables. Let $f(x_1, \dots, x_n)$ be such a real-valued function. If all partial derivatives of $f$ with respect to each $x_j$ are defined at the point $(a_1, \dots, a_n)$, these partial derivatives define the vector
$$\nabla f(a_1, \ldots, a_n) = \left( \frac{\partial f}{\partial x_1}(a_1, \ldots, a_n), \ldots, \frac{\partial f}{\partial x_n}(a_1, \ldots, a_n) \right),$$
which is called the gradient of $f$ at $a$. If $f$ is differentiable at every point in some domain, then the gradient is a vector-valued function $\nabla f$ that maps the point $(a_1, \dots, a_n)$ to the vector $\nabla f(a_1, \dots, a_n)$.
Consequently, the gradient determines a vector field. If $f$ is a real-valued function on $\mathbb{R}^n$, then the partial derivatives of $f$ measure its variation in the directions of the coordinate axes. For example, if $f$ is a function of $x$ and $y$, then its partial derivatives measure the variation of $f$ in the $x$ and $y$ directions. However, they do not directly measure the variation of $f$ in any other direction, such as along the diagonal line $y = x$. These are measured using directional derivatives. Given a vector $\mathbf{v} = (v_1, \ldots, v_n)$, the directional derivative of $f$ in the direction of $\mathbf{v}$ at the point $\mathbf{x}$ is:
$$D_{\mathbf{v}} f(\mathbf{x}) = \lim_{h \to 0} \frac{f(\mathbf{x} + h\mathbf{v}) - f(\mathbf{x})}{h}.$$
If all the partial derivatives of $f$ exist and are continuous at $\mathbf{x}$, then they determine the directional derivative of $f$ in the direction $\mathbf{v}$ by the formula:
$$D_{\mathbf{v}} f(\mathbf{x}) = \sum_{j=1}^{n} v_j \frac{\partial f}{\partial x_j}.$$
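The agreement between the limit definition of the directional derivative and the partial-derivative formula can be verified on the earlier example $f(x, y) = x^2 + xy + y^2$, whose partials are $2x + y$ and $x + 2y$. A Python sketch (the point and direction are arbitrary choices):

```python
def f(x, y):
    return x ** 2 + x * y + y ** 2

def directional(f, x, y, vx, vy, h=1e-6):
    """Directional derivative via the limit definition (central difference)."""
    return (f(x + h * vx, y + h * vy) - f(x - h * vx, y - h * vy)) / (2 * h)

x0, y0 = 1.0, 2.0
vx, vy = 0.6, 0.8
via_limit = directional(f, x0, y0, vx, vy)
# Formula via partials: D_v f = vx * (2x + y) + vy * (x + 2y)
via_partials = vx * (2 * x0 + y0) + vy * (x0 + 2 * y0)
```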
When $f$ is a function from an open subset of $\mathbb{R}^n$ to $\mathbb{R}^m$, the directional derivative of $f$ in a chosen direction is the best linear approximation to $f$ at that point and in that direction. However, when $n > 1$, no single directional derivative can give a complete picture of the behavior of $f$. The total derivative gives a complete picture by considering all directions at once. That is, for any vector $\mathbf{v}$ starting at $\mathbf{a}$, the linear approximation formula holds:
$$f(\mathbf{a} + \mathbf{v}) \approx f(\mathbf{a}) + f'(\mathbf{a}) \mathbf{v}.$$
As with the single-variable derivative, $f'(\mathbf{a})$ is chosen so that the error in this approximation is as small as possible. The total derivative of $f$ at $\mathbf{a}$ is the unique linear transformation $f'(\mathbf{a}) \colon \mathbb{R}^n \to \mathbb{R}^m$ such that
$$\lim_{\mathbf{h} \to 0} \frac{\lVert f(\mathbf{a} + \mathbf{h}) - (f(\mathbf{a}) + f'(\mathbf{a}) \mathbf{h}) \rVert}{\lVert \mathbf{h} \rVert} = 0.$$
Here $\mathbf{h}$ is a vector in $\mathbb{R}^n$, so the norm in the denominator is the standard length on $\mathbb{R}^n$.
However, $f'(\mathbf{a})\mathbf{h}$ is a vector in $\mathbb{R}^m$, and the norm in the numerator is the standard length on $\mathbb{R}^m$. If $\mathbf{v}$ is a vector starting at $\mathbf{a}$, then $f'(\mathbf{a})\mathbf{v}$ is called the pushforward of $\mathbf{v}$ by $f$. If the total derivative exists at $\mathbf{a}$, then all the partial derivatives and directional derivatives of $f$ exist at $\mathbf{a}$, and for all $\mathbf{v}$, $f'(\mathbf{a})\mathbf{v}$ is the directional derivative of $f$ in the direction $\mathbf{v}$. If $f$ is written using coordinate functions, so that $f = (f_1, f_2, \dots, f_m)$, then the total derivative can be expressed using the partial derivatives as a matrix. This matrix is called the Jacobian matrix of $f$ at $\mathbf{a}$:
$$f'(\mathbf{a}) = \operatorname{Jac}_{\mathbf{a}} = \left( \frac{\partial f_i}{\partial x_j} \right)_{ij}.$$
The concept of a derivative can be extended to many other settings. The common thread is that the derivative of a function at a point serves as a linear approximation of the function at that point. An important generalization of the derivative concerns complex functions of complex variables, such as functions from (a domain in) the complex numbers $\mathbb{C}$ to $\mathbb{C}$.
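The Jacobian's entry-by-entry definition, $J_{ij} = \partial f_i / \partial x_j$, translates directly into code. A Python sketch for a map from $\mathbb{R}^2$ to $\mathbb{R}^2$ (the map $F$ and the evaluation point are illustrative choices):

```python
import math

def F(x, y):
    """An example map from R^2 to R^2: F(x, y) = (x*y, sin(x) + y**2)."""
    return (x * y, math.sin(x) + y ** 2)

def jacobian(F, x, y, h=1e-6):
    """2x2 Jacobian J[i][j] = dF_i/dx_j, estimated by central differences."""
    fx_hi, fx_lo = F(x + h, y), F(x - h, y)   # perturb x only
    fy_hi, fy_lo = F(x, y + h), F(x, y - h)   # perturb y only
    return [[(fx_hi[i] - fx_lo[i]) / (2 * h),
             (fy_hi[i] - fy_lo[i]) / (2 * h)] for i in range(2)]

# Exact Jacobian at (x, y): [[y, x], [cos(x), 2*y]]
J = jacobian(F, 1.0, 2.0)
```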
The notion of the derivative of such a function is obtained by replacing real variables with complex variables in the definition. If $\mathbb{C}$ is identified with $\mathbb{R}^2$ by writing a complex number $z$ as $x + iy$, then a differentiable function from $\mathbb{C}$ to $\mathbb{C}$ is certainly differentiable as a function from $\mathbb{R}^2$ to $\mathbb{R}^2$ (in the sense that its partial derivatives all exist), but the converse is not true in general: the complex derivative exists only if the real derivative is complex linear, and this imposes relations between the partial derivatives called the Cauchy-Riemann equations; see holomorphic functions. Another generalization concerns functions between differentiable or smooth manifolds. Intuitively speaking, such a manifold $M$ is a space that can be approximated near each point $x$ by a vector space called its tangent space; the prototypical example is a smooth surface in $\mathbb{R}^3$. The derivative (or differential) of a (differentiable) map $f \colon M \to N$ between manifolds, at a point $x$ in $M$, is then a linear map from the tangent space of $M$ at $x$ to the tangent space of $N$ at $f(x)$. The derivative function becomes a map between the tangent bundles of $M$ and $N$. This definition is used in differential geometry. Differentiation can also be defined for maps between vector spaces, such as Banach spaces; in that setting the generalizations are the Gateaux derivative and the Fréchet derivative. One deficiency of the classical derivative is that very many functions are not differentiable. Nevertheless, there is a way of extending the notion of the derivative so that all continuous functions and many other functions can be differentiated, using a concept known as the weak derivative. The idea is to embed the continuous functions in a larger space, called the space of distributions, and to require only that a function is differentiable "on average".
Properties of the derivative have inspired the introduction and study of many similar objects in algebra and topology; an example is differential algebra, which studies derivations on algebraic structures such as rings, ideals, and fields. The discrete equivalent of differentiation is the theory of finite differences. The study of differential calculus is unified with the calculus of finite differences in time scale calculus. The arithmetic derivative is a function defined on the integers via their prime factorization, by analogy with the product rule.
|
https://en.wikipedia.org/wiki/Chain_rule
|
In calculus, the chain rule is a formula that expresses the derivative of the composition of two differentiable functions $f$ and $g$ in terms of the derivatives of $f$ and $g$. More precisely, if $h = f \circ g$ is the function such that $h(x) = f(g(x))$ for every $x$, then the chain rule is, in Lagrange's notation,
$$h'(x) = f'(g(x)) \, g'(x),$$
or, equivalently,
$$h' = (f \circ g)' = (f' \circ g) \cdot g'.$$
The chain rule may also be expressed in Leibniz's notation. If a variable $z$ depends on the variable $y$, which itself depends on the variable $x$ (that is, $y$ and $z$ are dependent variables), then $z$ depends on $x$ as well, via the intermediate variable $y$. In this case, the chain rule is expressed as
$$\frac{dz}{dx} = \frac{dz}{dy} \cdot \frac{dy}{dx},$$
and
$$\left. \frac{dz}{dx} \right|_{x} = \left. \frac{dz}{dy} \right|_{y(x)} \cdot \left. \frac{dy}{dx} \right|_{x},$$
for indicating at which points the derivatives have to be evaluated. In integration, the counterpart to the chain rule is the substitution rule. Intuitively, the chain rule states that knowing the instantaneous rate of change of $z$ relative to $y$ and that of $y$ relative to $x$ allows one to calculate the instantaneous rate of change of $z$ relative to $x$ as the product of the two rates of change.
As put by George F. Simmons: "if a car travels twice as fast as a bicycle and the bicycle is four times as fast as a walking man, then the car travels 2 × 4 = 8 times as fast as the man." The relationship between this example and the chain rule is as follows. Let z, y and x be the (variable) positions of the car, the bicycle, and the walking man, respectively. The rate of change of the relative positions of the car and the bicycle is

dz/dy = 2.

Similarly,

dy/dx = 4.

So, the rate of change of the relative positions of the car and the walking man is

dz/dx = dz/dy · dy/dx = 2 · 4 = 8.

The rate of change of positions is the ratio of the speeds, and the speed is the derivative of the position with respect to the time; that is,

dz/dx = (dz/dt) / (dx/dt),

or, equivalently,

dz/dt = dz/dx · dx/dt,

which is also an application of the chain rule.
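Simmons's example can be phrased as a one-line sketch of rates multiplying:

```python
dz_dy = 2  # the car is twice as fast as the bicycle
dy_dx = 4  # the bicycle is four times as fast as the man
dz_dx = dz_dy * dy_dx  # chain rule: the rates of change multiply
assert dz_dx == 8
```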
The simplest form of the chain rule is for real-valued functions of one real variable. It states that if g is a function that is differentiable at a point c (i.e., the derivative g′(c) exists) and f is a function that is differentiable at g(c), then the composite function f ∘ g is differentiable at c, and the derivative is

(f ∘ g)′(c) = f′(g(c)) · g′(c).

The rule is sometimes abbreviated as

(f ∘ g)′ = (f′ ∘ g) · g′.

If y = f(u) and u = g(x), then this abbreviated form is written in Leibniz notation as:

dy/dx = dy/du · du/dx.

The points where the derivatives are evaluated may also be stated explicitly:

dy/dx |_{x=c} = dy/du |_{u=g(c)} · du/dx |_{x=c}.

Carrying the same reasoning further, given n functions f₁, …, fₙ with the composite function f₁ ∘ (f₂ ∘ ⋯ (fₙ₋₁ ∘ fₙ)), if each function fᵢ is differentiable at its immediate input, then the composite function is also differentiable by the repeated application of the chain rule.
The derivative is then, in Leibniz's notation:

df₁/dx = df₁/df₂ · df₂/df₃ ⋯ dfₙ/dx.

The chain rule can be applied to composites of more than two functions. To take the derivative of a composite of more than two functions, notice that the composite of f, g, and h (in that order) is the composite of f with g ∘ h. The chain rule states that to compute the derivative of f ∘ g ∘ h, it is sufficient to compute the derivative of f and the derivative of g ∘ h. The derivative of f can be calculated directly, and the derivative of g ∘ h can be calculated by applying the chain rule again. For concreteness, consider the function

y = e^{sin(x²)}.

This can be decomposed as the composite of three functions:

y = f(u) = eᵘ,
u = g(v) = sin v,
v = h(x) = x²,

so that y = f(g(h(x))). Their derivatives are:

dy/du = f′(u) = eᵘ,
du/dv = g′(v) = cos v,
dv/dx = h′(x) = 2x.
The chain rule states that the derivative of their composite at the point x = a is:

(f ∘ g ∘ h)′(a) = f′((g ∘ h)(a)) · (g ∘ h)′(a)
               = f′((g ∘ h)(a)) · g′(h(a)) · h′(a)
               = (f′ ∘ g ∘ h)(a) · (g′ ∘ h)(a) · h′(a).

In Leibniz's notation, this is:

dy/dx = dy/du |_{u=g(h(a))} · du/dv |_{v=h(a)} · dv/dx |_{x=a},

or for short,

dy/dx = dy/du · du/dv · dv/dx.
The derivative function is therefore:

dy/dx = e^{sin(x²)} · cos(x²) · 2x.

Another way of computing this derivative is to view the composite function f ∘ g ∘ h as the composite of f ∘ g and h. Applying the chain rule in this manner would yield:

(f ∘ g ∘ h)′(a) = (f ∘ g)′(h(a)) · h′(a) = f′(g(h(a))) · g′(h(a)) · h′(a).

This is the same as what was computed above. This should be expected because (f ∘ g) ∘ h = f ∘ (g ∘ h). Sometimes, it is necessary to differentiate an arbitrarily long composition of the form f₁ ∘ f₂ ∘ ⋯ ∘ fₙ₋₁ ∘ fₙ.
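The worked example above can be checked numerically (a sketch, not part of the article):

```python
import math

def y(x):
    return math.exp(math.sin(x ** 2))

def dy_dx(x):
    # chain rule applied twice: e^{sin(x^2)} * cos(x^2) * 2x
    return math.exp(math.sin(x ** 2)) * math.cos(x ** 2) * 2 * x

a = 1.3
eps = 1e-6
numeric = (y(a + eps) - y(a - eps)) / (2 * eps)  # central difference
assert abs(numeric - dy_dx(a)) < 1e-5
```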
In this case, define

f_{a..b} = f_a ∘ f_{a+1} ∘ ⋯ ∘ f_{b−1} ∘ f_b,

where f_{a..a} = f_a and f_{a..b}(x) = x when b < a. Then the chain rule takes the form

df_{1..n} = (df₁ ∘ f_{2..n}) (df₂ ∘ f_{3..n}) ⋯ (df_{n−1} ∘ f_{n..n}) dfₙ = ∏_{k=1}^{n} [df_k ∘ f_{(k+1)..n}].
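As a sketch (the functions and the fold are illustrative), the product form for a long composition can be implemented by accumulating the factors from the innermost function outward:

```python
import math

def composite_derivative(funcs, x):
    """funcs = [(f1, f1'), ..., (fn, fn')], outermost first.
    Returns d/dx (f1 ∘ f2 ∘ ... ∘ fn)(x) by multiplying the factors
    f_k'(f_{(k+1)..n}(x)), working from the innermost function outward."""
    deriv = 1.0
    inner = x  # value of f_{(k+1)..n}(x), starting with the identity
    for f, fp in reversed(funcs):
        deriv *= fp(inner)
        inner = f(inner)
    return deriv

# d/dx exp(sin(x^2)) at x = 1.3, reusing the three-function example
funcs = [(math.exp, math.exp),
         (math.sin, math.cos),
         (lambda v: v * v, lambda v: 2 * v)]
expected = math.exp(math.sin(1.3 ** 2)) * math.cos(1.3 ** 2) * 2 * 1.3
assert abs(composite_derivative(funcs, 1.3) - expected) < 1e-12
```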
Or, in Lagrange's notation,

f_{1..n}′(x) = f₁′(f_{2..n}(x)) f₂′(f_{3..n}(x)) ⋯ f_{n−1}′(f_{n..n}(x)) fₙ′(x) = ∏_{k=1}^{n} f_k′(f_{(k+1)..n}(x)).

The chain rule can be used to derive some well-known differentiation rules. For example, the quotient rule is a consequence of the chain rule and the product rule. To see this, write the function f(x)/g(x) as the product f(x) · 1/g(x). First apply the product rule:

d/dx (f(x)/g(x)) = d/dx (f(x) · 1/g(x)) = f′(x) · 1/g(x) + f(x) · d/dx (1/g(x)).
To compute the derivative of 1/g(x), notice that it is the composite of g with the reciprocal function, that is, the function that sends x to 1/x. The derivative of the reciprocal function is −1/x². By applying the chain rule, the last expression becomes:

f′(x) · 1/g(x) + f(x) · (−1/g(x)² · g′(x)) = (f′(x) g(x) − f(x) g′(x)) / g(x)²,

which is the usual formula for the quotient rule.

Suppose that y = g(x) has an inverse function. Call its inverse function f, so that we have x = f(y). There is a formula for the derivative of f in terms of the derivative of g. To see this, note that f and g satisfy the formula

f(g(x)) = x,

and because the functions f(g(x)) and x are equal, their derivatives must be equal. The derivative of x is the constant function with value 1, and the derivative of f(g(x)) is determined by the chain rule. Therefore, we have that:

f′(g(x)) g′(x) = 1.

To express f′ as a function of an independent variable y, we substitute f(y) for x wherever it appears. Then we can solve for f′.
f′(g(f(y))) g′(f(y)) = 1,
f′(y) g′(f(y)) = 1,
f′(y) = 1 / g′(f(y)).

For example, consider the function g(x) = eˣ. It has an inverse f(y) = ln y. Because g′(x) = eˣ, the above formula says that

d/dy ln y = 1/e^{ln y} = 1/y.

This formula is true whenever g is differentiable and its inverse f is also differentiable. This formula can fail when one of these conditions is not true. For example, consider g(x) = x³. Its inverse is f(y) = y^{1/3}, which is not differentiable at zero. If we attempt to use the above formula to compute the derivative of f at zero, then we must evaluate 1/g′(f(0)). Since f(0) = 0 and g′(0) = 0, we must evaluate 1/0, which is undefined. Therefore, the formula fails in this case. This is not surprising because f is not differentiable at zero.

The chain rule forms the basis of the backpropagation algorithm, which is used in gradient descent of neural networks in deep learning (artificial intelligence). Faà di Bruno's formula generalizes the chain rule to higher derivatives.
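A numerical sketch of the inverse-function formula f′(y) = 1/g′(f(y)), using the exp/log pair from the text:

```python
import math

def inverse_derivative(y, g_prime, f):
    # f'(y) = 1 / g'(f(y)), valid where g'(f(y)) != 0
    return 1.0 / g_prime(f(y))

# g(x) = e^x with inverse f(y) = ln y, so f'(y) should equal 1/y
y = 3.7
assert abs(inverse_derivative(y, math.exp, math.log) - 1 / y) < 1e-12
```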
Assuming that y = f(u) and u = g(x), the first few derivatives are:

dy/dx = dy/du · du/dx,

d²y/dx² = d²y/du² (du/dx)² + dy/du · d²u/dx²,

d³y/dx³ = d³y/du³ (du/dx)³ + 3 d²y/du² · du/dx · d²u/dx² + dy/du · d³u/dx³,

d⁴y/dx⁴ = d⁴y/du⁴ (du/dx)⁴ + 6 d³y/du³ (du/dx)² d²u/dx² + d²y/du² (4 du/dx · d³u/dx³ + 3 (d²u/dx²)²) + dy/du · d⁴u/dx⁴.
One proof of the chain rule begins by defining the derivative of the composite function f ∘ g, where we take the limit of the difference quotient for f ∘ g as x approaches a:

(f ∘ g)′(a) = lim_{x→a} [f(g(x)) − f(g(a))] / (x − a).

Assume for the moment that g(x) does not equal g(a) for any x near a. Then the previous expression is equal to the product of two factors:

lim_{x→a} [f(g(x)) − f(g(a))] / [g(x) − g(a)] · [g(x) − g(a)] / (x − a).
If g oscillates near a, then it might happen that no matter how close one gets to a, there is always an even closer x such that g(x) = g(a). For example, this happens near a = 0 for the continuous function g defined by g(x) = 0 for x = 0 and g(x) = x² sin(1/x) otherwise. Whenever this happens, the above expression is undefined because it involves division by zero. To work around this, introduce a function Q as follows:

Q(y) = [f(y) − f(g(a))] / [y − g(a)]   if y ≠ g(a),
Q(y) = f′(g(a))                        if y = g(a).

We will show that the difference quotient for f ∘ g is always equal to:

Q(g(x)) · [g(x) − g(a)] / (x − a).

Whenever g(x) is not equal to g(a), this is clear because the factors of g(x) − g(a) cancel. When g(x) equals g(a), then the difference quotient for f ∘ g is zero because f(g(x)) equals f(g(a)), and the above product is zero because it equals f′(g(a)) times zero. So the above product is always equal to the difference quotient, and to show that the derivative of f ∘ g at a exists and to determine its value, we need only show that the limit as x goes to a of the above product exists and determine its value.
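The oscillating counterexample can be sketched numerically: x² sin(1/x) vanishes at x = 1/(kπ), so points with g(x) = g(0) = 0 occur arbitrarily close to 0 (a quick check, not part of the article):

```python
import math

def g(x):
    return 0.0 if x == 0 else x * x * math.sin(1.0 / x)

# zeros of g accumulate at 0: g(1/(k*pi)) = 0 for every integer k >= 1,
# up to floating-point rounding in sin(k*pi)
for k in (1, 10, 1000, 10 ** 6):
    x = 1.0 / (k * math.pi)
    assert abs(g(x)) < 1e-15
```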
To do this, recall that the limit of a product exists if the limits of its factors exist. When this happens, the limit of the product of these two factors will equal the product of the limits of the factors. The two factors are Q(g(x)) and (g(x) − g(a))/(x − a). The latter is the difference quotient for g at a, and because g is differentiable at a by assumption, its limit as x tends to a exists and equals g′(a). As for Q(g(x)), notice that Q is defined wherever f is. Furthermore, f is differentiable at g(a) by assumption, so Q is continuous at g(a), by definition of the derivative. The function g is continuous at a because it is differentiable at a, and therefore Q ∘ g is continuous at a. So its limit as x goes to a exists and equals Q(g(a)), which is f′(g(a)). This shows that the limits of both factors exist and that they equal f′(g(a)) and g′(a), respectively. Therefore, the derivative of f ∘ g at a exists and equals f′(g(a)) g′(a).

Another way of proving the chain rule is to measure the error in the linear approximation determined by the derivative. This proof has the advantage that it generalizes to several variables. It relies on the following equivalent definition of differentiability at a point: a function g is differentiable at a if there exists a real number g′(a) and a function ε(h) that tends to zero as h tends to zero, and furthermore

g(a + h) − g(a) = g′(a) h + ε(h) h.

Here the left-hand side represents the true difference between the value of g at a and at a + h, whereas the right-hand side represents the approximation determined by the derivative plus an error term.
In the situation of the chain rule, such a function ε exists because g is assumed to be differentiable at a. Again by assumption, a similar function also exists for f at g(a). Calling this function η, we have

f(g(a) + k) − f(g(a)) = f′(g(a)) k + η(k) k.

The above definition imposes no constraints on η(0), even though it is assumed that η(k) tends to zero as k tends to zero. If we set η(0) = 0, then η is continuous at 0. Proving the theorem requires studying the difference f(g(a + h)) − f(g(a)) as h tends to zero. The first step is to substitute for g(a + h) using the definition of differentiability of g at a:

f(g(a + h)) − f(g(a)) = f(g(a) + g′(a) h + ε(h) h) − f(g(a)).

The next step is to use the definition of differentiability of f at g(a). This requires a term of the form f(g(a) + k) for some k. In the above equation, the correct k varies with h. Set k_h = g′(a) h + ε(h) h and the right-hand side becomes f(g(a) + k_h) − f(g(a)). Applying the definition of the derivative gives:

f(g(a) + k_h) − f(g(a)) = f′(g(a)) k_h + η(k_h) k_h.
To study the behavior of this expression as h tends to zero, expand k_h. After regrouping the terms, the right-hand side becomes:

f′(g(a)) g′(a) h + [f′(g(a)) ε(h) + η(k_h) g′(a) + η(k_h) ε(h)] h.

Because ε(h) and η(k_h) tend to zero as h tends to zero, the first two bracketed terms tend to zero as h tends to zero. Applying the same theorem on products of limits as in the first proof, the third bracketed term also tends to zero. Because the above expression is equal to the difference f(g(a + h)) − f(g(a)), by the definition of the derivative f ∘ g is differentiable at a and its derivative is f′(g(a)) g′(a).

The role of Q in the first proof is played by η in this proof. They are related by the equation:

Q(y) = f′(g(a)) + η(y − g(a)).

The need to define Q at g(a) is analogous to the need to define η at zero.

Constantin Carathéodory's alternative definition of the differentiability of a function can be used to give an elegant proof of the chain rule. Under this definition, a function f is differentiable at a point a if and only if there is a function q, continuous at a and such that f(x) − f(a) = q(x)(x − a).
There is at most one such function, and if f is differentiable at a then f′(a) = q(a). Given the assumptions of the chain rule and the fact that differentiable functions and compositions of continuous functions are continuous, we have that there exist functions q, continuous at g(a), and r, continuous at a, such that

f(g(x)) − f(g(a)) = q(g(x)) (g(x) − g(a))

and

g(x) − g(a) = r(x) (x − a).

Therefore,

f(g(x)) − f(g(a)) = q(g(x)) r(x) (x − a),

but the function given by h(x) = q(g(x)) r(x) is continuous at a, and we get, for this a,

(f(g(a)))′ = q(g(a)) r(a) = f′(g(a)) g′(a).

A similar approach works for continuously differentiable (vector-valued) functions of many variables. This method of factoring also allows a unified approach to stronger forms of differentiability, when the derivative is required to be Lipschitz continuous, Hölder continuous, etc. Differentiation itself can be viewed as the polynomial remainder theorem (the little Bézout theorem, or factor theorem), generalized to an appropriate class of functions.
If y = f(x) and x = g(t), then choosing an infinitesimal Δt ≠ 0 we compute the corresponding Δx = g(t + Δt) − g(t) and then the corresponding Δy = f(x + Δx) − f(x), so that

Δy/Δt = (Δy/Δx) (Δx/Δt),

and applying the standard part we obtain

dy/dt = (dy/dx) (dx/dt),

which is the chain rule.

The full generalization of the chain rule to multi-variable functions (such as f: ℝᵐ → ℝⁿ) is rather technical. However, it is simpler to write in the case of functions of the form

f(g₁(x), …, g_k(x)),

where f: ℝᵏ → ℝ and gᵢ: ℝ → ℝ for each i = 1, 2, …, k. As this case occurs often in the study of functions of a single variable, it is worth describing it separately.
Let f: ℝᵏ → ℝ, and gᵢ: ℝ → ℝ for each i = 1, 2, …, k. To write the chain rule for the composition of functions

x ↦ f(g₁(x), …, g_k(x)),

one needs the partial derivatives of f with respect to its k arguments. The usual notations for partial derivatives involve names for the arguments of the function. As these arguments are not named in the above formula, it is simpler and clearer to use D-notation, and to denote by Dᵢf the partial derivative of f with respect to its i-th argument, and by Dᵢf(z) the value of this derivative at z. With this notation, the chain rule is

d/dx f(g₁(x), …, g_k(x)) = Σ_{i=1}^{k} (d/dx gᵢ(x)) Dᵢf(g₁(x), …, g_k(x)).
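A sketch of this formula (the functions are chosen only for illustration): for f(u, v) = u·v with g₁(x) = sin x and g₂(x) = x³, the sum Σᵢ gᵢ′(x) Dᵢf(g₁(x), g₂(x)) should match a finite-difference derivative of x ↦ f(g₁(x), g₂(x)):

```python
import math

def F(x):
    # f(u, v) = u * v composed with g1 = sin and g2 = cube
    return math.sin(x) * x ** 3

def F_prime(x):
    u, v = math.sin(x), x ** 3
    d1f, d2f = v, u               # partials of f(u, v) = u*v
    g1p, g2p = math.cos(x), 3 * x ** 2
    return g1p * d1f + g2p * d2f  # sum over the k = 2 arguments

x = 0.9
eps = 1e-6
numeric = (F(x + eps) - F(x - eps)) / (2 * eps)
assert abs(numeric - F_prime(x)) < 1e-6
```

With f chosen as multiplication, the sum reproduces the product rule, as the next paragraphs of the article work out in general.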
If the function f is addition, that is, if f(u, v) = u + v, then D₁f = ∂f/∂u = 1 and D₂f = ∂f/∂v = 1. Thus, the chain rule gives

d/dx (g(x) + h(x)) = (d/dx g(x)) D₁f + (d/dx h(x)) D₂f = d/dx g(x) + d/dx h(x).

For multiplication f(u, v) = uv, the partials are D₁f = v and D₂f = u. Thus,

d/dx (g(x) h(x)) = h(x) d/dx g(x) + g(x) d/dx h(x).

The case of exponentiation f(u, v) = uᵛ is slightly more complicated, as D₁f = v u^{v−1} and, since uᵛ = e^{v ln u}, D₂f = uᵛ ln u.
It follows that

d/dx (g(x)^{h(x)}) = h(x) g(x)^{h(x)−1} d/dx g(x) + g(x)^{h(x)} ln g(x) · d/dx h(x).

The simplest way of writing the chain rule in the general case is to use the total derivative, which is a linear transformation that captures all directional derivatives in a single formula. Consider differentiable functions f: ℝᵐ → ℝᵏ and g: ℝⁿ → ℝᵐ, and a point a in ℝⁿ. Let Dₐg denote the total derivative of g at a and D_{g(a)}f denote the total derivative of f at g(a). These two derivatives are linear transformations ℝⁿ → ℝᵐ and ℝᵐ → ℝᵏ, respectively, so they can be composed. The chain rule for total derivatives is that their composite is the total derivative of f ∘ g at a:

Dₐ(f ∘ g) = D_{g(a)}f ∘ Dₐg,

or for short,

D(f ∘ g) = Df ∘ Dg.

The higher-dimensional chain rule can be proved using a technique similar to the second proof given above. Because the total derivative is a linear transformation, the functions appearing in the formula can be rewritten as matrices.
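The exponentiation formula above can be checked numerically (g and h are chosen here only for illustration, with g(x) > 0):

```python
import math

def F(x):
    # g(x)^h(x) with g(x) = x + 2 and h(x) = sin(x)
    return (x + 2) ** math.sin(x)

def F_prime(x):
    g, h = x + 2, math.sin(x)
    gp, hp = 1.0, math.cos(x)
    # h * g^{h-1} * g' + g^h * ln(g) * h'
    return h * g ** (h - 1) * gp + g ** h * math.log(g) * hp

x = 0.5
eps = 1e-6
numeric = (F(x + eps) - F(x - eps)) / (2 * eps)
assert abs(numeric - F_prime(x)) < 1e-6
```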
The matrix corresponding to a total derivative is called a Jacobian matrix, and the composite of two derivatives corresponds to the product of their Jacobian matrices. From this perspective the chain rule therefore says:

J_{f∘g}(a) = J_f(g(a)) J_g(a),

or for short,

J_{f∘g} = (J_f ∘ g) J_g.

That is, the Jacobian of a composite function is the product of the Jacobians of the composed functions (evaluated at the appropriate points).

The higher-dimensional chain rule is a generalization of the one-dimensional chain rule. If k, m, and n are 1, so that f: ℝ → ℝ and g: ℝ → ℝ, then the Jacobian matrices of f and g are 1 × 1. Specifically, they are:

J_g(a) = (g′(a)),
J_f(g(a)) = (f′(g(a))).

The Jacobian of f ∘ g is the product of these 1 × 1 matrices, so it is f′(g(a)) · g′(a), as expected from the one-dimensional chain rule. In the language of linear transformations, Dₐ(g) is the function which scales a vector by a factor of g′(a) and D_{g(a)}(f) is the function which scales a vector by a factor of f′(g(a)).
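The Jacobian identity can be sketched numerically for a small example (the maps below are illustrative; the Jacobians are approximated by central differences and the product is plain matrix multiplication):

```python
import math

def jacobian(F, x, eps=1e-6):
    """Numerical Jacobian of F: R^n -> R^m at x, via central differences."""
    n, m = len(x), len(F(x))
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xp = list(x); xp[j] += eps
        xm = list(x); xm[j] -= eps
        Fp, Fm = F(xp), F(xm)
        for i in range(m):
            J[i][j] = (Fp[i] - Fm[i]) / (2 * eps)
    return J

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# illustrative maps: g(x, y) = (x*y, x + y) and f(u, v) = (sin u, u*v)
g = lambda p: [p[0] * p[1], p[0] + p[1]]
f = lambda p: [math.sin(p[0]), p[0] * p[1]]
a = [0.3, 0.8]

lhs = jacobian(lambda p: f(g(p)), a)             # J_{f∘g}(a)
rhs = matmul(jacobian(f, g(a)), jacobian(g, a))  # J_f(g(a)) · J_g(a)
for i in range(2):
    for j in range(2):
        assert abs(lhs[i][j] - rhs[i][j]) < 1e-4
```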
The chain rule says that the composite of these two linear transformations is the linear transformation Dₐ(f ∘ g), and therefore it is the function that scales a vector by f′(g(a)) · g′(a).

Another way of writing the chain rule is used when f and g are expressed in terms of their components as y = f(u) = (f₁(u), …, f_k(u)) and u = g(x) = (g₁(x), …, g_m(x)). In this case, the above rule for Jacobian matrices is usually written as:

∂(y₁, …, y_k)/∂(x₁, …, xₙ) = ∂(y₁, …, y_k)/∂(u₁, …, u_m) · ∂(u₁, …, u_m)/∂(x₁, …, xₙ).

The chain rule for total derivatives implies a chain rule for partial derivatives. Recall that when the total derivative exists, the partial derivative in the i-th coordinate direction is found by multiplying the Jacobian matrix by the i-th basis vector. By doing this to the formula above, we find:

∂(y₁, …, y_k)/∂xᵢ = ∂(y₁, …, y_k)/∂(u₁, …, u_m) · ∂(u₁, …, u_m)/∂xᵢ.
Since the entries of the Jacobian matrix are partial derivatives, we may simplify the above formula to get:
$$\frac{\partial(y_1, \ldots, y_k)}{\partial x_i} = \sum_{\ell=1}^{m} \frac{\partial(y_1, \ldots, y_k)}{\partial u_\ell}\, \frac{\partial u_\ell}{\partial x_i}.$$
More conceptually, this rule expresses the fact that a change in the $x_i$ direction may change all of $g_1$ through $g_m$, and any of these changes may affect $f$. In the special case where $k = 1$, so that $f$ is a real-valued function, this formula simplifies even further:
$$\frac{\partial y}{\partial x_i} = \sum_{\ell=1}^{m} \frac{\partial y}{\partial u_\ell}\, \frac{\partial u_\ell}{\partial x_i}.$$
This can be rewritten as a dot product. Recalling that $\mathbf{u} = (g_1, \ldots, g_m)$, the partial derivative $\partial \mathbf{u}/\partial x_i$ is also a vector, and the chain rule says that:
$$\frac{\partial y}{\partial x_i} = \nabla y \cdot \frac{\partial \mathbf{u}}{\partial x_i}.$$
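The summation form of the rule for a real-valued $y$ lends itself to a direct numerical check: differentiate the composite directly, then rebuild the same derivative from the inner and outer partials. The functions, point, and index below are illustrative choices, not from the article:

```python
import math

h = 1e-6

def pd(func, p, i):
    """Central partial derivative of scalar func at point p w.r.t. p[i]."""
    up, dn = list(p), list(p)
    up[i] += h
    dn[i] -= h
    return (func(up) - func(dn)) / (2 * h)

# illustrative: y = f(u1, u2), u = g(x1, x2, x3)
f = lambda u: u[0] ** 2 + math.cos(u[1])
g = lambda x: [x[0] * x[1] + x[2], math.exp(x[0]) - x[2]]
x = [0.3, -1.1, 0.7]
u = g(x)
i = 0  # differentiate with respect to x_1

direct = pd(lambda p: f(g(p)), x, i)
# sum over ell of (dy/du_ell) * (du_ell/dx_i)
chain = sum(pd(f, u, l) * pd(lambda p: g(p)[l], x, i) for l in range(2))
print(abs(direct - chain) < 1e-6)
```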
Given $u(x, y) = x^2 + 2y$, where $x(r, t) = r \sin(t)$ and $y(r, t) = \sin^2(t)$, determine the value of $\partial u/\partial r$ and $\partial u/\partial t$ using the chain rule:
$$\frac{\partial u}{\partial r} = \frac{\partial u}{\partial x}\frac{\partial x}{\partial r} + \frac{\partial u}{\partial y}\frac{\partial y}{\partial r} = (2x)(\sin(t)) + (2)(0) = 2r \sin^2(t),$$
and
$$\begin{aligned} \frac{\partial u}{\partial t} &= \frac{\partial u}{\partial x}\frac{\partial x}{\partial t} + \frac{\partial u}{\partial y}\frac{\partial y}{\partial t} \\ &= (2x)(r \cos(t)) + (2)(2 \sin(t) \cos(t)) \\ &= (2r \sin(t))(r \cos(t)) + 4 \sin(t) \cos(t) \\ &= 2(r^2 + 2) \sin(t) \cos(t) \\ &= (r^2 + 2) \sin(2t). \end{aligned}$$
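Both closed-form results for this example can be confirmed against central finite differences (the evaluation point and step size below are arbitrary illustrative choices):

```python
import math

def u(r, t):
    """u(x, y) = x**2 + 2*y with x = r*sin(t), y = sin(t)**2."""
    x = r * math.sin(t)
    y = math.sin(t) ** 2
    return x ** 2 + 2 * y

r, t, h = 1.3, 0.8, 1e-6
du_dr = (u(r + h, t) - u(r - h, t)) / (2 * h)
du_dt = (u(r, t + h) - u(r, t - h)) / (2 * h)

# compare against 2*r*sin(t)**2 and (r**2 + 2)*sin(2*t)
assert abs(du_dr - 2 * r * math.sin(t) ** 2) < 1e-6
assert abs(du_dt - (r ** 2 + 2) * math.sin(2 * t)) < 1e-6
print("chain-rule results confirmed")
```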
Faà di Bruno's formula for higher-order derivatives of single-variable functions generalizes to the multivariable case. If $y = f(\mathbf{u})$ is a function of $\mathbf{u} = g(\mathbf{x})$ as above, then the second derivative of $f \circ g$ is:
$$\frac{\partial^2 y}{\partial x_i\, \partial x_j} = \sum_k \left( \frac{\partial y}{\partial u_k} \frac{\partial^2 u_k}{\partial x_i\, \partial x_j} \right) + \sum_{k, \ell} \left( \frac{\partial^2 y}{\partial u_k\, \partial u_\ell} \frac{\partial u_k}{\partial x_i} \frac{\partial u_\ell}{\partial x_j} \right).$$
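The second-derivative formula can also be spot-checked numerically, comparing a direct mixed second difference of the composite with the two sums on the right-hand side. All functions, the evaluation point, and the index pair below are illustrative choices:

```python
import math

# second differences are noise-sensitive, so use a relatively coarse step
h = 1e-4

f = lambda u: u[0] ** 2 * math.sin(u[1])       # y = f(u1, u2)
g = lambda x: [x[0] * x[1], x[0] + x[1] ** 2]  # u = g(x1, x2)

def pd(func, p, i, step=h):
    """Central first partial derivative of scalar func w.r.t. p[i]."""
    up, dn = list(p), list(p)
    up[i] += step
    dn[i] -= step
    return (func(up) - func(dn)) / (2 * step)

def pd2(func, p, i, j, step=h):
    """Mixed second partial derivative, as a difference of differences."""
    return pd(lambda q: pd(func, q, j, step), p, i, step)

x = [0.4, -0.7]
u = g(x)
i, j = 0, 1

direct = pd2(lambda p: f(g(p)), x, i, j)
formula = (
    sum(pd(f, u, k) * pd2(lambda p: g(p)[k], x, i, j) for k in range(2))
    + sum(pd2(f, u, k, l)
          * pd(lambda p: g(p)[k], x, i)
          * pd(lambda p: g(p)[l], x, j)
          for k in range(2) for l in range(2))
)
print(abs(direct - formula) < 1e-4)
```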
All extensions of calculus have a chain rule. In most of these, the formula remains the same, though the meaning of that formula may be vastly different. One generalization is to manifolds. In this situation, the chain rule represents the fact that the derivative of $f \circ g$ is the composite of the derivative of $f$ and the derivative of $g$. This theorem is an immediate consequence of the higher-dimensional chain rule given above, and it has exactly the same formula. The chain rule is also valid for Fréchet derivatives in Banach spaces. The same formula holds as before. This case and the previous one admit a simultaneous generalization to Banach manifolds. In differential algebra, the derivative is interpreted as a morphism of modules of Kähler differentials. A ring homomorphism of commutative rings $f : R \to S$ determines a morphism of Kähler differentials $df : \Omega_R \to \Omega_S$ which sends an element $dr$ to $d(f(r))$, the exterior differential of $f(r)$. The formula $d(f \circ g) = df \circ dg$ holds in this context as well. The common feature of these examples is that they are expressions of the idea that the derivative is part of a functor. A functor is an operation on spaces and functions between them. It associates to each space a new space and to each function between two spaces a new function between the corresponding new spaces. In each of the above cases, the functor sends each space to its tangent bundle and it sends each function to its derivative. For example, in the manifold case, the derivative sends a $C^r$-manifold to a $C^{r-1}$-manifold (its tangent bundle) and a $C^r$-function to its total derivative. There is one requirement for this to be a functor, namely that the derivative of a composite must be the composite of the derivatives. This is exactly the formula $d(f \circ g) = df \circ dg$. There are also chain rules in stochastic calculus.
One of these, Itô's lemma, expresses the composite of an Itô process (or more generally a semimartingale) $dX_t$ with a twice-differentiable function $f$. In Itô's lemma, the derivative of the composite function depends not only on $dX_t$ and the derivative of $f$ but also on the second derivative of $f$.
The dependence on the second derivative is a consequence of the non-zero quadratic variation of the stochastic process, which broadly speaking means that the process can move up and down in a very rough way. This variant of the chain rule is not an example of a functor, because the two functions being composed are of different types.
https://en.wikipedia.org/wiki/Implicit_function_theorem
In multivariable calculus, the implicit function theorem is a tool that allows relations to be converted to functions of several real variables. It does so by representing the relation as the graph of a function. There may not be a single function whose graph can represent the entire relation, but there may be such a function on a restriction of the domain of the relation. The implicit function theorem gives a sufficient condition to ensure that there is such a function. More precisely, given a system of $m$ equations $f_i(x_1, \ldots, x_n, y_1, \ldots, y_m) = 0$, $i = 1, \ldots, m$ (often abbreviated into $f(\mathbf{x}, \mathbf{y}) = \mathbf{0}$), the theorem states that, under a mild condition on the partial derivatives (with respect to each $y_i$) at a point, the $m$ variables $y_i$ are differentiable functions of the $x_j$ in some neighborhood of the point. As these functions generally cannot be expressed in closed form, they are implicitly defined by the equations, and this motivated the name of the theorem. In other words, under a mild condition on the partial derivatives, the set of zeros of a system of equations is locally the graph of a function.

Let $f : \mathbb{R}^2 \to \mathbb{R}$ be a continuously differentiable function defining the implicit equation of a curve $f(x, y) = 0$. Let $(x_0, y_0)$ be a point on the curve, that is, a point such that $f(x_0, y_0) = 0$. In this simple case, the implicit function theorem can be stated as follows: if $\frac{\partial f}{\partial y}(x_0, y_0) \neq 0$, then there is a unique continuously differentiable function $\varphi$, defined on a neighborhood of $x_0$, such that $\varphi(x_0) = y_0$ and $f(x, \varphi(x)) = 0$ on that neighborhood.

Proof. By differentiating the equation $f(x, \varphi(x)) = 0$, one gets
$$\frac{\partial f}{\partial x}(x, \varphi(x)) + \varphi'(x)\, \frac{\partial f}{\partial y}(x, \varphi(x)) = 0,$$
and thus
$$\varphi'(x) = -\frac{\dfrac{\partial f}{\partial x}(x, \varphi(x))}{\dfrac{\partial f}{\partial y}(x, \varphi(x))}.$$
This gives an ordinary differential equation for $\varphi$, with the initial condition $\varphi(x_0) = y_0$. Since $\frac{\partial f}{\partial y}(x_0, y_0) \neq 0$, the right-hand side of the differential equation is continuous. Hence, the Peano existence theorem applies, so there is a (possibly non-unique) solution. To see why $\varphi$ is unique, note that the function $g_x(y) = f(x, y)$ is strictly monotone in a neighborhood of $x_0, y_0$ (as $\frac{\partial f}{\partial y}(x_0, y_0) \neq 0$), thus it is injective. If $\varphi, \phi$ are solutions to the differential equation, then $g_x(\varphi(x)) = g_x(\phi(x)) = 0$,
and by injectivity we get $\varphi(x) = \phi(x)$.

If we define the function $f(x, y) = x^2 + y^2$, then the equation $f(x, y) = 1$ cuts out the unit circle as the level set $\{(x, y) \mid f(x, y) = 1\}$. There is no way to represent the unit circle as the graph of a function of one variable $y = g(x)$ because for each choice of $x \in (-1, 1)$, there are two choices of $y$, namely $\pm\sqrt{1 - x^2}$. However, it is possible to represent part of the circle as the graph of a function of one variable. If we let $g_1(x) = \sqrt{1 - x^2}$ for $-1 \leq x \leq 1$, then the graph of $y = g_1(x)$ provides the upper half of the circle. Similarly, if $g_2(x) = -\sqrt{1 - x^2}$, then the graph of $y = g_2(x)$ gives the lower half of the circle. The purpose of the implicit function theorem is to tell us that functions like $g_1(x)$ and $g_2(x)$ almost always exist, even in situations where we cannot write down explicit formulas. It guarantees that $g_1(x)$ and $g_2(x)$ are differentiable, and it even works in situations where we do not have a formula for $f(x, y)$.

Let $f : \mathbb{R}^{n+m} \to \mathbb{R}^m$ be a continuously differentiable function.
We think of $\mathbb{R}^{n+m}$ as the Cartesian product $\mathbb{R}^n \times \mathbb{R}^m$, and we write a point of this product as $(\mathbf{x}, \mathbf{y}) = (x_1, \ldots, x_n, y_1, \ldots, y_m)$. Starting from the given function $f$, our goal is to construct a function $g : \mathbb{R}^n \to \mathbb{R}^m$ whose graph $(\mathbf{x}, g(\mathbf{x}))$ is precisely the set of all $(\mathbf{x}, \mathbf{y})$ such that $f(\mathbf{x}, \mathbf{y}) = \mathbf{0}$. As noted above, this may not always be possible. We will therefore fix a point $(\mathbf{a}, \mathbf{b}) = (a_1, \ldots, a_n, b_1, \ldots, b_m)$ which satisfies $f(\mathbf{a}, \mathbf{b}) = \mathbf{0}$, and we will ask for a $g$ that works near the point $(\mathbf{a}, \mathbf{b})$.
In other words, we want an open set $U \subset \mathbb{R}^n$ containing $\mathbf{a}$, an open set $V \subset \mathbb{R}^m$ containing $\mathbf{b}$, and a function $g : U \to V$ such that the graph of $g$ satisfies the relation $f = \mathbf{0}$ on $U \times V$, and that no other points within $U \times V$ do so. In symbols,
$$\{(\mathbf{x}, g(\mathbf{x})) \mid \mathbf{x} \in U\} = \{(\mathbf{x}, \mathbf{y}) \in U \times V \mid f(\mathbf{x}, \mathbf{y}) = \mathbf{0}\}.$$
To state the implicit function theorem, we need the Jacobian matrix of $f$, which is the matrix of the partial derivatives of $f$. Abbreviating $(a_1, \ldots, a_n, b_1, \ldots, b_m)$ to $(\mathbf{a}, \mathbf{b})$, the Jacobian matrix is
$$(Df)(\mathbf{a}, \mathbf{b}) = \left[\begin{array}{ccc|ccc} \dfrac{\partial f_1}{\partial x_1}(\mathbf{a}, \mathbf{b}) & \cdots & \dfrac{\partial f_1}{\partial x_n}(\mathbf{a}, \mathbf{b}) & \dfrac{\partial f_1}{\partial y_1}(\mathbf{a}, \mathbf{b}) & \cdots & \dfrac{\partial f_1}{\partial y_m}(\mathbf{a}, \mathbf{b}) \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial f_m}{\partial x_1}(\mathbf{a}, \mathbf{b}) & \cdots & \dfrac{\partial f_m}{\partial x_n}(\mathbf{a}, \mathbf{b}) & \dfrac{\partial f_m}{\partial y_1}(\mathbf{a}, \mathbf{b}) & \cdots & \dfrac{\partial f_m}{\partial y_m}(\mathbf{a}, \mathbf{b}) \end{array}\right] = \left[\begin{array}{c|c} X & Y \end{array}\right],$$
where $X$ is the matrix of partial derivatives in the variables $x_i$ and $Y$ is the matrix of partial derivatives in the variables $y_j$.
The implicit function theorem says that if $Y$ is an invertible matrix, then there are $U$, $V$, and $g$ as desired. Writing all the hypotheses together gives the following statement.

Let $f : \mathbb{R}^{n+m} \to \mathbb{R}^m$ be a continuously differentiable function, and let $\mathbb{R}^{n+m}$ have coordinates $(\mathbf{x}, \mathbf{y})$. Fix a point $(\mathbf{a}, \mathbf{b}) = (a_1, \ldots, a_n, b_1, \ldots, b_m)$ with $f(\mathbf{a}, \mathbf{b}) = \mathbf{0}$, where $\mathbf{0} \in \mathbb{R}^m$ is the zero vector. If the Jacobian matrix (this is the right-hand panel of the Jacobian matrix shown in the previous section)
$$J_{f, \mathbf{y}}(\mathbf{a}, \mathbf{b}) = \left[\frac{\partial f_i}{\partial y_j}(\mathbf{a}, \mathbf{b})\right]$$
is invertible, then there exists an open set $U \subset \mathbb{R}^n$ containing $\mathbf{a}$ such that there exists a unique function $g : U \to \mathbb{R}^m$ such that $g(\mathbf{a}) = \mathbf{b}$ and $f(\mathbf{x}, g(\mathbf{x})) = \mathbf{0}$ for all $\mathbf{x} \in U$.
Moreover, $g$ is continuously differentiable and, denoting the left-hand panel of the Jacobian matrix shown in the previous section as
$$J_{f, \mathbf{x}}(\mathbf{a}, \mathbf{b}) = \left[\frac{\partial f_i}{\partial x_j}(\mathbf{a}, \mathbf{b})\right],$$
the Jacobian matrix of partial derivatives of $g$ in $U$ is given by the matrix product
$$\left[\frac{\partial g_i}{\partial x_j}(\mathbf{x})\right]_{m \times n} = -\left[J_{f, \mathbf{y}}(\mathbf{x}, g(\mathbf{x}))\right]_{m \times m}^{-1} \left[J_{f, \mathbf{x}}(\mathbf{x}, g(\mathbf{x}))\right]_{m \times n}.$$
For a proof, see the article on the inverse function theorem; here, only the two-dimensional case was detailed.
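In the scalar case $m = n = 1$, the matrix product reduces to $g'(x) = -\frac{\partial f/\partial x}{\partial f/\partial y}$. The following sketch checks this numerically on the upper unit circle, where the implicit function is known explicitly; the evaluation point and step size are illustrative choices:

```python
import math

# f(x, y) = x**2 + y**2 - 1 on the upper unit circle, where
# g(x) = sqrt(1 - x**2) solves f(x, g(x)) = 0
f = lambda x, y: x ** 2 + y ** 2 - 1
g = lambda x: math.sqrt(1 - x ** 2)

x0, h = 0.6, 1e-6
y0 = g(x0)
dfdx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)
dfdy = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)

implicit = -dfdx / dfdy                      # the theorem's formula for g'(x0)
direct = (g(x0 + h) - g(x0 - h)) / (2 * h)   # differentiate g itself
assert abs(implicit - direct) < 1e-6
print(implicit)
```

At $x_0 = 0.6$ both computations give $-x_0/y_0 = -0.75$, matching the implicit derivative $-x/y$ derived for the circle below.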
If, moreover, $f$ is analytic or continuously differentiable $k$ times in a neighborhood of $(\mathbf{a}, \mathbf{b})$, then one may choose $U$ in order that the same holds true for $g$ inside $U$. In the analytic case, this is called the analytic implicit function theorem.

Let us go back to the example of the unit circle. In this case $n = m = 1$ and $f(x, y) = x^2 + y^2 - 1$. The matrix of partial derivatives is just a $1 \times 2$ matrix, given by
$$(Df)(a, b) = \begin{bmatrix} \dfrac{\partial f}{\partial x}(a, b) & \dfrac{\partial f}{\partial y}(a, b) \end{bmatrix} = \begin{bmatrix} 2a & 2b \end{bmatrix}.$$
Thus, here, the $Y$ in the statement of the theorem is just the number $2b$; the linear map defined by it is invertible if and only if $b \neq 0$. By the implicit function theorem we see that we can locally write the circle in the form $y = g(x)$ for all points where $y \neq 0$. For $(\pm 1, 0)$ we run into trouble, as noted before. The implicit function theorem may still be applied to these two points, by writing $x$ as a function of $y$, that is, $x = h(y)$; now the graph of the function will be $(h(y), y)$, since where $b = 0$ we have $a = \pm 1$, and the conditions to locally express the function in this form are satisfied.
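A short pure-Python sketch confirms that each branch of the circle satisfies the relation, and that the slope of the upper branch degenerates exactly where $\partial f/\partial y = 2b$ vanishes (sample points and thresholds are illustrative):

```python
import math

f = lambda x, y: x ** 2 + y ** 2 - 1
g1 = lambda x: math.sqrt(1 - x ** 2)   # upper semicircle, valid where y > 0
g2 = lambda x: -math.sqrt(1 - x ** 2)  # lower semicircle, valid where y < 0

xs = [i / 10 for i in range(-9, 10)]
assert all(abs(f(x, g1(x))) < 1e-12 for x in xs)
assert all(abs(f(x, g2(x))) < 1e-12 for x in xs)

# near (±1, 0) the hypothesis fails: df/dy = 2b = 0, and indeed the slope
# g1'(x) = -x / sqrt(1 - x**2) blows up as x -> ±1
slope_near_edge = -0.999 / math.sqrt(1 - 0.999 ** 2)
assert abs(slope_near_edge) > 20
print("both local graphs satisfy f = 0")
```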
The implicit derivative of $y$ with respect to $x$, and that of $x$ with respect to $y$, can be found by totally differentiating the implicit function $x^2 + y^2 - 1$ and equating to 0:
$$2x\, dx + 2y\, dy = 0,$$
giving
$$\frac{dy}{dx} = -\frac{x}{y} \quad \text{and} \quad \frac{dx}{dy} = -\frac{y}{x}.$$

Suppose we have an $m$-dimensional space, parametrised by a set of coordinates $(x_1, \ldots, x_m)$. We can introduce a new coordinate system $(x'_1, \ldots, x'_m)$ by supplying $m$ functions $h_1, \ldots, h_m$, each being continuously differentiable. These functions allow us to calculate the new coordinates $(x'_1, \ldots, x'_m)$ of a point, given the point's old coordinates $(x_1, \ldots, x_m)$, using $x'_1 = h_1(x_1, \ldots, x_m), \ldots, x'_m = h_m(x_1, \ldots, x_m)$. One might want to verify if the opposite is possible: given coordinates $(x'_1, \ldots, x'_m)$, can we 'go back' and calculate the same point's original coordinates $(x_1, \ldots, x_m)$?
The implicit function theorem will provide an answer to this question. The (new and old) coordinates $(x'_1, \ldots, x'_m, x_1, \ldots, x_m)$ are related by $f = 0$, with
$$f(x'_1, \ldots, x'_m, x_1, \ldots, x_m) = (h_1(x_1, \ldots, x_m) - x'_1, \ldots, h_m(x_1, \ldots, x_m) - x'_m).$$
Now the Jacobian matrix of $f$ at a certain point $(a, b)$ [where $a = (x'_1, \ldots, x'_m)$, $b = (x_1, \ldots, x_m)$] is given by
$$(Df)(a, b) = \left[\begin{array}{ccc|ccc} -1 & \cdots & 0 & \dfrac{\partial h_1}{\partial x_1}(b) & \cdots & \dfrac{\partial h_1}{\partial x_m}(b) \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & \cdots & -1 & \dfrac{\partial h_m}{\partial x_1}(b) & \cdots & \dfrac{\partial h_m}{\partial x_m}(b) \end{array}\right] = [-I_m \mid J],$$
where $I_m$ denotes the $m \times m$ identity matrix, and $J$ is the $m \times m$ matrix of partial derivatives, evaluated at $(a, b)$. (In the above, these blocks were denoted by $X$ and $Y$. As it happens, in this particular application of the theorem, neither matrix depends on $a$.) The implicit function theorem now states that we can locally express $(x_1, \ldots, x_m)$ as a function of $(x'_1, \ldots, x'_m)$ if $J$ is invertible. Demanding that $J$ be invertible is equivalent to $\det J \neq 0$; thus we see that we can go back from the primed to the unprimed coordinates if the determinant of the Jacobian $J$ is non-zero. This statement is also known as the inverse function theorem.

As a simple application of the above, consider the plane, parametrised by polar coordinates $(r, \theta)$. We can go to a new coordinate system (Cartesian coordinates) by defining the functions $x(r, \theta) = r \cos(\theta)$ and $y(r, \theta) = r \sin(\theta)$. This makes it possible, given any point $(r, \theta)$, to find the corresponding Cartesian coordinates $(x, y)$. When can we go back and convert Cartesian into polar coordinates? By the previous paragraph, we need $\det J \neq 0$, where
$$J = \begin{bmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{bmatrix},$$
whose determinant is $r$; hence we can convert back to polar coordinates wherever $r \neq 0$.
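The Jacobian determinant of the polar-to-Cartesian map can be confirmed numerically: it comes out as $r$, so local invertibility fails only at the origin. The sample points and step size below are illustrative choices:

```python
import math

def to_cartesian(r, t):
    """Polar (r, t) -> Cartesian (x, y)."""
    return r * math.cos(t), r * math.sin(t)

def det_jacobian(r, t, h=1e-6):
    """Numerical determinant of the Jacobian of (r, t) -> (x, y)."""
    dx_dr = (to_cartesian(r + h, t)[0] - to_cartesian(r - h, t)[0]) / (2 * h)
    dy_dr = (to_cartesian(r + h, t)[1] - to_cartesian(r - h, t)[1]) / (2 * h)
    dx_dt = (to_cartesian(r, t + h)[0] - to_cartesian(r, t - h)[0]) / (2 * h)
    dy_dt = (to_cartesian(r, t + h)[1] - to_cartesian(r, t - h)[1]) / (2 * h)
    return dx_dr * dy_dt - dx_dt * dy_dr

# det J = r, so the map is locally invertible exactly where r != 0
assert abs(det_jacobian(2.0, 0.7) - 2.0) < 1e-5
assert abs(det_jacobian(0.0, 0.7)) < 1e-5
print("invertible wherever r != 0")
```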