In mathematics, the limit of a function is a fundamental concept in calculus and analysis concerning the behavior of that function near a particular input, which may or may not be in the domain of the function. Formal definitions, first devised in the early 19th century, are given below. Informally, a function f assigns an output f(x) to every input x. We say that the function has a limit L at an input p if f(x) gets closer and closer to L as x moves closer and closer to p. More specifically, the output value can be made arbitrarily close to L if the input to f is taken sufficiently close to p. On the other hand, if some inputs very close to p are taken to outputs that stay a fixed distance apart, then we say the limit does not exist. The notion of a limit has many applications in modern calculus. In particular, the many definitions of continuity employ the concept of limit: roughly, a function is continuous if all of its limits agree with the values of the function. The concept of limit also appears in the definition of the derivative: in the calculus of one variable, this is the limiting value of the slope of secant lines to the graph of a function.
https://en.wikipedia.org/wiki/Limit_of_a_function
Imagine a person walking on a landscape represented by the graph $y = f(x)$. Their horizontal position is given by x, much like the position given by a map of the land or by a global positioning system. Their altitude is given by the coordinate y. Suppose they walk towards a position x = p; as they get closer and closer to this point, they will notice that their altitude approaches a specific value L. If asked about the altitude corresponding to x = p, they would reply by saying y = L. What, then, does it mean to say their altitude is approaching L? It means that their altitude gets nearer and nearer to L, except for a possible small error in accuracy. For example, suppose we set a particular accuracy goal for our traveler: they must get within ten meters of L. They report back that indeed, they can get within ten vertical meters of L, arguing that as long as they are within fifty horizontal meters of p, their altitude is always within ten meters of L. The accuracy goal is then changed: can they get within one vertical meter?
Yes: supposing that they are able to move within five horizontal meters of p, their altitude will always remain within one meter of the target altitude L. Summarizing this concept, we can say that the traveler's altitude approaches L as their horizontal position approaches p: for every target accuracy goal, however small it may be, there is some neighbourhood of p in which all (not just some) of the altitudes corresponding to horizontal positions in that neighbourhood, except possibly the horizontal position p itself, fulfill that accuracy goal. The initial informal statement can now be explicated; in fact, this explicit statement is quite close to the formal definition of the limit of a function with values in a topological space. More specifically, to say that
$$\lim_{x \to p} f(x) = L$$
is to say that f(x) can be made as close to L as desired by making x close enough, but not equal, to p.
The following definitions, known as $(\varepsilon, \delta)$-definitions, are the generally accepted definitions for the limit of a function in various contexts. Suppose $f : \mathbb{R} \to \mathbb{R}$ is a function defined on the real line, and there are two real numbers p and L. One would say: the limit of f of x, as x approaches p, exists and equals L, and write
$$\lim_{x \to p} f(x) = L,$$
or alternatively, say f(x) tends to L as x tends to p, and write
$$f(x) \to L \text{ as } x \to p,$$
if the following property holds: for every real $\varepsilon > 0$, there exists a real $\delta > 0$ such that for all real x, $0 < |x - p| < \delta$ implies $|f(x) - L| < \varepsilon$.
Symbolically,
$$(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in \mathbb{R})\,(0 < |x - p| < \delta \implies |f(x) - L| < \varepsilon).$$
For example, we may say $\lim_{x \to 2} (4x + 1) = 9$ because for every real $\varepsilon > 0$, we can take $\delta = \varepsilon / 4$, so that for all real x, if $0 < |x - 2| < \delta$, then $|4x + 1 - 9| < \varepsilon$.
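The $\delta = \varepsilon/4$ bookkeeping in this example can be probed numerically. The sketch below is an illustration, not a proof (a finite sample cannot verify a universally quantified statement), and the helper names are ours:

```python
# Illustrative check: for f(x) = 4x + 1, L = 9, p = 2, the choice
# delta = eps/4 keeps |f(x) - L| < eps at every sampled point of the
# deleted delta-neighbourhood of p.

def f(x):
    return 4 * x + 1

def check_delta(eps, p=2.0, L=9.0, samples=1000):
    delta = eps / 4
    for i in range(1, samples):
        offset = delta * i / samples  # 0 < offset < delta
        for x in (p - offset, p + offset):
            if not abs(f(x) - L) < eps:
                return False
    return True

for eps in (1.0, 0.1, 1e-3, 1e-6):
    assert check_delta(eps)
```

Since $|4x + 1 - 9| = 4|x - 2|$, every sampled offset below $\delta$ lands within $\varepsilon$ of the limit, as the assertions confirm.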
A more general definition applies for functions defined on subsets of the real line. Let S be a subset of $\mathbb{R}$. Let $f : S \to \mathbb{R}$ be a real-valued function. Let p be a point such that there exists some open interval (a, b) containing p with $(a, p) \cup (p, b) \subset S$. It is then said that the limit of f as x approaches p is L if, for every real $\varepsilon > 0$, there exists a real $\delta > 0$ such that for all $x \in (a, b)$, $0 < |x - p| < \delta$ implies $|f(x) - L| < \varepsilon$; or, symbolically:
$$(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in (a, b))\,(0 < |x - p| < \delta \implies |f(x) - L| < \varepsilon).$$
For example, we may say $\lim_{x \to 1} \sqrt{x + 3} = 2$ because for every real $\varepsilon > 0$, we can take $\delta = \varepsilon$, so that for all real $x \geq -3$, if $0 < |x - 1| < \delta$, then $|f(x) - 2| < \varepsilon$. In this example, $S = [-3, \infty)$ contains open intervals around the point 1 (for example, the interval (0, 2)). Here, note that the value of the limit does not depend on f being defined at p, nor on the value f(p), if it is defined.
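The choice $\delta = \varepsilon$ for the square-root example can be sampled the same way. Again this is only a numeric illustration under our own helper names; the guard $x \geq -3$ keeps every sample inside the domain:

```python
# Illustrative check: for f(x) = sqrt(x + 3), L = 2, p = 1, the choice
# delta = eps suffices, since |sqrt(x+3) - 2| = |x - 1| / (sqrt(x+3) + 2)
# is at most half of |x - 1|.
import math

def f(x):
    return math.sqrt(x + 3)

def check_delta(eps, p=1.0, L=2.0, samples=1000):
    delta = eps
    for i in range(1, samples):
        offset = delta * i / samples  # 0 < offset < delta
        for x in (p - offset, p + offset):
            if x >= -3 and not abs(f(x) - L) < eps:
                return False
    return True

for eps in (1.0, 0.1, 1e-4):
    assert check_delta(eps)
```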
For example, let $f : [0, 1) \cup (1, 2] \to \mathbb{R}$, $f(x) = \tfrac{2x^2 - x - 1}{x - 1}$. Then $\lim_{x \to 1} f(x) = 3$ because for every $\varepsilon > 0$, we can take $\delta = \varepsilon / 2$, so that for all real $x \neq 1$, if $0 < |x - 1| < \delta$, then $|f(x) - 3| < \varepsilon$. Note that here f(1) is undefined. In fact, a limit can exist in
$$\{\, p \in \mathbb{R} \mid \exists (a, b) \subset \mathbb{R} : p \in (a, b) \text{ and } (a, p) \cup (p, b) \subset S \,\},$$
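The removable discontinuity at x = 1 can also be checked numerically, since $\tfrac{2x^2 - x - 1}{x - 1} = 2x + 1$ for $x \neq 1$, so $|f(x) - 3| = 2|x - 1|$ and $\delta = \varepsilon/2$ works. This sketch (helper names ours) stays at moderate $\varepsilon$ because the unfactored quotient suffers floating-point cancellation extremely close to 1:

```python
# Illustrative check of delta = eps/2 for f(x) = (2x^2 - x - 1)/(x - 1),
# which equals 2x + 1 away from its removable discontinuity at x = 1.

def f(x):
    return (2 * x**2 - x - 1) / (x - 1)  # undefined at x = 1

def check_delta(eps, p=1.0, L=3.0, samples=1000):
    delta = eps / 2
    for i in range(1, samples):
        offset = delta * i / samples  # 0 < offset < delta, so x != 1
        for x in (p - offset, p + offset):
            if not abs(f(x) - L) < eps:
                return False
    return True

# moderate eps only: the quotient form loses precision for |x - 1| ~ 1e-9
for eps in (1.0, 0.1, 1e-3):
    assert check_delta(eps)
```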
which equals $\operatorname{int} S \cup \operatorname{iso} S^c$, where $\operatorname{int} S$ is the interior of S, and $\operatorname{iso} S^c$ is the set of isolated points of the complement of S. In our previous example, where $S = [0, 1) \cup (1, 2]$, we have $\operatorname{int} S = (0, 1) \cup (1, 2)$ and $\operatorname{iso} S^c = \{1\}$.
We see, specifically, that this definition of limit allows a limit to exist at 1, but not at 0 or 2. The letters $\varepsilon$ and $\delta$ can be understood as "error" and "distance". In fact, Cauchy used $\varepsilon$ as an abbreviation for "error" in some of his work, though in his definition of continuity he used an infinitesimal $\alpha$ rather than either $\varepsilon$ or $\delta$ (see Cours d'Analyse). In these terms, the error ($\varepsilon$) in the measurement of the value at the limit can be made as small as desired by reducing the distance ($\delta$) to the limit point. As discussed below, this definition also works for functions in a more general context. The idea that $\delta$ and $\varepsilon$ represent distances helps suggest these generalizations. Alternatively, x may approach p from above (right) or below (left), in which case the limits may be written as $\lim_{x \to p^+} f(x) = L$
or $\lim_{x \to p^-} f(x) = L$, respectively. If these limits exist at p and are equal there, then this can be referred to as the limit of f(x) at p. If the one-sided limits exist at p but are unequal, then there is no limit at p (i.e., the limit at p does not exist). If either one-sided limit does not exist at p, then the limit at p also does not exist. A formal definition is as follows. The limit of f as x approaches p from above is L if: for every $\varepsilon > 0$, there exists a $\delta > 0$ such that whenever $0 < x - p < \delta$, we have $|f(x) - L| < \varepsilon$; symbolically,
$$(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in (a, b))\,(0 < x - p < \delta \implies |f(x) - L| < \varepsilon).$$
The limit of f as x approaches p from below is L if: for every $\varepsilon > 0$, there exists a $\delta > 0$ such that whenever $0 < p - x < \delta$, we have $|f(x) - L| < \varepsilon$; symbolically,
$$(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in (a, b))\,(0 < p - x < \delta \implies |f(x) - L| < \varepsilon).$$
If the limit does not exist, then the oscillation of f at p is non-zero. Limits can also be defined by approaching from subsets of the domain. In general: let $f : S \to \mathbb{R}$ be a real-valued function defined on some $S \subseteq \mathbb{R}$. Let p be a limit point of some $T \subset S$; that is, p is the limit of some sequence of elements of T distinct from p. Then we say the limit of f, as x approaches p from values in T, is L, written
$$\lim_{{x \to p} \atop {x \in T}} f(x) = L$$
if the following holds:
$$(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in T)\,(0 < |x - p| < \delta \implies |f(x) - L| < \varepsilon).$$
Note that T can be any subset of S, the domain of f, and the limit might depend on the selection of T. This generalization includes as special cases limits on an interval, as well as left-handed limits of real-valued functions (e.g., by taking T to be an open interval of the form $(-\infty, a)$) and right-handed limits (e.g., by taking T to be an open interval of the form $(a, \infty)$).
It also extends the notion of one-sided limits to the included endpoints of (half-)closed intervals, so the square root function $f(x) = \sqrt{x}$ can have limit 0 as x approaches 0 from above:
$$\lim_{{x \to 0} \atop {x \in [0, \infty)}} \sqrt{x} = 0$$
since for every $\varepsilon > 0$, we may take $\delta = \varepsilon^2$ such that for all $x \geq 0$, if $0 < |x - 0| < \delta$, then $|f(x) - 0| < \varepsilon$. This definition allows a limit to be defined at limit points of the domain S, if a suitable subset T which has the same limit point is chosen. Notably, the previous two-sided definition works on $\operatorname{int} S \cup \operatorname{iso} S^c$,
which is a subset of the limit points of S. For example, let $S = [0, 1) \cup (1, 2]$. The previous two-sided definition would work at $1 \in \operatorname{iso} S^c = \{1\}$, but it would not work at 0 or 2, which are limit points of S. The definition of limit given here does not depend on how (or whether) f is defined at p. Bartle refers to this as a deleted limit, because it excludes the value of f at p. The corresponding non-deleted limit does depend on the value of f at p, if p is in the domain of f. Let $f : S \to \mathbb{R}$ be a real-valued function.
The non-deleted limit of f, as x approaches p, is L if
$$(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in S)\,(|x - p| < \delta \implies |f(x) - L| < \varepsilon).$$
The definition is the same, except that the neighborhood $|x - p| < \delta$ now includes the point p, in contrast to the deleted neighborhood $0 < |x - p| < \delta$. This makes the definition of a non-deleted limit less general. One of the advantages of working with non-deleted limits is that they allow one to state the theorem about limits of compositions without any constraints on the functions (other than the existence of their non-deleted limits).
Bartle notes that although by "limit" some authors do mean this non-deleted limit, deleted limits are the most popular. The function
$$f(x) = \begin{cases} \sin \frac{5}{x - 1} & \text{for } x < 1 \\ 0 & \text{for } x = 1 \\ \frac{1}{10x - 10} & \text{for } x > 1 \end{cases}$$
has no limit at $x_0 = 1$ (the left-hand limit does not exist due to the oscillatory nature of the sine function, and the right-hand limit does not exist due to the asymptotic behaviour of the reciprocal function; see picture), but has a limit at every other x-coordinate.
The function
$$f(x) = \begin{cases} 1 & x \text{ rational} \\ 0 & x \text{ irrational} \end{cases}$$
(a.k.a. the Dirichlet function) has no limit at any x-coordinate. The function
$$f(x) = \begin{cases} 1 & \text{for } x < 0 \\ 2 & \text{for } x \geq 0 \end{cases}$$
has a limit at every non-zero x-coordinate (the limit equals 1 for negative x and equals 2 for positive x). The limit at x = 0 does not exist (the left-hand limit equals 1, whereas the right-hand limit equals 2).
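The unequal one-sided limits of the step function can be seen directly by sampling ever closer to 0 from each side (an illustration under our own variable names, not a proof):

```python
# The step function from the text: 1 for x < 0, 2 for x >= 0.
def f(x):
    return 1 if x < 0 else 2

left_values = [f(-10**-k) for k in range(1, 10)]   # x -> 0 from below
right_values = [f(10**-k) for k in range(1, 10)]   # x -> 0 from above

assert all(v == 1 for v in left_values)   # left-hand limit is 1
assert all(v == 2 for v in right_values)  # right-hand limit is 2
```

Because the two one-sided values never agree, no single L can satisfy the two-sided definition at 0.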
The functions
$$f(x) = \begin{cases} x & x \text{ rational} \\ 0 & x \text{ irrational} \end{cases} \quad \text{and} \quad f(x) = \begin{cases} |x| & x \text{ rational} \\ 0 & x \text{ irrational} \end{cases}$$
both have a limit at x = 0, and it equals 0. The function
$$f(x) = \begin{cases} \sin x & x \text{ irrational} \\ 1 & x \text{ rational} \end{cases}$$
has a limit at any x-coordinate of the form $\tfrac{\pi}{2} + 2n\pi$, where n is any integer.
Let $f : S \to \mathbb{R}$ be a function defined on $S \subseteq \mathbb{R}$. The limit of f as x approaches infinity is L, denoted
$$\lim_{x \to \infty} f(x) = L,$$
means that:
$$(\forall \varepsilon > 0)\,(\exists c > 0)\,(\forall x \in S)\,(x > c \implies |f(x) - L| < \varepsilon).$$
Similarly, the limit of f as x approaches minus infinity is L, denoted
$$\lim_{x \to -\infty} f(x) = L,$$
means that:
$$(\forall \varepsilon > 0)\,(\exists c > 0)\,(\forall x \in S)\,(x < -c \implies |f(x) - L| < \varepsilon).$$
For example,
$$\lim_{x \to \infty} \left( -\frac{3 \sin x}{x} + 4 \right) = 4$$
because for every $\varepsilon > 0$, we can take $c = 3 / \varepsilon$ such that for all real x, if $x > c$, then $|f(x) - 4| < \varepsilon$.
Another example is that $\lim_{x \to -\infty} e^x = 0$ because for every $\varepsilon > 0$, we can take $c = \max\{1, -\ln(\varepsilon)\}$ such that for all real x, if $x < -c$, then $|f(x) - 0| < \varepsilon$. For a function whose values grow without bound, the function diverges and the usual limit does not exist. However, in this case one may introduce limits with infinite values. Let $f : S \to \mathbb{R}$ be a function defined on $S \subseteq \mathbb{R}$. The statement the limit of f as x approaches p is infinity, denoted
$$\lim_{x \to p} f(x) = \infty,$$
means that:
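The cutoff $c = \max\{1, -\ln \varepsilon\}$ for $e^x \to 0$ can be probed at sample points below $-c$ (an illustrative check with our own helper name, not a proof):

```python
# Illustrative check: with c = max(1, -ln eps), every x < -c satisfies
# exp(x) < eps, since exp is increasing and exp(-c) <= eps.
import math

def check_c(eps, samples=100):
    c = max(1.0, -math.log(eps))
    for k in range(1, samples):
        x = -c - k  # strictly below -c
        if not abs(math.exp(x) - 0.0) < eps:
            return False
    return True

for eps in (0.5, 1e-2, 1e-6):
    assert check_c(eps)
```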
$$(\forall N > 0)\,(\exists \delta > 0)\,(\forall x \in S)\,(0 < |x - p| < \delta \implies f(x) > N).$$
The statement the limit of f as x approaches p is minus infinity, denoted
$$\lim_{x \to p} f(x) = -\infty,$$
means that:
$$(\forall N > 0)\,(\exists \delta > 0)\,(\forall x \in S)\,(0 < |x - p| < \delta \implies f(x) < -N).$$
For example,
$$\lim_{x \to 1} \frac{1}{(x - 1)^2} = \infty$$
because for every $N > 0$, we can take $\delta = \tfrac{1}{\sqrt{N}}$ such that for all real $x \neq 1$, if $0 < |x - 1| < \delta$, then $f(x) > N$. These ideas can be used together to produce definitions for different combinations, such as
$$\lim_{x \to \infty} f(x) = \infty \quad \text{or} \quad \lim_{x \to p^+} f(x) = -\infty.$$
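The choice $\delta = 1/\sqrt{N}$ can be sampled the same way: every x in the deleted δ-neighbourhood of 1 pushes the function above N. A numeric illustration (helper name ours):

```python
# Illustrative check: for f(x) = 1/(x-1)^2, delta = 1/sqrt(N) guarantees
# f(x) > N whenever 0 < |x - 1| < delta, since (x-1)^2 < 1/N there.
import math

def f(x):
    return 1.0 / (x - 1.0) ** 2

def check_delta(N, samples=1000):
    delta = 1.0 / math.sqrt(N)
    for i in range(1, samples):
        offset = delta * i / samples  # 0 < offset < delta
        for x in (1.0 - offset, 1.0 + offset):
            if not f(x) > N:
                return False
    return True

for N in (10.0, 1e4, 1e8):
    assert check_delta(N)
```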
For example, $\lim_{x \to 0^+} \ln x = -\infty$ because for every $N > 0$, we can take $\delta = e^{-N}$ such that for all real $x > 0$, if $0 < x - 0 < \delta$, then $f(x) < -N$. Limits involving infinity are connected with the concept of asymptotes. These notions of a limit attempt to provide a metric space interpretation to limits at infinity. In fact, they are consistent with the topological space definition of limit if a neighborhood of $-\infty$ is defined to contain an interval $[-\infty, c)$ for some $c \in \mathbb{R}$, a neighborhood of $\infty$ is defined to contain an interval $(c, \infty]$ where $c \in \mathbb{R}$,
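The one-sided choice $\delta = e^{-N}$ for the logarithm admits the same kind of spot check (illustrative only; helper name ours):

```python
# Illustrative check: with delta = exp(-N), every x in (0, delta)
# satisfies ln(x) < -N, since ln is increasing and ln(delta) = -N.
import math

def check_delta(N, samples=1000):
    delta = math.exp(-N)
    for i in range(1, samples):
        x = delta * i / samples  # 0 < x < delta
        if not math.log(x) < -N:
            return False
    return True

for N in (1.0, 10.0, 100.0):
    assert check_delta(N)
```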
and a neighborhood of $a \in \mathbb{R}$ is defined in the normal way in the metric space $\mathbb{R}$. In this case, $\overline{\mathbb{R}}$ is a topological space and any function of the form $f : X \to Y$ with $X, Y \subseteq \overline{\mathbb{R}}$ is subject to the topological definition of a limit. Note that with this topological definition, it is easy to define infinite limits at finite points, which have not been defined above in the metric sense. Many authors allow for the projectively extended real line to be used as a way to include infinite values, as well as the extended real line. With this notation, the extended real line is given as $\mathbb{R} \cup \{-\infty, +\infty\}$
and the projectively extended real line is $\mathbb{R} \cup \{\infty\}$, where a neighborhood of $\infty$ is a set of the form $\{x : |x| > c\}$. The advantage is that one only needs three definitions for limits (left, right, and central) to cover all the cases. As presented above, for a completely rigorous account, we would need to consider 15 separate cases for each combination of infinities (five directions: $-\infty$, left, central, right, and $+\infty$; three bounds: $-\infty$, finite, or $+\infty$). There are also noteworthy pitfalls. For example, when working with the extended real line, $x^{-1}$ does not possess a central limit (which is normal):
$$\lim_{x \to 0^+} \frac{1}{x} = +\infty, \quad \lim_{x \to 0^-} \frac{1}{x} = -\infty.$$
In contrast, when working with the projective real line, infinities (much like 0) are unsigned, so the central limit does exist in that context:
$$\lim_{x \to 0^+} \frac{1}{x} = \lim_{x \to 0^-} \frac{1}{x} = \lim_{x \to 0} \frac{1}{x} = \infty.$$
In fact, there are a plethora of conflicting formal systems in use. In certain applications of numerical differentiation and integration, it is, for example, convenient to have signed zeroes. A simple reason has to do with the converse of $\lim_{x \to 0^-} x^{-1} = -\infty$,
namely, it is convenient for $\lim_{x \to -\infty} x^{-1} = -0$ to be considered true. Such zeroes can be seen as an approximation to infinitesimals. There are three basic rules for evaluating limits at infinity for a rational function $f(x) = \tfrac{p(x)}{q(x)}$ (where p and q are polynomials): if the degree of p is greater than the degree of q, then the limit is positive or negative infinity depending on the signs of the leading coefficients; if the degrees of p and q are equal, the limit is the leading coefficient of p divided by the leading coefficient of q; if the degree of p is less than the degree of q, the limit is 0.
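The three degree rules can be illustrated by evaluating sample rational functions at a single large x. This is a heuristic demonstration, not a proof, and the helper names and example polynomials are ours:

```python
# Illustration of the degree rules for p(x)/q(x) as x -> infinity,
# probed at one large x. Coefficients are ordered from the highest
# degree down to the constant term and evaluated by Horner's method.

def rational(p_coeffs, q_coeffs):
    def horner(coeffs, x):
        result = 0.0
        for c in coeffs:
            result = result * x + c
        return result
    return lambda x: horner(p_coeffs, x) / horner(q_coeffs, x)

x = 1e8
f1 = rational([1.0, 2.0], [1.0, 0.0, 1.0])       # (x + 2)/(x^2 + 1): deg p < deg q
f2 = rational([3.0, 0.0, 1.0], [2.0, 5.0, 0.0])  # (3x^2 + 1)/(2x^2 + 5x): equal degrees
f3 = rational([1.0, 0.0, 0.0], [1.0, 1.0])       # x^2/(x + 1): deg p > deg q

assert abs(f1(x)) < 1e-6          # tends to 0
assert abs(f2(x) - 1.5) < 1e-6    # tends to 3/2, the ratio of leading coefficients
assert f3(x) > 1e6                # grows without bound
```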
If the limit at infinity exists, it represents a horizontal asymptote at y = L. Polynomials do not have horizontal asymptotes; such asymptotes may however occur with rational functions. By noting that $|x - p|$ represents a distance, the definition of a limit can be extended to functions of more than one variable. In the case of a function $f : S \times T \to \mathbb{R}$ defined on $S \times T \subseteq \mathbb{R}^2$, we define the limit as follows: the limit of f as (x, y) approaches (p, q) is L, written
$$\lim_{(x, y) \to (p, q)} f(x, y) = L$$
if the following condition holds:
for every $\varepsilon > 0$, there exists a $\delta > 0$ such that for all x in S and y in T, whenever $0 < \sqrt{(x - p)^2 + (y - q)^2} < \delta$, we have $|f(x, y) - L| < \varepsilon$; or, formally:
$$(\forall \varepsilon > 0)\,(\exists \delta > 0)\,(\forall x \in S)\,(\forall y \in T)\,\left(0 < \sqrt{(x - p)^2 + (y - q)^2} < \delta \implies |f(x, y) - L| < \varepsilon\right).$$
Here $\sqrt{(x - p)^2 + (y - q)^2}$ is the Euclidean distance between (x, y) and (p, q). (This can in fact be replaced by any norm $\|(x, y) - (p, q)\|$, and be extended to any number of variables.) For example, we may say
$$\lim_{(x, y) \to (0, 0)} \frac{x^4}{x^2 + y^2} = 0$$
because for every $\varepsilon > 0$, we can take $\delta = \sqrt{\varepsilon}$
such that for all real $x \neq 0$ and real $y \neq 0$, if $0 < \sqrt{(x - 0)^2 + (y - 0)^2} < \delta$, then $|f(x, y) - 0| < \varepsilon$. Similar to the case in a single variable, the value of f at (p, q) does not matter in this definition of limit. For such a multivariable limit to exist, this definition requires the value of f to approach L along every possible path approaching (p, q). In the above example, the function
$$f(x, y) = \frac{x^4}{x^2 + y^2}$$
satisfies this condition.
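The two-variable choice $\delta = \sqrt{\varepsilon}$ can be probed at random points of the punctured δ-disk, since $x^4/(x^2 + y^2) \leq x^2 \leq r^2 < \delta^2 = \varepsilon$. A randomized illustration (not a proof; helper names and the sampling scheme are ours):

```python
# Illustrative check of delta = sqrt(eps) for f(x, y) = x^4/(x^2 + y^2)
# at random points with 0 < r < delta around the origin.
import math
import random

def f(x, y):
    return x**4 / (x**2 + y**2)

def check_delta(eps, trials=1000, seed=0):
    delta = math.sqrt(eps)
    rng = random.Random(seed)
    for _ in range(trials):
        r = rng.uniform(1e-6, 0.999) * delta  # r > 0 keeps the denominator positive
        t = rng.uniform(0.0, 2.0 * math.pi)
        x, y = r * math.cos(t), r * math.sin(t)
        if not abs(f(x, y) - 0.0) < eps:
            return False
    return True

for eps in (1.0, 1e-2, 1e-6):
    assert check_delta(eps)
```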
This can be seen by considering the polar coordinates $(x, y) = (r \cos \theta, r \sin \theta) \to (0, 0)$, which gives
$$\lim_{r \to 0} f(r \cos \theta, r \sin \theta) = \lim_{r \to 0} \frac{r^4 \cos^4 \theta}{r^2} = \lim_{r \to 0} r^2 \cos^4 \theta.$$
Here $\theta = \theta(r)$ is a function of r which controls the shape of the path along which f is approaching (p, q).
Since $\cos \theta$ is bounded between −1 and 1, by the sandwich theorem, this limit tends to 0. In contrast, the function
$$f(x, y) = \frac{xy}{x^2 + y^2}$$
does not have a limit at (0, 0). Taking the path $(x, y) = (t, 0) \to (0, 0)$, we obtain
$$\lim_{t \to 0} f(t, 0) = \lim_{t \to 0} \frac{0}{t^2} = 0,$$
while taking the path $(x, y) = (t, t) \to (0, 0)$, we obtain
$$\lim_{t \to 0} f(t, t) = \lim_{t \to 0} \frac{t^2}{t^2 + t^2} = \frac{1}{2}.$$
Since the two values do not agree, f does not tend to a single value as (x, y) approaches (0, 0). Although less commonly used, there is another type of limit for a multivariable function, known as the multiple limit. For a two-variable function, this is the double limit. Let $f : S \times T \to \mathbb{R}$ be defined on $S \times T \subseteq \mathbb{R}^2$.
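The path dependence of $xy/(x^2 + y^2)$ is easy to exhibit numerically by sampling the two paths from the text (an illustration under our own variable names):

```python
# Two paths into (0, 0) give different values for f(x, y) = xy/(x^2 + y^2),
# so the two-variable limit at the origin does not exist.
def f(x, y):
    return x * y / (x**2 + y**2)

# along the x-axis (y = 0): the values are identically 0
axis_vals = [f(t, 0.0) for t in (0.1, 0.01, 0.001)]
# along the diagonal (y = x): the values are identically 1/2
diag_vals = [f(t, t) for t in (0.1, 0.01, 0.001)]

assert all(v == 0.0 for v in axis_vals)
assert all(abs(v - 0.5) < 1e-12 for v in diag_vals)
```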
##r 2, { s \ times t \ subseteq \ mathbb { r } ^ { 2 }, } we say the double limit of f as x approaches p and y approaches q is l, writtenlimx \ rightarrowp y \ rightarrowq f ( x, y ) = l { \ lim _ { { x \ to p } \ atop { y \ to q } } f ( x, y ) = l } if the following condition holds : ( \ forall \ epsilon > 0 ) ( \ exists \ delta > 0 ) ( \ forallx \ ins ) ( \ forally \ int ) ( ( 0 < | x - p | < \ delta ) \ land ( 0 < | y - q | < \ delta ) | f ( x, y ) - l | < \ epsilon ). { ( \ forall \ varepsilon > 0 ) \, ( \ exists \ delta > 0 ) \, ( \ forall x \ in s ) \, ( \ forall y \ in t ) \, ( ( 0 < | x - p | < \ delta ) \ land ( 0 < | y - q | < \ delta ) \ implies | f ( x, y )
For such a double limit to exist, this definition requires the value of \(f\) to approach \(L\) along every possible path approaching \((p, q)\), excluding the two lines \(x = p\) and \(y = q\). As a result, the multiple limit is a weaker notion than the ordinary limit: if the ordinary limit exists and equals \(L\), then the multiple limit exists and also equals \(L\). The converse is not true: the existence of the multiple limit does not imply the existence of the ordinary limit. Consider the example
\[ f(x, y) = \begin{cases} 1 & \text{for } xy \neq 0 \\ 0 & \text{for } xy = 0 \end{cases} \]
where
\[ \lim_{\substack{x \to 0 \\ y \to 0}} f(x, y) = 1 \]
##limx \ rightarrow0 y \ rightarrow0 f ( x, y ) = 1 { \ lim _ { { x \ to 0 } \ atop { y \ to 0 } } f ( x, y ) = 1 } butlim ( x, y ) \ rightarrow ( 0, 0 ) f ( x, y ) { \ lim _ { ( x, y ) \ to ( 0, 0 ) } f ( x, y ) } does not exist. if the domain of f is restricted to ( s { p } ) \ times ( t { q } ), { ( s \ setminus \ { p \ } ) \ times ( t \ setminus \ { q \ } ), } then the two definitions of limits coincide. the concept of multiple limit can extend to the limit at infinity, in a way similar to that of a single variable function. forf : s \ timest \ rightarrowr, { f : s \ times t \ to \ mathbb { r }, } we say the double limit of f as x and y approaches infinity is l, writtenlimx \ rightarrowy \ rightarrowf ( x, y ) = l { \ lim _
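The counterexample above can be sketched numerically (an illustrative check, not a proof): off the coordinate axes the function is identically 1, while on the axes it is 0, so points arbitrarily close to the origin take both values.

```python
# Sketch of the example above: f is 1 off the coordinate axes and 0 on them.
def f(x, y):
    return 1 if x * y != 0 else 0

# Off-axis points arbitrarily near (0, 0) give 1 (the double limit),
# while points on an axis give 0, so the ordinary limit cannot exist.
print(f(1e-9, 1e-9))
print(f(1e-9, 0.0))
```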
if the following condition holds:
\[ (\forall \varepsilon > 0)\, (\exists c > 0)\, (\forall x \in S)\, (\forall y \in T)\, \big( (x > c) \land (y > c) \implies |f(x, y) - L| < \varepsilon \big). \]
We say the double limit of \(f\) as \(x\) and \(y\) approach minus infinity is \(L\), written
\[ \lim_{\substack{x \to -\infty \\ y \to -\infty}} f(x, y) = L \]
if the following condition holds:
\[ (\forall \varepsilon > 0)\, (\exists c > 0)\, (\forall x \in S)\, (\forall y \in T)\, \big( (x < -c) \land (y < -c) \implies |f(x, y) - L| < \varepsilon \big). \]

Let \(f : S \times T \to \mathbb{R}\). Instead of taking the limit as \((x, y) \to (p, q)\),
we may consider taking the limit of just one variable, say \(x \to p\), to obtain a single-variable function of \(y\), namely \(g : T \to \mathbb{R}\). In fact, this limiting process can be done in two distinct ways. The first one is called the pointwise limit. We say the pointwise limit of \(f\) as \(x\) approaches \(p\) is \(g\), denoted
\[ \lim_{x \to p} f(x, y) = g(y), \]
or
\[ \lim_{x \to p} f(x, y) = g(y) \;\; \text{pointwise}. \]
Alternatively, we may say \(f\) tends to \(g\) pointwise as \(x\) approaches \(p\), denoted
\[ f(x, y) \to g(y) \;\; \text{as} \;\; x \to p, \]
or
\[ f(x, y) \to g(y) \;\; \text{pointwise} \;\; \text{as} \;\; x \to p. \]
This limit exists if the following holds:
\[ (\forall \varepsilon > 0)\, (\forall y \in T)\, (\exists \delta > 0)\, (\forall x \in S)\, \big( 0 < |x - p| < \delta \implies |f(x, y) - g(y)| < \varepsilon \big). \]
Here, \(\delta = \delta(\varepsilon, y)\) is a function of both \(\varepsilon\) and \(y\). Each \(\delta\) is chosen for a specific point of \(y\). Hence we say the limit is pointwise in \(y\). For example,
\[ f(x, y) = \frac{x}{\cos y} \]
has a pointwise limit of the constant zero function
\[ \lim_{x \to 0} f(x, y) = 0(y) \;\; \text{pointwise} \]
because for every fixed \(y\), the limit is clearly 0.
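The dependence of \(\delta\) on \(y\) in this example can be seen numerically; the sketch below (illustrative, not from the article) shows that the same \(x\) gives a tiny value of \(x/\cos y\) for \(y = 0\) but a huge one for \(y\) near \(\pi/2\).

```python
import math

# Sketch: f(x, y) = x / cos(y) tends to 0 pointwise as x -> 0, but how small
# x must be to get within eps depends on y: near y = pi/2, cos(y) is tiny,
# so x must be much smaller. This is why delta = delta(eps, y) here.
def f(x, y):
    return x / math.cos(y)

x = 1e-3
print(abs(f(x, 0.0)))                 # small: 0.001
print(abs(f(x, math.pi / 2 - 1e-6)))  # large: roughly x / 1e-6
```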
This argument fails if \(y\) is not fixed: if \(y\) is very close to \(\pi/2\), the value of the fraction may deviate from 0. This leads to another definition of limit, namely the uniform limit. We say the uniform limit of \(f\) on \(T\) as \(x\) approaches \(p\) is \(g\), denoted
\[ \underset{\substack{x \to p \\ y \in T}}{\mathrm{unif}\,\lim}\; f(x, y) = g(y), \]
or
\[ \lim_{x \to p} f(x, y) = g(y) \;\; \text{uniformly on} \; T. \]
Alternatively, we may say \(f\) tends to \(g\) uniformly on \(T\) as \(x\) approaches \(p\), denoted
\[ f(x, y) \rightrightarrows g(y) \; \text{on} \; T \;\; \text{as} \;\; x \to p, \]
or
\[ f(x, y) \to g(y) \;\; \text{uniformly on} \; T \;\; \text{as} \;\; x \to p. \]
This limit exists if the following holds:
\[ (\forall \varepsilon > 0)\, (\exists \delta > 0)\, (\forall x \in S)\, (\forall y \in T)\, \big( 0 < |x - p| < \delta \implies |f(x, y) - g(y)| < \varepsilon \big). \]
Here, \(\delta = \delta(\varepsilon)\) is a function of only \(\varepsilon\) but not \(y\). In other words, \(\delta\) is uniformly applicable to all \(y\) in \(T\). Hence we say the limit is uniform in \(y\). For example,
\[ f(x, y) = x \cos y \]
has a uniform limit of the constant zero function
\[ \lim_{x \to 0} f(x, y) = 0(y) \;\; \text{uniformly on} \; \mathbb{R} \]
because for all real \(y\), \(\cos y\) is bounded between \(-1\) and \(1\). Hence, no matter how \(y\) behaves, we may use the sandwich theorem to show that the limit is 0.
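The uniformity in this example can also be sketched numerically (an illustrative check, not a proof): the worst case over many values of \(y\) is still bounded by \(|x|\).

```python
import math

# Sketch: f(x, y) = x * cos(y). Since |cos y| <= 1 for all real y,
# sup_y |f(x, y) - 0| <= |x|, so one delta = eps works for every y:
# convergence to the zero function is uniform.
def f(x, y):
    return x * math.cos(y)

x = 1e-4
worst = max(abs(f(x, y)) for y in [0.0, 1.0, math.pi, 100.0, -7.5])
print(worst)  # <= |x| = 1e-4 regardless of which y values are sampled
```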
Let \(f : S \times T \to \mathbb{R}\). We may consider taking the limit of just one variable, say \(x \to p\), to obtain a single-variable function of \(y\), namely \(g : T \to \mathbb{R}\), and then take the limit in the other variable, namely \(y \to q\), to get a number \(L\). Symbolically,
\[ \lim_{y \to q} \lim_{x \to p} f(x, y) = \lim_{y \to q} g(y) = L. \]
This limit is known as the iterated limit of the multivariable function.
The order of taking limits may affect the result, i.e.,
\[ \lim_{y \to q} \lim_{x \to p} f(x, y) \neq \lim_{x \to p} \lim_{y \to q} f(x, y) \]
in general. A sufficient condition for equality is given by the Moore–Osgood theorem, which requires the limit
\[ \lim_{x \to p} f(x, y) = g(y) \]
to be uniform on \(T\).

Suppose \(M\) and \(N\) are subsets of metric spaces \(A\) and \(B\), respectively, and \(f : M \to N\) is defined between \(M\) and \(N\), with \(x \in M\), \(p\) a limit point of \(M\), and \(L \in N\).
It is said that the limit of \(f\) as \(x\) approaches \(p\) is \(L\), written
\[ \lim_{x \to p} f(x) = L, \]
if the following property holds:
\[ (\forall \varepsilon > 0)\, (\exists \delta > 0)\, (\forall x \in M)\, \big( 0 < d_{A}(x, p) < \delta \implies d_{B}(f(x), L) < \varepsilon \big). \]
Again, note that \(p\) need not be in the domain of \(f\), nor does \(L\) need to be in the range of \(f\), and even if \(f(p)\) is defined it need not be equal to \(L\).
The limit in Euclidean space is a direct generalization of limits to vector-valued functions. For example, we may consider a function \(f : S \times T \to \mathbb{R}^{3}\) such that
\[ f(x, y) = (f_{1}(x, y), f_{2}(x, y), f_{3}(x, y)). \]
Then, under the usual Euclidean metric,
\[ \lim_{(x, y) \to (p, q)} f(x, y) = (L_{1}, L_{2}, L_{3}) \]
if the following holds:
\[ (\forall \varepsilon > 0)\, (\exists \delta > 0)\, (\forall x \in S)\, (\forall y \in T)\, \Big( 0 < \sqrt{(x - p)^{2} + (y - q)^{2}} < \delta \implies \sqrt{(f_{1} - L_{1})^{2} + (f_{2} - L_{2})^{2} + (f_{3} - L_{3})^{2}} < \varepsilon \Big). \]
In this example, the function concerned is a finite-dimensional vector-valued function.
In this case, the limit theorem for vector-valued functions states that if the limit of each component exists, then the limit of the vector-valued function equals the vector with each component taken in the limit:
\[ \lim_{(x, y) \to (p, q)} \bigl( f_{1}(x, y), f_{2}(x, y), f_{3}(x, y) \bigr) = \Bigl( \lim_{(x, y) \to (p, q)} f_{1}(x, y),\; \lim_{(x, y) \to (p, q)} f_{2}(x, y),\; \lim_{(x, y) \to (p, q)} f_{3}(x, y) \Bigr). \]
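The componentwise theorem can be sketched numerically with an illustrative vector-valued function (my choice, not from the article): each component is continuous, so evaluating near the point approximates the vector of component limits.

```python
import math

# Sketch of the componentwise limit theorem with the illustrative function
# f(x, y) = (x + y, x * y, cos(x)) at (p, q) = (1, 2). Each component has a
# limit there, so the vector limit is the vector of component limits.
def f(x, y):
    return (x + y, x * y, math.cos(x))

p, q = 1.0, 2.0
near = f(p + 1e-8, q + 1e-8)             # evaluate near (p, q)
component_limits = (3.0, 2.0, math.cos(1.0))
print(near, component_limits)
```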
One might also want to consider spaces other than Euclidean space. An example would be the Manhattan space. Consider \(f : S \to \mathbb{R}^{2}\) such that
\[ f(x) = (f_{1}(x), f_{2}(x)). \]
Then, under the Manhattan metric,
\[ \lim_{x \to p} f(x) = (L_{1}, L_{2}) \]
if the following holds:
\[ (\forall \varepsilon > 0)\, (\exists \delta > 0)\, (\forall x \in S)\, \big( 0 < |x - p| < \delta \implies |f_{1} - L_{1}| + |f_{2} - L_{2}| < \varepsilon \big). \]
Since this is also a finite-dimensional vector-valued function, the limit theorem stated above also applies.

Finally, we will discuss the limit in function space, which has infinite dimensions. Consider a function \(f(x, y)\) in the function space \(S \times T \to \mathbb{R}\).
We want to find out how, as \(x\) approaches \(p\), \(f(x, y)\) will tend to another function \(g(y)\), which is in the function space \(T \to \mathbb{R}\). The "closeness" in this function space may be measured under the uniform metric. Then, we will say the uniform limit of \(f\) on \(T\) as \(x\) approaches \(p\) is \(g\) and write
\[ \underset{\substack{x \to p \\ y \in T}}{\mathrm{unif}\,\lim}\; f(x, y) = g(y), \]
or
\[ \lim_{x \to p} f(x, y) = g(y) \;\; \text{uniformly on} \; T, \]
if the following holds:
\[ (\forall \varepsilon > 0)\, (\exists \delta > 0)\, (\forall x \in S)\, \Big( 0 < |x - p| < \delta \implies \sup_{y \in T} |f(x, y) - g(y)| < \varepsilon \Big). \]
In fact, one can see that this definition is equivalent to that of the uniform limit of a multivariable function introduced in the previous section.

Suppose \(X\) and \(Y\) are topological spaces with \(Y\) a Hausdorff space. Let \(p\) be a limit point of \(\Omega \subseteq X\), and \(L \in Y\).
For a function \(f : \Omega \to Y\), it is said that the limit of \(f\) as \(x\) approaches \(p\) is \(L\), written
\[ \lim_{x \to p} f(x) = L, \]
if the following property holds: for every open neighborhood \(V\) of \(L\), there exists an open neighborhood \(U\) of \(p\) such that \(f(U \cap \Omega \setminus \{p\}) \subseteq V\). This last part of the definition can also be phrased as "there exists an open punctured neighbourhood \(U\) of \(p\) such that \(f(U \cap \Omega) \subseteq V\)".
The domain of \(f\) does not need to contain \(p\). If it does, then the value of \(f\) at \(p\) is irrelevant to the definition of the limit. In particular, if the domain of \(f\) is \(X \setminus \{p\}\) (or all of \(X\)), then the limit of \(f\) as \(x \to p\) exists and is equal to \(L\) if, for all subsets \(\Omega\) of \(X\) with limit point \(p\), the limit of the restriction of \(f\) to \(\Omega\) exists and is equal to \(L\). Sometimes this criterion is used to establish the non-existence of the two-sided limit of a function on \(\mathbb{R}\) by showing that the one-sided limits either fail to exist or do not agree. Such a view is fundamental in the field of general topology, where limits and continuity at a point are defined in terms of special families of subsets, called filters, or generalized sequences known as nets.
Alternatively, the requirement that \(Y\) be a Hausdorff space can be relaxed to the assumption that \(Y\) be a general topological space, but then the limit of a function may not be unique. In particular, one can no longer talk about the limit of a function at a point, but rather a limit or the set of limits at a point. A function is continuous at a limit point \(p\) of and in its domain if and only if \(f(p)\) is the (or, in the general case, a) limit of \(f(x)\) as \(x\) tends to \(p\).

There is another type of limit of a function, namely the sequential limit. Let \(f : X \to Y\) be a mapping from a topological space \(X\) into a Hausdorff space \(Y\), \(p \in X\) a limit point of \(X\), and \(L \in Y\).
The sequential limit of \(f\) as \(x\) tends to \(p\) is \(L\) if: for every sequence \((x_{n})\) in \(X \setminus \{p\}\) that converges to \(p\), the sequence \(f(x_{n})\) converges to \(L\). If \(L\) is the limit (in the sense above) of \(f\) as \(x\) approaches \(p\), then it is a sequential limit as well; however, the converse need not hold in general. If in addition \(X\) is metrizable, then \(L\) is the sequential limit of \(f\) as \(x\) approaches \(p\) if and only if it is the limit (in the sense above) of \(f\) as \(x\) approaches \(p\).
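The sequential condition can be sketched numerically on the real line with an illustrative function (my choice, not from the article): \(f(x) = \sin(x)/x\), undefined at \(p = 0\), with sequential limit \(L = 1\).

```python
import math

# Sketch: sequential limit of f(x) = sin(x)/x at p = 0 (f is undefined at 0).
# For any sequence x_n -> 0 with x_n != 0, f(x_n) -> 1. Checking two
# different sequences is only illustrative, not a proof over all sequences.
def f(x):
    return math.sin(x) / x

seq1 = [1.0 / n for n in range(1, 2000)]        # x_n = 1/n
seq2 = [-1.0 / n**2 for n in range(1, 2000)]    # x_n = -1/n^2
print(f(seq1[-1]), f(seq2[-1]))  # both near 1
```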
For functions on the real line, one way to define the limit of a function is in terms of the limit of sequences. (This definition is usually attributed to Eduard Heine.) In this setting:
\[ \lim_{x \to a} f(x) = L \]
if, and only if, for all sequences \(x_{n}\) (with \(x_{n}\) not equal to \(a\) for all \(n\)) converging to \(a\), the sequence \(f(x_{n})\) converges to \(L\). It was shown by Sierpiński in 1916 that proving the equivalence of this definition and the definition above requires, and is equivalent to, a weak form of the axiom of choice. Note that defining what it means for a sequence \(x_{n}\) to converge to \(a\) requires the epsilon, delta method. Similarly as in the case of Weierstrass's definition, a more general Heine definition applies to functions defined on subsets of the real line. Let \(f\) be a real-valued function with the domain \(\operatorname{Dm}(f)\). Let \(a\) be the limit of a sequence of elements of \(\operatorname{Dm}(f) \setminus \{a\}\). Then the limit (in this sense) of \(f\) is \(L\) as \(x\) approaches \(a\) if:
for every sequence \(x_{n} \in \operatorname{Dm}(f) \setminus \{a\}\) (so that \(x_{n}\) is not equal to \(a\) for all \(n\)) that converges to \(a\), the sequence \(f(x_{n})\) converges to \(L\). This is the same as the definition of a sequential limit in the preceding section, obtained by regarding the subset \(\operatorname{Dm}(f)\) of \(\mathbb{R}\) as a metric space with the induced metric.

In non-standard calculus the limit of a function is defined by:
\[ \lim_{x \to a} f(x) = L \]
if and only if for all \(x \in \mathbb{R}^{*}\), \(f^{*}(x) - L\) is infinitesimal whenever \(x - a\) is infinitesimal. Here \(\mathbb{R}^{*}\) are the hyperreal numbers and \(f^{*}\) is the natural extension of \(f\) to the non-standard real numbers.
Keisler proved that such a hyperreal definition of limit reduces the quantifier complexity by two quantifiers. On the other hand, Hrbacek writes that for the definitions to be valid for all hyperreal numbers they must implicitly be grounded in the \(\varepsilon\)-\(\delta\) method, and claims that, from the pedagogical point of view, the hope that non-standard calculus could be done without \(\varepsilon\)-\(\delta\) methods cannot be realized in full. Błaszczyk et al. detail the usefulness of microcontinuity in developing a transparent definition of uniform continuity, and characterize Hrbacek's criticism as a "dubious lament".

At the 1908 International Congress of Mathematics, F. Riesz introduced an alternate way of defining limits and continuity, in a concept called "nearness". A point \(x\) is defined to be near a set \(A \subseteq \mathbb{R}\) if for every \(r > 0\) there is a point \(a \in A\) so that \(|x - a| < r\). In this setting,
\[ \lim_{x \to a} f(x) = L \]
if and only if for all \(A \subseteq \mathbb{R}\), \(L\) is near \(f(A)\) whenever \(a\) is near \(A\). Here \(f(A)\) is the set \(\{ f(x) \mid x \in A \}\). This definition can also be extended to metric and topological spaces.

The notion of the limit of a function is very closely related to the concept of continuity. A function \(f\) is said to be continuous at \(c\) if it is both defined at \(c\) and its value at \(c\) equals the limit of \(f\) as \(x\) approaches \(c\):
\[ \lim_{x \to c} f(x) = f(c). \]
We have here assumed that \(c\) is a limit point of the domain of \(f\).
If a function \(f\) is real-valued, then the limit of \(f\) at \(p\) is \(L\) if and only if both the right-handed limit and left-handed limit of \(f\) at \(p\) exist and are equal to \(L\). The function \(f\) is continuous at \(p\) if and only if the limit of \(f(x)\) as \(x\) approaches \(p\) exists and is equal to \(f(p)\). If \(f : M \to N\) is a function between metric spaces \(M\) and \(N\), then it is equivalent that \(f\) transforms every sequence in \(M\) which converges towards \(p\) into a sequence in \(N\) which converges towards \(f(p)\).

If \(N\) is a normed vector space, then the limit operation is linear in the following sense: if the limit of \(f(x)\) as \(x\) approaches \(p\) is \(L\) and the limit of \(g(x)\) as \(x\) approaches \(p\) is \(P\), then the limit of \(f(x) + g(x)\) as \(x\) approaches \(p\) is \(L + P\). If \(a\) is a scalar from the base field, then the limit of \(a f(x)\) as \(x\) approaches \(p\) is \(aL\).
If \(f\) and \(g\) are real-valued (or complex-valued) functions, then taking the limit of an operation on \(f(x)\) and \(g(x)\) (e.g., \(f + g\), \(f - g\), \(f \times g\), \(f / g\), \(f^{g}\)) under certain conditions is compatible with the operation on the limits of \(f(x)\) and \(g(x)\). This fact is often called the algebraic limit theorem. The main condition needed to apply the following rules is that the limits on the right-hand sides of the equations exist (in other words, these limits are finite values including 0). Additionally, the identity for division requires that the denominator on the right-hand side is non-zero (division by 0 is not defined), and the identity for exponentiation requires that the base is positive, or zero while the exponent is positive (finite).
\[ \begin{array}{lcl} \lim_{x \to p} (f(x) + g(x)) & = & \lim_{x \to p} f(x) + \lim_{x \to p} g(x) \\[4pt] \lim_{x \to p} (f(x) - g(x)) & = & \lim_{x \to p} f(x) - \lim_{x \to p} g(x) \\[4pt] \lim_{x \to p} (f(x) \cdot g(x)) & = & \lim_{x \to p} f(x) \cdot \lim_{x \to p} g(x) \\[4pt] \lim_{x \to p} (f(x) / g(x)) & = & \lim_{x \to p} f(x) \,/ \lim_{x \to p} g(x) \\[4pt] \lim_{x \to p} f(x)^{g(x)} & = & \lim_{x \to p} f(x)^{\lim_{x \to p} g(x)} \end{array} \]
These rules are also valid for one-sided limits,
including when \(p\) is \(\infty\) or \(-\infty\). In each rule above, when one of the limits on the right is \(\infty\) or \(-\infty\), the limit on the left may sometimes still be determined by the following rules:
\[ \begin{array}{rcl} q + \infty & = & \infty \;\; \text{if } q \neq -\infty \\[4pt] q \times \infty & = & \begin{cases} \infty & \text{if } q > 0 \\ -\infty & \text{if } q < 0 \end{cases} \\[4pt] \dfrac{q}{\infty} & = & 0 \;\; \text{if } q \neq \infty \text{ and } q \neq -\infty \\[4pt] \infty^{q} & = & \begin{cases} 0 & \text{if } q < 0 \\ \infty & \text{if } q > 0 \end{cases} \\[4pt] q^{\infty} & = & \begin{cases} 0 & \text{if } 0 < q < 1 \\ \infty & \text{if } q > 1 \end{cases} \\[4pt] q^{-\infty} & = & \begin{cases} \infty & \text{if } 0 < q < 1 \\ 0 & \text{if } q > 1 \end{cases} \end{array} \]
(See also the extended real number line.) In other cases the limit on the left may still exist, although the right-hand side, called an indeterminate form, does not allow one to determine the result. This depends on the functions \(f\) and \(g\). These indeterminate forms are:
\[ \frac{0}{0}, \quad \frac{\pm\infty}{\pm\infty}, \quad 0 \times \pm\infty, \quad \infty - \infty, \quad 0^{0}, \quad \infty^{0}, \quad 1^{\pm\infty}. \]
See further l'Hôpital's rule below
and indeterminate form.

In general, from knowing that
\[ \lim_{y \to b} f(y) = c \quad \text{and} \quad \lim_{x \to a} g(x) = b, \]
it does not follow that
\[ \lim_{x \to a} f(g(x)) = c. \]
However, this "chain rule" does hold if one of the following additional conditions holds: \(f(b) = c\) (that is, \(f\) is continuous at \(b\)), or \(g\) does not take the value \(b\) near \(a\) (that is, there exists a \(\delta > 0\) such that if \(0 < |x - a| < \delta\) then \(|g(x) - b| > 0\)). As an example of this phenomenon, consider the following function that violates both additional restrictions:
\[ f(x) = g(x) = \begin{cases} 0 & \text{if } x \neq 0 \\ 1 & \text{if } x = 0 \end{cases} \]
Since the value at \(f(0)\) is a removable discontinuity,
\[ \lim_{x \to a} f(x) = 0 \]
for all \(a\). Thus, the naïve chain rule would suggest that the limit of \(f(f(x))\) is 0. However, it is the case that
\[ f(f(x)) = \begin{cases} 1 & \text{if } x \neq 0 \\ 0 & \text{if } x = 0 \end{cases} \]
and so
\[ \lim_{x \to a} f(f(x)) = 1 \]
for all \(a\).
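The counterexample can be checked directly in code; a minimal sketch of the function defined above:

```python
# Sketch of the counterexample above: f = g is 0 away from 0 and 1 at 0.
def f(x):
    return 0 if x != 0 else 1

# lim_{x->a} f(x) = 0 for every a, yet f(f(x)) = 1 for every x != 0,
# so lim_{x->a} f(f(x)) = 1, not 0 as the naive chain rule would suggest.
print(f(0.5), f(f(0.5)))  # 0 1
```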
For \(n\) a nonnegative integer and constants \(a_{1}, a_{2}, a_{3}, \ldots, a_{n}\) and \(b_{1}, b_{2}, b_{3}, \ldots, b_{n}\),
\[ \lim_{x \to \infty} \frac{a_{1} x^{n} + a_{2} x^{n-1} + a_{3} x^{n-2} + \dots + a_{n}}{b_{1} x^{n} + b_{2} x^{n-1} + b_{3} x^{n-2} + \dots + b_{n}} = \frac{a_{1}}{b_{1}}. \]
This can be proven by dividing both the numerator and denominator by \(x^{n}\). If the numerator is a polynomial of higher degree, the limit does not exist. If the denominator is of higher degree, the limit is 0.
\[ \begin{array}{lcl} \lim_{x \to 0} \dfrac{\sin x}{x} & = & 1 \\[4pt] \lim_{x \to 0} \dfrac{1 - \cos x}{x} & = & 0 \end{array} \]
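The rational-function rule above can be sketched numerically with illustrative polynomials of my choosing (not from the article): for equal degrees, the value at large \(x\) approaches the ratio of leading coefficients.

```python
# Numeric sketch: (3x^2 + 5x + 7) / (4x^2 - x + 2) -> 3/4 as x -> infinity,
# the ratio of the leading coefficients a1/b1 = 3/4.
def ratio(x):
    return (3 * x**2 + 5 * x + 7) / (4 * x**2 - x + 2)

print(ratio(1e6))  # close to 0.75
```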
\[ \begin{array}{lcl} \lim_{x \to 0} (1 + x)^{\frac{1}{x}} & = & \lim_{r \to \infty} \left(1 + \dfrac{1}{r}\right)^{r} = e \\[4pt] \lim_{x \to 0} \dfrac{e^{x} - 1}{x} & = & 1 \\[4pt] \lim_{x \to 0} \dfrac{e^{ax} - 1}{bx} & = & \dfrac{a}{b} \\[4pt] \lim_{x \to 0} \dfrac{c^{ax} - 1}{bx} & = & \dfrac{a}{b} \ln c \\[4pt] \lim_{x \to 0^{+}} x^{x} & = & 1 \end{array} \]
https://en.wikipedia.org/wiki/Limit_of_a_function
\lim_{x \to 0} \frac{\ln(1 + x)}{x} = 1 \qquad \lim_{x \to 0} \frac{\ln(1 + ax)}{bx} = \frac{a}{b} \qquad \lim_{x \to 0} \frac{\log_c(1 + ax)}{bx} = \frac{a}{b \ln c}
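The exponential and logarithmic limits above can also be checked numerically. In this sketch a, b, c, and the sample point x are my own choices; math.expm1 and math.log1p are used instead of exp(x) - 1 and log(1 + x) to avoid cancellation error near 0.

```python
import math

a, b, c = 3.0, 2.0, 10.0   # sample constants, not from the article
x = 1e-8                   # small sample point near 0

exp_quot = math.expm1(a * x) / (b * x)     # approaches a/b = 1.5
pow_quot = (c ** (a * x) - 1) / (b * x)    # approaches (a/b) * ln c
log_quot = math.log1p(a * x) / (b * x)     # approaches a/b = 1.5
```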
This rule uses derivatives to find limits of indeterminate forms 0/0 or \pm\infty/\infty, and applies only to such cases. Other indeterminate forms may be manipulated into this form. Given two functions f(x) and g(x), defined over an open interval I containing the desired limit point c, if:

\lim_{x \to c} f(x) = \lim_{x \to c} g(x) = 0,
or

\lim_{x \to c} f(x) = \pm\infty \text{ and } \lim_{x \to c} g(x) = \pm\infty,

and f and g are differentiable over I \setminus \{c\}, and g'(x) \neq 0 for all x \in I \setminus \{c\}, and \lim_{x \to c} \frac{f'(x)}{g'(x)} exists, then:
\lim_{x \to c} \frac{f(x)}{g(x)} = \lim_{x \to c} \frac{f'(x)}{g'(x)}.

Normally, the first condition is the most important one. For example:

\lim_{x \to 0} \frac{\sin(2x)}{\sin(3x)} = \lim_{x \to 0} \frac{2\cos(2x)}{3\cos(3x)} = \frac{2 \cdot 1}{3 \cdot 1} = \frac{2}{3}.
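The worked example can be confirmed numerically: near 0, both the original quotient and the quotient of derivatives are close to 2/3. The sample point x = 1e-6 is arbitrary.

```python
import math

# l'Hopital example: sin(2x)/sin(3x) and 2cos(2x)/(3cos(3x)) both
# approach 2/3 as x -> 0.
x = 1e-6
original = math.sin(2 * x) / math.sin(3 * x)
via_derivatives = 2 * math.cos(2 * x) / (3 * math.cos(3 * x))
```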
Specifying an infinite bound on a summation or integral is a common shorthand for specifying a limit. A short way to write the limit \lim_{n \to \infty} \sum_{i=s}^{n} f(i) is \sum_{i=s}^{\infty} f(i). An important example of limits of sums such as these is a series. A short way to write the limit \lim_{x \to \infty} \int_a^x f(t)\,dt is \int_a^{\infty} f(t)\,dt.
A short way to write the limit \lim_{x \to -\infty} \int_x^b f(t)\,dt is \int_{-\infty}^{b} f(t)\,dt.
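The summation shorthand abbreviates a limit of partial sums. A sketch with the geometric series \sum_{i=0}^{\infty} 1/2^i = 2 (a standard example of my own choosing, not taken from the text above):

```python
# Partial sums of the geometric series 1/2^i approach the limit 2,
# which is what the shorthand sum_{i=0}^inf 1/2^i denotes.
partial_sums = []
total = 0.0
for i in range(60):
    total += 1.0 / 2 ** i
    partial_sums.append(total)
```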
Continuity or continuous may refer to:

- Continuity (mathematics), the opposing concept to discreteness; common examples include:
  - continuous probability distribution or random variable, in probability and statistics
  - continuous game, a generalization of games used in game theory
  - law of continuity, a heuristic principle of Gottfried Leibniz
  - continuous function, in particular:
    - continuity (topology), a generalization to functions between topological spaces
    - Scott continuity, for functions between posets
    - continuity (set theory), for functions between ordinals
    - continuity (category theory), for functors
    - graph continuity, for payoff functions in game theory
- Continuity theorem, which may refer to one of two results:
  - Lévy's continuity theorem, on random variables
  - Kolmogorov continuity theorem, on stochastic processes
- In geometry:
  - parametric continuity, for parametrised curves
  - geometric continuity, a concept primarily applied to the conic sections and related shapes
- In probability theory: continuous stochastic process
- Continuity equations, applicable to conservation of mass, energy, momentum, electric charge and other conserved quantities
- Continuity test, for an unbroken electrical path in an electronic circuit or connector
https://en.wikipedia.org/wiki/Continuity
- In materials science: a colloidal system consists of a dispersed phase evenly intermixed with a continuous phase
- Continuous wave, an electromagnetic wave of constant amplitude and frequency
- Continuity (broadcasting), messages played by broadcasters between programs
- Continuity editing, a form of film editing that combines closely related shots into a sequence highlighting plot points or consistencies
- Continuity (fiction), consistency of plot elements, such as characterization, location, and costuming, within a work of fiction (this is a mass noun)
- Continuity (setting), one of several similar but distinct fictional universes in a broad franchise of related works (this is a count noun)
- "Continuity" or continuity script, the precursor to a film screenplay
- Continuity (Apple), a set of features introduced by Apple
- Continuity of operations (disambiguation)
- Continuous and progressive aspects, in linguistics
- Business continuity
- Health care continuity
- Continuity in architecture (part of complementary architecture)
In mathematics, the derivative is a fundamental tool that quantifies the sensitivity to change of a function's output with respect to its input. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value. For this reason, the derivative is often described as the instantaneous rate of change, the ratio of the instantaneous change in the dependent variable to that of the independent variable. The process of finding a derivative is called differentiation.

There are multiple different notations for differentiation. Leibniz notation, named after Gottfried Wilhelm Leibniz, is represented as the ratio of two differentials, whereas prime notation is written by adding a prime mark. Higher-order notations represent repeated differentiation; they are usually denoted in Leibniz notation by adding superscripts to the differentials, and in prime notation by adding additional prime marks. The higher-order derivatives can be applied in physics; for example, while the first derivative of the position of a moving object with respect to time is the object's velocity, how the position changes as time advances, the second derivative is the
https://en.wikipedia.org/wiki/Derivative
object's acceleration, how the velocity changes as time advances.

Derivatives can be generalized to functions of several real variables. In this case, the derivative is reinterpreted as a linear transformation whose graph is (after an appropriate translation) the best linear approximation to the graph of the original function. The Jacobian matrix is the matrix that represents this linear transformation with respect to the basis given by the choice of independent and dependent variables. It can be calculated in terms of the partial derivatives with respect to the independent variables. For a real-valued function of several variables, the Jacobian matrix reduces to the gradient vector.

A function of a real variable f(x) is differentiable at a point a of its domain if its domain contains an open interval containing a, and the limit

L = \lim_{h \to 0} \frac{f(a + h) - f(a)}{h}
exists. This means that, for every positive real number \varepsilon, there exists a positive real number \delta such that, for every h such that |h| < \delta and h \neq 0, f(a + h) is defined, and

\left| L - \frac{f(a + h) - f(a)}{h} \right| < \varepsilon,

where the vertical bars denote the absolute value. This is an example of the (\varepsilon, \delta)-definition of limit. If the function f is differentiable at a, that is, if the limit L exists, then this limit is called the derivative of f at a.
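The limit definition can be made concrete numerically: as h shrinks, the difference quotient settles toward a single value L. Here f = sin and a = 1 are sample choices of mine, for which the limit is cos(1).

```python
import math

# Difference quotients (f(a+h) - f(a))/h for f = sin at a = 1, with
# shrinking h; they approach the derivative cos(1).
a = 1.0
hs = (1e-2, 1e-4, 1e-6)
quotients = [(math.sin(a + h) - math.sin(a)) / h for h in hs]
errors = [abs(q - math.cos(a)) for q in quotients]   # shrink with h
```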
Multiple notations for the derivative exist. The derivative of f at a can be denoted f'(a), read as "f prime of a"; or it can be denoted \frac{df}{dx}(a), read as "the derivative of f with respect to x at a" or "df by (or over) dx at a". See Notation below. If f is a function that has a derivative at every point in its domain, then a function can be defined by mapping every point x to the value of the derivative of f at x. This function is written f' and is called the derivative function or the derivative of f. The function f sometimes has a derivative at most, but not all, points of its domain.
The function whose value at a equals f'(a) whenever f'(a) is defined and elsewhere is undefined is also called the derivative of f. It is still a function, but its domain may be smaller than the domain of f. For example, let f be the squaring function: f(x) = x^2. Then the quotient in the definition of the derivative is

\frac{f(a + h) - f(a)}{h} = \frac{(a + h)^2 - a^2}{h} = \frac{a^2 + 2ah + h^2 - a^2}{h} = 2a + h.
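The algebra above says the difference quotient of the squaring function equals exactly 2a + h, so letting h shrink recovers the derivative 2a. A numeric check at the sample input a = 3 (my choice):

```python
# Difference quotient of f(x) = x^2; equals 2a + h exactly (up to
# floating-point rounding), and approaches 2a as h -> 0.
def diff_quot(a, h):
    return ((a + h) ** 2 - a ** 2) / h

a = 3.0
for h in (0.1, 0.01, 0.001):
    assert abs(diff_quot(a, h) - (2 * a + h)) < 1e-9   # matches 2a + h
approx_derivative = diff_quot(a, 1e-8)                 # close to 2a = 6
```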
The division in the last step is valid as long as h \neq 0. The closer h is to 0, the closer this expression becomes to the value 2a. The limit exists, and for every input a the limit is 2a. So the derivative of the squaring function is the doubling function: f'(x) = 2x.

The ratio in the definition of the derivative is the slope of the line through two points on the graph of the function f, specifically the points (a, f(a)) and (a + h, f(a + h)). As h is made smaller, these points grow closer together, and the slope of this line approaches the limiting value, the slope of the tangent to
the graph of f at a. In other words, the derivative is the slope of the tangent.

One way to think of the derivative \frac{df}{dx}(a) is as the ratio of an infinitesimal change in the output of the function f to an infinitesimal change in its input. In order to make this intuition rigorous, a system of rules for manipulating infinitesimal quantities is required. The system of hyperreal numbers is a way of treating infinite and infinitesimal quantities. The hyperreals are an extension of the real numbers that contain numbers greater than anything of the form 1 + 1 + \cdots + 1 for any finite number of terms. Such numbers are infinite, and their reciprocals are infinitesimals. The application of hyperreal numbers to the foundations of calculus is called nonstandard analysis. This provides a way to define the basic concepts of calculus such as the derivative and integral in terms
of infinitesimals, thereby giving a precise meaning to the d in the Leibniz notation. Thus, the derivative of f(x) becomes

f'(x) = \operatorname{st}\left( \frac{f(x + dx) - f(x)}{dx} \right)

for an arbitrary infinitesimal dx, where \operatorname{st} denotes the standard part function, which "rounds off" each finite hyperreal to the nearest real. Taking the squaring function f(x) = x^2 as an example again,
\begin{aligned} f'(x) &= \operatorname{st}\left( \frac{x^2 + 2x \cdot dx + (dx)^2 - x^2}{dx} \right) \\ &= \operatorname{st}\left( \frac{2x \cdot dx + (dx)^2}{dx} \right) \\ &= \operatorname{st}\left( \frac{2x \cdot dx}{dx} + \frac{(dx)^2}{dx} \right) \\ &= \operatorname{st}(2x + dx) \\ &= 2x. \end{aligned}
If f is differentiable at a, then f must also be continuous at a. As an example, choose a point a and let f be the step function that returns the value 1 for all x less than a, and returns a different value 10 for all x greater than or equal to a. The function f cannot have a derivative at a. If h is negative, then a + h is on the low part of the step, so the secant line from a to a + h is very steep; as h tends to zero, the slope tends to infinity. If h is positive, then a + h is on the high part of the step, so the secant line from a to a + h has slope zero. Consequently, the secant lines do not approach any single slope, so the limit of the difference
quotient does not exist. However, even if a function is continuous at a point, it may not be differentiable there. For example, the absolute value function given by f(x) = |x| is continuous at x = 0, but it is not differentiable there. If h is positive, then the slope of the secant line from 0 to h is one; if h is negative, then the slope of the secant line from 0 to h is -1. This can be seen graphically as a "kink" or a "cusp" in the graph at x = 0. Even a function with a smooth graph is not differentiable at a point where its tangent is vertical: for instance, the function given by f(x) = x^{1/3} is not differentiable at x = 0.
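The absolute-value example can be seen numerically: secant slopes from 0 are +1 for every positive h and -1 for every negative h, so the difference quotient has no single limit at 0.

```python
# Secant slopes of f(x) = |x| through the origin: the one-sided slopes
# disagree, so f is not differentiable at 0 even though it is continuous.
def secant_slope(h):
    return (abs(0.0 + h) - abs(0.0)) / h

right = [secant_slope(h) for h in (0.1, 1e-3, 1e-6)]    # all +1
left = [secant_slope(-h) for h in (0.1, 1e-3, 1e-6)]    # all -1
```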
In summary, a function that has a derivative is continuous, but there are continuous functions that do not have a derivative. Most functions that occur in practice have derivatives at all points or at almost every point. Early in the history of calculus, many mathematicians assumed that a continuous function was differentiable at most points. Under mild conditions (for example, if the function is a monotone or a Lipschitz function), this is true. However, in 1872 Weierstrass found the first example of a function that is continuous everywhere but differentiable nowhere. This example is now known as the Weierstrass function. In 1931, Stefan Banach proved that the set of functions that have a derivative at some point is a meager set in the space of all continuous functions. Informally, this means that hardly any random continuous functions have a derivative at even one point.

One common way of writing the derivative of a function is Leibniz notation, introduced by Gottfried Wilhelm Leibniz in 1675, which denotes a derivative as the quotient of two differentials, such as
dy and dx. It is still commonly used when the equation y = f(x) is viewed as a functional relationship between dependent and independent variables. The first derivative is denoted by \frac{dy}{dx}, read as "the derivative of y with respect to x". This derivative can alternately be treated as the application of a differential operator to a function, \frac{dy}{dx} = \frac{d}{dx} f(x). Higher derivatives are expressed using the notation \frac{d^n y}{dx^n} for the n-th derivative of y = f(x). These are abbreviations for multiple
applications of the derivative operator; for example,

\frac{d^2 y}{dx^2} = \frac{d}{dx}\Bigl( \frac{d}{dx} f(x) \Bigr).

Unlike some alternatives, Leibniz notation involves explicit specification of the variable for differentiation, in the denominator, which removes ambiguity when working with multiple interrelated quantities. The derivative of a composed function can be expressed using the chain rule: if u = g(x) and y = f(g(x)) then

\frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx}.
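The chain rule can be sketched numerically. The composition y = sin(u) with u = x^2 is my own sample choice: the direct difference quotient of sin(x^2) should agree with the product (dy/du)(du/dx) = cos(x^2) * 2x.

```python
import math

# Chain rule check for y = sin(x^2): compare a direct difference-quotient
# estimate of dy/dx with the chain-rule product cos(x^2) * 2x.
x, h = 0.7, 1e-6
diff_quot = (math.sin((x + h) ** 2) - math.sin(x ** 2)) / h   # direct dy/dx
chain_rule = math.cos(x ** 2) * (2 * x)                       # (dy/du)(du/dx)
```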