In mathematics, the least-upper-bound property (sometimes called completeness, supremum property or l.u.b. property) [1] is a fundamental property of the real numbers. More generally, a partially ordered set X has the least-upper-bound property if every non-empty subset of X with an upper bound has a least upper bound (supremum) in X. Not every (partially) ordered set has the least upper bound property. For example, the set $\mathbb{Q}$ of all rational numbers with its natural order does not have the least upper bound property.
The least-upper-bound property is one form of the completeness axiom for the real numbers, and is sometimes referred to as Dedekind completeness . [ 2 ] It can be used to prove many of the fundamental results of real analysis , such as the intermediate value theorem , the Bolzano–Weierstrass theorem , the extreme value theorem , and the Heine–Borel theorem . It is usually taken as an axiom in synthetic constructions of the real numbers , and it is also intimately related to the construction of the real numbers using Dedekind cuts .
In order theory , this property can be generalized to a notion of completeness for any partially ordered set . A linearly ordered set that is dense and has the least upper bound property is called a linear continuum .
Let S be a non-empty set of real numbers. A real number x is called an upper bound for S if x ≥ s for all s ∈ S. A real number x is the least upper bound (or supremum) of S if x is an upper bound for S and x ≤ y for every upper bound y of S.
The least-upper-bound property states that any non-empty set of real numbers that has an upper bound must have a least upper bound in real numbers .
More generally, one may define upper bound and least upper bound for any subset of a partially ordered set X , with “real number” replaced by “element of X ”. In this case, we say that X has the least-upper-bound property if every non-empty subset of X with an upper bound has a least upper bound in X .
For example, the set Q of rational numbers does not have the least-upper-bound property under the usual order. For instance, the set
$$S = \{x \in \mathbb{Q} : x^2 < 2\}$$
has an upper bound in Q , but does not have a least upper bound in Q (since the square root of two is irrational ). The construction of the real numbers using Dedekind cuts takes advantage of this failure by defining the irrational numbers as the least upper bounds of certain subsets of the rationals.
The least-upper-bound property is equivalent to other forms of the completeness axiom , such as the convergence of Cauchy sequences or the nested intervals theorem . The logical status of the property depends on the construction of the real numbers used: in the synthetic approach , the property is usually taken as an axiom for the real numbers (see least upper bound axiom ); in a constructive approach, the property must be proved as a theorem , either directly from the construction or as a consequence of some other form of completeness.
It is possible to prove the least-upper-bound property using the assumption that every Cauchy sequence of real numbers converges. Let S be a nonempty set of real numbers. If S has exactly one element, then its only element is a least upper bound. So consider S with more than one element, and suppose that S has an upper bound $B_1$. Since S is nonempty and has more than one element, there exists a real number $A_1$ that is not an upper bound for S. Define sequences $A_1, A_2, A_3, \ldots$ and $B_1, B_2, B_3, \ldots$ recursively as follows: check whether $(A_n + B_n)/2$ is an upper bound for S. If it is, set $A_{n+1} = A_n$ and $B_{n+1} = (A_n + B_n)/2$. Otherwise there is an element $s \in S$ with $s > (A_n + B_n)/2$; set $A_{n+1} = s$ and $B_{n+1} = B_n$.
Then $A_1 \le A_2 \le A_3 \le \cdots \le B_3 \le B_2 \le B_1$ and $|A_n - B_n| \to 0$ as $n \to \infty$. It follows that both sequences are Cauchy and have the same limit L, which must be the least upper bound for S.
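The bisection in this proof is directly executable. Below is a minimal Python sketch of the simpler midpoint variant (taking $A_{n+1}$ to be the midpoint itself rather than an element of S beyond it); the helper `is_upper_bound` and the example set are illustrative assumptions, not part of the original argument.

```python
def bisect_sup(is_upper_bound, lo, hi, steps=50):
    """Approximate sup(S), given lo not an upper bound of S and hi an upper bound.

    Follows the A_n (not upper bounds) / B_n (upper bounds) construction."""
    a, b = lo, hi
    for _ in range(steps):
        mid = (a + b) / 2
        if is_upper_bound(mid):
            b = mid   # B_{n+1} = midpoint, A_{n+1} = A_n
        else:
            a = mid   # A_{n+1} = midpoint, B_{n+1} = B_n
    return (a + b) / 2  # common limit L of the two Cauchy sequences

# S = {x in Q : x^2 < 2}: a real x >= 0 is an upper bound exactly when x*x >= 2.
sup_S = bisect_sup(lambda x: x >= 0 and x * x >= 2, lo=0.0, hi=2.0)
print(sup_S)  # ~1.4142135..., i.e. sqrt(2) -- not a rational number
```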
The least-upper-bound property of R can be used to prove many of the main foundational theorems in real analysis .
Let f : [a, b] → R be a continuous function, and suppose that f(a) < 0 and f(b) > 0. In this case, the intermediate value theorem states that f must have a root in the interval [a, b]. This theorem can be proved by considering the set
$$S = \{\, s \in [a, b] : f(x) < 0 \text{ for all } x \in [a, s] \,\}.$$
That is, S is the initial segment of [ a , b ] that takes negative values under f . Then b is an upper bound for S , and the least upper bound must be a root of f .
The Bolzano–Weierstrass theorem for R states that every sequence $x_n$ of real numbers in a closed interval [a, b] must have a convergent subsequence. This theorem can be proved by considering the set
$$S = \{\, s \in [a, b] : s \le x_n \text{ for infinitely many } n \,\}.$$
Clearly $a \in S$, and S is not empty.
In addition, b is an upper bound for S , so S has a least upper bound c .
Then c must be a limit point of the sequence x n , and it follows that x n has a subsequence that converges to c .
Let f : [a, b] → R be a continuous function and let M = sup f([a, b]), where M = ∞ if f([a, b]) has no upper bound. The extreme value theorem states that M is finite and f(c) = M for some c ∈ [a, b]. This can be proved by considering the set
$$S = \{\, s \in [a, b] : \sup f([s, b]) = M \,\}.$$
By definition of M , a ∈ S , and by its own definition, S is bounded by b .
If c is the least upper bound of S , then it follows from continuity that f ( c ) = M .
Let [a, b] be a closed interval in R, and let $\{U_\alpha\}$ be a collection of open sets that covers [a, b]. Then the Heine–Borel theorem states that some finite subcollection of $\{U_\alpha\}$ covers [a, b] as well. This statement can be proved by considering the set
$$S = \{\, s \in [a, b] : [a, s] \text{ can be covered by finitely many } U_\alpha \,\}.$$
The set S obviously contains a , and is bounded by b by construction.
By the least-upper-bound property, S has a least upper bound c ∈ [ a , b ] .
Hence, c is itself an element of some open set $U_\alpha$. If c < b, it follows that $[a, c + \delta]$ can be covered by finitely many $U_\alpha$ for some sufficiently small δ > 0, so that $c + \delta \in S$ and c is not an upper bound for S, a contradiction. Consequently, c = b, and [a, b] can be covered by finitely many of the $U_\alpha$.
The importance of the least-upper-bound property was first recognized by Bernard Bolzano in his 1817 paper Rein analytischer Beweis des Lehrsatzes dass zwischen je zwey Werthen, die ein entgegengesetztes Resultat gewähren, wenigstens eine reelle Wurzel der Gleichung liege . [ 3 ] | https://en.wikipedia.org/wiki/Least_upper_bound_axiom |
Leave No Trace , sometimes written as LNT , is a set of ethics promoting conservation of the outdoors . Originating in the mid-20th century, the concept started as a movement in the United States in response to ecological damage caused by wilderness recreation. [ 1 ] In 1994, the non-profit Leave No Trace Center for Outdoor Ethics was formed to create educational resources around LNT, and organized the framework of LNT into seven principles. [ 2 ]
The idea behind the LNT principles is to leave the wilderness unchanged by human presence.
By the 1960s and 1970s, outdoor recreation was becoming more popular, following the creation of equipment such as synthetic tents and sleeping pads. A commercial interest in the outdoors increased the number of visitors to national parks, with the National Park Service seeing a five-fold increase between 1950 and 1970, from 33 million to 172 million. [ 3 ] [ 4 ] Articles were written about the wild being “loved to death,” problems with overcrowding and ecological damage, and the need for management. [ 5 ] To solve this, regulations were imposed, including limits on group sizes and where camping was allowed. This was met negatively, with people writing that it took the joy and spontaneity out of wilderness recreation. [ 6 ] [ 7 ]
The focus was shifted towards education, with the National Park Service (NPS), United States Forest Service (USFS), and the Bureau of Land Management (BLM) training Wilderness Informational Specialists to teach visitors about minimal impact camping. In 1987, the three departments cooperatively developed a pamphlet titled "Leave No Trace Land Ethics". [ 8 ]
At the same time, there was a cultural shift in outdoor ethics from woodcraft , where travelers prided themselves on their ability to use available natural resources , to having a minimal impact on the environment by traveling through wilderness as visitors. [ 3 ] Groups such as the Sierra Club , the National Outdoor Leadership School (NOLS), and the Boy Scouts of America were advocating minimum impact camping techniques, and companies like REI and The North Face began sharing the movement.
In 1990, the national education program of Leave No Trace was developed by the USFS in conjunction with NOLS, alongside Smokey Bear , Woodsy Owl , and programs like Tread Lightly! geared towards motorized recreation. The Bureau of Land Management joined the program in 1993 followed by the National Park Service and U.S. Fish and Wildlife Service in 1994. [ 8 ]
The number of LNT principles varied widely during the 1990s, starting from 75 and dropping to 6 as more people had input and principles were condensed. [9] However, by 1999, the list was finalized as seven principles and has remained unchanged. [10]
Since 1994, the Leave No Trace program has been managed by the Leave No Trace Center for Outdoor Ethics, a 501(c)(3) non-profit organization, dedicated to the responsible enjoyment and active stewardship of the outdoors worldwide. [ 11 ] Leave No Trace works to build awareness, appreciation and respect for wildlands through education, research, volunteerism and partnerships. The center also has a youth education initiative, Leave No Trace for Every Kid, which emphasizes asset development in youth through the lens of outdoor stewardship.
The center has partnerships with the National Park Service , the U.S. Forest Service , the Bureau of Land Management , the U.S. Fish and Wildlife Service , [ 12 ] US Army Corps of Engineers , and other partners such as colleges, universities, guide services, small businesses, non-profits and youth-serving organizations such as the Boy Scouts of America and the American Camp Association .
Over 20 percent of the organization's 2019 income went to three members of their board of directors. [ 13 ]
There are also formal Leave No Trace organizations in Australia, Canada, Ireland and New Zealand. [ 14 ]
While Leave No Trace is a widely accepted conservationist ethic, there has been some criticism. In 2002, environmental historian James Morton Turner argued that Leave No Trace focused "largely on protecting wilderness" rather than tackling questions such as the "economy, consumerism , and the environment", and that it "helped ally the modern backpacker with the wilderness recreation industry" by encouraging backpackers to purchase products advertising Leave No Trace, or asking people to bring a petroleum stove instead of building a natural campfire. [ 15 ]
In 2009, Gregory Simon and Peter Alagona argued that there should be a move beyond Leave No Trace, and that the ethic "disguises much about human relationships with non-human nature" by making it seem that parks and wilderness areas are "pristine nature" which "erases their human histories, and prevents people from understanding how these landscapes have developed over time through complex human–environment interactions". They posit that there should be a new environmental ethic "that transforms the critical scholarship of social science into a critical practice of wilderness recreation, addresses the global economic system...and reinvents wilderness recreation as a more collaborative, participatory, productive, democratic, and radical form of political action". They also write about how "the LNT logo becomes both a corporate brand and an official stamp of approval" in outdoor recreation stores like REI . [ 16 ]
The authors articulate their new environmental ethic as expanding LNT, not rejecting it all together, and share the seven principles of what they call 'Beyond Leave No Trace': [ 16 ]
In 2012, in response to critiques of their 2009 article, Simon and Alagona wrote that they "remain steadfast in our endorsement of LNT’s value and potential" but that they believe that "this simple ethic is not enough in a world of global capital circulation." They write that Leave No Trace "could not exist in its current form without a plethora of consumer products;" that "the use of such products does not erase environmental impacts;" and that LNT "systematically obscures these impacts, displacements, and connections by encouraging the false belief that it is possible to 'leave no trace'". [ 17 ]
Other critics of Leave No Trace have argued that it is impractical, displaces environmental impacts to other locations, "obscures connections between the uses of outdoor products and their production and disposal impacts" and have questioned how much the ethic affects everyday environmental behavior. [ 18 ] [ 19 ] | https://en.wikipedia.org/wiki/Leave_No_Trace |
Leave the gate as you found it (or leave all gates as found ) is an important rule of courtesy in rural areas throughout the world. If a gate is found open, it should be left open, and if it is closed, it should be left closed. If a closed gate absolutely must be traversed, it should be closed again afterwards. It applies to visitors travelling onto or across farms , ranches , and stations .
In low-rainfall areas, closing gates can cut livestock off from water supplies. For example, most of the land used for grazing in Australia has no natural water supplies, so drinking water for the stock must be supplied by the farmer or landowner, often by using a windmill to pump groundwater . Even visitors who know how a stock water system works may be unaware of breakdowns. During hot weather, cattle require large quantities of water to drink and can die in less than a day if they do not get it. Sheep need less water and can survive longer without it, but will die if cut off from water for several hot days. [ 1 ]
In all agricultural areas, farmers need to keep groups of livestock separate, for reasons including breeding for disease resistance and increased production, pest control, and controlling when ewes deliver their lambs . Unwanted mingling of flocks or herds can deprive a farmer of significant income. [ 2 ]
The original versions of the United Kingdom 's Country Code advised visitors to always close gates. The revised Countryside Code now suggests that gates should be left as found. | https://en.wikipedia.org/wiki/Leave_the_gate_as_you_found_it |
In chemistry , a leaving group is defined by the IUPAC as an atom or group of atoms that detaches from the main or residual part of a substrate during a reaction or elementary step of a reaction. [ 1 ] However, in common usage, the term is often limited to a fragment that departs with a pair of electrons in heterolytic bond cleavage . [ 2 ] In this usage, a leaving group is a less formal but more commonly used synonym of the term nucleofuge . In this context, leaving groups are generally anions or neutral species, departing from neutral or cationic substrates, respectively, though in rare cases, cations leaving from a dicationic substrate are also known. [ 3 ]
A species' ability to serve as a leaving group depends on its ability to stabilize the additional electron density that results from bond heterolysis. Common anionic leaving groups are halides such as Cl − , Br − and I − , and sulfonate esters such as tosylate ( TsO − ), while water ( H 2 O ), alcohols ( R−OH ), and amines ( R 3 N ) are common neutral leaving groups.
In the broader IUPAC definition, the term also includes groups that depart without an electron pair in a heterolytic cleavage (groups specifically known as electrofuges), like H+ or SiR3+, which commonly depart in electrophilic aromatic substitution reactions. [1][4] Similarly, species of high thermodynamic stability like nitrogen (N2) or carbon dioxide (CO2) commonly act as leaving groups in homolytic bond cleavage reactions of radical species. A relatively uncommon term that serves as the antonym of leaving group is entering group (i.e., a species that reacts with and forms a bond with a substrate or a substrate-derived intermediate).
In this article, the discussions below mainly pertain to leaving groups that act as nucleofuges.
The physical manifestation of leaving group ability is the reaction rate. Good leaving groups give fast reactions. By transition state theory , this implies that reactions involving good leaving groups have low activation barriers leading to relatively stable transition states.
It is helpful to consider the concept of leaving group ability in the case of the first step of an SN1/E1 reaction with an anionic leaving group (ionization), while keeping in mind that this concept can be generalized to all reactions that involve leaving groups. Because the leaving group bears a larger negative charge in the transition state (and products) than in the starting material, a good leaving group must be able to stabilize this negative charge, i.e. form stable anions. A good measure of anion stability is the pKa of an anion's conjugate acid (pKaH), and leaving group ability indeed generally follows this trend, with a lower pKaH correlating well with better leaving group ability.
The correlation between pKaH and leaving group ability, however, is not perfect. Leaving group ability represents the difference in energy between starting materials and a transition state (ΔG‡), and differences in leaving group ability are reflected in changes in this quantity (ΔΔG‡). The pKaH, however, represents the difference in energy between starting materials and products (ΔG°), with differences in acidity reflected in changes in this quantity (ΔΔG°). The ability to correlate these energy differences is justified by the Hammond postulate and the Bell–Evans–Polanyi principle. Also, the starting materials in these cases are different. In the case of the acid dissociation constant, the "leaving group" is bound to a proton in the starting material, while in the case of leaving group ability, the leaving group is bound to (usually) carbon. It is with these important caveats in mind that one must consider pKaH to be reflective of leaving group ability. Nevertheless, one can generally examine acid dissociation constants to qualitatively predict or rationalize rate or reactivity trends relating to variation of the leaving group. Consistent with this picture, strong bases such as OH−, OR− and NR2− tend to make poor leaving groups, due to their inability to stabilize a negative charge.
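As a rough illustration of the pKaH heuristic, the snippet below sorts a few common groups by approximate textbook conjugate-acid pKa values; the exact numbers vary by source and are assumptions here, but the ordering matches the qualitative trend described above.

```python
# Approximate conjugate-acid pKa values (textbook figures; exact numbers vary by source).
# Lower pKaH -> more stable anion -> generally a better leaving group.
pKaH = {
    "I-   (HI)":   -10,
    "Br-  (HBr)":   -9,
    "Cl-  (HCl)":   -7,
    "TsO- (TsOH)":  -2.8,
    "H2O  (H3O+)":  -1.7,
    "F-   (HF)":     3.2,
    "AcO- (AcOH)":   4.8,
    "HO-  (H2O)":   15.7,
    "NH2- (NH3)":   38,
}
for group, pka in sorted(pKaH.items(), key=lambda kv: kv[1]):
    print(f"{group:14s} pKaH = {pka:6.1f}")
```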
What constitutes a reasonable leaving group is dependent on context. For SN2 reactions, typical synthetically useful leaving groups include Cl−, Br−, I−, −OTs, −OMs, −OTf, and H2O. Substrates containing phosphate and carboxylate leaving groups are more likely to react by competitive addition–elimination, while sulfonium and ammonium salts generally form ylides or undergo E2 elimination when possible. Phenoxides (−OAr) constitute roughly the lower limit for what is feasible as SN2 leaving groups: very strong nucleophiles like Ph2P− or EtS− have been used to demethylate anisole derivatives through SN2 displacement at the methyl group. Hydroxide, alkoxides, amides, hydride, and alkyl anions do not serve as leaving groups in SN2 reactions.
On the other hand, when anionic or dianionic tetrahedral intermediates collapse, the high electron density of the neighboring heteroatom facilitates the expulsion of a leaving group. Thus, in the case of ester and amide hydrolysis under basic conditions, alkoxides and amides are commonly proposed as leaving groups. For the same reason, E1cb reactions involving hydroxide as a leaving group are not uncommon (e.g., in the aldol condensation). It is exceedingly rare for groups such as H− (hydrides), R3C− (alkyl anions, R = alkyl or H), or Ar− (aryl anions, Ar = aryl) to depart with a pair of electrons because of the high energy of these species. The Chichibabin reaction provides an example of hydride as a leaving group, while the Wolff-Kishner reaction and Haller-Bauer reaction feature unstabilized carbanion leaving groups.
It is important to note that the list given above is qualitative and describes trends . The ability of a group to leave is contextual. For example, in S N Ar reactions, the rate is generally increased when the leaving group is fluoride relative to the other halogens. This effect is due to the fact that the highest energy transition state for this two step addition-elimination process occurs in the first step, where fluoride's greater electron withdrawing capability relative to the other halides stabilizes the developing negative charge on the aromatic ring. The departure of the leaving group takes place quickly from this high energy Meisenheimer complex , and since the departure is not involved in the rate limiting step, it does not affect the overall rate of the reaction. This effect is general to conjugate base eliminations.
Even when the departure of the leaving group is involved in the rate limiting step of a reaction there can still exist contextual differences that can change the order of leaving group ability. In Friedel-Crafts alkylations , the normal halogen leaving group order is reversed so that the rate of the reaction follows RF > RCl > RBr > RI. This effect is due to their greater ability to complex the Lewis acid catalyst, and the actual group that leaves is an "ate" complex between the Lewis acid and the departing leaving group. [ 6 ] This situation is broadly defined as leaving group activation .
There can still exist contextual differences in leaving group ability in the purest form, that is when the actual group that leaves is not affected by the reaction conditions (by protonation or Lewis acid complexation) and the departure of the leaving group occurs in the rate determining step. In the situation where other variables are held constant (nature of the alkyl electrophile, solvent, etc.), a change in nucleophile can lead to a change in the order of reactivity for leaving groups. In the case below, tosylate is the best leaving group when ethoxide is the nucleophile, but iodide and even bromide become better leaving groups in the case of the thiolate nucleophile. [ 7 ]
It is common in E1 and S N 1 reactions for a poor leaving group to be transformed into a good one by protonation or complexation with a Lewis acid . Thus, it is by protonation before departure that a molecule can formally lose such poor leaving groups as hydroxide.
The same principle is at work in the Friedel-Crafts reaction . Here, a strong Lewis acid is required to generate either a carbocation from an alkyl halide in the Friedel-Crafts alkylation reaction or an acylium ion from an acyl halide.
In the vast majority of cases, reactions that involve leaving group activation generate a cation in a separate step, before either nucleophilic attack or elimination. For example, S N 1 and E1 reactions may involve an activation step, whereas S N 2 and E2 reactions generally do not.
The requirement for a good leaving group is relaxed in conjugate base elimination reactions. These reactions include loss of a leaving group in the β position of an enolate as well as the regeneration of a carbonyl group from the tetrahedral intermediate in nucleophilic acyl substitution. Under forcing conditions, even amides can be made to undergo basic hydrolysis, a process that involves the expulsion of an extremely poor leaving group, R2N−. Even more dramatic, decarboxylation of benzoate anions can occur by heating with copper or Cu2O, involving the loss of an aryl anion. This reaction is facilitated by the fact that the leaving group is most likely an arylcopper compound rather than the much more basic alkali metal salt.
This dramatic departure from normal leaving group requirements occurs mostly in the realm of C=O double bond formation where formation of the very strong C=O double bond can drive otherwise unfavorable reactions forward. The requirement for a good leaving group is still relaxed in the case of C=C bond formation via E1cB mechanisms, but because of the relative weakness of the C=C double bond, the reaction still exhibits some leaving group sensitivity. Notably, changing the leaving group's identity (and willingness to leave) can change the nature of the mechanism in elimination reactions. With poor leaving groups, the E1cB mechanism is favored, but as the leaving group's ability changes, the reaction shifts from having a rate determining loss of leaving group from carbanionic intermediate B via TS BC ‡ through having a rate determining deprotonation step via TS AB ‡ (not pictured) to a concerted E2 elimination. In the latter situation, the leaving group X has become good enough that the former transition state connecting intermediates B and C has become lower in energy than B , which is no longer a stationary point on the potential energy surface for the reaction. Because only one transition state connects starting material A and product C , the reaction is now concerted (albeit very asynchronous in the pictured case) due to the increase in leaving group ability of X.
The prototypical super leaving group is triflate , and the term has come to mean any leaving group of comparable ability. Compounds where loss of a super leaving group can generate a stable carbocation are usually highly reactive and unstable. Thus, the most commonly encountered organic triflates are methyl triflate and alkenyl or aryl triflates, all of which cannot form stable carbocations on ionization, rendering them relatively stable. It has been noted that steroidal alkyl nonaflates (another super leaving group) generated from alcohols and perfluorobutanesulfonyl fluoride were not isolable as such but immediately formed the products of either elimination or substitution by fluoride generated by the reagent. Mixed acyl-trifluoromethanesulfonyl anhydrides smoothly undergo Friedel-Crafts acylation without a catalyst, [ 8 ] unlike the corresponding acyl halides, which require a strong Lewis acid. Methyl triflate, however, does not participate in Friedel-Crafts alkylation reactions with electron-neutral aromatic rings.
Beyond super leaving groups in reactivity lie the "hyper" leaving groups. Prominent among these are λ3-iodanes, which include diaryl iodonium salts, and other halonium ions. In one study, a quantitative comparison of these and other leaving groups was conducted. Relative to chloride (krel = 1), reactivities increased in the order bromide (krel = 14), iodide (krel = 91), tosylate (krel = 3.7 × 10⁴), triflate (krel = 1.4 × 10⁸), and phenyliodonium tetrafluoroborate (PhI+ BF4−, krel = 1.2 × 10¹⁴). Along with the criterion that a hyper leaving group be a stronger leaving group than triflate is the necessity that the leaving group undergo reductive elimination. In the case of halonium ions this involves reduction from a trivalent halonium to a monovalent halide coupled with the release of an anionic fragment. Part of the exceptional reactivity of compounds of hyper leaving groups has been ascribed to the entropic favorability of having one molecule split into three.
Dialkyl halonium ions have also been isolated and characterized for simple alkyl groups. These compounds, despite their extreme reactivity towards nucleophiles, can be obtained pure in the solid state with very weakly nucleophilic counterions such as SbF6− [9][10] and CHB11Cl11−. [11] The strongly electrophilic nature of these compounds, engendered by their attachment to extremely labile R−X (R = alkyl, X = Cl, Br, I) leaving groups, is illustrated by their propensity to alkylate very weak nucleophiles. Heating neat samples of (CH3)2Cl+[CHB11Cl11]− under reduced pressure resulted in methylation of the very poorly nucleophilic carborane anion with concomitant expulsion of the CH3Cl leaving group. Dialkyl halonium hexafluoroantimonate salts alkylate excess alkyl halides to give exchanged products. Their strongly electrophilic nature, along with the instability of primary carbocations generated from ionization of their alkyl groups, points to their possible involvement in Friedel-Crafts alkylation chemistry. [9] The order of increasing lability of these leaving groups is R−I < R−Br < R−Cl. | https://en.wikipedia.org/wiki/Leaving_group
Leaving the world a better place , often called the campsite rule , campground rule , or just leaving things better than you found them , is an ethical proposition that individuals should go beyond trying not to do harm in the world, and should try to remediate harms done by others.
This ethic was articulated by Bessie Anderson Stanley in 1911 (in a quote often misattributed to Ralph Waldo Emerson ): "To leave the world a bit better, whether by a healthy child, a garden patch or a redeemed social condition; To know even one life has breathed easier because you have lived. This is to have succeeded." [ 1 ] In his last message to the Boy Scouts , founder Robert Baden-Powell wrote: "Try and leave this world a little better than you found it and when your turn comes to die, you can die happy in feeling that at any rate you have not wasted your time but have done your best". [ 2 ] American writer William Gaddis , in his 1985 novel, Carpenter's Gothic , disputed the wisdom of this ethic, writing: "Finally realize you can't leave things better than you found them the best you can do is try not to leave them any worse . . ." [ 3 ]
Dan Savage , in his syndicated sex- advice column , Savage Love has articulated a variation of the rule for relationships, which he calls the "campsite rule", stating that in any relationship, but particularly those with a large difference of age or experience between the partners, the older or more experienced partner has the responsibility to leave the younger or less experienced partner in at least as good a state (emotionally and physically) as before the relationship. The "campsite rule" includes things like leaving the younger or less experienced partner with no STDs , no unwanted pregnancies, and not overburdening them with emotional and sexual baggage. [ 4 ] In 2013, humorist Alexandra Petri premiered a sex comedy play, The Campsite Rule , based on Savage's rule. [ 5 ] [ 6 ] [ 7 ] Savage also created a companion rule, the "tea and sympathy rule" in reference to a line in the play, Tea and Sympathy , in which an older woman asks of a high-school-age boy, right before having sex with him: "Years from now, when you talk about this – and you will – be kind". [ 8 ] The companion rule imposes on the younger person in the relationship the requirement to be kind to an older partner who followed the "campsite rule".
In 2015, a crowdsourcing competition to rethink the Ten Commandments called the Rethink Prize included "Leave the world a better place than you found it" as one of the ten winning beliefs selected by a panel of judges. [ 9 ] [ 10 ] Augustana College bioethicist Deke Gould invoked the "campground rule" in a 2021 article advocating "efforts to design future minds—whether these are full artificial, enhanced biological, or postbiological ones—should aim to produce minds that are not relevantly human‐like". [ 11 ] | https://en.wikipedia.org/wiki/Leaving_the_world_a_better_place |
In mathematics, the Lebedev–Milin inequality is any of several inequalities for the coefficients of the exponential of a power series , found by Lebedev and Milin ( 1965 ) and Isaak Moiseevich Milin ( 1977 ). It was used in the proof of the Bieberbach conjecture , as it shows that the Milin conjecture implies the Robertson conjecture .
They state that if
$$\sum_{k=0}^{\infty}\beta_k z^k = \exp\left(\sum_{k=1}^{\infty}\alpha_k z^k\right)$$
for complex numbers $\beta_k$ and $\alpha_k$, and $n$ is a positive integer, then
$$\sum_{k=0}^{\infty}|\beta_k|^2 \le \exp\left(\sum_{k=1}^{\infty}k|\alpha_k|^2\right),$$
$$\sum_{k=0}^{n}|\beta_k|^2 \le (n+1)\exp\left(\frac{1}{n+1}\sum_{m=1}^{n}\sum_{k=1}^{m}\left(k|\alpha_k|^2-\frac{1}{k}\right)\right),$$
$$|\beta_n|^2 \le \exp\left(\sum_{k=1}^{n}\left(k|\alpha_k|^2-\frac{1}{k}\right)\right).$$
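A quick numerical sanity check of the second inequality is possible by computing the β's from random α's via the standard recurrence $n\beta_n = \sum_{k=1}^{n} k\,\alpha_k \beta_{n-k}$ for exponentiating a power series. This is an illustrative sketch, not part of the original papers; the coefficient ranges are arbitrary assumptions.

```python
import math, random

def exp_series(alpha, N):
    """beta_0..beta_N of exp(sum_{k>=1} alpha_k z^k), via n*beta_n = sum k*alpha_k*beta_{n-k}."""
    beta = [1.0 + 0.0j]
    for n in range(1, N + 1):
        beta.append(sum(k * alpha[k] * beta[n - k] for k in range(1, n + 1)) / n)
    return beta

random.seed(1)
N = 8
alpha = [0.0j] + [complex(random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)) for _ in range(N)]
beta = exp_series(alpha, N)

lhs = sum(abs(b) ** 2 for b in beta)  # sum over k = 0..N
inner = sum(k * abs(alpha[k]) ** 2 - 1.0 / k
            for m in range(1, N + 1) for k in range(1, m + 1))
rhs = (N + 1) * math.exp(inner / (N + 1))
print(lhs <= rhs, lhs, rhs)  # the second inequality, checked for n = N
```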
See also exponential formula (on exponentiation of power series). | https://en.wikipedia.org/wiki/Lebedev–Milin_inequality |
In mathematics , more precisely in measure theory , the Lebesgue decomposition theorem [ 1 ] provides a way to decompose a measure into two distinct parts based on their relationship with another measure.
The theorem states that if $(\Omega, \Sigma)$ is a measurable space and $\mu$ and $\nu$ are σ-finite signed measures on $\Sigma$, then there exist two uniquely determined σ-finite signed measures $\nu_0$ and $\nu_1$ such that: [2][3] $\nu = \nu_0 + \nu_1$; $\nu_0 \ll \mu$ (that is, $\nu_0$ is absolutely continuous with respect to $\mu$); and $\nu_1 \perp \mu$ (that is, $\nu_1$ and $\mu$ are mutually singular).
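On a finite measure space the decomposition is completely explicit: the absolutely continuous part lives on the support of μ and the singular part on its complement. The following Python sketch (with made-up masses) illustrates this, together with the Radon–Nikodym derivative of the absolutely continuous part.

```python
from fractions import Fraction

# Measures on a finite space are just dictionaries point -> mass.
mu = {"a": Fraction(1, 2), "b": Fraction(1, 2), "c": Fraction(0)}
nu = {"a": Fraction(1, 4), "b": Fraction(1, 4), "c": Fraction(1, 2)}

# Lebesgue decomposition of nu with respect to mu:
# nu0 lives where mu puts mass (absolutely continuous part),
# nu1 lives where mu vanishes (singular part).
nu0 = {x: (m if mu[x] > 0 else Fraction(0)) for x, m in nu.items()}
nu1 = {x: (m if mu[x] == 0 else Fraction(0)) for x, m in nu.items()}

# Radon-Nikodym derivative h = d(nu0)/d(mu) on the support of mu.
h = {x: nu0[x] / mu[x] for x in mu if mu[x] > 0}
print(nu0, nu1, h)
```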
Lebesgue's decomposition theorem can be refined in a number of ways.
First, as the Lebesgue-Radon-Nikodym theorem. That is, let $(\Omega, \Sigma)$ be a measurable space, $\mu$ a σ-finite positive measure on $\Sigma$ and $\lambda$ a complex measure on $\Sigma$. [4] Then there is a unique pair of complex measures $\lambda_a$ and $\lambda_s$ on $\Sigma$ such that $\lambda = \lambda_a + \lambda_s$, $\lambda_a \ll \mu$, and $\lambda_s \perp \mu$; moreover, there is a unique $h \in L^1(\mu)$ such that $\lambda_a(E) = \int_E h \, d\mu$ for every $E \in \Sigma$.
The first assertion follows from the Lebesgue decomposition, the second is known as the Radon-Nikodym theorem. That is, the function $h$ is a Radon-Nikodym derivative that can be expressed as $h = \frac{d\lambda_a}{d\mu}$.
An alternative refinement is that of the decomposition of a regular Borel measure [5][6][7]
$$\nu = \nu_{ac} + \nu_{sc} + \nu_{pp},$$
where $\nu_{ac}$ is the absolutely continuous part (with respect to Lebesgue measure), $\nu_{sc}$ is the singular continuous part (singular with respect to Lebesgue measure, but giving measure zero to every single point), and $\nu_{pp}$ is the pure point part (a discrete measure, i.e. a countable sum of point masses).
The absolutely continuous measures are classified by the Radon–Nikodym theorem , and discrete measures are easily understood. Hence (singular continuous measures aside), Lebesgue decomposition gives a very explicit description of measures. The Cantor measure (the probability measure on the real line whose cumulative distribution function is the Cantor function ) is an example of a singular continuous measure.
The analogous [citation needed] decomposition for a stochastic process is the Lévy–Itō decomposition: given a Lévy process X, it can be decomposed as a sum of three independent Lévy processes $X = X^{(1)} + X^{(2)} + X^{(3)}$, where $X^{(1)}$ is a Brownian motion with drift, corresponding to the absolutely continuous part; $X^{(2)}$ is a compound Poisson process, corresponding to the pure point part; and $X^{(3)}$ is a square-integrable pure-jump martingale, corresponding to the singular continuous part.
This article incorporates material from Lebesgue decomposition theorem on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License . | https://en.wikipedia.org/wiki/Lebesgue's_decomposition_theorem |
In mathematics, Lebesgue's density theorem states that for any Lebesgue measurable set $A \subset \mathbb{R}^n$, the "density" of A is 0 or 1 at almost every point in $\mathbb{R}^n$. Additionally, the "density" of A is 1 at almost every point in A. Intuitively, this means that the "edge" of A, the set of points in A whose "neighborhood" is partially in A and partially outside of A, is negligible.
Let μ be the Lebesgue measure on the Euclidean space $\mathbb{R}^n$ and A be a Lebesgue measurable subset of $\mathbb{R}^n$. Define the approximate density of A in an ε-neighborhood of a point x in $\mathbb{R}^n$ as
$$d_\varepsilon(x) = \frac{\mu(A \cap B_\varepsilon(x))}{\mu(B_\varepsilon(x))},$$
where $B_\varepsilon(x)$ denotes the closed ball of radius ε centered at x.
Lebesgue's density theorem asserts that for almost every point x of $\mathbb{R}^n$ the density
$$d(x) = \lim_{\varepsilon \to 0} d_\varepsilon(x)$$
exists and is equal to 0 or 1.
In other words, for every measurable set A, the density of A is 0 or 1 almost everywhere in $\mathbb{R}^n$. [1] However, if μ(A) > 0 and μ($\mathbb{R}^n$ \ A) > 0, then there are always points of $\mathbb{R}^n$ where the density either does not exist or exists but is neither 0 nor 1 ([2], Lemma 4).
For example, given a square in the plane, the density at every point inside the square is 1, on the edges is 1/2, and at the corners is 1/4. The set of points in the plane at which the density is neither 0 nor 1 is non-empty (the square boundary), but it is negligible.
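The approximate density $d_\varepsilon(x)$ can be estimated numerically. The sketch below uses Monte Carlo sampling on the real line for the set A = [0, 1]; the sample sizes and the choice of test points are arbitrary illustrative assumptions.

```python
import random

def approximate_density(indicator, x, eps, trials=100_000):
    """Monte Carlo estimate of mu(A ∩ B_eps(x)) / mu(B_eps(x)) on the real line."""
    hits = sum(indicator(random.uniform(x - eps, x + eps)) for _ in range(trials))
    return hits / trials

A = lambda t: 0.0 <= t <= 1.0   # indicator of the set A = [0, 1]
for eps in (0.5, 0.1, 0.01):
    print(eps, approximate_density(A, 0.5, eps), approximate_density(A, 0.0, eps))
# Interior point 0.5: density -> 1.  Boundary point 0.0: density -> 1/2.
```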
The Lebesgue density theorem is a particular case of the Lebesgue differentiation theorem .
Thus, this theorem is also true for every finite Borel measure on R n instead of Lebesgue measure, see Discussion .
This article incorporates material from Lebesgue density theorem on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License . | https://en.wikipedia.org/wiki/Lebesgue's_density_theorem |
In topology , the Lebesgue covering lemma is a useful tool in the study of compact metric spaces .
Given an open cover of a compact metric space X, a Lebesgue's number of the cover is a number δ > 0 such that every subset of X having diameter less than δ is contained in some member of the cover.
The existence of Lebesgue's numbers for compact metric spaces is given by Lebesgue's covering lemma: if the metric space X is compact and an open cover of X is given, then there exists a number δ > 0 such that every subset of X having diameter less than δ is contained in some member of the cover.
The notion of Lebesgue's numbers itself is useful in other applications as well.
Let $\mathcal{U}$ be an open cover of X. Since X is compact we can extract a finite subcover $\{A_1, \dots, A_n\} \subseteq \mathcal{U}$.
If any one of the $A_i$'s equals X then any δ > 0 will serve as a Lebesgue's number. Otherwise for each $i \in \{1, \dots, n\}$, let $C_i := X \smallsetminus A_i$, note that $C_i$ is not empty, and define a function $f : X \to \mathbb{R}$ by
$$f(x) := \frac{1}{n} \sum_{i=1}^{n} d(x, C_i).$$
Since f is continuous on a compact set, it attains a minimum δ.
The key observation is that, since every x is contained in some $A_i$, the extreme value theorem shows δ > 0. Now we can verify that this δ is the desired Lebesgue's number.
If Y is a subset of X of diameter less than δ, choose $x_0$ as any point in Y; then by definition of diameter, $Y \subseteq B_\delta(x_0)$, where $B_\delta(x_0)$ denotes the ball of radius δ centered at $x_0$. Since $f(x_0) \ge \delta$ there must exist at least one i such that $d(x_0, C_i) \ge \delta$. But this means that $B_\delta(x_0) \subseteq A_i$ and so, in particular, $Y \subseteq A_i$.
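The function f from this proof can be computed numerically for a concrete cover. The sketch below evaluates f on a grid for a hypothetical cover of [0, 1] by three open intervals and reports its minimum as a Lebesgue number candidate; the cover and the grid resolution are illustrative assumptions.

```python
import numpy as np

# Hypothetical finite cover of the compact interval [0, 1] by open intervals.
cover = [(-0.1, 0.4), (0.3, 0.7), (0.6, 1.1)]

def dist_to_complement(x, a, b):
    # Distance from x to the complement of (a, b) within the real line.
    return max(0.0, min(x - a, b - x))

# f(x) = (1/n) * sum_i d(x, C_i), where C_i is the complement of the i-th set.
xs = np.linspace(0.0, 1.0, 10_001)
f = np.mean([[dist_to_complement(x, a, b) for x in xs] for (a, b) in cover], axis=0)
delta = f.min()
print(f"Lebesgue number candidate: {delta:.4f}")
# Any subset of [0, 1] with diameter < delta lies inside one member of the cover.
```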
Suppose for contradiction that X is sequentially compact, $\{U_\alpha \mid \alpha \in J\}$ is an open cover of X, and the Lebesgue number δ does not exist. That is: for all δ > 0, there exists $A \subset X$ with $\operatorname{diam}(A) < \delta$ such that there does not exist $\beta \in J$ with $A \subset U_\beta$.
This enables us to perform the following construction: for each $n \in \mathbb{Z}^+$, take $\delta_n = \tfrac{1}{n}$ and choose a set $A_n \subset X$ with $\operatorname{diam}(A_n) < \tfrac{1}{n}$ that is not contained in any single $U_\beta$.
Note that $A_n \neq \emptyset$ for all $n \in \mathbb{Z}^+$, since $A_n \not\subset U_\beta$. It is therefore possible by the axiom of choice to construct a sequence $(x_n)$ in which $x_i \in A_i$ for each i. Since X is sequentially compact, there exists a subsequence $\{x_{n_k}\}$ (with $k \in \mathbb{Z}^+$) that converges to $x_0$.
Because $\{U_\alpha\}$ is an open cover, there exists some $\alpha_0 \in J$ such that $x_0 \in U_{\alpha_0}$. As $U_{\alpha_0}$ is open, there exists r > 0 with $B_r(x_0) \subset U_{\alpha_0}$. Now we invoke the convergence of the subsequence: there exists $L \in \mathbb{Z}^+$ such that $k \ge L$ implies $x_{n_k} \in B_{r/2}(x_0)$.
Furthermore, there exists $M \in \mathbb{Z}^+$ such that $\delta_M = \tfrac{1}{M} < \tfrac{r}{2}$. Hence for all $z \ge M$ we have $\operatorname{diam}(A_z) < \tfrac{r}{2}$.
Finally, choose $q \in \mathbb{Z}^+$ such that $n_q \ge M$ and $q \ge L$. For all $x' \in A_{n_q}$, notice that
$$d(x_0, x') \le d(x_0, x_{n_q}) + d(x_{n_q}, x') < \frac{r}{2} + \frac{r}{2} = r$$
by the triangle inequality, which implies that $A_{n_q} \subset U_{\alpha_0}$. This yields the desired contradiction.
| https://en.wikipedia.org/wiki/Lebesgue's_number_lemma
In mathematics , the Lebesgue constants (depending on a set of nodes and of its size) give an idea of how good the interpolant of a function (at the given nodes) is in comparison with the best polynomial approximation of the function (the degree of the polynomials are fixed). The Lebesgue constant for polynomials of degree at most n and for the set of n + 1 nodes T is generally denoted by Λ n ( T ) . These constants are named after Henri Lebesgue .
We fix the interpolation nodes $x_0, \dots, x_n$ and an interval $[a, b]$ containing all the interpolation nodes. The process of interpolation maps the function f to a polynomial p. This defines a mapping X from the space C([a, b]) of all continuous functions on [a, b] to itself. The map X is linear and it is a projection on the subspace $\Pi_n$ of polynomials of degree n or less.
The Lebesgue constant $\Lambda_n(T)$ is defined as the operator norm of X. This definition requires us to specify a norm on C([a, b]). The uniform norm is usually the most convenient.
The Lebesgue constant bounds the interpolation error: let $p^*$ denote the best approximation of f among the polynomials of degree n or less. In other words, $p^*$ minimizes $\|p - f\|$ among all p in $\Pi_n$. Then
$$\|f - X(f)\| \le (\Lambda_n(T) + 1) \|f - p^*\|.$$
We will here prove this statement with the maximum norm. Since $X(p^*) = p^*$, we have
$$\|f - X(f)\| \le \|f - p^*\| + \|p^* - X(f)\|$$
by the triangle inequality. But X is a projection on $\Pi_n$, so
$$p^* - X(f) = X(p^*) - X(f) = X(p^* - f).$$
This finishes the proof since $\|X(p^* - f)\| \le \|X\| \|p^* - f\| = \|X\| \|f - p^*\|$. Note that this relation comes also as a special case of Lebesgue's lemma.
In other words, the interpolation polynomial is at most a factor $\Lambda_n(T) + 1$ worse than the best possible approximation. This suggests that we look for a set of interpolation nodes with a small Lebesgue constant.
The Lebesgue constant can be expressed in terms of the Lagrange basis polynomials:
$$\Lambda_n(T) = \max_{x \in [a, b]} \sum_{i=0}^{n} |\ell_i(x)|.$$
In fact, we have the Lebesgue function
$$\lambda_n(x) = \sum_{i=0}^{n} |\ell_i(x)|,$$
and the Lebesgue constant (or Lebesgue number) for the grid is its maximum value
$$\Lambda_n(T) = \max_{x \in [a, b]} \lambda_n(x).$$
Nevertheless, it is not easy to find an explicit expression for Λ n ( T ) .
In the case of equidistant nodes, the Lebesgue constant grows exponentially. More precisely, we have the following asymptotic estimate
$$\Lambda_n(T) \sim \frac{2^{n+1}}{e \, n \log n} \quad \text{as } n \to \infty.$$
On the other hand, the Lebesgue constant grows only logarithmically if Chebyshev nodes are used, since we have
$$\Lambda_n(T) < \frac{2}{\pi} \log(n+1) + 1.$$
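Both growth claims are easy to check numerically by evaluating the Lebesgue function $\sum_i |\ell_i(x)|$ on a fine grid. The sketch below compares equidistant and Chebyshev nodes; the grid resolution and node counts are arbitrary choices, and the maximum is only approximated by sampling.

```python
import numpy as np

def lebesgue_constant(nodes, a=-1.0, b=1.0, samples=20_001):
    """Estimate max over [a, b] of sum_i |l_i(x)| for the Lagrange basis on the nodes."""
    xs = np.linspace(a, b, samples)
    lam = np.zeros_like(xs)
    n = len(nodes)
    for i in range(n):
        li = np.ones_like(xs)
        for j in range(n):
            if j != i:
                li *= (xs - nodes[j]) / (nodes[i] - nodes[j])
        lam += np.abs(li)
    return lam.max()

for n in (5, 10, 15):
    equi = np.linspace(-1, 1, n + 1)
    cheb = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))
    print(n, lebesgue_constant(equi), lebesgue_constant(cheb))
# Equidistant constants blow up roughly like 2^(n+1)/(e n log n);
# Chebyshev constants grow only like (2/pi) log(n+1).
```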
We conclude again that Chebyshev nodes are a very good choice for polynomial interpolation. However, there is an easy (linear) transformation of Chebyshev nodes that gives a better Lebesgue constant. Let $t_i$ denote the i-th Chebyshev node. Then, define
$$s_i = \frac{t_i}{\cos\left(\pi / (2(n+1))\right)}.$$
For such nodes the Lebesgue constant again grows like $(2/\pi)\log(n+1)$, but with a smaller additive constant than for the untransformed Chebyshev nodes.
Those nodes are, however, not optimal (i.e. they do not minimize the Lebesgue constants) and the search for an optimal set of nodes (which has already been proved to be unique under some assumptions) is still an intriguing topic in mathematics today. However, this set of nodes is optimal for interpolation over $C_M^n[-1, 1]$, the set of n times differentiable functions whose n-th derivatives are bounded in absolute value by a constant M, as shown by N. S. Hoang.
Using a computer , one can approximate the values of the minimal Lebesgue constants, here for the canonical interval [−1, 1] :
There are uncountably infinitely many sets of nodes in [−1, 1] that minimize, for fixed n > 1, the Lebesgue constant. Though if we assume that we always take −1 and 1 as nodes for interpolation (which is called a canonical node configuration), then such a set is unique and zero-symmetric. To illustrate this property, we shall see what happens when n = 2 (i.e. we consider 3 interpolation nodes, in which case the property is not trivial). One can check that each set of (zero-symmetric) nodes of type (−a, 0, a) is optimal when $\sqrt{8}/3 \le a \le 1$ (we consider only nodes in [−1, 1]). If we force the set of nodes to be of the type (−1, b, 1), then b must equal 0 (look at the Lebesgue function, whose maximum is the Lebesgue constant). All arbitrary (i.e. zero-symmetric or zero-asymmetric) optimal sets of nodes in [−1, 1] when n = 2 have been determined by F. Schurer, and in an alternative fashion by H.-J. Rack and R. Vajda (2014).
If we assume that we take −1 and 1 as nodes for interpolation, then as shown by H.-J. Rack (1984 and 2013), for the case n = 3, the explicit values of the optimal (unique and zero-symmetric) 4 interpolation nodes and the explicit value of the minimal Lebesgue constant are known. All arbitrary optimal sets of 4 interpolation nodes in [−1, 1] when n = 3 have been explicitly determined, in two different but equivalent fashions, by H.-J. Rack and R. Vajda (2015).
The Padua points provide another set of nodes with slow growth (although not as slow as the Chebyshev nodes) and with the additional property of being a unisolvent point set .
The Lebesgue constants also arise in another problem. Let p(x) be a polynomial of degree n expressed in the Lagrangian form associated with the points in the vector t (i.e. the vector u of its coefficients is the vector containing the values $p(t_i)$). Let $\hat{p}(x)$ be a polynomial obtained by slightly changing the coefficients u of the original polynomial p(x) to $\hat{u}$. Consider the inequality:
$$\frac{\|p - \hat{p}\|}{\|p\|} \le \Lambda_n(T) \, \frac{\max_i |u_i - \hat{u}_i|}{\max_i |u_i|}.$$
This means that the (relative) error in the values of $\hat{p}(x)$ will not be higher than the appropriate Lebesgue constant times the relative error in the coefficients. In this sense, the Lebesgue constant can be viewed as the relative condition number of the operator mapping each coefficient vector u to the set of the values of the polynomial with coefficients u in the Lagrange form. We can actually define such an operator for each polynomial basis but its condition number is greater than the optimal Lebesgue constant for most convenient bases. | https://en.wikipedia.org/wiki/Lebesgue_constant
In the branch of mathematics known as real analysis , the Riemann integral , created by Bernhard Riemann , was the first rigorous definition of the integral of a function on an interval . It was presented to the faculty at the University of Göttingen in 1854, but not published in a journal until 1868. [ 1 ] For many functions and practical applications, the Riemann integral can be evaluated by the fundamental theorem of calculus or approximated by numerical integration , or simulated using Monte Carlo integration .
Imagine you have a curve on a graph, and the curve stays above the x-axis between two points, a and b. The area under that curve, from a to b, is what we want to figure out. This area can be described as the set of all points (x, y) on the graph that follow these rules: a ≤ x ≤ b (the x-coordinate is between a and b) and 0 < y < f(x) (the y-coordinate is between 0 and the height of the curve f(x)). Mathematically, this region can be expressed in set-builder notation as
$$S = \left\{(x, y) : a \le x \le b,\ 0 < y < f(x)\right\}.$$
To measure this area, we use a Riemann integral, which is written as:
$$\int_a^b f(x)\,dx.$$
This notation means “the integral of f(x) from a to b,” and it represents the exact area under the curve f(x) and above the x-axis, between x = a and x = b.
The idea behind the Riemann integral is to break the area into small, simple shapes (like rectangles), add up their areas, and then make the rectangles smaller and smaller to get a better estimate. In the end, when the rectangles are infinitely small, the sum gives the exact area, which is what the integral represents.
If the curve dips below the x-axis, the integral gives a signed area. This means the integral adds the part above the x-axis as positive and subtracts the part below the x-axis as negative. So, the result of $\int_a^b f(x)\,dx$ can be positive, negative, or zero, depending on how much of the curve is above or below the x-axis.
A partition of an interval [a, b] is a finite sequence of numbers of the form
$$a = x_0 < x_1 < x_2 < \dots < x_i < \dots < x_n = b.$$
Each $[x_i, x_{i+1}]$ is called a sub-interval of the partition. The mesh or norm of a partition is defined to be the length of the longest sub-interval, that is,
$$\max_{i \in [0, n-1]} \left(x_{i+1} - x_i\right).$$
A tagged partition P ( x , t ) of an interval [ a , b ] is a partition together with a choice of a sample point within each sub-interval: that is, numbers t 0 , ..., t n − 1 with t i ∈ [ x i , x i + 1 ] for each i . The mesh of a tagged partition is the same as that of an ordinary partition.
Suppose that two partitions P ( x , t ) and Q ( y , s ) are both partitions of the interval [ a , b ] . We say that Q ( y , s ) is a refinement of P ( x , t ) if for each integer i , with i ∈ [0, n ] , there exists an integer r ( i ) such that x i = y r ( i ) and such that t i = s j for some j with j ∈ [ r ( i ), r ( i + 1)] . That is, a tagged partition breaks up some of the sub-intervals and adds sample points where necessary, "refining" the accuracy of the partition.
We can turn the set of all tagged partitions into a directed set by saying that one tagged partition is greater than or equal to another if the former is a refinement of the latter.
Let f be a real-valued function defined on the interval [a, b]. The Riemann sum of f with respect to a tagged partition P(x, t) of [a, b] is [2]
$$\sum_{i=0}^{n-1} f(t_i) \left(x_{i+1} - x_i\right).$$
Each term in the sum is the product of the value of the function at a given point and the length of an interval. Consequently, each term represents the (signed) area of a rectangle with height f ( t i ) and width x i + 1 − x i . The Riemann sum is the (signed) area of all the rectangles.
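A Riemann sum is straightforward to compute directly from a tagged partition. The sketch below, with a uniform partition and midpoint tags as illustrative choices, recovers the familiar value $\int_0^1 x^2\,dx = 1/3$.

```python
def riemann_sum(f, xs, ts):
    """Riemann sum of f for the tagged partition (xs, ts): sum of f(t_i) * (x_{i+1} - x_i)."""
    assert len(ts) == len(xs) - 1
    return sum(f(t) * (x1 - x0) for t, x0, x1 in zip(ts, xs, xs[1:]))

def mesh(xs):
    """Length of the longest sub-interval of the partition."""
    return max(x1 - x0 for x0, x1 in zip(xs, xs[1:]))

n = 1000
xs = [i / n for i in range(n + 1)]                   # uniform partition of [0, 1]
ts = [(x0 + x1) / 2 for x0, x1 in zip(xs, xs[1:])]   # midpoint tags
print(mesh(xs), riemann_sum(lambda x: x * x, xs, ts))  # mesh 0.001, sum ~ 1/3
```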
Closely related concepts are the lower and upper Darboux sums. These are similar to Riemann sums, but the tags are replaced by the infimum and supremum (respectively) of f on each sub-interval:
$$L(f, P) = \sum_{i=0}^{n-1} \inf_{t \in [x_i, x_{i+1}]} f(t) \,(x_{i+1} - x_i), \qquad U(f, P) = \sum_{i=0}^{n-1} \sup_{t \in [x_i, x_{i+1}]} f(t) \,(x_{i+1} - x_i).$$
If f is continuous, then the lower and upper Darboux sums for an untagged partition are equal to the Riemann sum for that partition, where the tags are chosen to be the minimum or maximum (respectively) of f on each subinterval. (When f is discontinuous on a subinterval, there may not be a tag that achieves the infimum or supremum on that subinterval.) The Darboux integral , which is similar to the Riemann integral but based on Darboux sums, is equivalent to the Riemann integral.
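Lower and upper Darboux sums can likewise be approximated by sampling each sub-interval (for a continuous f, sampling approaches the true infimum and supremum as it is refined). The sketch below brackets the same integral of x² from both sides; the sampling density is an arbitrary choice.

```python
def darboux_sums(f, xs, samples_per_cell=200):
    """Approximate lower/upper Darboux sums by sampling f on each sub-interval.

    For continuous f this approaches the true inf/sup as sampling is refined."""
    lower = upper = 0.0
    for x0, x1 in zip(xs, xs[1:]):
        vals = [f(x0 + (x1 - x0) * k / samples_per_cell) for k in range(samples_per_cell + 1)]
        lower += min(vals) * (x1 - x0)
        upper += max(vals) * (x1 - x0)
    return lower, upper

xs = [i / 100 for i in range(101)]
L, U = darboux_sums(lambda x: x * x, xs)
print(L, U)  # L <= 1/3 <= U, and both approach 1/3 as the partition is refined
```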
Loosely speaking, the Riemann integral is the limit of the Riemann sums of a function as the partitions get finer. If the limit exists then the function is said to be integrable (or more specifically Riemann-integrable ). The Riemann sum can be made as close as desired to the Riemann integral by making the partition fine enough. [ 3 ]
One important requirement is that the mesh of the partitions must become smaller and smaller, so that it has the limit zero. If this were not so, then we would not be getting a good approximation to the function on certain subintervals. In fact, this is enough to define an integral. To be specific, we say that the Riemann integral of f exists and equals s if the following condition holds:
For all ε > 0, there exists δ > 0 such that for any tagged partition $x_0, \dots, x_n$ and $t_0, \dots, t_{n-1}$ whose mesh is less than δ, we have
$$\left|\left(\sum_{i=0}^{n-1} f(t_i)(x_{i+1} - x_i)\right) - s\right| < \varepsilon.$$
Unfortunately, this definition is very difficult to use. It would help to develop an equivalent definition of the Riemann integral which is easier to work with. We develop this definition now, with a proof of equivalence following. Our new definition says that the Riemann integral of f exists and equals s if the following condition holds:
For all ε > 0, there exists a tagged partition $y_0, \dots, y_m$ and $r_0, \dots, r_{m-1}$ such that for any tagged partition $x_0, \dots, x_n$ and $t_0, \dots, t_{n-1}$ which is a refinement of $y_0, \dots, y_m$ and $r_0, \dots, r_{m-1}$, we have
$$\left|\left(\sum_{i=0}^{n-1} f(t_i)(x_{i+1} - x_i)\right) - s\right| < \varepsilon.$$
Both of these mean that eventually, the Riemann sum of f with respect to any partition gets trapped close to s . Since this is true no matter how close we demand the sums be trapped, we say that the Riemann sums converge to s . These definitions are actually a special case of a more general concept, a net .
As we stated earlier, these two definitions are equivalent. In other words, s works in the first definition if and only if s works in the second definition. To show that the first definition implies the second, start with an ε , and choose a δ that satisfies the condition. Choose any tagged partition whose mesh is less than δ . Its Riemann sum is within ε of s , and any refinement of this partition will also have mesh less than δ , so the Riemann sum of the refinement will also be within ε of s .
To show that the second definition implies the first, it is easiest to use the Darboux integral. First, one shows that the second definition is equivalent to the definition of the Darboux integral; for this see the Darboux integral article. Now we will show that a Darboux integrable function satisfies the first definition. Fix ε, and choose a partition $y_0, \dots, y_m$ such that the lower and upper Darboux sums with respect to this partition are within ε/2 of the value s of the Darboux integral. Let
$$r = 2 \sup_{x \in [a, b]} |f(x)|.$$
If r = 0, then f is the zero function, which is clearly both Darboux and Riemann integrable with integral zero. Therefore, we will assume that r > 0. If m > 1, then we choose δ such that
$$\delta < \min\left\{\frac{\varepsilon}{2r(m-1)},\ (y_1 - y_0),\ (y_2 - y_1),\ \cdots,\ (y_m - y_{m-1})\right\}.$$
If m = 1 , then we choose δ to be less than one. Choose a tagged partition x 0 , ..., x n and t 0 , ..., t n − 1 with mesh smaller than δ . We must show that the Riemann sum is within ε of s .
To see this, choose an interval $[x_i, x_{i+1}]$. If this interval is contained within some $[y_j, y_{j+1}]$, then
$$m_j \le f(t_i) \le M_j,$$
where $m_j$ and $M_j$ are respectively the infimum and the supremum of f on $[y_j, y_{j+1}]$. If all intervals had this property, then this would conclude the proof, because each term in the Riemann sum would be bounded by a corresponding term in the Darboux sums, and we chose the Darboux sums to be near s. This is the case when m = 1, so the proof is finished in that case.
Therefore, we may assume that m > 1. In this case, it is possible that one of the $[x_i, x_{i+1}]$ is not contained in any $[y_j, y_{j+1}]$. Instead, it may stretch across two of the intervals determined by $y_0, \dots, y_m$. (It cannot meet three intervals because δ is assumed to be smaller than the length of any one interval.) In symbols, it may happen that
$$y_j < x_i < y_{j+1} < x_{i+1} < y_{j+2}.$$
(We may assume that all the inequalities are strict because otherwise we are in the previous case by our assumption on the length of δ .) This can happen at most m − 1 times.
To handle this case, we will estimate the difference between the Riemann sum and the Darboux sum by subdividing the partition x 0 , ..., x n at y j + 1 . The term f ( t i )( x i + 1 − x i ) in the Riemann sum splits into two terms: f ( t i ) ( x i + 1 − x i ) = f ( t i ) ( x i + 1 − y j + 1 ) + f ( t i ) ( y j + 1 − x i ) . {\displaystyle f\left(t_{i}\right)\left(x_{i+1}-x_{i}\right)=f\left(t_{i}\right)\left(x_{i+1}-y_{j+1}\right)+f\left(t_{i}\right)\left(y_{j+1}-x_{i}\right).}
Suppose, without loss of generality , that t i ∈ [ y j , y j + 1 ] . Then m j ≤ f ( t i ) ≤ M j , {\displaystyle m_{j}\leq f(t_{i})\leq M_{j},} so this term is bounded by the corresponding term in the Darboux sum for y j . To bound the other term, notice that x i + 1 − y j + 1 < δ < ε 2 r ( m − 1 ) , {\displaystyle x_{i+1}-y_{j+1}<\delta <{\frac {\varepsilon }{2r(m-1)}},}
It follows that, for some (indeed any) t * i ∈ [ y j + 1 , x i + 1 ] , | f ( t i ) − f ( t i ∗ ) | ( x i + 1 − y j + 1 ) < ε 2 ( m − 1 ) . {\displaystyle \left|f\left(t_{i}\right)-f\left(t_{i}^{*}\right)\right|\left(x_{i+1}-y_{j+1}\right)<{\frac {\varepsilon }{2(m-1)}}.}
Since this happens at most m − 1 times, the distance between the Riemann sum and a Darboux sum is at most ε /2 . Therefore, the distance between the Riemann sum and s is at most ε .
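The key mechanism of the argument above is that, on any fixed partition, every Riemann sum is trapped between the lower and upper Darboux sums. The following sketch (not part of the original text; the test function f(x) = x² and the sampling-based estimates of the infimum and supremum are illustrative assumptions) shows this bracketing numerically.

```python
# Illustration of the bracketing used in the proof above: on any partition,
# every Riemann sum lies between the lower and upper Darboux sums.
import random

def darboux_and_riemann(f, xs, n_samples=1000):
    """Return (lower sum, one Riemann sum with random tags, upper sum) for partition xs."""
    lower = upper = riemann = 0.0
    for a, b in zip(xs, xs[1:]):
        # crude estimates of inf/sup of f on [a, b] by dense sampling
        samples = [f(a + (b - a) * k / n_samples) for k in range(n_samples + 1)]
        lower += min(samples) * (b - a)
        upper += max(samples) * (b - a)
        riemann += f(random.uniform(a, b)) * (b - a)   # arbitrary tag in [a, b]
    return lower, riemann, upper

xs = [i / 10 for i in range(11)]                 # partition of [0, 1] into 10 pieces
L, R, U = darboux_and_riemann(lambda x: x * x, xs)
print(L <= R <= U)   # True: the Riemann sum is trapped between the Darboux sums
print(L, R, U)       # both Darboux sums lie within 0.1 of the integral 1/3
```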
Let f : [ 0 , 1 ] → R {\displaystyle f:[0,1]\to \mathbb {R} } be the function which takes the value 1 at every point. Any Riemann sum of f on [0, 1] will have the value 1, so the Riemann integral of f on [0, 1] is 1.
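As a minimal added sketch (not from the article; the particular partition and tags below are arbitrary choices), the Riemann sum of this constant function telescopes to the length of [0, 1] for every tagged partition.

```python
# A tagged-partition Riemann sum; for f(x) = 1 the sum telescopes to 1.
def riemann_sum(f, xs, tags):
    """Sum of f(t_i) * (x_{i+1} - x_i) over a tagged partition."""
    return sum(f(t) * (b - a) for a, b, t in zip(xs, xs[1:], tags))

xs = [0.0, 0.15, 0.4, 0.75, 1.0]        # an arbitrary partition of [0, 1]
tags = [0.1, 0.3, 0.5, 0.9]             # one tag inside each subinterval
print(riemann_sum(lambda x: 1.0, xs, tags))   # 1.0
```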
Let I Q : [ 0 , 1 ] → R {\displaystyle I_{\mathbb {Q} }:[0,1]\to \mathbb {R} } be the indicator function of the rational numbers in [0, 1] ; that is, I Q {\displaystyle I_{\mathbb {Q} }} takes the value 1 on rational numbers and 0 on irrational numbers. This function does not have a Riemann integral. To prove this, we will show how to construct tagged partitions whose Riemann sums get arbitrarily close to both zero and one.
To start, let x 0 , ..., x n and t 0 , ..., t n − 1 be a tagged partition (each t i is between x i and x i + 1 ). Choose ε > 0 . The t i have already been chosen, and we can't change the value of f at those points. But if we cut the partition into tiny pieces around each t i , we can minimize the effect of the t i . Then, by carefully choosing the new tags, we can make the value of the Riemann sum turn out to be within ε of either zero or one.
Our first step is to cut up the partition. There are n of the t i , and we want their total effect to be less than ε . If we confine each of them to an interval of length less than ε / n , then the contribution of each t i to the Riemann sum will be at least 0 · ε / n and at most 1 · ε / n . This makes the total sum at least zero and at most ε . So let δ be a positive number less than ε / n . If it happens that two of the t i are within δ of each other, choose δ smaller. If it happens that some t i is within δ of some x j , and t i is not equal to x j , choose δ smaller. Since there are only finitely many t i and x j , we can always choose δ sufficiently small.
Now we add two cuts to the partition for each t i . One of the cuts will be at t i − δ /2 , and the other will be at t i + δ /2 . If one of these leaves the interval [0, 1], then we leave it out. t i will be the tag corresponding to the subinterval [ t i − δ 2 , t i + δ 2 ] . {\displaystyle \left[t_{i}-{\frac {\delta }{2}},t_{i}+{\frac {\delta }{2}}\right].}
If t i is directly on top of one of the x j , then we let t i be the tag for both intervals: [ t i − δ 2 , x j ] , and [ x j , t i + δ 2 ] . {\displaystyle \left[t_{i}-{\frac {\delta }{2}},x_{j}\right],\quad {\text{and}}\quad \left[x_{j},t_{i}+{\frac {\delta }{2}}\right].}
We still have to choose tags for the other subintervals. We will choose them in two different ways. The first way is to always choose a rational point , so that the Riemann sum is as large as possible. This will make the value of the Riemann sum at least 1 − ε . The second way is to always choose an irrational point, so that the Riemann sum is as small as possible. This will make the value of the Riemann sum at most ε .
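The effect of the two tag choices can be seen in a small sketch. This is a simplified illustration of the idea above: it assumes the sympy library so that rational and irrational tags can be represented exactly (floating-point numbers are all rational), and it uses a plain regular partition rather than the refined partition constructed in the text.

```python
# Riemann sums of the indicator of the rationals depend entirely on the tags.
import sympy as sp

def I_Q(x):
    """Indicator of the rationals, evaluated on exact sympy numbers."""
    return 1 if x.is_rational else 0

def riemann_sum(f, xs, tags):
    return sum(f(t) * (b - a) for a, b, t in zip(xs, xs[1:], tags))

n = 10
xs = [sp.Rational(k, n) for k in range(n + 1)]          # regular partition of [0, 1]
rational_tags = [(a + b) / 2 for a, b in zip(xs, xs[1:])]                      # midpoints are rational
irrational_tags = [a + (b - a) * sp.sqrt(2) / 2 for a, b in zip(xs, xs[1:])]   # irrational tags

print(riemann_sum(I_Q, xs, rational_tags))     # 1  (every tag rational)
print(riemann_sum(I_Q, xs, irrational_tags))   # 0  (every tag irrational)
```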
Since we started from an arbitrary partition and ended up as close as we wanted to either zero or one, it is false to say that we are eventually trapped near some number s , so this function is not Riemann integrable. However, it is Lebesgue integrable . In the Lebesgue sense its integral is zero, since the function is zero almost everywhere . But this is a fact that is beyond the reach of the Riemann integral.
There are even worse examples. I Q {\displaystyle I_{\mathbb {Q} }} is equivalent (that is, equal almost everywhere) to a Riemann integrable function, but there are non-Riemann integrable bounded functions which are not equivalent to any Riemann integrable function. For example, let C be the Smith–Volterra–Cantor set , and let I C be its indicator function. Because C is not Jordan measurable , I C is not Riemann integrable. Moreover, no function g equivalent to I C is Riemann integrable: g , like I C , must be zero on a dense set, so as in the previous example, any Riemann sum of g has a refinement which is within ε of 0 for any positive number ε . But if the Riemann integral of g exists, then it must equal the Lebesgue integral of I C , which is 1/2 . Therefore, g is not Riemann integrable.
It is popular to define the Riemann integral as the Darboux integral . This is because the Darboux integral is technically simpler and because a function is Riemann-integrable if and only if it is Darboux-integrable.
Some calculus books do not use general tagged partitions, but limit themselves to specific types of tagged partitions. If the type of partition is limited too much, some non-integrable functions may appear to be integrable.
One popular restriction is the use of "left-hand" and "right-hand" Riemann sums. In a left-hand Riemann sum, t i = x i for all i , and in a right-hand Riemann sum, t i = x i + 1 for all i . On its own this restriction does not cause a problem: we can refine any partition in a way that makes it a left-hand or right-hand sum by subdividing it at each t i . In more formal language, the set of all left-hand Riemann sums and the set of all right-hand Riemann sums are each cofinal in the set of all tagged partitions.
Another popular restriction is the use of regular subdivisions of an interval. For example, the n th regular subdivision of [0, 1] consists of the intervals [ 0 , 1 n ] , [ 1 n , 2 n ] , … , [ n − 1 n , 1 ] . {\displaystyle \left[0,{\frac {1}{n}}\right],\left[{\frac {1}{n}},{\frac {2}{n}}\right],\ldots ,\left[{\frac {n-1}{n}},1\right].}
Again, this restriction alone does not cause a problem, but the reasoning required to see this fact is more difficult than in the case of left-hand and right-hand Riemann sums.
However, combining these restrictions, so that one uses only left-hand or right-hand Riemann sums on regularly divided intervals, is dangerous. If a function is known in advance to be Riemann integrable, then this technique will give the correct value of the integral. But under these conditions the indicator function I Q {\displaystyle I_{\mathbb {Q} }} will appear to be integrable on [0, 1] with integral equal to one: Every endpoint of every subinterval will be a rational number, so the function will always be evaluated at rational numbers, and hence it will appear to always equal one. The problem with this definition becomes apparent when we try to split the integral into two pieces. The following equation ought to hold: ∫ 0 2 − 1 I Q ( x ) d x + ∫ 2 − 1 1 I Q ( x ) d x = ∫ 0 1 I Q ( x ) d x . {\displaystyle \int _{0}^{{\sqrt {2}}-1}I_{\mathbb {Q} }(x)\,dx+\int _{{\sqrt {2}}-1}^{1}I_{\mathbb {Q} }(x)\,dx=\int _{0}^{1}I_{\mathbb {Q} }(x)\,dx.}
If we use regular subdivisions and left-hand or right-hand Riemann sums, then the two terms on the left are equal to zero, since every endpoint except 0 and 1 will be irrational, but as we have seen the term on the right will equal 1.
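A short sketch of this paradox, again assuming sympy for exact arithmetic: left-endpoint sums on regular subdivisions of [0, 1] evaluate I_Q only at rational points and give 1, while on [0, √2 − 1] every left endpoint except 0 is irrational, so the sums tend to 0.

```python
# Left-endpoint sums over regular subdivisions, evaluated exactly with sympy.
import sympy as sp

def I_Q(x):
    return 1 if x.is_rational else 0

def left_sum_regular(f, a, b, n):
    h = (b - a) / n
    return sum(f(a + k * h) for k in range(n)) * h

n = 100
print(left_sum_regular(I_Q, sp.Integer(0), sp.Integer(1), n))    # 1
print(left_sum_regular(I_Q, sp.Integer(0), sp.sqrt(2) - 1, n))   # (sqrt(2) - 1)/100, -> 0 as n grows
```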
As defined above, the Riemann integral avoids this problem by refusing to integrate I Q . {\displaystyle I_{\mathbb {Q} }.} The Lebesgue integral is defined in such a way that all these integrals are 0.
The Riemann integral is a linear transformation; that is, if f and g are Riemann-integrable on [ a , b ] and α and β are constants, then ∫ a b ( α f ( x ) + β g ( x ) ) d x = α ∫ a b f ( x ) d x + β ∫ a b g ( x ) d x . {\displaystyle \int _{a}^{b}(\alpha f(x)+\beta g(x))\,dx=\alpha \int _{a}^{b}f(x)\,dx+\beta \int _{a}^{b}g(x)\,dx.}
Because the Riemann integral of a function is a number, this makes the Riemann integral a linear functional on the vector space of Riemann-integrable functions.
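Linearity can be checked numerically with a quick sketch (an added illustration, not a proof): Riemann sums computed over a common tagged partition are themselves linear in the integrand, so the two sides below agree to rounding error.

```python
# Numerical check of linearity of the Riemann integral via midpoint Riemann sums.
def riemann_sum(f, xs, tags):
    return sum(f(t) * (b - a) for a, b, t in zip(xs, xs[1:], tags))

n = 10_000
xs = [k / n for k in range(n + 1)]
tags = [(a + b) / 2 for a, b in zip(xs, xs[1:])]   # midpoint tags

f = lambda x: x * x          # integral over [0, 1] is 1/3
g = lambda x: x ** 3         # integral over [0, 1] is 1/4
alpha, beta = 2.0, -5.0

lhs = riemann_sum(lambda x: alpha * f(x) + beta * g(x), xs, tags)
rhs = alpha * riemann_sum(f, xs, tags) + beta * riemann_sum(g, xs, tags)
print(abs(lhs - rhs) < 1e-9)   # True
```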
A bounded function on a compact interval [ a , b ] is Riemann integrable if and only if it is continuous almost everywhere (the set of its points of discontinuity has measure zero , in the sense of Lebesgue measure ). This is the Lebesgue–Vitali theorem (of characterization of the Riemann integrable functions). It was proven independently by Giuseppe Vitali and by Henri Lebesgue in 1907, and uses the notion of measure zero , but makes use of neither Lebesgue's general measure nor integral.
The integrability condition can be proven in various ways, [ 4 ] [ 5 ] [ 6 ] [ 7 ] one of which is sketched below.
One direction can be proven using the oscillation definition of continuity: [ 8 ] for every positive ε , let X ε be the set of points in [ a , b ] with oscillation of at least ε . Since every point where f is discontinuous has a positive oscillation and vice versa, the set of points in [ a , b ] where f is discontinuous is equal to the union over { X 1/ n } for all natural numbers n .
If this set does not have zero Lebesgue measure , then by countable additivity of the measure there is at least one such n so that X 1/ n does not have a zero measure. Thus there is some positive number c such that every countable collection of open intervals covering X 1/ n has a total length of at least c . In particular this is also true for every such finite collection of intervals. This remains true also for X 1/ n less a finite number of points (as a finite number of points can always be covered by a finite collection of intervals with arbitrarily small total length).
For every partition of [ a , b ] , consider the set of intervals whose interiors include points from X 1/ n . These interiors constitute a finite open cover of X 1/ n , possibly up to a finite number of points (which may fall on interval edges). Thus these intervals have a total length of at least c . Since at these points f has oscillation of at least 1/ n , the infimum and supremum of f in each of these intervals differ by at least 1/ n . Thus the upper and lower sums of f differ by at least c / n . Since this is true for every partition, f is not Riemann integrable.
We now prove the converse direction using the sets X ε defined above. [ 9 ] For every ε , X ε is compact , as it is bounded (by a and b ) and closed: its complement in [ a , b ] is open, because every point whose oscillation is smaller than ε has a neighbourhood on which the oscillation of f also stays smaller than ε .
Now, suppose that f is continuous almost everywhere . Then for every ε , X ε has zero Lebesgue measure . Therefore, there is a countable collection of open intervals in [ a , b ] which is an open cover of X ε , such that the sum over all their lengths is arbitrarily small. Since X ε is compact , there is a finite subcover – a finite collection of open intervals in [ a , b ] with arbitrarily small total length that together contain all points in X ε . We denote these intervals { I ( ε ) i }, for 1 ≤ i ≤ k , for some natural k .
The complement of the union of these intervals is itself a union of a finite number of intervals, which we denote { J ( ε ) i } (for 1 ≤ i ≤ k − 1 and possibly for i = k , k + 1 as well).
We now show that for every ε > 0 , there are upper and lower sums whose difference is less than ε , from which Riemann integrability follows. To this end, we construct a partition of [ a , b ] as follows:
Denote ε 1 = ε / (2( b − a )) and ε 2 = ε / (2( M − m )) , where m and M are the infimum and supremum of f on [ a , b ] . Since we may choose intervals { I ( ε 1 ) i } with arbitrarily small total length, we choose them to have total length smaller than ε 2 .
Each of the intervals { J ( ε 1 ) i } has an empty intersection with X ε 1 , so each point in it has a neighborhood with oscillation smaller than ε 1 . These neighborhoods constitute an open cover of the interval, and since the interval is compact there is a finite subcover of them. This subcover is a finite collection of open intervals, which are subintervals of J ( ε 1 ) i (except for those that include an edge point, for which we only take their intersection with J ( ε 1 ) i ) . We take the edge points of these subintervals, for all of the intervals J ( ε 1 ) i , including the edge points of the intervals themselves, as our partition.
Thus the partition divides [ a , b ] into two kinds of intervals: subintervals of the J ( ε 1 ) i , on which the oscillation of f is smaller than ε 1 , so that their total contribution to the difference between the upper and lower sums is less than ε 1 ( b − a ) = ε /2 ; and subintervals of the I ( ε 1 ) i , whose total length is smaller than ε 2 , so that their total contribution is less than ε 2 ( M − m ) = ε /2 .
In total, the difference between the upper and lower sums of the partition is smaller than ε , as required.
In particular, any set that is at most countable has Lebesgue measure zero, and thus a bounded function (on a compact interval) with only finitely or countably many discontinuities is Riemann integrable. Another sufficient criterion for Riemann integrability over [ a , b ] , one which does not involve the concept of measure, is the existence of a right-hand (or left-hand) limit at every point in [ a , b ) (or ( a , b ] ). [ 10 ]
An indicator function of a bounded set is Riemann-integrable if and only if the set is Jordan measurable . The Riemann integral can be interpreted measure-theoretically as the integral with respect to the Jordan measure.
If a real-valued function is monotone on the interval [ a , b ] it is Riemann integrable, since its set of discontinuities is at most countable, and therefore of Lebesgue measure zero. If a real-valued function on [ a , b ] is Riemann integrable, it is Lebesgue integrable . That is, Riemann-integrability is a stronger (meaning more difficult to satisfy) condition than Lebesgue-integrability. The converse does not hold; not all Lebesgue-integrable functions are Riemann integrable.
The Lebesgue–Vitali theorem does not imply that all types of discontinuity carry the same weight in obstructing the Riemann integrability of a bounded real-valued function on [ a , b ] . In fact, certain discontinuities play no role at all in the Riemann integrability of the function, a consequence of the classification of the discontinuities of a function. [ citation needed ]
If f n is a uniformly convergent sequence on [ a , b ] with limit f , then Riemann integrability of all f n implies Riemann integrability of f , and ∫ a b f d x = ∫ a b lim n → ∞ f n d x = lim n → ∞ ∫ a b f n d x . {\displaystyle \int _{a}^{b}f\,dx=\int _{a}^{b}{\lim _{n\to \infty }{f_{n}}\,dx}=\lim _{n\to \infty }\int _{a}^{b}f_{n}\,dx.}
However, the Lebesgue monotone convergence theorem (on a monotone pointwise limit) does not hold for Riemann integrals. Thus, in Riemann integration, taking limits under the integral sign is far more difficult to logically justify than in Lebesgue integration. [ 11 ]
It is easy to extend the Riemann integral to functions with values in the Euclidean vector space R n {\displaystyle \mathbb {R} ^{n}} for any n . The integral is defined component-wise; in other words, if f = ( f 1 , ..., f n ) then ∫ f = ( ∫ f 1 , … , ∫ f n ) . {\displaystyle \int \mathbf {f} =\left(\int f_{1},\,\dots ,\int f_{n}\right).}
In particular, since the complex numbers are a real vector space , this allows the integration of complex valued functions.
The Riemann integral is only defined on bounded intervals, and it does not extend well to unbounded intervals. The simplest possible extension is to define such an integral as a limit , in other words, as an improper integral : ∫ − ∞ ∞ f ( x ) d x = lim a → − ∞ b → ∞ ∫ a b f ( x ) d x . {\displaystyle \int _{-\infty }^{\infty }f(x)\,dx=\lim _{a\to -\infty \atop b\to \infty }\int _{a}^{b}f(x)\,dx.}
This definition carries with it some subtleties, such as the fact that it is not always equivalent to compute the Cauchy principal value lim a → ∞ ∫ − a a f ( x ) d x . {\displaystyle \lim _{a\to \infty }\int _{-a}^{a}f(x)\,dx.}
For example, consider the sign function f ( x ) = sgn( x ) which is 0 at x = 0 , 1 for x > 0 , and −1 for x < 0 . By symmetry, ∫ − a a f ( x ) d x = 0 {\displaystyle \int _{-a}^{a}f(x)\,dx=0} always, regardless of a . But there are many ways for the interval of integration to expand to fill the real line, and other ways can produce different results; in other words, the multivariate limit does not always exist. We can compute ∫ − a 2 a f ( x ) d x = a , ∫ − 2 a a f ( x ) d x = − a . {\displaystyle {\begin{aligned}\int _{-a}^{2a}f(x)\,dx&=a,\\\int _{-2a}^{a}f(x)\,dx&=-a.\end{aligned}}}
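The three values quoted above can be checked with a small sketch (an added illustration; the particular value a = 3 and the midpoint-rule integrator are assumptions made for the demonstration).

```python
# Expanding the interval of integration in different ways gives different limits for sgn.
def sgn(x):
    return 0.0 if x == 0 else (1.0 if x > 0 else -1.0)

def midpoint_integral(f, a, b, n=10_000):
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

a = 3.0
print(midpoint_integral(sgn, -a, a))       # ~0.0 (symmetric interval)
print(midpoint_integral(sgn, -a, 2 * a))   # ~3.0  (= a)
print(midpoint_integral(sgn, -2 * a, a))   # ~-3.0 (= -a)
```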
In general, this improper Riemann integral is undefined. Even standardizing a way for the interval to approach the real line does not work because it leads to disturbingly counterintuitive results. If we agree (for instance) that the improper integral should always be lim a → ∞ ∫ − a a f ( x ) d x , {\displaystyle \lim _{a\to \infty }\int _{-a}^{a}f(x)\,dx,} then the integral of the translation f ( x − 1) is −2, so this definition is not invariant under shifts, a highly undesirable property. In fact, not only does this function not have an improper Riemann integral, its Lebesgue integral is also undefined (it equals ∞ − ∞ ).
Unfortunately, the improper Riemann integral is not powerful enough. The most severe problem is that there are no widely applicable theorems for commuting improper Riemann integrals with limits of functions. In applications such as Fourier series it is important to be able to approximate the integral of a function using integrals of approximations to the function. For proper Riemann integrals, a standard theorem states that if f n is a sequence of functions that converge uniformly to f on a compact set [ a , b ] , then lim n → ∞ ∫ a b f n ( x ) d x = ∫ a b f ( x ) d x . {\displaystyle \lim _{n\to \infty }\int _{a}^{b}f_{n}(x)\,dx=\int _{a}^{b}f(x)\,dx.}
On non-compact intervals such as the real line, this is false. For example, take f n ( x ) to be n −1 on [0, n ] and zero elsewhere. For all n we have: ∫ − ∞ ∞ f n d x = 1. {\displaystyle \int _{-\infty }^{\infty }f_{n}\,dx=1.}
The sequence ( f n ) converges uniformly to the zero function, and clearly the integral of the zero function is zero. Consequently, ∫ − ∞ ∞ f d x ≠ lim n → ∞ ∫ − ∞ ∞ f n d x . {\displaystyle \int _{-\infty }^{\infty }f\,dx\neq \lim _{n\to \infty }\int _{-\infty }^{\infty }f_{n}\,dx.}
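A sketch of this counterexample (an added illustration; the grid sizes and the midpoint-rule integrator are assumptions): each f n integrates to 1 over the whole line, while its supremum 1/ n tends to 0, so the convergence to the zero function is uniform.

```python
# f_n = 1/n on [0, n] and 0 elsewhere: the integrals stay at 1 while sup|f_n| -> 0.
def f_n(n):
    return lambda x: 1.0 / n if 0.0 <= x <= n else 0.0

def midpoint_integral(f, a, b, m=100_000):
    h = (b - a) / m
    return sum(f(a + (k + 0.5) * h) for k in range(m)) * h

for n in (1, 10, 100):
    fn = f_n(n)
    area = midpoint_integral(fn, -1.0, n + 1.0)   # covers the support [0, n]
    sup_norm = 1.0 / n                            # sup |f_n| on the real line
    print(n, round(area, 3), sup_norm)            # area stays 1.0 while sup|f_n| -> 0
```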
This demonstrates that for integrals on unbounded intervals, uniform convergence of a function is not strong enough to allow passing a limit through an integral sign. This makes the Riemann integral unworkable in applications (even though the Riemann integral assigns both sides the correct value), because there is no other general criterion for exchanging a limit and a Riemann integral, and without such a criterion it is difficult to approximate integrals by approximating their integrands.
A better route is to abandon the Riemann integral for the Lebesgue integral . The definition of the Lebesgue integral is not obviously a generalization of the Riemann integral, but it is not hard to prove that every Riemann-integrable function is Lebesgue-integrable and that the values of the two integrals agree whenever they are both defined. Moreover, a function f defined on a bounded interval is Riemann-integrable if and only if it is bounded and the set of points where f is discontinuous has Lebesgue measure zero.
An integral which is in fact a direct generalization of the Riemann integral is the Henstock–Kurzweil integral .
Another way of generalizing the Riemann integral is to replace the factors x k + 1 − x k in the definition of a Riemann sum by something else; roughly speaking, this gives the interval of integration a different notion of length. This is the approach taken by the Riemann–Stieltjes integral .
In multivariable calculus , the Riemann integrals for functions from R n → R {\displaystyle \mathbb {R} ^{n}\to \mathbb {R} } are multiple integrals .
The Riemann integral is unsuitable for many theoretical purposes. Some of the technical deficiencies in Riemann integration can be remedied with the Riemann–Stieltjes integral , and most disappear with the Lebesgue integral , though the latter does not have a satisfactory treatment of improper integrals . The gauge integral is a generalisation of the Lebesgue integral that is at the same time closer to the Riemann integral.
These more general theories allow for the integration of more "jagged" or "highly oscillating" functions whose Riemann integral does not exist; but the theories give the same value as the Riemann integral when it does exist.
In educational settings, the Darboux integral offers a simpler definition that is easier to work with; it can be used to introduce the Riemann integral. The Darboux integral is defined whenever the Riemann integral is, and always gives the same result. Conversely, the gauge integral is a simple but more powerful generalization of the Riemann integral and has led some educators to advocate that it should replace the Riemann integral in introductory calculus courses. [ 12 ] | https://en.wikipedia.org/wiki/Lebesgue_integrability_condition |
In mathematics , given a locally Lebesgue integrable function f {\displaystyle f} on R k {\displaystyle \mathbb {R} ^{k}} , a point x {\displaystyle x} in the domain of f {\displaystyle f} is a Lebesgue point if [ 1 ] lim r → 0 + 1 λ ( B ( x , r ) ) ∫ B ( x , r ) | f ( y ) − f ( x ) | d y = 0. {\displaystyle \lim _{r\to 0^{+}}{\frac {1}{\lambda (B(x,r))}}\int _{B(x,r)}|f(y)-f(x)|\,\mathrm {d} y=0.}
Here, B ( x , r ) {\displaystyle B(x,r)} is a ball centered at x {\displaystyle x} with radius r > 0 {\displaystyle r>0} , and λ ( B ( x , r ) ) {\displaystyle \lambda (B(x,r))} is its Lebesgue measure . The Lebesgue points of f {\displaystyle f} are thus points where f {\displaystyle f} does not oscillate too much, in an average sense. [ 2 ]
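A one-dimensional numerical sketch of the defining limit (an added illustration, not part of the article; the step function and the grid size are assumptions): the average of | f ( y ) − f ( x )| over shrinking balls goes to 0 at a point of continuity, but not at a jump.

```python
# Estimate (1/λ(B(x,r))) ∫_B |f(y) - f(x)| dy over shrinking 1-D balls (x - r, x + r).
def average_oscillation(f, x, r, m=10_000):
    h = 2 * r / m
    total = sum(abs(f(x - r + (k + 0.5) * h) - f(x)) for k in range(m))
    return total * h / (2 * r)

step = lambda y: 0.0 if y < 0 else 1.0

for r in (0.1, 0.01, 0.001):
    print(r, average_oscillation(step, 0.5, r),   # -> 0: x = 0.5 is a Lebesgue point
             average_oscillation(step, 0.0, r))   # stays 0.5: the jump at 0 is not
```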
The Lebesgue differentiation theorem states that, given any f ∈ L 1 ( R k ) {\displaystyle f\in L^{1}(\mathbb {R} ^{k})} , almost every x {\displaystyle x} is a Lebesgue point of f {\displaystyle f} . [ 3 ] | https://en.wikipedia.org/wiki/Lebesgue_point |
The Leblanc process was an early industrial process for making soda ash ( sodium carbonate ) used throughout the 19th century, named after its inventor, Nicolas Leblanc . It involved two stages: making sodium sulfate from sodium chloride , followed by reacting the sodium sulfate with coal and calcium carbonate to make sodium carbonate. The process gradually became obsolete after the development of the Solvay process .
Soda ash ( sodium carbonate ) and potash ( potassium carbonate ), collectively termed alkali , are vital chemicals in the glass , textile , soap , and paper industries. The traditional source of alkali in western Europe had been potash obtained from wood ashes. However, by the 13th century, deforestation had rendered this means of production uneconomical, and alkali had to be imported. Potash was imported from North America, Scandinavia, and Russia, where large forests still stood. Soda ash was imported from Spain and the Canary Islands, where it was produced from the ashes of glasswort plants (called barilla ashes in Spain), or imported from Syria. [ 1 ] The soda ash from glasswort plant ashes was mainly a mixture of sodium carbonate and potassium carbonate. In addition in Egypt, naturally occurring sodium carbonate, the mineral natron , was mined from dry lakebeds. In Britain, the only local source of alkali was from kelp , which washed ashore in Scotland and Ireland. [ 2 ] [ 3 ]
In 1783, King Louis XVI of France and the French Academy of Sciences offered a prize of 2400 livres for a method to produce alkali from sea salt ( sodium chloride ). In 1791, Nicolas Leblanc , physician to Louis Philip II, Duke of Orléans , patented a solution. That same year he built the first Leblanc plant for the Duke at Saint-Denis , and this began to produce 320 tons of soda per year. [ 4 ] He was denied his prize money because of the French Revolution . [ 5 ]
For more recent history, see industrial history below.
In the first step, sodium chloride is treated with sulfuric acid in the Mannheim process . This reaction produces sodium sulfate (called the salt cake ) and hydrogen chloride : 2 NaCl + H 2 SO 4 → Na 2 SO 4 + 2 HCl
This chemical reaction had been discovered in 1772 by the Swedish chemist Carl Wilhelm Scheele . Leblanc's contribution was the second step, in which a mixture of the salt cake and crushed limestone ( calcium carbonate ) was reduced by heating with coal . [ 6 ] This conversion entails two parts. First is the carbothermic reaction whereby the coal, a source of carbon , reduces the sulfate to sulfide : Na 2 SO 4 + 2 C → Na 2 S + 2 CO 2
The second part is the reaction of the sodium sulfide with the calcium carbonate to produce sodium carbonate and calcium sulfide : Na 2 S + CaCO 3 → Na 2 CO 3 + CaS. This mixture is called black ash . [ citation needed ]
The soda ash is extracted from the black ash with water. Evaporation of this extract yields solid sodium carbonate. This extraction process was termed lixiviation. [ citation needed ]
In response to the Alkali Act , the noxious calcium sulfide was converted into calcium carbonate by treatment with carbon dioxide and water, releasing hydrogen sulfide: CaS + CO 2 + H 2 O → CaCO 3 + H 2 S
The hydrogen sulfide can be used as a sulfur source for the lead chamber process to produce the sulfuric acid used in the first step of the Leblanc process.
Likewise, by 1874 the Deacon process was invented, oxidizing the hydrochloric acid over a copper catalyst: 4 HCl + O 2 → 2 Cl 2 + 2 H 2 O
The chlorine would be sold for bleach in paper and textile manufacturing. Eventually, the chlorine sales became the purpose of the Leblanc process. The inexpensive chlorine was a contributor to the development of the chloralkali process . [ citation needed ]
The sodium chloride is initially mixed with concentrated sulfuric acid and the mixture exposed to low heat. The hydrogen chloride gas bubbles off; before gas absorption towers were introduced, it was simply discharged to the atmosphere. This continues until all that is left is a fused mass. This mass still contains enough chloride to contaminate the later stages of the process. The mass is then exposed to direct flame, which evaporates nearly all of the remaining chloride. [ 7 ] [ 8 ]
The coal used in the next step must be low in nitrogen to avoid the formation of cyanide . The calcium carbonate, in the form of limestone or chalk, should be low in magnesia and silica. The weight ratio of the charge is 2:2:1 of salt cake, calcium carbonate, and carbon respectively. It is fired in a reverberatory furnace at about 1000 °C. [ 9 ] Sometimes the reverberatory furnace rotated and thus was called a "revolver". [ 10 ]
The black-ash product of firing must be lixiviated right away to prevent oxidation of sulfides back to sulfate. [ 9 ] In the lixiviation process, the black-ash is completely covered in water, again to prevent oxidation. To optimize the leaching of soluble material, the lixiviation is done in cascaded stages. That is, pure water is used on the black-ash that has already been through prior stages. The liquor from that stage is used to leach an earlier stage of the black-ash, and so on. [ 9 ]
The final liquor is treated by blowing carbon dioxide through it. This precipitates dissolved calcium and other impurities. It also volatilizes the sulfide, which is carried off as H 2 S gas. Any residual sulfide can be subsequently precipitated by adding zinc hydroxide . The liquor is separated from the precipitate and evaporated using waste heat from the reverberatory furnace. The resulting ash is then redissolved into concentrated solution in hot water. Solids that fail to dissolve are separated. The solution is then cooled to recrystallize nearly pure sodium carbonate decahydrate. [ 9 ]
Leblanc established the first Leblanc process plant in 1791 in St. Denis . However, French Revolutionaries seized the plant, along with the rest of Louis Philip's estate, in 1794, and publicized Leblanc's trade secrets . Napoleon I returned the plant to Leblanc in 1801, but lacking the funds to repair it and compete against other soda works that had been established in the meantime, Leblanc committed suicide in 1806. [ 5 ]
By the early 19th century, French soda ash producers were making 10,000 - 15,000 tons annually. However, it was in Britain that the Leblanc process became most widely practiced. [ 5 ] The first British soda works using the Leblanc process was built by the Losh family of iron founders at the Losh, Wilson and Bell works in Walker on the River Tyne in 1816, but steep British tariffs on salt production hindered the economics of the Leblanc process and kept such operations on a small scale until 1824. Following the repeal of the salt tariff, the British soda industry grew dramatically. The Bonnington Chemical Works was possibly the earliest production, [ 11 ] and the chemical works established by James Muspratt in Liverpool and Flint , and by Charles Tennant near Glasgow became some of the largest in the world. Muspratt's Liverpool works enjoyed proximity and transport links to the Cheshire salt mines, the St Helens coalfields and the North Wales and Derbyshire limestone quarries. [ 12 ] By 1852, annual soda production had reached 140,000 tons in Britain and 45,000 tons in France. [ 5 ] By the 1870s, the British soda output of 200,000 tons annually exceeded that of all other nations in the world combined. [ citation needed ]
In 1861, the Belgian chemist Ernest Solvay developed a more direct process for producing soda ash from salt and limestone through the use of ammonia . The only waste product of this Solvay process was calcium chloride , and so it was both more economical and less polluting than the Leblanc method. From the late 1870s, Solvay-based soda works on the European continent provided stiff competition in their home markets to the Leblanc-based British soda industry. Additionally, the Brunner Mond Solvay plant, which opened in 1874 at Winnington near Northwich , provided fierce competition nationally. Leblanc producers were unable to compete with Solvay soda ash, and their soda ash production was effectively an adjunct to their still profitable production of chlorine, bleaching powder etc. (The unwanted by-products had become the profitable products). The development of electrolytic methods of chlorine production removed that source of profits as well, and there followed a decline moderated only by "gentlemen's agreements" with Solvay producers. [ 13 ] By 1900, 90% of the world's soda production was through the Solvay method. In North America the Solvay process was in turn displaced by the mining of trona , discovered in 1938, which caused the closure of the last North American Solvay plant in 1986.
The last Leblanc-based soda ash plant in the West closed in the early 1920s, [ 3 ] but when during WWII Nationalist China had to evacuate its industry to the inland rural areas, the difficulties in importing and maintaining complex equipment forced them to temporarily re-establish the Leblanc process. [ 14 ]
However, the Solvay process does not work for the manufacture of potassium carbonate , because it depends on precipitating the sparingly soluble sodium bicarbonate, and the corresponding potassium bicarbonate is too soluble.
The Leblanc process plants were quite damaging to the local environment. The process of generating salt cake from salt and sulfuric acid released hydrochloric acid gas , and because this acid was industrially useless in the early 19th century, it was simply vented into the atmosphere. Also, an insoluble smelly solid waste was produced. For every 8 tons of soda ash, the process produced 5.5 tons of hydrogen chloride and 7 tons of calcium sulfide waste. This solid waste (known as galligu) had no economic value, and was piled in heaps and spread on fields near the soda works, where it weathered to release hydrogen sulfide , the toxic gas responsible for the odor of rotten eggs. [ citation needed ]
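A back-of-the-envelope check of these figures (an added sketch, assuming the idealized overall reactions given earlier and standard molar masses): per mole of soda ash the process yields 2 mol of hydrogen chloride and 1 mol of calcium sulfide.

```python
# Stoichiometric check of the quoted by-product quantities per 8 tons of soda ash.
M = {"Na2CO3": 105.99, "HCl": 36.46, "CaS": 72.14}   # molar masses, g/mol

soda_ash_tons = 8.0
mole_ratio = soda_ash_tons / M["Na2CO3"]     # tons scale linearly with molar masses

hcl_tons = 2 * M["HCl"] * mole_ratio         # ~5.5 tons, matching the figure in the text
cas_tons = 1 * M["CaS"] * mole_ratio         # ~5.4 tons of CaS alone
print(round(hcl_tons, 1), round(cas_tons, 1))
# The reported 7 tons of solid waste exceeds the pure CaS figure, presumably because
# the galligu also contained unreacted lime, excess coal and other residues.
```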
Because of their noxious emissions, Leblanc soda works became targets of lawsuits and legislation. An 1839 suit against soda works alleged, "the gas from these manufactories is of such a deleterious nature as to blight everything within its influence, and is alike baneful to health and property. The herbage of the fields in their vicinity is scorched, the gardens neither yield fruit nor vegetables; many flourishing trees have lately become rotten naked sticks. Cattle and poultry droop and pine away. It tarnishes the furniture in our houses, and when we are exposed to it, which is of frequent occurrence, we are afflicted with coughs and pains in the head ... all of which we attribute to the Alkali works." [ 15 ]
In 1863, the British Parliament passed the Alkali Act 1863 , the first of several Alkali Acts and the first modern air pollution legislation. The act required that no more than 5% of the hydrochloric acid produced by alkali plants be vented to the atmosphere. To comply with the legislation, soda works passed the escaping hydrogen chloride gas up through a tower packed with charcoal , where it was absorbed by water flowing in the other direction. The chemical works usually dumped the resulting hydrochloric acid solution into nearby bodies of water, killing fish and other aquatic life. [ citation needed ]
The Leblanc process also meant very unpleasant working conditions for the operators. It originally required careful operation and frequent operator interventions (some involving heavy manual labour) into processes giving off hot noxious chemicals. [ 16 ] Sometimes, workmen cleaning the reaction products out of the reverberatory furnace wore cloth mouth-and-nose gags to keep dust and aerosols out of the lungs. [ 17 ] [ 18 ]
This improved somewhat later as processes were more heavily mechanised to improve economics and uniformity of product. [ citation needed ]
By the 1880s, methods for converting the hydrochloric acid to chlorine gas for the manufacture of bleaching powder and for reclaiming the sulfur in the calcium sulfide waste had been discovered, but the Leblanc process remained more wasteful and more polluting than the Solvay process . The same is true when it is compared with the later electrolytical processes which eventually replaced it for chlorine production. [ citation needed ]
There is a strong case for arguing that Leblanc process waste is the most endangered habitat in the UK, since the waste weathers down to calcium carbonate and produces a haven for plants that thrive in lime-rich soils, known as calcicoles . Only four such sites have survived the new millennium; three are protected as local nature reserves of which the largest, at Nob End near Bolton , is an SSSI and Local Nature Reserve - largely for its sparse orchid-calcicole flora, most unusual in an area with acid soils. This alkaline island contains within it an acid island, where acid boiler slag was deposited, which now shows up as a zone dominated by heather, Calluna vulgaris . [ 19 ] | https://en.wikipedia.org/wiki/Leblanc_process |
The Lebombo bone is a bone tool made of a baboon fibula with incised markings discovered in Border Cave in the Lebombo Mountains located between South Africa and Eswatini . [ 1 ] Changes in the section of the notches indicate the use of different cutting edges, which the bone's discoverer, Peter Beaumont, views as evidence for their having been made, like other markings found all over the world, during participation in rituals.
The bone is between 43,000 and 42,000 years old, according to 24 radiocarbon datings . [ 2 ] This is far older than the Ishango bone with which it is sometimes confused. Other notched bones are 80,000 years old but it is unclear if the notches are merely decorative or if they
bear a functional meaning. [ 3 ] The bone has been conjectured to be a tally stick . [ 4 ] | https://en.wikipedia.org/wiki/Lebombo_bone |
Lecanopteris is a genus of ferns in the family Polypodiaceae , subfamily Microsoroideae , according to the Pteridophyte Phylogeny Group classification of 2016 (PPG I). [ 1 ] They have swollen hollow rhizomes that provide homes for symbiotic ants. All are epiphytic plants that naturally occur from Southeast Asia to New Guinea . [ 2 ] [ 3 ] Several species are in commerce, [ 4 ] being grown as houseplants and greenhouse curiosities.
The monophyletic genus Lecanopteris has been divided into two sub-genera, Lecanopteris and Myrmecopteris . All the species have rhizomes associated with ants. Subgenus Lecanopteris was monophyletic, and Myrmecopteris was paraphyletic . [ 2 ] [ 3 ] A 2019 molecular phylogenetic study suggested that the genus was related to three other clades, treated as genera, related as shown in the following cladogram. [ 5 ]
(Cladogram: Bosmania , Dendroconche , Zealandia , and Lecanopteris s.s. shown as related clades.)
As of February 2020 , the Checklist of Ferns and Lycophytes of the World recognizes the segregate genera; other sources do not.
As of February 2020 , the Checklist of Ferns and Lycophytes of the World recognized a number of species in Lecanopteris s.s. [ 6 ] | https://en.wikipedia.org/wiki/Lecanopteris
A lichen has lecanorine fruiting body parts if they are shaped like a plate with a ring around them, and that ring is made of tissue similar to the main non-fruiting body part of the lichen. [ 1 ] The name comes from the name of the lichen genus Lecanora , whose members have such apothecia. [ 1 ] If a lichen has lecanorine apothecia, the lichen itself is sometimes described as being lecanorine.
| https://en.wikipedia.org/wiki/Lecanorine_lichen
In electronics , a Lecher line or Lecher wires is a pair of parallel wires or rods that were used to measure the wavelength of radio waves , mainly at VHF , UHF and microwave frequencies . [ 1 ] [ 2 ] They form a short length of balanced transmission line (a resonant stub ). When attached to a source of radio-frequency power such as a radio transmitter, the radio waves form standing waves along their length. By sliding a conductive bar that bridges the two wires along their length, the length of the waves can be physically measured. Austrian physicist Ernst Lecher , improving on techniques used by Oliver Lodge [ 3 ] and Heinrich Hertz , [ 4 ] developed this method of measuring wavelength around 1888. [ 5 ] [ 6 ] [ 7 ] Lecher lines were used as frequency measuring devices until inexpensive frequency counters became available after World War 2. They were also used as components , often called " resonant stubs ", in VHF, UHF and microwave radio equipment such as transmitters , radar sets, and television sets , serving as tank circuits , filters , and impedance-matching devices. [ 8 ] They are used at frequencies between HF / VHF , where lumped components are used, and UHF / SHF , where resonant cavities are more practical.
A Lecher line is a pair of parallel uninsulated wires or rods held a precise distance apart. [ 9 ] [ 1 ] [ 10 ] The separation is not critical but should be a small fraction of the wavelength; it ranges from less than a centimeter to over 10 cm. The length of the wires depends on the wavelength involved; lines used for measurement are generally several wavelengths long. The uniform spacing of the wires makes them a transmission line , conducting waves at a constant speed very close to the speed of light. [ 10 ] One end of the rods is connected to the source of RF power, such as the output of a radio transmitter . At the other end the rods are connected together with a conductive bar between them. This short circuiting termination reflects the waves. The waves reflected from the short-circuited end interfere with the outgoing waves, creating a sinusoidal standing wave of voltage and current on the line. The voltage goes close to zero at nodes located at multiples of half a wavelength from the end, with maxima called antinodes located midway between the nodes. [ 11 ] Therefore, the wavelength λ can be determined by finding the location of two successive nodes (or antinodes) and measuring the distance between them, and multiplying by two. The frequency f of the waves can be calculated from the wavelength and the speed of the waves, which is approximately the speed of light c : f = c λ {\displaystyle f={\frac {c}{\lambda }}}
The nodes are much sharper than the antinodes, because the change of voltage with distance along the line is maximum at the nodes, so they are used. [ 10 ] [ 9 ]
Two methods are employed to find the nodes. [ 11 ] One is to use some type of voltage indicator, such as an RF voltmeter or light bulb , attached to a pair of contacts that slide up and down the wires. [ 12 ] [ 11 ] When the bulb reaches a node, the voltage between the wires goes to zero, so the bulb goes out. If the indicator has too low an impedance it will disturb the standing wave on the line, so a high impedance indicator must be used; a regular incandescent bulb has too low a resistance. Lecher and early researchers used long thin Geissler tubes , laying the glass tube directly across the line. The high voltage of early transmitters excited a glow discharge in the gas. In modern times small neon bulbs are often used. One problem with using glow discharge bulbs is that their high striking voltage makes it difficult to localize the exact voltage minimum. In precision wavemeters an RF voltmeter is used.
The other method used to find the nodes is to slide the terminating shorting bar up and down the line, and measure the current flowing into the line with an RF ammeter in the feeder line. [ 9 ] [ 11 ] The current on the Lecher line, like the voltage, forms a standing wave with nodes (points of minimum current) every half wavelength. So the line presents an impedance to the applied power which varies with its length; when a current node is located at the entrance to the line, the current drawn from the source, measured by the ammeter, will be minimum. The shorting bar is slid down the line and the position of two successive current minima is noted, the distance between them is half a wavelength.
With care, Lecher lines can measure frequency to an accuracy of 0.1%. [ 9 ] [ 1 ] [ 10 ]
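The arithmetic involved is simple, as the following sketch shows (an added illustration; the node spacing used here is an assumed example, not a measurement from the text).

```python
# Convert a measured node spacing on a Lecher line to wavelength and frequency.
c = 299_792_458.0           # speed of light in m/s (waves on the line travel slightly slower)

node_spacing_m = 0.35        # assumed distance between two successive voltage nodes
wavelength_m = 2 * node_spacing_m           # nodes occur every half wavelength
frequency_hz = c / wavelength_m

print(wavelength_m, frequency_hz / 1e6)     # 0.7 m  ->  about 428 MHz
```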
A major attraction of Lecher lines was they were a way to measure frequency without complicated electronics, and could be improvised from simple materials found in a typical shop. Lecher line wavemeters are usually built on a frame which holds the conductors rigid and horizontal, with a track that the shorting bar or indicator rides on, and a built-in measuring scale so the distance between nodes can be read out. [ 9 ] The frame must be made of a nonconductive material like wood, because any conducting objects near the line can disturb the standing wave pattern. [ 9 ] The RF current is usually coupled into the line through a single turn loop of wire at one end, which can be held near a transmitter's tank coil .
A simpler design is a U-shaped metal bar, marked with graduations, with a sliding shorting bar. [ 1 ] In operation, the U end acts as a coupling link and is held near the transmitter's tank coil, and the shorting bar is slid out along the arms until the transmitter's plate current dips, indicating the first node has been reached. Then the distance from the end of the link to the shorting bar is a half-wavelength. The shorting bar should always be slid out , away from the link end, not in , to avoid converging on a higher order node by mistake.
In many ways Lecher lines are an electrical version of the Kundt's tube experiment which is used to measure the wavelength of sound waves .
If the frequency f of the radio waves is independently known, the wavelength λ measured on a Lecher line can be used to calculate the speed of the waves, c , which is approximately equal to the speed of light : c = λ f {\displaystyle c=\lambda f}
In 1891, French physicist Prosper-René Blondlot made the first [ 13 ] measurement of the speed of radio waves, using this method. [ 14 ] [ 15 ] He used 13 different frequencies between 10 and 30 MHz and obtained an average value of 297,600 km/s, which is within 1% of the current value for the speed of light. [ 13 ] Other researchers repeated the experiment with greater accuracy. This was an important confirmation of James Clerk Maxwell 's theory that light was an electromagnetic wave like radio waves.
Short lengths of Lecher line are often used as high Q resonant circuits , termed resonant stubs . For example, a quarter wavelength (λ/4) shorted Lecher line acts like a parallel resonant circuit, appearing as a high impedance at its resonant frequency and low impedance at other frequencies. They are used because at UHF frequencies the value of inductors and capacitors needed for ' lumped component ' tuned circuits becomes extremely low, making them difficult to fabricate and sensitive to parasitic capacitance and inductance. One difference between them is that transmission line stubs like Lecher lines also resonate at odd-number multiples of their fundamental resonant frequency, while lumped LC circuits just have one resonant frequency.
Lecher line circuits can be used for the tank circuits of UHF power amplifiers . [ 16 ] For instance, the twin tetrode (QQV03-20) 432 MHz amplifier described by G.R Jessop [ 17 ] uses a Lecher line anode tank.
Quarter-wave Lecher lines are used for the tuned circuits in the RF amplifier and local oscillator portions of modern television sets . The tuning necessary to select different stations is done by varactor diodes across the Lecher line. [ 18 ]
The separation between the Lecher bars does not affect the position of the standing waves on the line, but it does determine the characteristic impedance , which can be important for matching the line to the source of the radio frequency energy for efficient power transfer. For two parallel cylindrical conductors of diameter d and spacing D ,
Z 0 = 276 ε r log ( D d + ( D d ) 2 − 1 ) = 120 ε r cosh − 1 ( D d ) {\displaystyle Z_{0}={\frac {276}{\sqrt {\epsilon _{r}}}}\log \left({\frac {D}{d}}+{\sqrt {\left({\frac {D}{d}}\right)^{2}-1}}\right)={\frac {120}{\sqrt {\epsilon _{r}}}}\cosh ^{-1}\left({\frac {D}{d}}\right)}
For parallel wires the formula for capacitance (per unit length) C is C = π ϵ cosh − 1 ( D d ) {\displaystyle C={\frac {\pi \epsilon }{\cosh ^{-1}\left({\frac {D}{d}}\right)}}}
Hence, as the spacing D is increased the capacitance per unit length falls, and the characteristic impedance Z 0 = 1 / ( v C ) {\displaystyle Z_{0}=1/(vC)} , where v is the propagation velocity on the line, rises.
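The two forms of the impedance formula can be evaluated side by side in a short sketch (an added illustration, assuming an air dielectric ε r = 1 and spacing and diameter values chosen only for the example).

```python
# Characteristic impedance of a parallel-wire (Lecher) line from its geometry.
import math

def z0_acosh(D, d, eps_r=1.0):
    return 120.0 / math.sqrt(eps_r) * math.acosh(D / d)

def z0_log_form(D, d, eps_r=1.0):
    x = D / d
    return 276.0 / math.sqrt(eps_r) * math.log10(x + math.sqrt(x * x - 1.0))

D, d = 0.06, 0.002           # 6 cm spacing, 2 mm rod diameter (illustrative values)
print(round(z0_acosh(D, d), 1), round(z0_log_form(D, d), 1))
# Both ~491 ohms; the fraction-of-an-ohm gap comes from rounding 120*ln(10) = 276.31 to 276.
```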
Commercially available 300 and 450 ohm twin lead balanced ribbon feeder can be used as a fixed length Lecher line (resonant stub). | https://en.wikipedia.org/wiki/Lecher_line |
Lecithinase is a type of phospholipase that acts upon lecithin . [ 1 ] [ 2 ]
It can be produced by Clostridium perfringens , Staphylococcus aureus , Pseudomonas aeruginosa or Listeria monocytogenes . C. perfringens alpha toxin (lecithinase) causes myonecrosis and hemolysis . The lecithinase of S. aureus is used in detection of coagulase-positive strains, because of the strong correlation between lecithinase activity and coagulase activity.
| https://en.wikipedia.org/wiki/Lecithinase
Lectins are carbohydrate -binding proteins that are highly specific for sugar groups that are part of other molecules, and so cause agglutination of particular cells or precipitation of glycoconjugates and polysaccharides . Lectins have a role in recognition at the cellular and molecular level and play numerous roles in biological recognition phenomena involving cells, carbohydrates, and proteins. [ 1 ] [ 2 ] Lectins also mediate attachment and binding of bacteria , viruses , and fungi to their intended targets.
Lectins are found in many foods. Some foods, such as beans and grains, need to be cooked, fermented or sprouted to reduce lectin content. Some lectins are beneficial, such as CLEC11A , which promotes bone growth, while others may be powerful toxins such as ricin . [ 3 ]
Lectins may be disabled by specific mono- and oligosaccharides , which bind to ingested lectins from grains, legumes, nightshade plants, and dairy; binding can prevent their attachment to the carbohydrates within the cell membrane. The selectivity of lectins means that they are useful for analyzing blood type , and they have been researched for potential use in genetically engineered crops to transfer pest resistance.
(Table of lectin sugar specificities: examples include branched α-mannosidic structures (high α-mannose type, or hybrid type and biantennary complex type N-glycans) and R2-GlcNAcβ1-4(Fucα1-6)GlcNAc-R1.)
William C. Boyd, working first alone and then together with Elizabeth Shapleigh, [ 5 ] introduced the term "lectin" in 1954 from the Latin word lectus , "chosen" (from the verb legere , to choose or pick out). [ 6 ]
Lectins may bind to a soluble carbohydrate or to a carbohydrate moiety that is a part of a glycoprotein or glycolipid . They typically agglutinate certain animal cells and/or precipitate glycoconjugates . Most lectins do not possess enzymatic activity.
Lectins serve a variety of functions in animals, including the regulation of cell adhesion, the regulation of glycoprotein synthesis and of protein levels in the blood, and the recognition of carbohydrates by the innate immune system .
The function of lectins in plants ( legume lectin ) is still uncertain. Once thought to be necessary for rhizobia binding, this proposed function was ruled out through lectin-knockout transgene studies. [ 9 ]
The large concentration of lectins in plant seeds decreases with growth, and suggests a role in plant germination and perhaps in the seed's survival itself. The binding of glycoproteins on the surface of parasitic cells also is believed to be a function. Several plant lectins have been found to recognize noncarbohydrate ligands that are primarily hydrophobic in nature, including adenine , auxins , cytokinin , and indole acetic acid , as well as water-soluble porphyrins . These interactions may be physiologically relevant, since some of these molecules function as phytohormones . [ 10 ]
Lectin receptor kinases (LecRKs) are believed to recognize damage associated molecular patterns (DAMPs), which are created or released from herbivore attack. [ citation needed ] In Arabidopsis , legume-type LecRKs Clade 1 has 11 LecRK proteins. LecRK-1.8 has been reported to recognize extracellular NAD molecules and LecRK-1.9 has been reported to recognize extracellular ATP molecules. [ citation needed ]
Proteins and lectins can be extracted, analysed and identified by similar processes. For example, cottonseed contains compounds of interest in studies of the extraction and purification of proteins. [ 11 ]
Some hepatitis C viral glycoproteins may attach to C-type lectins on the host cell surface (liver cells) to initiate infection. [ 12 ] To avoid clearance from the body by the innate immune system , pathogens (e.g., virus particles and bacteria that infect human cells) often express surface lectins known as adhesins and hemagglutinins that bind to tissue-specific glycans on host cell-surface glycoproteins and glycolipids . [ 13 ] Multiple viruses, including influenza and several viruses in the Paramyxoviridae family, use this mechanism to bind and gain entry to target cells. [ 14 ]
Purified lectins are important in a clinical setting because they are used for blood typing . [ 15 ] Some of the glycolipids and glycoproteins on an individual's red blood cells can be identified by lectins.
Antigens other than blood-group antigens can also be identified by lectins.
In neuroscience, the anterograde labeling method is used to trace the path of efferent axons with PHA-L , a lectin from the kidney bean . [ 16 ]
A lectin ( BanLec ) from bananas inhibits HIV-1 in vitro . [ 17 ] Achylectins, isolated from Tachypleus tridentatus , show specific agglutinating activity against human A-type erythrocytes. Anti-B agglutinins such as anti-BCJ and anti-BLD separated from Charybdis japonica and Lymantria dispar , respectively, are of value both in routine blood grouping and research. [ 18 ]
Lectins from legume plants, such as PHA or concanavalin A , have been used widely as model systems to understand the molecular basis of how proteins recognize carbohydrates, because they are relatively easy to obtain and have a wide variety of sugar specificities. The many crystal structures of legume lectins have led to a detailed insight of the atomic interactions between carbohydrates and proteins.
Legume seed lectins have been studied for their insecticidal potential and have shown harmful effects for the development of pest. [ 19 ]
Concanavalin A and other commercially available lectins have been used widely in affinity chromatography for purifying glycoproteins. [ 20 ]
In general, proteins may be characterized with respect to glycoforms and carbohydrate structure by means of affinity chromatography , blotting , affinity electrophoresis , and affinity immunoelectrophoreis with lectins, as well as in microarrays , as in evanescent -field fluorescence-assisted lectin microarray. [ 21 ]
One example of the powerful biological attributes of lectins is the biochemical warfare agent ricin. The protein ricin is isolated from seeds of the castor oil plant and comprises two protein domains : a lectin domain that binds cell-surface glycans and allows the toxin to enter the cell, and an enzymatic domain that inactivates ribosomes once inside, shutting down protein synthesis. Abrin from the jequirity pea is similar.
Lectins are widespread in nature, and many foods contain the proteins. Some lectins can be harmful if poorly cooked or consumed in great quantities. They are most potent when raw as boiling, stewing or soaking in water for several hours can render most lectins inactive. Cooking raw beans at low heat, though, such as in a slow cooker , will not remove all the lectins. [ 22 ]
Some studies have found that lectins may interfere with absorption of some minerals, such as calcium , iron , phosphorus , and zinc . The binding of lectins to cells in the digestive tract may disrupt the breakdown and absorption of some nutrients, and as they bind to cells for long periods of time, some theories hold that they may play a role in certain inflammatory conditions such as rheumatoid arthritis and type 1 diabetes , but research supporting claims of long-term health effects in humans is limited and most existing studies have focused on developing countries where malnutrition may be a factor, or dietary choices are otherwise limited. [ 22 ]
The first writer to advocate a lectin-free diet was Peter J. D'Adamo, a naturopath best known for promoting the Blood type diet . He argued that lectins incompatible with a person's blood type could interfere with digestion, food metabolism, hormones and insulin production, and so should be avoided. [ 23 ] D'Adamo provided no scientific evidence nor published data for his claims, and his diet has been criticized for making inaccurate statements about biochemistry. [ 23 ] [ 24 ]
Steven Gundry proposed a lectin-free diet in his book The Plant Paradox (2017). It excludes a large range of commonplace foods including whole grains , legumes, and most fruit, as well as the nightshade vegetables : tomatoes, potatoes, eggplant, bell peppers, and chili peppers. [ 25 ] [ 26 ] Gundry's claims about lectins are considered pseudoscience . His book cites studies that have nothing to do with lectins, and some that show—contrary to his own recommendations—that avoiding the whole grains wheat , barley , and rye will allow increase of harmful bacteria while diminishing helpful bacteria. [ 27 ] [ 28 ] [ 29 ]
Lectins are one of many toxic constituents of many raw plants that are inactivated by proper processing and preparation (e.g., cooking with heat, fermentation). [ 30 ] For example, raw kidney beans naturally contain toxic levels of lectin (e.g. phytohaemagglutinin ). Adverse effects may include nutritional deficiencies , and immune ( allergic ) reactions. [ 31 ]
Lectins are considered a major family of protein antinutrients , which are specific sugar-binding proteins exhibiting reversible carbohydrate-binding activities. [ 32 ] Lectins are similar to antibodies in their ability to agglutinate red blood cells. [ 33 ]
Many legume seeds have been proven to contain high lectin activity, termed hemagglutination . [ 34 ] Soybean is the most important grain legume crop in this category. Its seeds contain high activity of soybean lectins ( soybean agglutinin or SBA).
Long before a deeper understanding of their numerous biological functions, the plant lectins, also known as phytohemagglutinins , were noted for their particularly high specificity for foreign glycoconjugates (e.g., those of fungi and animals) [ 35 ] and used in biomedicine for blood cell testing and in biochemistry for fractionation . [ citation needed ]
Although they were first discovered more than 100 years ago in plants, now lectins are known to be present throughout nature. The earliest description of a lectin is believed to have been given by Peter Hermann Stillmark in his doctoral thesis presented in 1888 to the University of Dorpat . Stillmark isolated ricin, an extremely toxic hemagglutinin, from seeds of the castor plant ( Ricinus communis ).
The first lectin to be purified on a large scale and available on a commercial basis was concanavalin A , which is now the most-used lectin for characterization and purification of sugar-containing molecules and cellular structures. [ 36 ] The legume lectins are probably the most well-studied lectins. | https://en.wikipedia.org/wiki/Lectin |
Lectio difficilior potior ( Latin for "the more difficult reading is the stronger") is a main principle of textual criticism . Where different manuscripts conflict on a particular reading, the principle suggests that the more unusual one is more likely the original. The presupposition is that scribes would more often replace odd words and hard sayings with more familiar and less controversial ones than vice versa. [ 1 ] Lectio difficilior potior is an internal criterion, which is independent of criteria for evaluating the manuscript in which it is found, [ 2 ] and it is as applicable to manuscripts of a roman courtois , a classical poet, or a Sanskrit epic as it is to a biblical text.
The principle was one among a number that became established in early 18th-century text criticism, as part of attempts by scholars of the Enlightenment to provide a neutral basis for discovering an urtext that was independent of the weight of traditional authority.
Rabbeinu Tam (1100-1171) expressed the idea in his work 'Sefer Hayashar':
"ובעל התלמוד כתבו, שתלמידים המגיהים אינם מגיהים דברים של תימה"
("it was written by the author of the Talmud, since students who correct the text do not correct it in order to make the text difficult", responsum 44). Erasmus expressed the idea in his Annotations to the New Testament in the early 1500s: "And whenever the Fathers report that there is a variant reading, that one always appears to me to be more esteemed (by them is the one) which at first glance seems the more absurd-since it is reasonable that a reader who is either not very learned or not very attentive was offended by the specter of absurdity and changed the text." [ 3 ]
According to Paolo Trovato, who cites as source Sebastiano Timpanaro , the principle was first codified by Jean Leclerc in 1696 in his Ars critica . [ 4 ] It was also laid down by Johann Albrecht Bengel , as "proclivi scriptioni praestat ardua" , in his Prodromus Novi Testamenti Graeci Rectè Cautèque Adornandi , 1725, and employed in his Novum Testamentum Graecum , 1734. [ 5 ] It was widely promulgated by Johann Jakob Wettstein , to whom it is often attributed. [ 6 ]
Many scholars considered the employment of lectio difficilior potior an objective criterion that would even override other evaluative considerations. [ 7 ] The poet and scholar A. E. Housman challenged such reactive applications in 1922, in the provocatively titled article "The Application of Thought to Textual Criticism". [ 8 ]
On the other hand, taken as an axiom, the principle lectio difficilior produces an eclectic text , rather than one based on a history of manuscript transmission. "Modern eclectic praxis operates on a variant unit basis without any apparent consideration of the consequences", Maurice A. Robinson warned. He suggested that to the principle "should be added a corollary, difficult readings created by individual scribes do not tend to perpetuate in any significant degree within transmissional history". [ 9 ]
A noted proponent of the superiority of the Byzantine text-type , the form of the Greek New Testament in the largest number of surviving manuscripts, Robinson would use the corollary to explain differences from the Majority Text as scribal errors that were not perpetuated because they were known to be errant or because they existed only in a small number of manuscripts at the time .
Most textual-critical scholars would explain the corollary by the assumption that scribes tended to "correct" harder readings and so cut off the stream of transmission. Thus, only earlier manuscripts would have the harder readings, and the corollary principle would not be a very important one for getting closer to the original form of the text.
However, lectio difficilior is not to be taken as an absolute rule either but as a general guideline. " In general the more difficult reading is to be preferred" is Bruce Metzger 's reservation. [ 10 ] "There is truth in the maxim: lectio difficilior lectio potior ('the more difficult reading is the more probable reading')", write Kurt and Barbara Aland. [ 11 ]
However, for scholars like Kurt Aland , who follow a path of reasoned eclecticism based on evidence both internal and external to the manuscripts, "this principle must not be taken too mechanically, with the most difficult reading ( lectio difficillima ) adopted as original simply because of its degree of difficulty". [ 12 ] Also, Martin Litchfield West cautions: "When we choose the 'more difficult reading'... we must be sure that it is in itself a plausible reading. The principle should not be used in support of dubious syntax, or phrasing that it would not have been natural for the author to use. There is an important difference between a more difficult reading and a more unlikely reading". [ 13 ]
Responding to Tetyana Vilkul 's review of his 2003 critical edition of the Primary Chronicle (PVL) , Donald Ostrowski (2005) phrased the principle as follows: 'The more difficult reading is preferred to a smoother reading, except, again, where a mechanical copying error would explain the roughness. The rationale is that a copyist is more likely to have tried to make a rough reading smoother than to have made a smooth reading more difficult to understand.' [ 14 ] | https://en.wikipedia.org/wiki/Lectio_difficilior_potior |
LEDDAR ( Light-Emitting Diode Detection And Ranging) is a proprietary technology owned by LeddarTech . It uses the time of flight of light signals and signal processing algorithms to detect, locate, and measure objects in its field of view.
The Leddar technology is like a light-based radar: it emits very short pulses of invisible light about 100,000 times per second to actively illuminate an area of interest. The sensor captures the light backscattered from objects (either fixed or moving) over its detection area and processes the signals to precisely map their location and other attributes.
The data is compiled thousands of times per second, providing up to a few hundred frames per second and offering accurate and reliable information even in adverse weather and lighting conditions.
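The ranging arithmetic behind this is simple to sketch (a minimal illustration of the time-of-flight principle described above; the pulse rate is the one quoted in the text, while the timing values are invented for the example):

```python
# Time-of-flight ranging: a pulse travels to the target and back,
# so distance = (speed of light * round-trip time) / 2.
C = 299_792_458.0  # speed of light, m/s

def distance_from_tof(round_trip_seconds: float) -> float:
    """Target distance in metres for a measured round-trip time."""
    return C * round_trip_seconds / 2.0

# A target ~100 m away returns the pulse after roughly 667 nanoseconds.
print(distance_from_tof(667e-9))        # ~100.0

# At the ~100,000 pulses per second quoted above, pulses are spaced
# 10 microseconds apart, i.e. an unambiguous range of about 1.5 km.
print(distance_from_tof(1 / 100_000))   # ~1499
```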
The multi-channel sensor also provides lateral discrimination of detected objects, and this feature, combined with 3D measurements, provides the basis for object tracking. [ 1 ] [ 2 ] | https://en.wikipedia.org/wiki/Leddar |
In iron and steel metallurgy , ledeburite is a mixture of 4.3% carbon in iron and is a eutectic mixture of austenite and cementite . Ledeburite is not a type of steel, as the carbon level is too high, although it may occur as a separate constituent in some high-carbon steels. It is mostly found with cementite or pearlite in a range of cast irons.
It is named after the metallurgist Karl Heinrich Adolf Ledebur (1837–1906). He was the first professor of metallurgy at the Bergakademie Freiberg and discovered ledeburite in 1882.
Ledeburite arises when the carbon content is between 2.06% and 6.67%. The eutectic mixture of austenite and cementite is 4.3% carbon (Fe₃C:2Fe), with a melting point of 1147 °C.
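The phase fractions at the eutectic point follow from the lever rule applied to the carbon contents quoted above (the lever-rule computation is an illustration, not taken from the article):

```python
# Lever rule at the eutectic composition, using the quoted carbon contents:
# austenite 2.06 wt% C, eutectic 4.3 wt% C, cementite 6.67 wt% C.
C_AUSTENITE, C_EUTECTIC, C_CEMENTITE = 2.06, 4.3, 6.67

frac_cementite = (C_EUTECTIC - C_AUSTENITE) / (C_CEMENTITE - C_AUSTENITE)
frac_austenite = 1.0 - frac_cementite
print(f"cementite {frac_cementite:.1%}, austenite {frac_austenite:.1%}")
# -> roughly 49% cementite and 51% austenite by mass
```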
Ledeburite-II (at ambient temperature) is composed of cementite-I with recrystallized secondary cementite (which separates from austenite as the metal cools) and (with slow cooling) of pearlite. The pearlite results from the eutectoidal decay of the austenite that comes from the ledeburite-I at 723 °C. During more rapid cooling, bainite can develop instead of pearlite, and with very rapid cooling martensite can develop.
In 1882, the German metallurgist Adolf Ledebur, investigating the complexities of steel microstructures, identified a distinct microconstituent in high-carbon iron alloys, characterized by its lamellar structure. The constituent was named ledeburite in honor of the scientist whose observations laid the foundation for understanding the internal structure of such alloys.
Beyond its immediate industrial applications, ledeburite holds a central position in metallurgical studies. The exploration of this unique microconstituent contributes to a deeper understanding of phase transformations, solidification processes, and the principles governing alloy behavior. Researchers and metallurgists leverage ledeburite as a model system to investigate the fundamental aspects of phase diagrams, eutectic reactions, and the kinetics of microstructural evolution during cooling and solidification.
Metallurgical studies involving ledeburite extend to the development of advanced materials with tailored properties. By comprehending the nuances of ledeburite formation and its impact on steel performance, scientists can design alloys with improved strength, hardness, and corrosion resistance. This knowledge is invaluable in pushing the boundaries of material science and engineering, paving the way for innovations in diverse fields. | https://en.wikipedia.org/wiki/Ledeburite |
In fluid dynamics , the Ledinegg instability occurs in two-phase flow , especially in a boiler tube , when the boiling boundary is within the tube. For a given mass flux J through the tube, the pressure drop per unit length (which typically varies as the square of the mass flux and inversely as the density, i.e., as J 2 / ρ {\displaystyle J^{2}/\rho } ) is much less when the flow is wholly of liquid than when the flow is wholly of steam. Thus, as the boiling boundary moves up the tube, the total pressure drop falls, potentially increasing the flow in an unstable manner. Boiler tubes normally overcome this (which is effectively a 'negative resistance' regime) by incorporating a narrow orifice at the entry, to give a stabilising pressure drop on entry.
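The 'negative resistance' behaviour can be illustrated with a deliberately crude toy model (all numbers and the closure for the vapour fraction below are invented for illustration): treat the channel as a liquid section and a vapour section in series, each contributing a pressure drop scaling as J²/ρ, and let the boiled fraction grow as the mass flux falls.

```python
import numpy as np

RHO_LIQ, RHO_VAP = 750.0, 40.0   # kg/m^3, rough liquid/steam densities
K = 1.0                          # lumped friction/geometry coefficient
J_REF = 1000.0                   # mass flux at which boiling just stops

def vapour_fraction(J):
    # Toy closure: less flow -> more of the tube boils (clipped to [0, 1]).
    return np.clip(1.0 - J / J_REF, 0.0, 1.0)

def pressure_drop(J):
    a = vapour_fraction(J)
    # Liquid and vapour sections in series, each ~ K * J^2 / rho.
    return K * J**2 * ((1.0 - a) / RHO_LIQ + a / RHO_VAP)

J = np.linspace(50.0, 1500.0, 60)
slope = np.diff(pressure_drop(J)) / np.diff(J)
print((slope < 0).any())  # True: a falling branch of dp(J) exists, which
# an inlet orifice (adding a stabilising ~J^2 term) removes.
```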
This fluid dynamics –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Ledinegg_instability |
The Lee algorithm is one possible solution for maze routing problems based on breadth-first search .
It always gives an optimal solution, if one exists, but is slow and requires considerable memory.
1) Initialization: select the start and target points, and mark the start point with the wave number 0.
2) Wave expansion: in breadth-first order, mark each unmarked, routable neighbour with a wave number one higher than that of the current point, until the target is reached or no unmarked cells remain.
3) Backtrace: starting at the target, step to any neighbour with a decreasing wave number until the start is reached; this yields a shortest path.
4) Clearance: clear all wave numbers from the grid so the next connection can be routed.
Note that the wave expansion marks only points in the routable area of the chip, not points inside blocks or already-wired parts, and to minimize segmentation the backtrace should keep moving in one direction for as long as possible, as in the sketch below.
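A minimal sketch of the routing procedure on a grid (the grid encoding and names are mine: 0 marks a routable cell, 1 a blocked one):

```python
from collections import deque

def lee_route(grid, start, target):
    """Breadth-first maze router: returns a shortest path or None."""
    rows, cols = len(grid), len(grid[0])
    wave = [[None] * cols for _ in range(rows)]

    # 1) Initialization: mark the start point with 0.
    wave[start[0]][start[1]] = 0
    frontier = deque([start])

    # 2) Wave expansion: label neighbours with incrementing wave numbers.
    while frontier:
        r, c = frontier.popleft()
        if (r, c) == target:
            break
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and wave[nr][nc] is None):
                wave[nr][nc] = wave[r][c] + 1
                frontier.append((nr, nc))

    if wave[target[0]][target[1]] is None:
        return None  # no route exists

    # 3) Backtrace: walk from target to start along decreasing labels.
    path, (r, c) = [target], target
    while (r, c) != start:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and wave[nr][nc] == wave[r][c] - 1):
                r, c = nr, nc
                break
        path.append((r, c))

    # 4) Clearance: the wave labels would be reset here before the next net.
    return path[::-1]

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
print(lee_route(grid, (0, 0), (2, 0)))  # a 7-cell shortest path
```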
This electronics-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Lee_algorithm |
The Leeuwenhoek Lecture is a prize lecture of the Royal Society to recognize achievement in microbiology . [ 1 ] The prize was first given in 1950 and was awarded annually until 2006; from 2006 to 2018 it was given triennially, and from 2018 it has been awarded biennially.
The prize is named after the Dutch microscopist Antonie van Leeuwenhoek and was instituted in 1948 from a bequest from George Gabb. A gift of £2000 is associated with the lecture. [ 1 ]
The following is a list of Leeuwenhoek Lecture award winners along with the title of their lecture: [ 2 ] | https://en.wikipedia.org/wiki/Leeuwenhoek_Lecture |
The Leeuwenhoek Medal , established in 1875 by the Royal Netherlands Academy of Arts and Sciences (KNAW) in honor of the 17th- and 18th-century microscopist Antoni van Leeuwenhoek , is granted every ten years to the scientist judged to have made the most significant contribution to microbiology during the preceding decade. [ 1 ] Since 2015, the medal has been awarded by the Royal Dutch Society for Microbiology (KNVM), which in 2023 selected Jillian Banfield as the first woman to receive the award. [ 2 ] [ 3 ]
The following persons have received the Leeuwenhoek medal: [ 4 ] | https://en.wikipedia.org/wiki/Leeuwenhoek_Medal |
In statistical mechanics , Lee–Yang theory , sometimes also known as Yang–Lee theory , is a scientific theory which seeks to describe phase transitions in large physical systems in the thermodynamic limit based on the properties of small, finite-size systems. The theory revolves around the complex zeros of partition functions of finite-size systems and how these may reveal the existence of phase transitions in the thermodynamic limit. [ 1 ] [ 2 ]
Lee–Yang theory constitutes an indispensable part of the theories of phase transitions. Originally developed for the Ising model , the theory has been extended and applied to a wide range of models and phenomena, including protein folding , [ 3 ] percolation , [ 4 ] complex networks , [ 5 ] and molecular zippers. [ 6 ]
The theory is named after the Nobel laureates Tsung-Dao Lee and Yang Chen-Ning , [ 7 ] [ 8 ] who were awarded the 1957 Nobel Prize in Physics for their unrelated work on parity non-conservation in the weak interaction . [ 9 ]
For an equilibrium system in the canonical ensemble , all statistical information about the system is encoded in the partition function, Z = ∑ i e − β E i {\displaystyle Z=\sum _{i}e^{-\beta E_{i}}},
where the sum runs over all possible microstates , and β = 1 / ( k B T ) {\displaystyle \beta =1/(k_{B}T)} is the inverse temperature, k B {\displaystyle k_{B}} is the Boltzmann constant and E i {\displaystyle E_{i}} is the energy of a microstate. The moments ⟨ E n ⟩ {\displaystyle \langle E^{n}\rangle } of the energy statistics are obtained by differentiating the partition function with respect to the inverse temperature multiple times, ⟨ E n ⟩ = ( − 1 ) n 1 Z ∂ n Z ∂ β n {\displaystyle \langle E^{n}\rangle =(-1)^{n}{\frac {1}{Z}}{\frac {\partial ^{n}Z}{\partial \beta ^{n}}}}.
From the partition function, we may also obtain the free energy F = − β − 1 log Z {\displaystyle F=-\beta ^{-1}\log Z}.
Analogously to how the partition function generates the moments, the free energy generates the cumulants of the energy statistics, ⟨⟨ E n ⟩⟩ = ( − 1 ) n + 1 ∂ n ( β F ) ∂ β n {\displaystyle \langle \langle E^{n}\rangle \rangle =(-1)^{n+1}{\frac {\partial ^{n}(\beta F)}{\partial \beta ^{n}}}}.
More generally, if the microstate energies E i ( q ) = E i ( 0 ) − q Φ i {\displaystyle E_{i}(q)=E_{i}(0)-q\Phi _{i}} depend on a control parameter q {\displaystyle q} and a fluctuating conjugate variable Φ {\displaystyle \Phi } (whose value may depend on the microstate), the moments of Φ {\displaystyle \Phi } may be obtained as ⟨ Φ n ⟩ = 1 β n Z ∂ n Z ∂ q n {\displaystyle \langle \Phi ^{n}\rangle ={\frac {1}{\beta ^{n}Z}}{\frac {\partial ^{n}Z}{\partial q^{n}}}},
and the cumulants as ⟨⟨ Φ n ⟩⟩ = 1 β n ∂ n log Z ∂ q n {\displaystyle \langle \langle \Phi ^{n}\rangle \rangle ={\frac {1}{\beta ^{n}}}{\frac {\partial ^{n}\log Z}{\partial q^{n}}}}.
For instance, for a spin system, the control parameter may be an external magnetic field , q = h {\displaystyle q=h} , and the conjugate variable may be the total magnetization, Φ = M {\displaystyle \Phi =M} .
The partition function and the free energy are intimately linked to phase transitions, for which there is a sudden change in the properties of a physical system. Mathematically, a phase transition occurs when the partition function vanishes and the free energy is singular (non- analytic ). For instance, if the first derivative of the free energy with respect to the control parameter is non-continuous, a jump may occur in the average value of the fluctuating conjugate variable, such as the magnetization, corresponding to a first-order phase transition .
Importantly, for a finite-size system, Z ( q ) {\displaystyle Z(q)} is a finite sum of exponential functions and is thus always positive for real values of q {\displaystyle q} . Consequently, F ( q ) {\displaystyle F(q)} is always well-behaved and analytic for finite system sizes. By contrast, in the thermodynamic limit, F ( q ) {\displaystyle F(q)} may exhibit a non-analytic behavior.
Using that Z ( q ) {\displaystyle Z(q)} is an entire function for finite system sizes, Lee–Yang theory takes advantage of the fact that the partition function can be fully characterized by its zeros in the complex plane of q {\displaystyle q} . These zeros are often known as Lee–Yang zeros or, in the case of inverse temperature as control parameter, Fisher zeros . The main idea of Lee–Yang theory is to mathematically study how the positions and the behavior of the zeros change as the system size grows. If the zeros move onto the real axis of the control parameter in the thermodynamic limit, it signals the presence of a phase transition at the corresponding real value of q = q ∗ {\displaystyle q=q^{*}} .
In this way, Lee–Yang theory establishes a connection between the properties (the zeros) of a partition function for a finite size system and phase transitions that may occur in the thermodynamic limit (where the system size goes to infinity).
The molecular zipper is a toy model which may be used to illustrate the Lee–Yang theory. It has the advantage that all quantities, including the zeros, can be computed analytically. The model is based on a double-stranded macromolecule with N {\displaystyle N} links that can be either open or closed. For a fully closed zipper, the energy is zero, while for each open link the energy is increased by an amount ε {\displaystyle \varepsilon } . A link can only be open if the preceding one is also open. [ 6 ]
For a number g {\displaystyle g} of different ways that a link can be open, the partition function of a zipper with N {\displaystyle N} links reads Z = ∑ n = 0 N g n e − n β ε = 1 − ( g e − β ε ) N + 1 1 − g e − β ε {\displaystyle Z=\sum _{n=0}^{N}g^{n}e^{-n\beta \varepsilon }={\frac {1-(ge^{-\beta \varepsilon })^{N+1}}{1-ge^{-\beta \varepsilon }}}}.
This partition function has the complex zeros β k = β c − i 2 π k ε ( N + 1 ) , k = 1 , … , N , {\displaystyle \beta _{k}=\beta _{c}-i{\frac {2\pi k}{\varepsilon (N+1)}},\quad k=1,\ldots ,N,}
where we have introduced the critical inverse temperature β c − 1 = k B T c {\displaystyle \beta _{c}^{-1}=k_{B}T_{c}} , with T c = ε k B log g {\displaystyle T_{c}={\frac {\varepsilon }{k_{B}\log g}}} . We see that in the limit N → ∞ {\displaystyle N\rightarrow \infty } , the zeros closest to the real axis approach the critical value β k = β c {\displaystyle \beta _{k}=\beta _{c}} . For g = 1 {\displaystyle g=1} , the critical temperature is infinite and no phase transition takes place for finite temperature. By contrast, for g > 1 {\displaystyle g>1} , a phase transition takes place at the finite temperature T c {\displaystyle T_{c}} .
To confirm that the system displays a non-analytic behavior in the thermodynamic limit, we consider the free energy F = − k B T log Z {\displaystyle F=-k_{B}T\log Z} or, equivalently, the dimensionless free energy per link F N ε . {\displaystyle {\frac {F}{N\varepsilon }}.} In the thermodynamic limit, one obtains F N ε = { 0 , T ≤ T c , 1 − T / T c , T ≥ T c . {\displaystyle {\frac {F}{N\varepsilon }}={\begin{cases}0,&T\leq T_{c},\\1-T/T_{c},&T\geq T_{c}.\end{cases}}}
Indeed, a cusp develops at T c {\displaystyle T_{c}} in the thermodynamic limit. In this case, the first derivative of the free energy is discontinuous, corresponding to a first-order phase transition .
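Both the zeros and the emerging cusp are easy to reproduce numerically from the formulas above. A short sketch (the values of g, ε and N are arbitrary choices for illustration):

```python
import numpy as np

g, eps, kB = 2.0, 1.0, 1.0        # illustrative parameters
beta_c = np.log(g) / eps          # critical inverse temperature
N = 50                            # number of links

# Z is the polynomial sum_{n=0}^{N} x^n in x = g*exp(-beta*eps), so its
# zeros are the (N+1)-th roots of unity except x = 1.
x_zeros = np.roots(np.ones(N + 1))
beta_zeros = (np.log(g) - np.log(x_zeros)) / eps

# The zero closest to the real axis sits at Re(beta) = beta_c.
closest = beta_zeros[np.argmin(np.abs(beta_zeros.imag))]
print(closest, beta_c)

# Free energy per link versus temperature: a cusp develops at T_c.
T = np.linspace(0.5, 3.0, 7) / (kB * beta_c)   # temperatures around T_c
x = g * np.exp(-eps / (kB * T))
Z = (1 - x**(N + 1)) / (1 - x)
f = -kB * T * np.log(Z) / (N * eps)
print(np.round(f, 3))   # ~0 below T_c, approaching 1 - T/T_c above it
```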
The Ising model is the original model that Lee and Yang studied when they developed their theory on partition function zeros. The Ising model consists of spin lattice with N {\displaystyle N} spins { σ k } {\displaystyle \{\sigma _{k}\}} , each pointing either up, σ k = + 1 {\displaystyle \sigma _{k}=+1} , or down, σ k = − 1 {\displaystyle \sigma _{k}=-1} . Each spin may also interact with its closest spin neighbors with a strength J i j {\displaystyle J_{ij}} . In addition, an external magnetic field h > 0 {\displaystyle h>0} may be applied (here we assume that it is uniform and thus independent of the spin indices). The Hamiltonian of the system for a certain spin configuration { σ i } {\displaystyle \{\sigma _{i}\}} then reads H ( { σ i } ) = − ∑ ⟨ i , j ⟩ J i j σ i σ j − h ∑ i σ i {\displaystyle H(\{\sigma _{i}\})=-\sum _{\langle i,j\rangle }J_{ij}\sigma _{i}\sigma _{j}-h\sum _{i}\sigma _{i}}.
In this case, the partition function reads Z ( h ) = ∑ { σ i } e − β H ( { σ i } ) {\displaystyle Z(h)=\sum _{\{\sigma _{i}\}}e^{-\beta H(\{\sigma _{i}\})}}.
The zeros of this partition function cannot be determined analytically, thus requiring numerical approaches.
For the ferromagnetic Ising model, for which J i j ≥ 0 {\displaystyle J_{ij}\geq 0} for all i , j {\displaystyle i,j} , Lee and Yang showed that all zeros of Z ( h ) {\displaystyle Z(h)} lie on the unit circle in the complex plane of the parameter z ≡ exp ( − 2 β h ) {\displaystyle z\equiv \exp(-2\beta h)} . [ 7 ] [ 8 ] This statement is known as the Lee–Yang theorem , and has later been generalized to other models, such as the Heisenberg model .
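The unit-circle statement can be verified by brute force for a small system. Writing Z ( h ) = e β h N ∑ k c k z k {\displaystyle Z(h)=e^{\beta hN}\sum _{k}c_{k}z^{k}} with z = exp ( − 2 β h ) {\displaystyle z=\exp(-2\beta h)} , where c k {\displaystyle c_{k}} collects the spin–spin Boltzmann weights of all configurations with k {\displaystyle k} down-spins, the Lee–Yang zeros are the roots of the polynomial with coefficients c k {\displaystyle c_{k}} . A minimal sketch for an open ferromagnetic chain (the geometry, coupling and system size are arbitrary choices for illustration):

```python
import itertools
import numpy as np

N, J, beta = 8, 1.0, 0.7   # small open ferromagnetic chain

# Coefficients c_k of z^k, k = number of down-spins; each configuration
# contributes its spin-spin Boltzmann weight exp(beta * sum J s_i s_{i+1}).
c = np.zeros(N + 1)
for spins in itertools.product((+1, -1), repeat=N):
    k = spins.count(-1)
    interaction = sum(J * spins[i] * spins[i + 1] for i in range(N - 1))
    c[k] += np.exp(beta * interaction)

zeros = np.roots(c[::-1])   # np.roots expects the highest power first
print(np.abs(zeros))        # all moduli are 1: the zeros lie on |z| = 1
```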
A similar approach can be used to study dynamical phase transitions. These transitions are characterized by the Loschmidt amplitude , which plays the analogue role of a partition function. [ 10 ]
The Lee–Yang zeros may be connected to the cumulants of the conjugate variable Φ {\displaystyle \Phi } of the control variable q {\displaystyle q} . [ 11 ] [ 12 ] For brevity, we set β = 1 {\displaystyle \beta =1} in the following. Using that the partition function is an entire function for a finite-size system, one may expand it in terms of its zeros as Z ( q ) = Z ( 0 ) e c q ∏ k ( 1 − q q k ) {\displaystyle Z(q)=Z(0)e^{cq}\prod _{k}\left(1-{\frac {q}{q_{k}}}\right)},
where Z ( 0 ) {\displaystyle Z(0)} and c {\displaystyle c} are constants, and q k {\displaystyle q_{k}} is the k {\displaystyle k} :th zero in the complex plane of q {\displaystyle q} . The corresponding free energy then reads F ( q ) = − log Z ( 0 ) − c q − ∑ k log ( 1 − q q k ) {\displaystyle F(q)=-\log Z(0)-cq-\sum _{k}\log \left(1-{\frac {q}{q_{k}}}\right)}.
Differentiating this expression n {\displaystyle n} times with respect to q {\displaystyle q} yields the n {\displaystyle n} :th order cumulant ⟨⟨ Φ n ⟩⟩ = − ( n − 1 ) ! ∑ k 1 ( q k − q ) n , n ≥ 2 {\displaystyle \langle \langle \Phi ^{n}\rangle \rangle =-(n-1)!\sum _{k}{\frac {1}{(q_{k}-q)^{n}}},\quad n\geq 2}.
Furthermore, using that the partition function is a real function, the Lee–Yang zeros have to come in complex conjugate pairs, allowing us to express the cumulants as ⟨⟨ Φ n ⟩⟩ = − 2 ( n − 1 ) ! ∑ k cos ⁡ [ n arg ⁡ ( q k − q ) ] | q k − q | n {\displaystyle \langle \langle \Phi ^{n}\rangle \rangle =-2(n-1)!\sum _{k}{\frac {\cos[n\arg(q_{k}-q)]}{|q_{k}-q|^{n}}}},
where the sum now runs only over each pair of zeros. This establishes a direct connection between cumulants and Lee–Yang zeros.
Moreover, if n {\displaystyle n} is large, the contribution from zeros lying far away from q {\displaystyle q} is strongly suppressed, and only the closest pair q 0 {\displaystyle q_{0}} of zeros plays an important role. One may then write ⟨⟨ Φ n ⟩⟩ ≈ − 2 ( n − 1 ) ! cos ⁡ [ n arg ⁡ ( q 0 − q ) ] | q 0 − q | n {\displaystyle \langle \langle \Phi ^{n}\rangle \rangle \approx -2(n-1)!{\frac {\cos[n\arg(q_{0}-q)]}{|q_{0}-q|^{n}}}}.
This equation may be solved as a linear system of equations, allowing for the Lee–Yang zeros to be determined directly from higher-order cumulants of the conjugate variable: [ 11 ] [ 12 ]
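One concrete way to set this up (an illustrative reconstruction from the dominant-pair approximation above, not necessarily the exact scheme of the cited works) is to note that the power sums s n = ( q 0 − q ) − n + ( q 0 ∗ − q ) − n = − ⟨⟨ Φ n ⟩⟩ / ( n − 1 ) ! {\displaystyle s_{n}=(q_{0}-q)^{-n}+(q_{0}^{*}-q)^{-n}=-\langle \langle \Phi ^{n}\rangle \rangle /(n-1)!} obey the Newton recursion s n + 1 = p s n − m s n − 1 {\displaystyle s_{n+1}=ps_{n}-ms_{n-1}} with p = 2 Re ( q 0 − q ) − 1 {\displaystyle p=2\operatorname {Re} (q_{0}-q)^{-1}} and m = | q 0 − q | − 2 {\displaystyle m=|q_{0}-q|^{-2}} . Writing this recursion at two consecutive orders n {\displaystyle n} and n + 1 {\displaystyle n+1} gives two equations that are linear in the unknowns p {\displaystyle p} and m {\displaystyle m} ; solving them yields | q 0 − q | 2 = 1 / m {\displaystyle |q_{0}-q|^{2}=1/m} and Re ( q 0 − q ) = p / ( 2 m ) {\displaystyle \operatorname {Re} (q_{0}-q)=p/(2m)} , and hence the dominant pair of zeros.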
Being complex numbers of a physical variable, Lee–Yang zeros have traditionally been seen as a purely theoretical tool to describe phase transitions, with little or no connection to experiments. However, in a series of experiments in the 2010s, various kinds of Lee–Yang zeros have been determined from real measurements. In one experiment in 2015, the Lee–Yang zeros were extracted experimentally by measuring the quantum coherence of a spin coupled to an Ising-type spin bath. [ 13 ] In another experiment in 2017, dynamical Lee–Yang zeros were extracted from Andreev tunneling processes between a normal-state island and two superconducting leads. [ 14 ] Furthermore, in 2018, there was an experiment determining the dynamical Fisher zeros of the Loschmidt amplitude, which may be used to identify dynamical phase transitions . [ 15 ] | https://en.wikipedia.org/wiki/Lee–Yang_theory |
Lefavirales is an order of viruses of the class Naldaviricetes . [ 1 ] Viruses of this order have Hexapoda and Crustacea as their hosts, including Adoxophyes and Carcinus . [ 2 ]
Lefavirales is part of the incertae sedis class Naldaviricetes . It contains the following families: [ 1 ]
This virus -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Lefavirales |
In mathematics , Lefschetz duality is a version of Poincaré duality in geometric topology , applying to a manifold with boundary . Such a formulation was introduced by Solomon Lefschetz ( 1926 ), at the same time introducing relative homology , for application to the Lefschetz fixed-point theorem . [ 1 ] There are now numerous formulations of Lefschetz duality or Poincaré–Lefschetz duality , or Alexander–Lefschetz duality .
Let M be an orientable compact manifold of dimension n , with boundary ∂ ( M ) {\displaystyle \partial (M)} , and let z ∈ H n ( M , ∂ ( M ) ; Z ) {\displaystyle z\in H_{n}(M,\partial (M);\mathbb {Z} )} be the fundamental class of the manifold M . Then cap product with z (or its dual class in cohomology) induces a pairing of the (co) homology groups of M and the relative (co)homology of the pair ( M , ∂ ( M ) ) {\displaystyle (M,\partial (M))} . Furthermore, this gives rise to isomorphisms of H k ( M , ∂ ( M ) ; Z ) {\displaystyle H^{k}(M,\partial (M);\mathbb {Z} )} with H n − k ( M ; Z ) {\displaystyle H_{n-k}(M;\mathbb {Z} )} , and of H k ( M , ∂ ( M ) ; Z ) {\displaystyle H_{k}(M,\partial (M);\mathbb {Z} )} with H n − k ( M ; Z ) {\displaystyle H^{n-k}(M;\mathbb {Z} )} for all k {\displaystyle k} . [ 2 ]
Here ∂ ( M ) {\displaystyle \partial (M)} can in fact be empty, so Poincaré duality appears as a special case of Lefschetz duality.
There is a version for triples. Let ∂ ( M ) {\displaystyle \partial (M)} decompose into subspaces A and B , themselves compact orientable manifolds with common boundary Z , which is the intersection of A and B . Then, for each k {\displaystyle k} , there is an isomorphism [ 3 ] H k ( M , A ; Z ) ≅ H n − k ( M , B ; Z ) {\displaystyle H_{k}(M,A;\mathbb {Z} )\cong H^{n-k}(M,B;\mathbb {Z} )}. | https://en.wikipedia.org/wiki/Lefschetz_duality |
In mathematics , the Lefschetz zeta-function is a tool used in topological periodic and fixed point theory, and dynamical systems . Given a continuous map f : X → X {\displaystyle f\colon X\to X} , the zeta-function is defined as the formal series ζ f ( t ) = exp ⁡ ( ∑ n = 1 ∞ L ( f n ) t n n ) , {\displaystyle \zeta _{f}(t)=\exp \left(\sum _{n=1}^{\infty }L(f^{n}){\frac {t^{n}}{n}}\right),}
where L ( f n ) {\displaystyle L(f^{n})} is the Lefschetz number of the n {\displaystyle n} -th iterate of f {\displaystyle f} . This zeta-function is of note in topological periodic point theory because it is a single invariant containing information about all iterates of f {\displaystyle f} .
The identity map on X {\displaystyle X} has Lefschetz zeta function ζ ( t ) = 1 ( 1 − t ) χ ( X ) , {\displaystyle \zeta (t)={\frac {1}{(1-t)^{\chi (X)}}},}
where χ ( X ) {\displaystyle \chi (X)} is the Euler characteristic of X {\displaystyle X} , i.e., the Lefschetz number of the identity map.
For a less trivial example, let X = S 1 {\displaystyle X=S^{1}} be the unit circle , and let f : S 1 → S 1 {\displaystyle f\colon S^{1}\to S^{1}} be reflection in the x -axis, that is, f ( θ ) = − θ {\displaystyle f(\theta )=-\theta } . Then f {\displaystyle f} has Lefschetz number 2, while f 2 {\displaystyle f^{2}} is the identity map, which has Lefschetz number 0. Likewise, all odd iterates have Lefschetz number 2, while all even iterates have Lefschetz number 0. Therefore, the zeta function of f {\displaystyle f} is ζ f ( t ) = exp ⁡ ( ∑ n odd 2 t n n ) = 1 + t 1 − t . {\displaystyle \zeta _{f}(t)=\exp \left(\sum _{n{\text{ odd}}}{\frac {2t^{n}}{n}}\right)={\frac {1+t}{1-t}}.}
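The closed form can be checked order by order with a computer algebra system (a quick sketch using sympy; the truncation order is an arbitrary choice):

```python
import sympy as sp

t = sp.symbols('t')
N = 12  # truncation order

# Lefschetz numbers of the reflection: 2 for odd iterates, 0 for even ones.
log_zeta = sum(sp.Rational(2, n) * t**n for n in range(1, N) if n % 2 == 1)
zeta_series = sp.series(sp.exp(log_zeta), t, 0, N).removeO()

closed_form = sp.series((1 + t) / (1 - t), t, 0, N).removeO()
print(sp.expand(zeta_series - closed_form) == 0)  # True up to order N
```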
If f is a continuous map on a compact manifold X of dimension n (or more generally any compact polyhedron), the zeta function is given by the formula ζ f ( t ) = ∏ i = 0 n det ( 1 − t f ∗ | H i ( X , Q ) ) ( − 1 ) i + 1 . {\displaystyle \zeta _{f}(t)=\prod _{i=0}^{n}\det \left(1-tf_{*}|H_{i}(X,\mathbb {Q} )\right)^{(-1)^{i+1}}.}
Thus it is a rational function. The polynomials occurring in the numerator and denominator are essentially the characteristic polynomials of the map induced by f on the various homology spaces.
This generating function is essentially an algebraic form of the Artin–Mazur zeta function , which gives geometric information about the fixed and periodic points of f . | https://en.wikipedia.org/wiki/Lefschetz_zeta_function |
Left-hand–right-hand activity chart is an illustration that shows the contributions of a worker's left and right hands and the balance of the workload between them. [ 1 ]
This article about a mechanical engineering topic is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Left-hand–right-hand_activity_chart |
In algebra , the terms left and right denote the order of a binary operation (usually, but not always, called " multiplication ") in non- commutative algebraic structures .
A binary operation ∗ is usually written in the infix form: s ∗ t {\displaystyle s*t} .
The argument s is placed on the left side, and the argument t is on the right side. Even if the symbol of the operation is omitted, the order of s and t does matter (unless ∗ is commutative).
A two-sided property is fulfilled on both sides. A one-sided property is related to one (unspecified) of two sides.
Although the terms are similar, left–right distinction in algebraic parlance is not related either to left and right limits in calculus , or to left and right in geometry .
A binary operation ∗ may be considered as a family of unary operators through currying : R t ( s ) = s ∗ t {\displaystyle R_{t}(s)=s*t} ,
depending on t as a parameter – this is the family of right operations. Similarly, L s ( t ) = s ∗ t {\displaystyle L_{s}(t)=s*t}
defines the family of left operations parametrized with s .
If for some e , the left operation L e is the identity operation , then e is called a left identity . Similarly, if R e = id , then e is a right identity.
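A small sketch of these definitions (the concrete operations are stand-in examples of mine):

```python
# Currying a binary operation into families of one-sided unary operators.
# String concatenation is a convenient non-commutative example.
def op(s, t):
    return s + t  # op('a', 'b') != op('b', 'a')

def L(s):
    return lambda t: op(s, t)   # left operation with parameter s

def R(t):
    return lambda s: op(s, t)   # right operation with parameter t

print(L('ab')('cd'), R('cd')('ab'))  # both give 'abcd'

# A one-sided identity: for s * t := t (right projection), every e is a
# left identity (e * t == t), but no right identity exists.
proj = lambda s, t: t
e = 'anything'
print(all(proj(e, t) == t for t in ['x', 'y']))   # True: left identity
print(all(proj(s, e) == s for s in ['x', 'y']))   # False: no right identity
```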
In ring theory , a subring which is invariant under any left multiplication in a ring is called a left ideal . Similarly, a right multiplication-invariant subring is a right ideal.
Over non-commutative rings , the left–right distinction is applied to modules , namely to specify the side where a scalar (module element) appears in the scalar multiplication .
The distinction is not purely syntactical, because one gets two different associativity rules that link multiplication in a module with multiplication in a ring.
A bimodule is simultaneously a left and right module, with two different scalar multiplication operations, obeying an associativity condition on them. [ vague ]
In category theory the usage of "left" and "right" has some algebraic resemblance, but refers to left and right sides of morphisms . See adjoint functors . | https://en.wikipedia.org/wiki/Left_and_right_(algebra) |
Left shift or blood shift is an increase in the number of immature cell types among the blood cells in a sample of blood. Many (perhaps most) clinical mentions of left shift refer to the white blood cell lineage, particularly neutrophil -precursor band cells , [ 1 ] : 84–84 thus signifying bandemia . Less commonly, left shift may also refer to a similar phenomenon in the red blood cell lineage in severe anemia , when increased reticulocytes and immature erythrocyte -precursor cells appear in the peripheral circulation. [ 2 ]
The standard definition of a left shift is an absolute band form count greater than 7700/microL. [ 3 ] There are competing explanations for the origin of the phrase "left shift," including the left-most button arrangement of early cell sorting machines [ 4 ] [ 5 ] and a 1920s publication by Josef Arneth, containing a graph in which immature neutrophils, with fewer segments, shifted the median left. [ 6 ] In the latter view, the name reflects a curve's preponderance shifting to the left on a graph of hematopoietic cellular differentiations .
It is usually noted on microscopic examination of a blood smear . This systemic effect of inflammation is most often seen in the course of an active infection and during other severe illnesses such as hypoxia and shock . Döhle bodies may also be present in the neutrophil's cytoplasm in the setting of sepsis or severe inflammatory responses. [ 1 ] : 663–664
It is believed that cytokines (including IL-1 and TNF ) accelerate the release of cells from the postmitotic reserve pool in the bone marrow , leading to an increased number of immature cells. [ 1 ] : 84–84 | https://en.wikipedia.org/wiki/Left_shift_(medicine) |
Left–right confusion ( LRC ) is the inability to accurately differentiate between left and right directions . By contrast, left–right discrimination ( LRD ) refers to a person's ability to differentiate between left and right. LRC is reported by approximately 15% of the population, according to 2020 research by Van der Ham and her colleagues. [ 1 ] People who have LRC can typically perform daily navigational tasks, such as driving according to road signs or following a map, but may have difficulty performing actions that require a precise understanding of directional commands, such as ballroom dancing . [ 2 ] [ 3 ] [ 4 ] [ 5 ]
Data regarding LRC prevalence is primarily based on behavioral studies, self-assessments, and surveys. Gormley and Brydges found that in a group of 800 adults, 17% of women and 9% of men reported difficulty differentiating between left and right. [ 6 ] Such studies suggest that women are more prone to LRC than men, [ 7 ] with women reporting higher rates of LRC in both accuracy and speed of response. [ 4 ] [ 8 ] [ 9 ]
The Bergen Left–Right Discrimination (BLRD) test is designed to measure individual performance in LRD accuracy. However, this test has been criticized for incorporating tasks that require the use of additional strategies, such as mental rotation (MR). [ 10 ] Because men have been shown to consistently outperform women in MR tasks, [ 11 ] tests involving the use of this particular strategy may present alternative cognitive demands and lead to inaccurate assessment of LRD performance. [ 8 ] An extended version of the BLRD test was designed to allow for differential evaluation of LRD and MR abilities, in which subtests were created with either high or low demands on mental rotation. Results from these studies did not find sex differences in LRD performance when mental rotation demands were low. [ 10 ] Another study found that sex differences in left–right discrimination existed in terms of self-reported difficulty, but not in actual tested ability. [ 12 ]
Alternatively, studies focused on LRD as a phenomenon distinct from MR concluded that there are sex differences present in LRD. [ 7 ] Scientists controlled for MR demands, potential menstrual cycle effects, and other hormone fluctuations, and determined that the neurocognitive mechanisms that support LRD are different for men and women. This research revealed that inferior parietal and right angular gyrus activation were correlated with LRD performance in both men and women. Women also demonstrated increased prefrontal activation, but did not exhibit greater bilateral activation. Additionally, no correlation was found between LRD accuracy and brain activation, or between brain activation and reaction time, for either sex. These results indicate that there are sex differences in the neurocognitive mechanisms underlying LRD performance; however, findings did not suggest that women are more prone to LRC than men. [ 7 ]
Humans are constantly making decisions about spatial relations ; however, some spatial relations, such as left–right, are commonly confused, while other spatial relations, such as up–down, above–below, and front–back, are seldom, if ever, mistaken. [ 13 ] The ability to categorize and compartmentalize space is an essential tool for navigating this 3D world; an ability shown to develop in early infancy. [ 14 ] [ 15 ] Infant ability to visually match above–below and left–right relations appears to diminish in early toddlerhood, as language acquisition may complicate verbal labeling. Children learn to verbally discriminate between above–below relations around the age of three, and learn left–right linguistic labels between the ages of six and seven; however, these classifications may only exist in the linguistic context. [ 13 ] In other words, children may learn the terms for left and right without having developed a cognitive representation to allow for the accurate application of such spatial distinctions.
Research seeks to explain the neural activity associated with left–right discrimination, attempting to identify differences in the encoding, consolidation, and retrieval of left–right versus above–below relations. One study found that neural activity patterns for left–right and above–below distinctions are represented differently in the brain, leading to the theory that these spatial judgements are supported by separate cognitive mechanisms. [ 13 ] Experiments used magnetoencephalography (MEG) to record neural activity during a computerized nonverbal task, examining left–right and above–below differences in encoding and working memory . Results showed differences in neural activity patterns in the right cerebellum , right superior temporal gyrus , and left temporoparietal junction during the encoding phase, and indicated differential neural activity in the inferior parietal, right superior temporal, and right cerebellum regions in the working memory tests. [ 13 ]
Although some individuals may struggle with LRD more than others, discriminating between left and right in the face of distraction has been shown to impair even the most proficient individual's ability to accurately differentiate between the two. This issue is of particular importance to medical students, clinicians and health care professionals, where distraction in the workplace and LRD inaccuracy can lead to severe consequences, including laterality errors and wrong-side surgeries. [ 16 ] Laterality errors in the field of aviation may also lead to equally devastating results, for example, causing a major airline crash.
Distraction has a significant impact on LRD accuracy, and the type of distraction can alter the magnitude of these effects. For example, cognitive distraction, which occurs when an individual is not directly focused on the task at hand, has a more profound effect on LRD performance than auditory distraction, such as the presence of continuous ambient noise . [ 16 ] Additionally, in the field of health care, it has been noted that mental rotation is often involved in making left–right distinctions, such as when a medical practitioner is facing their patient and must adjust for the opposite left–right relations. [ 6 ] | https://en.wikipedia.org/wiki/Left–right_confusion |
In computing , legacy mode is a state in which a computer system , component, or application software behaves in a way that is different from its standard operation in order to support older software , data , or expected behavior. It differs from backward compatibility in that an item in legacy mode will often sacrifice newer features or performance , or be unable to access data or run programs it normally could, in order to provide continued access to older data or functionality. Sometimes it can allow newer technologies that replaced the old to emulate them when running older operating systems .
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Legacy_mode |
A legal expert system is a domain-specific expert system that uses artificial intelligence to emulate the decision-making abilities of a human expert in the field of law. [ 1 ] : 172 Legal expert systems employ a rule base or knowledge base and an inference engine to accumulate, reference and produce expert knowledge on specific subjects within the legal domain.
It has been suggested that legal expert systems could help to manage the rapid expansion of legal information and decisions that began to intensify in the late 1960s. [ 2 ] Many of the first legal expert systems were created in the 1970s [ 1 ] : 179 and 1980s. [ 3 ] : 928
Lawyers were originally identified as primary target users of legal expert systems. [ 4 ] : 3 Potential motivations for this work included:
Some early development work was oriented toward the creation of automated judges. [ 6 ] : 386
One of the first use cases was the encoding of the British Nationality Act at Imperial College carried out under the supervision of Marek Sergot and Robert Kowalski .
Lance Elliot wrote: "The British Nationality Act was passed in 1981 and shortly thereafter was used as a means of showcasing the efficacy of using Artificial Intelligence (AI) techniques and technologies, doing so to explore how the at-the-time newly enacted statutory law might be encoded into a computerized logic-based formalization." [ 7 ]
The authors’ seminal article, "The British Nationality Act as a Logic Program," published in 1986 in the Communications of the ACM journal, is one of the first and best-known works in computational law, and one of the most widely cited papers in the field. [ 8 ]
In 2021, the Inaugural CodeX Prize was awarded to Robert Kowalski, Fariba Sadri, and Marek Sergot in acknowledgment of their groundbreaking work on the application of logic programming to the formalization and analysis of the British Nationality Act. [ 9 ]
Later work on legal expert systems has identified potential benefits to non-lawyers as a means to increase access to legal knowledge. [ 4 ] : 4
Legal expert systems can also support administrative processes, facilitate decision-making processes, automate rule-based analyses, [ 10 ] and exchange information directly with citizen-users. [ 11 ]
Rule-based expert systems rely on a model of deductive reasoning that utilizes "If A, then B" rules. In a rule-based legal expert system, information is represented in the form of deductive rules within the knowledge base. [ 12 ]
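A toy illustration of this deductive mechanism (a minimal sketch; the rules and facts are invented, and real systems add conflict resolution, explanation facilities and richer logic):

```python
# Forward chaining over "if all antecedents hold, then consequent" rules.
rules = [
    ({"is_employee", "worked_overtime"}, "entitled_to_overtime_pay"),
    ({"entitled_to_overtime_pay", "pay_withheld"}, "has_wage_claim"),
]
facts = {"is_employee", "worked_overtime", "pay_withheld"}

changed = True
while changed:
    changed = False
    for antecedents, consequent in rules:
        if antecedents <= facts and consequent not in facts:
            facts.add(consequent)   # fire the rule, derive a new fact
            changed = True

print("has_wage_claim" in facts)    # True: derived by the inference engine
```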
Case-based reasoning models, which store and manipulate examples or cases, hold the potential to emulate an analogical reasoning process thought to be well-suited for the legal domain. [ 12 ] This model effectively draws on known experiences or outcomes for similar problems. [ 13 ] : 5
A neural net relies on a computer model that mimics the structure of a human brain, and operates in a very similar way to the case-based reasoning model. [ 12 ] This expert system model is capable of recognizing and classifying patterns within the realm of legal knowledge and dealing with imprecise inputs. [ 14 ] : 18
Fuzzy logic models attempt to create 'fuzzy' concepts or objects that can then be converted into quantitative terms or rules that are indexed and retrieved by the system. [ 14 ] : 18–19 In the legal domain, fuzzy logic can be used for rule-based and case-based reasoning models. [ 15 ]
Some legal expert system architects have adopted a very practical approach, employing scientific modes of reasoning within a given set of rules or cases. Others have opted for a broader philosophical approach inspired by jurisprudential reasoning modes emanating from established legal theoreticians. [ 1 ] : 183
Some legal expert systems aim to arrive at a particular conclusion in law, while others are designed to predict a particular outcome. An example of a predictive system is one that predicts the outcome of judicial decisions, the value of a case, or the outcome of litigation. [ 3 ] : 932
Many forms of legal expert systems have become widely used and accepted by both the legal community and the users of legal services. [ 16 ]
The inherent complexity of law as a discipline raises immediate challenges for legal expert system knowledge engineers . Legal matters often involve interrelated facts and issues, which further compound the complexity. [ 5 ] : 4 [ 6 ] : 386
Factual uncertainty may also arise when there are disputed versions of factual representations that must be input into an expert system to begin the reasoning process. [ 5 ] : 4
The limitations of most computerized problem solving techniques inhibit the success of many expert systems in the legal domain. Expert systems typically rely on deductive reasoning models that have difficulty according degrees of weight to certain principles of law or importance to previously decided cases that may or may not influence a decision in an immediate case or context. [ 12 ]
Expert legal knowledge can be difficult to represent or formalize within the structure of an expert system. For knowledge engineers, challenges include:
Creating a functioning expert system requires significant investments in software architecture , subject matter expertise and knowledge engineering . Faced with these challenges, many system architects restrict the domain in terms of subject matter and jurisdiction. The consequence of this approach is the creation of narrowly focused and geographically restricted legal expert systems that are difficult to justify on a cost-benefit basis. [ 5 ] : 5
Current applications of AI in the legal field utilize machines to review documents, particularly when a high level of completeness and confidence in the quality of document analysis is depended upon, such as in litigation and where due diligence plays a role. [ 18 ] Among the most quantifiable advantages of AI in the legal field is its time- and money-saving impact: freeing lawyers from spending inordinate amounts of their valuable time on routine tasks also frees their creative energy by reducing stress. [ 18 ] This in turn increases the rate of caseload reduction by accomplishing better results in less time, unlocking potential additional revenue per unit of time spent on a case. [ 18 ] The cost of setting up and maintaining AI systems in law is more than offset by the savings attained through increased efficacy; any unbalanced cost can be assigned to clients. [ 18 ]
Legal expert systems may lead non-expert users to incorrect or inaccurate results and decisions. This problem could be compounded by the fact that users may rely heavily on the correctness or trustworthiness of results or decisions generated by these systems. [ 19 ]
ASHSD-II is a hybrid legal expert system that blends rule-based and case-based reasoning models in the area of matrimonial property disputes under English law. [ 13 ] : 49
CHIRON is a hybrid legal expert system that blends rule-based and case-based reasoning models to support tax planning activities under United States tax law and codes. [ 20 ]
JUDGE is a rule-based legal expert system that deals with sentencing in the criminal legal domain for offences relating to murder, assault and manslaughter. [ 21 ] : 51
Legislate is a knowledge graph powered contract management platform which applies legal rules to generate lawyer-approved contracts. [ 22 ]
The Latent Damage Project is a rule-based legal expert system that deals with limitation periods under the (UK) Latent Damage Act 1986 in relation to the domains of tort, contract and product liability law. [ 23 ]
Split-Up is a rule-based legal expert system that assists in the division of marital assets according to the (Australia) Family Law Act (1975) . [ 24 ]
SHYSTER is a case-based legal expert system that can also function as a hybrid through its ability to link with rule-based models. It was designed to accommodate multiple legal domains, including aspects of Australian copyright law, contract law, personal property and administrative law. [ 21 ]
TAXMAN is a rule-based system that could perform a basic form of legal reasoning by classifying cases under a particular category of statutory rules in the area of law concerning corporate reorganization. [ 25 ] : 837
Catala is a French domain-specific programming language designed for deriving correct-by-construction (as assured by formal methods ) implementations from legislative texts. It is currently maintained by the INRIA . [ 26 ]
There may be a lack of consensus over what distinguishes a legal expert system from a knowledge-based system (also called an intelligent knowledge-based system). While legal expert systems are held to function at the level of a human legal expert, knowledge-based systems may depend on the ongoing assistance of a human expert. True legal expert systems typically focus on a narrow domain of expertise as opposed to a wider and less specific domain as in the case of most knowledge-based systems. [ 5 ] : 1
Legal expert systems represent potentially disruptive technologies for the traditional, bespoke delivery of legal services. Accordingly, established legal practitioners may consider them a threat to historical business practices. [ 5 ] : 2
Arguments have been made that a failure to take into consideration various theoretical approaches to legal decision making will produce expert systems that fail to reflect the true nature of decision making. [ 1 ] : 190 Meanwhile, some legal expert system architects contend that because many lawyers have proficient legal reasoning skills without a sound base in legal theory, the same should hold true for legal expert systems. [ 21 ] : pp.6–7
Because legal expert systems apply precision and scientific rigor to the act of legal decision-making, they may be seen as a challenge to the more disorganized and less precise dynamics of traditional jurisprudential modes of legal reasoning. [ 25 ] : 839 Some commentators also contend that the true nature of legal practice does not necessarily depend on analyses of legal rules or principles; decisions are based instead on an expectation of what a human adjudicator would decide for a given case. [ 3 ] : 930
Since 2013, there have been significant developments in legal expert systems. Professor Tanina Rostain of Georgetown Law Center teaches a course in designing legal expert systems. [ 27 ] Open-source platforms like Docassemble and companies such as Neota Logic, Logic Programming Associates , Berkely Bridge, Oracle and Checkbox have begun to offer artificial intelligence and machine learning -based legal expert systems. [ 28 ] [ 29 ]
More recently, the world of legal expert systems has collided with the world of low-code no-code products. In its article entitled 'No Code and Lawyers', the NoCode Journal [ 30 ] mentions tools such as Neota Logic , VisiRule , Berkeley Bridge, BRYTER and Josef as all being used within the legal sector for a variety of purposes including Self-Service Legal and Policy Advice, Document Drafting, Document Automation, New Business Intake and Analysis, Expert Decisioning, Business Process Automation and other use cases. [ citation needed ] | https://en.wikipedia.org/wiki/Legal_expert_system |
A legal singularity is a hypothetical future point in time beyond which the law is much more completely specified, [ 1 ] with human lawmakers and other legal actors being supported by rapid technological advancements and artificial intelligence (AI), leading to a vast reduction in legal uncertainty. [ 2 ]
The legal singularity is based on the idea that as AI systems become more advanced, they will be capable of processing and analyzing vast amounts of legal data and case law more quickly and accurately than humans. [ 2 ] This could potentially lead to a situation where AI systems become the primary legal decision-makers, and humans are relegated to a more supervisory role, if any role at all. [ 3 ] [ 4 ]
There is much debate around whether the legal singularity is possible or desirable among legal scholars, ethicists, and AI researchers. [ 5 ] [ 6 ] [ 3 ] [ 7 ] While some see it as a potential way to improve legal efficiency and reduce bias, [ 1 ] others are concerned about the potential for AI systems to lead to decisions that violate fundamental human rights or perpetuate existing inequalities. [ 8 ]
This law -related article is a stub . You can help Wikipedia by expanding it .
This technology-related article is a stub . You can help Wikipedia by expanding it .
This article about futures studies is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Legal_singularity |
Legal syllogism is a legal concept concerning the law and its application, specifically a form of argument based on deductive reasoning and seeking to establish whether a specified act is lawful. [ 1 ]
A syllogism is a form of logical reasoning that hinges on a question , a major premise , a minor premise and a conclusion . If properly pleaded , every legal action seeking redress of a wrong or enforcement of a right is "a syllogism of which the major premise is the proposition of law involved, the minor premise is the proposition of fact, and the judgment the conclusion." [ 2 ] [ 3 ] More broadly, many sources suggest that every good legal argument is cast in the form of a syllogism. [ 3 ] [ 4 ] [ 5 ]
Fundamentally, the syllogism may be reduced to a three-step process: 1. " law finding ", 2. " fact finding ", and 3. " law applying ". See Holding (law) . That protocol presupposes someone has done " law making " already. [ 3 ] This model is sufficiently broad so that it may be applied in many different nations and legal systems. [ 3 ]
In legal theoretic literature, legal syllogism is controversial. It is treated as equivalent to an “ interpretational decision .” [ 6 ] | https://en.wikipedia.org/wiki/Legal_syllogism |
Legalism , in the Western sense, is the ethical attitude that holds moral conduct as a matter of rule following. [ 1 ] It is an approach to the analysis of legal questions characterized by abstract logical reasoning focusing on the applicable legal text, such as a constitution , legislation , or case law , rather than on the social , economic , or political context. Legalism has occurred both in civil and common law traditions. It underlies both natural law and legal positivism . [ 2 ] In its narrower versions, legalism may endorse the notion that the preexisting body of authoritative legal materials already contains a uniquely predetermined right answer to any legal problem that may arise.
Legalism typically also claims that the task of the judge is to ascertain the answer to a legal question by an essentially mechanical process rather than some Schmittian modality of sovereignty . [ citation needed ]
This philosophy of law -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Legalism_(Western_philosophy) |
The Legend of the Octopus is a sports tradition during Detroit Red Wings home playoff games involving dead octopuses thrown onto the ice rink . The origins of the activity date back to the 1952 playoffs, when a National Hockey League team played two best-of-seven series to capture the Stanley Cup .
The tradition started on April 15, 1952, when Pete and Jerry Cusimano, brothers and storeowners in Detroit's Eastern Market , hurled an octopus into the rink of Olympia Stadium . Having eight arms, the octopus symbolized the number of playoff wins the Red Wings needed to win the Stanley Cup at the time. The team would go on to sweep the Toronto Maple Leafs and Montreal Canadiens en route to winning the championship. [ 1 ]
Since then, the tradition has persisted with each passing year. In one 1995 game, fans threw 36 octopuses, including a specimen weighing 38 pounds (17 kg). [ 2 ] The Red Wings' unofficial mascot is a purple octopus named Al , and during playoff runs, two of these mascots were also hung from the rafters of Joe Louis Arena , symbolizing the 16 wins now needed to take home the Stanley Cup. [ 3 ] The practice has become such an accepted part of the team's lore that fans have developed various techniques and "octopus etiquette" for launching the creatures onto the ice. [ 4 ]
On October 4, 1987, the last day of the regular Major League Baseball season, an octopus was thrown on the field in the top of the seventh inning at Tiger Stadium in Detroit as the Tigers defeated the Toronto Blue Jays , 1–0, clinching the American League East division championship. [ 5 ] In May of that year, the Red Wings had defeated the Toronto Maple Leafs in the Stanley Cup playoffs . [ 6 ]
At the final game at Joe Louis Arena, 35 octopuses were thrown onto the ice. [ 7 ]
Al Sobotka , the former head ice manager at Little Caesars Arena and one of the two Zamboni drivers, was the person who retrieved the thrown octopuses from the ice. When the Red Wings played at Joe Louis Arena, he was known to twirl an octopus above his head as he walked across the ice rink to the Zamboni entrance. On April 19, 2008, the NHL sent the Red Wings a memo that forbade this and imposed a $10,000 fine for violating the mandate. In an email to the Detroit Free Press , NHL spokesman Frank Brown justified the ban because matter flew off the octopus and got on the ice when Sobotka swung it above his head. [ 8 ] In an article describing the effects of the new rule, the Detroit Free Press dubbed the NHL's prohibition as "Octopus-gate". [ 9 ] By the beginning of the third round of the 2008 Playoffs, the NHL loosened the ban to allow for the octopus twirling to take place at the Zamboni entrance. [ 10 ]
The octopus tradition inspired several other creature and object tossing moments:
During Game 3 of the 1995 Stanley Cup Finals between the Red Wings and the New Jersey Devils , Devils fans threw a lobster, a dead fish, and other objects onto the ice. [ 11 ]
Nashville Predators fans throw catfish onto their home ice. [ 12 ] The first recorded instance occurred during a game between the Red Wings and the Predators on January 26, 1999. It was done in response to the Red Wings' tradition. [ 13 ] [ 14 ]
In the 2006 Stanley Cup playoffs , during the opening-round series between the Red Wings and the Edmonton Oilers , two Edmonton radio hosts threw Alberta Beef onto the ice. Oilers fans continued throwing steaks, even at away games, which ultimately resulted in one of the hosts being arrested and charged with a misdemeanor while attending Game 1 of the Stanley Cup Finals at the RBC Center . [ 15 ]
During Game 4 of the 2007 Stanley Cup Western Conference Semifinals between the Red Wings and the San Jose Sharks , a Sharks fan threw a four-foot leopard shark onto the ice at the HP Pavilion at San Jose after the Sharks scored their first goal with two minutes left in the first period. [ 16 ]
During the 2008 Stanley Cup Finals , in which the Red Wings defeated the Pittsburgh Penguins , seafood wholesalers in Pittsburgh , led by Wholey's Fish Market , began requiring identification from customers who purchased octopuses, refusing to sell to buyers from Michigan . [ 17 ] This also took place in the lead up to the 2017 Stanley Cup Finals with markets refusing to sell catfish to Tennessee residents. [ 18 ]
In Game 1 of the 2010 Western Conference Quarterfinals between the Detroit Red Wings and the Phoenix Coyotes , a rubber snake was thrown onto the ice after a goal by the Coyotes' Keith Yandle . [ 19 ]
In Game 2 of the 2010 Western Conference Semifinals between the Red Wings and San Jose Sharks, a small shark was tossed onto the ice with an octopus inside its mouth. [ 20 ] | https://en.wikipedia.org/wiki/Legend_of_the_Octopus |
Legendre's constant is a mathematical constant occurring in a formula constructed by Adrien-Marie Legendre to approximate the behavior of the prime-counting function π ( x ) {\displaystyle \pi (x)} . The value that corresponds precisely to its asymptotic behavior is now known to be 1 .
Examination of available numerical data for known values of π ( x ) {\displaystyle \pi (x)} led Legendre to an approximating formula.
Legendre proposed in 1808 the formula y = x log ( x ) − 1.08366 , {\displaystyle y={\frac {x}{\log(x)-1.08366}},} ( OEIS : A228211 ), as giving an approximation of y = π ( x ) {\displaystyle y=\pi (x)} with a "very satisfying precision". [ 1 ] [ 2 ]
However, if one defines the real function B ( x ) {\displaystyle B(x)} by π ( x ) = x log ( x ) − B ( x ) , {\displaystyle \pi (x)={\frac {x}{\log(x)-B(x)}},} and if B ( x ) {\displaystyle B(x)} converges to a real constant B {\displaystyle B} as x {\displaystyle x} tends to infinity, then this constant satisfies B = lim x → ∞ ( log ( x ) − x π ( x ) ) . {\displaystyle B=\lim _{x\to \infty }\left(\log(x)-{x \over \pi (x)}\right).}
Not only is it now known that the limit exists, but also that its value is equal to 1, somewhat less than Legendre's 1.08366 . Regardless of its exact value, the existence of the limit B {\displaystyle B} implies the prime number theorem .
Pafnuty Chebyshev proved in 1849 [ 3 ] that if the limit B exists, it must be equal to 1. An easier proof was given by Pintz in 1980. [ 4 ]
It is an immediate consequence of the prime number theorem , under the precise form with an explicit estimate of the error term
π ( x ) = Li ( x ) + O ( x e − a log x ) as x → ∞ {\displaystyle \pi (x)=\operatorname {Li} (x)+O\left(xe^{-a{\sqrt {\log x}}}\right)\quad {\text{as }}x\to \infty }
(for some positive constant a , where O (...) is the big O notation ), as proved in 1899 by Charles de La Vallée Poussin , [ 5 ] that B is indeed equal to 1. (The prime number theorem had been proved in 1896, independently by Jacques Hadamard [ 6 ] and La Vallée Poussin, [ 7 ] : 183–256, 281–361 but without any estimate of the involved error term.)
Because the limit evaluates to such a simple number, the term Legendre's constant is today mostly of historical interest; it is often (technically incorrectly) used to refer to Legendre's first guess 1.08366... instead.
Using known values for π ( x ) {\displaystyle \pi (x)} , we can compute B ( x ) = log x − x π ( x ) {\displaystyle B(x)=\log x-{\frac {x}{\pi (x)}}} for values of x {\displaystyle x} far beyond what was available to Legendre:
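For instance, such a table can be reproduced with a short script. The following Python sketch uses a handful of exactly known values of π ( 10 k ); the dictionary and the printed formatting are illustrative choices, not part of any standard library:

```python
import math

# Exactly known values of the prime-counting function pi(10^k).
PI = {10**2: 25, 10**3: 168, 10**4: 1229, 10**5: 9592, 10**6: 78498,
      10**7: 664579, 10**8: 5761455, 10**9: 50847534}

for x, pi_x in sorted(PI.items()):
    B = math.log(x) - x / pi_x          # B(x) = log(x) - x/pi(x)
    print(f"x = 10^{round(math.log10(x)):>2}:  B(x) = {B:.5f}")
```

For x between 10 4 and 10 6 — the range Legendre could examine — the values hover near 1.08, which explains his estimate; the drift toward the true limit 1 is logarithmically slow (B(10 9 ) is still about 1.057).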
Values up to π ( 10 29 ) {\displaystyle \pi (10^{29})} (the first two columns) are known exactly; the values in the third and fourth columns are estimated using the Riemann R function . | https://en.wikipedia.org/wiki/Legendre's_constant |
In mathematics , Legendre's equation is a Diophantine equation of the form:
a x 2 + b y 2 + c z 2 = 0. {\displaystyle ax^{2}+by^{2}+cz^{2}=0.}
The equation is named for Adrien-Marie Legendre , who proved in 1785 that it is solvable in integers x , y , z , not all zero, if and only if
− bc , − ca and − ab are quadratic residues modulo a , b and c , respectively, where a , b , c are nonzero, square-free , pairwise relatively prime integers and also not all positive or all negative.
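The criterion is easy to test numerically. The following Python sketch checks the sign condition and the three quadratic-residue conditions by brute force; the function names are illustrative, and the caller is assumed to supply coefficients that are nonzero, square-free and pairwise coprime, as the theorem requires:

```python
def is_qr(d, m):
    """Return True if d is a quadratic residue modulo |m| (brute force)."""
    m = abs(m)
    d %= m
    return any((x * x) % m == d for x in range(m))

def legendre_solvable(a, b, c):
    """Legendre's criterion for ax^2 + by^2 + cz^2 = 0 to have a
    nontrivial integer solution (a, b, c nonzero, square-free,
    pairwise coprime -- assumed, not checked)."""
    if (a > 0 and b > 0 and c > 0) or (a < 0 and b < 0 and c < 0):
        return False                 # all of one sign: only (0, 0, 0)
    return is_qr(-b * c, a) and is_qr(-c * a, b) and is_qr(-a * b, c)

print(legendre_solvable(1, 1, -2))   # True:  1^2 + 1^2 - 2*1^2 = 0
print(legendre_solvable(1, 1, -3))   # False: x^2 + y^2 = 3z^2 has no solution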
| https://en.wikipedia.org/wiki/Legendre's_equation |
In geometry , Legendre's theorem on spherical triangles , named after Adrien-Marie Legendre , is stated as follows: if the sides of a spherical triangle are small compared with the radius of the sphere, then the angles of the planar triangle whose sides have the same lengths are, to a good approximation, the angles of the spherical triangle each diminished by one third of the spherical excess.
The theorem was very important in simplifying the heavy numerical work in calculating the results of traditional (pre-GPS and pre-computer) geodetic surveys from about 1800 until the middle of the twentieth century.
The theorem was stated by Legendre (1787) who provided a proof [ 1 ] in a supplement to the report of the measurement of the French meridional arc used in the definition of the metre . [ 2 ] Legendre does not claim that he was the originator of the theorem despite the attribution to him. Tropfke (1903) maintains that the method was in common use by surveyors at the time and may have been used as early as 1740 by La Condamine for the calculation of the Peruvian meridional arc . [ 3 ]
Girard's theorem states that the spherical excess of a triangle, E , is equal to its area, Δ, and therefore Legendre's theorem may be written as A ′ ≈ A − Δ 3 , B ′ ≈ B − Δ 3 , C ′ ≈ C − Δ 3 , {\displaystyle A'\approx A-{\frac {\Delta }{3}},\quad B'\approx B-{\frac {\Delta }{3}},\quad C'\approx C-{\frac {\Delta }{3}},} where A , B , C are the angles of the spherical triangle and A ′, B ′, C ′ those of the planar triangle with the same side lengths.
The excess, or area, of small triangles is very small. For example, consider an equilateral spherical triangle with sides of 60 km on a spherical Earth of radius 6371 km; the side corresponds to an angular distance of 60/6371 ≈ 0.0094, or approximately 10 −2 radians (subtending an angle of 0.57° at the centre). The area of such a small triangle is well approximated by that of a planar equilateral triangle with the same sides: 1 2 a 2 sin π 3 {\displaystyle {\tfrac {1}{2}}a^{2}\sin {\tfrac {\pi }{3}}} ≈ 0.0000433 radians (taking a ≈ 10 −2 ), corresponding to 8.9″.
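These numbers are easy to reproduce. A minimal Python sketch, using the rounded side length a ≈ 10 −2 radians as in the text:

```python
import math

a = 1.0e-2                                   # side in radians (60 km / 6371 km, rounded)
excess = 0.5 * a**2 * math.sin(math.pi / 3)  # planar approximation of the area/excess
arcsec = math.degrees(excess) * 3600

print(f"excess ~ {excess:.3e} rad ~ {arcsec:.1f} arcsec")   # ~4.33e-05 rad ~ 8.9 arcsec
print(f"Legendre correction per angle ~ {arcsec / 3:.1f} arcsec")
```

By Legendre's theorem each angle of the planar triangle differs from the corresponding spherical angle by one third of this excess, about 3″.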
When the sides of the triangles exceed 180 km, for which the excess is about 80″, the relations between the areas and the differences of the angles must be corrected by terms of fourth order in the sides, amounting to no more than 0.01″:
( Δ ′ {\displaystyle \Delta '} is the area of the planar triangle.) This result was proved by Buzengeiger (1818) . [ 4 ]
The theorem may be extended to the ellipsoid if a {\displaystyle a} , b {\displaystyle b} , c {\displaystyle c} are calculated by dividing the true lengths by the square root of the product of the principal radii of curvature [ 5 ] at the median latitude of the vertices (in place of a spherical radius). Gauss provided more exact formulae. [ 6 ] | https://en.wikipedia.org/wiki/Legendre's_theorem_on_spherical_triangles |
In mathematics , Legendre's three-square theorem states that a natural number can be represented as the sum of three squares of integers, n = x 2 + y 2 + z 2 , {\displaystyle n=x^{2}+y^{2}+z^{2},}
if and only if n is not of the form n = 4 a ( 8 b + 7 ) {\displaystyle n=4^{a}(8b+7)} for nonnegative integers a and b .
The first numbers that cannot be expressed as the sum of three squares (i.e. numbers that can be expressed as n = 4 a ( 8 b + 7 ) {\displaystyle n=4^{a}(8b+7)} ) are 7, 15, 23, 28, 31, 39, 47, 55, 60, 63, 71, ...
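The criterion is straightforward to evaluate. A minimal Python sketch (the function name is an illustrative choice):

```python
def is_sum_of_three_squares(n):
    """Legendre's criterion: n is a sum of three squares
    iff n is not of the form 4^a (8b + 7).  Assumes n >= 1."""
    while n % 4 == 0:
        n //= 4                      # strip factors of 4
    return n % 8 != 7

exceptions = [n for n in range(1, 100) if not is_sum_of_three_squares(n)]
print(exceptions)   # [7, 15, 23, 28, 31, 39, 47, 55, 60, 63, 71, 79, 87, 92, 95]
```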
Pierre de Fermat gave a criterion for numbers of the form 8 a + 1 and 8 a + 3 to be sums of a square plus twice another square, but did not provide a proof. [ 1 ] N. Beguelin noticed in 1774 [ 2 ] that every positive integer which is neither of the form 8 n + 7, nor of the form 4 n , is the sum of three squares, but did not provide a satisfactory proof. [ 3 ] In 1796 Gauss proved his Eureka theorem that every positive integer n is the sum of 3 triangular numbers ; this is equivalent to the fact that 8 n + 3 is a sum of three squares. In 1797 or 1798 A.-M. Legendre obtained the first proof of his three-square theorem. [ 4 ] In 1813, A. L. Cauchy noted [ 5 ] that Legendre's theorem is equivalent to the statement in the introduction above. Previously, in 1801, Gauss had obtained a more general result, [ 6 ] containing Legendre's theorem of 1797–98 as a corollary. In particular, Gauss counted the number of solutions of the expression of an integer as a sum of three squares, and this is a generalisation of yet another result of Legendre, [ 7 ] whose proof is incomplete. This last fact appears to be the reason for later incorrect claims according to which Legendre's proof of the three-square theorem was defective and had to be completed by Gauss. [ 8 ]
With Lagrange's four-square theorem and the two-square theorem of Girard, Fermat and Euler, Waring's problem for k = 2 is entirely solved.
The "only if" of the theorem is simply because modulo 8, every square is congruent to 0, 1 or 4. There are several proofs of the converse (besides Legendre's proof). One of them is due to Dirichlet (in 1850), and has become classical. [ 9 ] It requires three main lemmas:
This theorem can be used to prove Lagrange's four-square theorem , which states that all natural numbers can be written as a sum of four squares. Gauss [ 10 ] pointed out that the four squares theorem follows easily from the fact that any positive integer that is 1 or 2 mod 4 is a sum of 3 squares, because any positive integer not divisible by 4 can be reduced to this form by subtracting 0 or 1 from it.
However, proving the three-square theorem is considerably more difficult than a direct proof of the four-square theorem that does not use the three-square theorem. Indeed, the four-square theorem was proved earlier, in 1770. | https://en.wikipedia.org/wiki/Legendre's_three-square_theorem |
In mathematics, Legendre moments are a type of image moment computed using the Legendre polynomials . Legendre moments are used in areas of image processing including pattern and object recognition, image indexing, line fitting, feature extraction, edge detection, and texture analysis. [ 1 ] Legendre moments have been studied as a means to reduce image-moment calculation complexity by limiting the amount of information redundancy through approximation. [ 2 ]
Source: [ 3 ]
With order m + n and object intensity function f ( x , y ), the Legendre moment of order ( m , n ) is λ m n = ( 2 m + 1 ) ( 2 n + 1 ) 4 ∫ − 1 1 ∫ − 1 1 P m ( x ) P n ( y ) f ( x , y ) d x d y , {\displaystyle \lambda _{mn}={\frac {(2m+1)(2n+1)}{4}}\int _{-1}^{1}\int _{-1}^{1}P_{m}(x)\,P_{n}(y)\,f(x,y)\,dx\,dy,}
where m , n = 0, 1, 2, ... with the n th-order Legendre polynomial given by Rodrigues' formula: P n ( x ) = 1 2 n n ! d n d x n ( x 2 − 1 ) n , {\displaystyle P_{n}(x)={\frac {1}{2^{n}n!}}{\frac {d^{n}}{dx^{n}}}\left(x^{2}-1\right)^{n},}
which can also be written: P n ( x ) = ∑ k = 0 D ( n ) ( − 1 ) k ( 2 n − 2 k ) ! 2 n k ! ( n − k ) ! ( n − 2 k ) ! x n − 2 k , {\displaystyle P_{n}(x)=\sum _{k=0}^{D(n)}{\frac {(-1)^{k}\,(2n-2k)!}{2^{n}\,k!\,(n-k)!\,(n-2k)!}}\,x^{n-2k},}
where D ( n ) = floor( n /2). The set of Legendre polynomials { P n ( x )} form an orthogonal set on the interval [−1,1]: ∫ − 1 1 P m ( x ) P n ( x ) d x = 2 2 n + 1 δ m n . {\displaystyle \int _{-1}^{1}P_{m}(x)P_{n}(x)\,dx={\frac {2}{2n+1}}\delta _{mn}.}
A recurrence relation can be used to compute the Legendre polynomials: P n ( x ) = ( 2 n − 1 ) x P n − 1 ( x ) − ( n − 1 ) P n − 2 ( x ) n , {\displaystyle P_{n}(x)={\frac {(2n-1)\,x\,P_{n-1}(x)-(n-1)\,P_{n-2}(x)}{n}},} starting from P 0 ( x ) = 1 and P 1 ( x ) = x .
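As a concrete illustration of the definition above, the following Python sketch approximates the moments λ mn of an image sampled on a regular grid. It assumes the normalisation (2 m +1)(2 n +1)/4 and the index convention P m ( x ) P n ( y ) used above; both vary across the literature, and the function name is an illustrative choice:

```python
import numpy as np
from numpy.polynomial.legendre import legvander

def legendre_moments(f, max_order):
    """Approximate lambda_mn for an image f sampled on [-1,1] x [-1,1].

    f is a 2-D array; f[i, j] is the intensity at (x_j, y_i)."""
    ny, nx = f.shape
    x = np.linspace(-1.0, 1.0, nx)
    y = np.linspace(-1.0, 1.0, ny)
    Px = legvander(x, max_order)          # Px[j, m] = P_m(x_j)
    Py = legvander(y, max_order)          # Py[i, n] = P_n(y_i)
    dx, dy = x[1] - x[0], y[1] - y[0]
    core = Px.T @ f.T @ Py                # core[m, n] = sum_ij P_m(x_j) P_n(y_i) f[i, j]
    m = np.arange(max_order + 1)
    weight = (2*m + 1)[:, None] * (2*m + 1)[None, :] / 4.0
    return weight * core * dx * dy

moments = legendre_moments(np.ones((64, 64)), 2)
print(np.round(moments, 3))               # lambda_00 ~ 1 for f = 1, others ~ 0
```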
f ( x , y ) can be written as an infinite series expansion in terms of Legendre polynomials (for −1 ≤ x , y ≤ 1): f ( x , y ) = ∑ m = 0 ∞ ∑ n = 0 ∞ λ m n P m ( x ) P n ( y ) . {\displaystyle f(x,y)=\sum _{m=0}^{\infty }\sum _{n=0}^{\infty }\lambda _{mn}\,P_{m}(x)\,P_{n}(y).} | https://en.wikipedia.org/wiki/Legendre_moment |
In mathematics , Legendre polynomials , named after Adrien-Marie Legendre (1782), are a system of complete and orthogonal polynomials with a wealth of mathematical properties and numerous applications. They can be defined in many ways, and the various definitions highlight different aspects as well as suggest generalizations and connections to different mathematical structures and physical and numerical applications.
Closely related to the Legendre polynomials are associated Legendre polynomials , Legendre functions , Legendre functions of the second kind, big q-Legendre polynomials , and associated Legendre functions .
In this approach, the polynomials are defined as an orthogonal system with respect to the weight function w ( x ) = 1 {\displaystyle w(x)=1} over the interval [ − 1 , 1 ] {\displaystyle [-1,1]} . That is, P n ( x ) {\displaystyle P_{n}(x)} is a polynomial of degree n {\displaystyle n} , such that ∫ − 1 1 P m ( x ) P n ( x ) d x = 0 if n ≠ m . {\displaystyle \int _{-1}^{1}P_{m}(x)P_{n}(x)\,dx=0\quad {\text{if }}n\neq m.}
With the additional standardization condition P n ( 1 ) = 1 {\displaystyle P_{n}(1)=1} , all the polynomials can be uniquely determined. We then start the construction process: P 0 ( x ) = 1 {\displaystyle P_{0}(x)=1} is the only correctly standardized polynomial of degree 0. P 1 ( x ) {\displaystyle P_{1}(x)} must be orthogonal to P 0 {\displaystyle P_{0}} , leading to P 1 ( x ) = x {\displaystyle P_{1}(x)=x} , and P 2 ( x ) {\displaystyle P_{2}(x)} is determined by demanding orthogonality to P 0 {\displaystyle P_{0}} and P 1 {\displaystyle P_{1}} , and so on. P n {\displaystyle P_{n}} is fixed by demanding orthogonality to all P m {\displaystyle P_{m}} with m < n {\displaystyle m<n} . This gives n {\displaystyle n} conditions, which, along with the standardization P n ( 1 ) = 1 {\displaystyle P_{n}(1)=1} fixes all n + 1 {\displaystyle n+1} coefficients in P n ( x ) {\displaystyle P_{n}(x)} . With work, all the coefficients of every polynomial can be systematically determined, leading to the explicit representation in powers of x {\displaystyle x} given below.
This definition of the P n {\displaystyle P_{n}} 's is the simplest one. First, it does not appeal to the theory of differential equations. Second, the completeness of the polynomials follows immediately from the completeness of the powers 1, x , x 2 , x 3 , … {\displaystyle x,x^{2},x^{3},\ldots } . Finally, by defining them via orthogonality with respect to the Lebesgue measure on [ − 1 , 1 ] {\displaystyle [-1,1]} , it sets up the Legendre polynomials as one of the three classical orthogonal polynomial systems . The other two are the Laguerre polynomials , which are orthogonal over the half line [ 0 , ∞ ) {\displaystyle [0,\infty )} with the weight e − x {\displaystyle e^{-x}} , and the Hermite polynomials , orthogonal over the full line ( − ∞ , ∞ ) {\displaystyle (-\infty ,\infty )} with weight e − x 2 {\displaystyle e^{-x^{2}}} .
The Legendre polynomials can also be defined as the coefficients in a formal expansion in powers of t {\displaystyle t} of the generating function [ 1 ] 1 1 − 2 x t + t 2 = ∑ n = 0 ∞ P n ( x ) t n . {\displaystyle {\frac {1}{\sqrt {1-2xt+t^{2}}}}=\sum _{n=0}^{\infty }P_{n}(x)t^{n}\,.} ( 2 )
The coefficient of t n {\displaystyle t^{n}} is a polynomial in x {\displaystyle x} of degree n {\displaystyle n} with | x | ≤ 1 {\displaystyle |x|\leq 1} . Expanding up to t 1 {\displaystyle t^{1}} gives P 0 ( x ) = 1 , P 1 ( x ) = x . {\displaystyle P_{0}(x)=1\,,\quad P_{1}(x)=x.} Expansion to higher orders gets increasingly cumbersome, but is possible to do systematically, and again leads to one of the explicit forms given below.
It is possible to obtain the higher P n {\displaystyle P_{n}} 's without resorting to direct expansion of the Taylor series , however. Equation 2 is differentiated with respect to t on both sides and rearranged to obtain x − t 1 − 2 x t + t 2 = ( 1 − 2 x t + t 2 ) ∑ n = 1 ∞ n P n ( x ) t n − 1 . {\displaystyle {\frac {x-t}{\sqrt {1-2xt+t^{2}}}}=\left(1-2xt+t^{2}\right)\sum _{n=1}^{\infty }nP_{n}(x)t^{n-1}\,.} Replacing the quotient of the square root with its definition in Eq. 2 , and equating the coefficients of powers of t in the resulting expansion gives Bonnet’s recursion formula ( n + 1 ) P n + 1 ( x ) = ( 2 n + 1 ) x P n ( x ) − n P n − 1 ( x ) . {\displaystyle (n+1)P_{n+1}(x)=(2n+1)xP_{n}(x)-nP_{n-1}(x)\,.} This relation, along with the first two polynomials P 0 and P 1 , allows all the rest to be generated recursively.
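Bonnet's recursion translates directly into a simple and numerically stable evaluation scheme. A minimal Python sketch (the function name is an illustrative choice):

```python
def legendre_p(n, x):
    """Evaluate P_n(x) using Bonnet's recursion formula."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x                # P_0(x) and P_1(x)
    for k in range(1, n):
        # (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)
        p_prev, p = p, ((2*k + 1) * x * p - k * p_prev) / (k + 1)
    return p

print(legendre_p(2, 0.5))   # P_2(0.5) = (3*0.25 - 1)/2 = -0.125
print(legendre_p(5, 1.0))   # P_n(1) = 1 for every n
```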
The generating function approach is directly connected to the multipole expansion in electrostatics, as explained below, and is how the polynomials were first defined by Legendre in 1782.
A third definition is in terms of solutions to Legendre's differential equation : d d x ( ( 1 − x 2 ) d P n ( x ) d x ) + n ( n + 1 ) P n ( x ) = 0. {\displaystyle {\frac {d}{dx}}\left(\left(1-x^{2}\right){\frac {dP_{n}(x)}{dx}}\right)+n(n+1)P_{n}(x)=0\,.} ( 1 )
This differential equation has regular singular points at x = ±1 so if a solution is sought using the standard Frobenius or power series method, a series about the origin will only converge for | x | < 1 in general. When n is an integer, the solution P n ( x ) that is regular at x = 1 is also regular at x = −1 , and the series for this solution terminates (i.e. it is a polynomial). The orthogonality and completeness of these solutions is best seen from the viewpoint of Sturm–Liouville theory . We rewrite the differential equation as an eigenvalue problem, d d x ( ( 1 − x 2 ) d d x ) P ( x ) = − λ P ( x ) , {\displaystyle {\frac {d}{dx}}\left(\left(1-x^{2}\right){\frac {d}{dx}}\right)P(x)=-\lambda P(x)\,,} with the eigenvalue λ {\displaystyle \lambda } in lieu of n ( n + 1 ) {\displaystyle n(n+1)} . If we demand that the solution be regular at x = ± 1 {\displaystyle x=\pm 1} , the differential operator on the left is Hermitian . The eigenvalues are found to be of the form n ( n + 1) , with n = 0 , 1 , 2 , … {\displaystyle n=0,1,2,\ldots } and the eigenfunctions are the P n ( x ) {\displaystyle P_{n}(x)} . The orthogonality and completeness of this set of solutions follows at once from the larger framework of Sturm–Liouville theory.
The differential equation admits another, non-polynomial solution, the Legendre functions of the second kind Q n {\displaystyle Q_{n}} .
A two-parameter generalization of (Eq. 1 ) is called Legendre's general differential equation, solved by the Associated Legendre polynomials . Legendre functions are solutions of Legendre's differential equation (generalized or not) with non-integer parameters.
In physical settings, Legendre's differential equation arises naturally whenever one solves Laplace's equation (and related partial differential equations ) by separation of variables in spherical coordinates . From this standpoint, the eigenfunctions of the angular part of the Laplacian operator are the spherical harmonics , of which the Legendre polynomials are (up to a multiplicative constant) the subset that is left invariant by rotations about the polar axis. The polynomials appear as P n ( cos θ ) {\displaystyle P_{n}(\cos \theta )} where θ {\displaystyle \theta } is the polar angle. This approach to the Legendre polynomials provides a deep connection to rotational symmetry. Many of their properties which are found laboriously through the methods of analysis — for example the addition theorem — are more easily found using the methods of symmetry and group theory , and acquire profound physical and geometrical meaning.
An especially compact expression for the Legendre polynomials is given by Rodrigues' formula : P n ( x ) = 1 2 n n ! d n d x n ( x 2 − 1 ) n . {\displaystyle P_{n}(x)={\frac {1}{2^{n}n!}}{\frac {d^{n}}{dx^{n}}}(x^{2}-1)^{n}\,.}
This formula enables derivation of a large number of properties of the P n {\displaystyle P_{n}} 's. Among these are explicit representations such as P n ( x ) = [ t n ] ( ( t + x ) 2 − 1 ) n 2 n = [ t n ] ( t + x + 1 ) n ( t + x − 1 ) n 2 n , P n ( x ) = 1 2 n ∑ k = 0 n ( n k ) 2 ( x − 1 ) n − k ( x + 1 ) k , P n ( x ) = ∑ k = 0 n ( n k ) ( n + k k ) ( x − 1 2 ) k , P n ( x ) = 1 2 n ∑ k = 0 ⌊ n / 2 ⌋ ( − 1 ) k ( n k ) ( 2 n − 2 k n ) x n − 2 k , P n ( x ) = 2 n ∑ k = 0 n x k ( n k ) ( n + k − 1 2 n ) , P n ( x ) = 1 2 n n ! ∑ k = ⌈ n / 2 ⌉ n ( − 1 ) k + n ( 2 k ) ! ( 2 k − n ) ! ( n − k ) ! k ! x 2 k − n , P n ( x ) = { 1 π ∫ 0 π ( x + x 2 − 1 ⋅ cos ( t ) ) n d t if | x | > 1 , x n if | x | = 1 , 2 π ⋅ x n ⋅ | x | ⋅ ∫ | x | 1 t − n − 1 t 2 − x 2 ⋅ cos ( n ⋅ arccos ( t ) ) sin ( arccos ( t ) ) d t if 0 < | x | < 1 , ( − 1 ) n / 2 ⋅ 2 − n ⋅ ( n n / 2 ) if x = 0 and n even , 0 if x = 0 and n odd . {\displaystyle {\begin{aligned}P_{n}(x)&=[t^{n}]{\frac {\left((t+x)^{2}-1\right)^{n}}{2^{n}}}=[t^{n}]{\frac {\left(t+x+1\right)^{n}\left(t+x-1\right)^{n}}{2^{n}}},\\[1ex]P_{n}(x)&={\frac {1}{2^{n}}}\sum _{k=0}^{n}{\binom {n}{k}}^{\!2}(x-1)^{n-k}(x+1)^{k},\\[1ex]P_{n}(x)&=\sum _{k=0}^{n}{\binom {n}{k}}{\binom {n+k}{k}}\left({\frac {x-1}{2}}\right)^{\!k},\\[1ex]P_{n}(x)&={\frac {1}{2^{n}}}\sum _{k=0}^{\left\lfloor n/2\right\rfloor }\left(-1\right)^{k}{\binom {n}{k}}{\binom {2n-2k}{n}}x^{n-2k},\\[1ex]P_{n}(x)&=2^{n}\sum _{k=0}^{n}x^{k}{\binom {n}{k}}{\binom {\frac {n+k-1}{2}}{n}},\\[1ex]P_{n}(x)&={\frac {1}{2^{n}n!}}\sum _{k=\lceil n/2\rceil }^{n}{\frac {(-1)^{k+n}(2k)!}{(2k-n)!(n-k)!k!}}x^{2k-n},\\[1ex]P_{n}(x)&={\begin{cases}\displaystyle {\frac {1}{\pi }}\int _{0}^{\pi }{\left(x+{\sqrt {x^{2}-1}}\cdot \cos(t)\right)}^{n}\,dt&{\text{if }}|x|>1,\\x^{n}&{\text{if }}|x|=1,\\\displaystyle {\frac {2}{\pi }}\cdot x^{n}\cdot |x|\cdot \int _{|x|}^{1}{\frac {t^{-n-1}}{\sqrt {t^{2}-x^{2}}}}\cdot {\frac {\cos \left(n\cdot \arccos(t)\right)}{\sin \left(\arccos(t)\right)}}\,dt&{\text{if }}0<|x|<1,\\\displaystyle (-1)^{n/2}\cdot 2^{-n}\cdot {\binom {n}{n/2}}&{\text{if }}x=0{\text{ and }}n{\text{ even}},\\0&{\text{if }}x=0{\text{ and }}n{\text{ odd}}.\end{cases}}\end{aligned}}}
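Rodrigues' formula and the explicit representations above can be cross-checked symbolically. A short sketch using SymPy, whose built-in legendre function provides an independent reference:

```python
import sympy as sp

x = sp.symbols('x')
n = 4

# Rodrigues' formula: P_n(x) = 1/(2^n n!) d^n/dx^n (x^2 - 1)^n
rodrigues = sp.diff((x**2 - 1)**n, x, n) / (2**n * sp.factorial(n))

print(sp.expand(rodrigues))          # 35*x**4/8 - 15*x**2/4 + 3/8
print(sp.expand(sp.legendre(n, x)))  # the identical polynomial
```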
Expressing the polynomial as a power series, P n ( x ) = ∑ a n , k x k {\textstyle P_{n}(x)=\sum a_{n,k}x^{k}} , the coefficients of powers of x {\displaystyle x} can also be calculated using the recurrences
a n , k = − ( n − k + 2 ) ( n + k − 1 ) k ( k − 1 ) a n , k − 2 . {\displaystyle a_{n,k}=-{\frac {(n-k+2)(n+k-1)}{k(k-1)}}a_{n,k-2}.} or
a n , k = − n + k − 1 n − k a n − 2 , k . {\displaystyle a_{n,k}=-{\frac {n+k-1}{n-k}}a_{n-2,k}.}
The Legendre polynomial is determined by the values used for the two constants a n , 0 {\textstyle a_{n,0}} and a n , 1 {\textstyle a_{n,1}} , where a n , 0 = 0 {\textstyle a_{n,0}=0} if n {\displaystyle n} is odd and a n , 1 = 0 {\textstyle a_{n,1}=0} if n {\displaystyle n} is even. [ 2 ]
In the fourth representation, ⌊ n / 2 ⌋ {\displaystyle \lfloor n/2\rfloor } stands for the largest integer less than or equal to n / 2 {\displaystyle n/2} . The last representation, which is also immediate from the recursion formula, expresses the Legendre polynomials by simple monomials and involves the generalized form of the binomial coefficient .
The reversal of the representation as a power series is [ 3 ] [ 4 ]
x m = ∑ s = 0 ⌊ m / 2 ⌋ ( 2 m − 4 s + 1 ) ( 2 s + 2 ) ( 2 s + 4 ) ⋯ 2 ⌊ m / 2 ⌋ ( 2 m − 2 s + 1 ) ( 2 m − 2 s − 1 ) ( 2 m − 2 s − 3 ) ⋯ ( 1 + 2 ⌊ ( m + 1 ) / 2 ⌋ ) P m − 2 s ( x ) . {\displaystyle x^{m}=\sum _{s=0}^{\lfloor m/2\rfloor }(2m-4s+1){\frac {(2s+2)(2s+4)\cdots 2\lfloor m/2\rfloor }{(2m-2s+1)(2m-2s-1)(2m-2s-3)\cdots (1+2\lfloor (m+1)/2\rfloor )}}P_{m-2s}(x).}
for m = 0 , 1 , 2 , … {\displaystyle m=0,1,2,\ldots } , where an empty product in the numerator (last factor less than the first factor) evaluates to 1.
The first few Legendre polynomials are: P 0 ( x ) = 1 , P 1 ( x ) = x , P 2 ( x ) = 1 2 ( 3 x 2 − 1 ) , P 3 ( x ) = 1 2 ( 5 x 3 − 3 x ) , P 4 ( x ) = 1 8 ( 35 x 4 − 30 x 2 + 3 ) , P 5 ( x ) = 1 8 ( 63 x 5 − 70 x 3 + 15 x ) . {\displaystyle P_{0}(x)=1,\quad P_{1}(x)=x,\quad P_{2}(x)={\tfrac {1}{2}}(3x^{2}-1),\quad P_{3}(x)={\tfrac {1}{2}}(5x^{3}-3x),\quad P_{4}(x)={\tfrac {1}{8}}(35x^{4}-30x^{2}+3),\quad P_{5}(x)={\tfrac {1}{8}}(63x^{5}-70x^{3}+15x).}
[Figure: graphs of the Legendre polynomials P 0 through P 5 on the interval −1 ≤ x ≤ 1.]
The standardization P n ( 1 ) = 1 {\displaystyle P_{n}(1)=1} fixes the normalization of the Legendre polynomials (with respect to the L 2 norm on the interval −1 ≤ x ≤ 1 ). Since they are also orthogonal with respect to the same norm, orthogonality and normalization can be combined into the single equation, ∫ − 1 1 P m ( x ) P n ( x ) d x = 2 2 n + 1 δ m n , {\displaystyle \int _{-1}^{1}P_{m}(x)P_{n}(x)\,dx={\frac {2}{2n+1}}\delta _{mn},} (where δ mn denotes the Kronecker delta , equal to 1 if m = n and to 0 otherwise).
This normalization is most readily found by employing Rodrigues' formula , given below.
That the polynomials are complete means the following. Given any piecewise continuous function f ( x ) {\displaystyle f(x)} with finitely many discontinuities in the interval [−1, 1] , the sequence of sums f n ( x ) = ∑ ℓ = 0 n a ℓ P ℓ ( x ) {\displaystyle f_{n}(x)=\sum _{\ell =0}^{n}a_{\ell }P_{\ell }(x)} converges in the mean to f ( x ) {\displaystyle f(x)} as n → ∞ {\displaystyle n\to \infty } , provided we take a ℓ = 2 ℓ + 1 2 ∫ − 1 1 f ( x ) P ℓ ( x ) d x . {\displaystyle a_{\ell }={\frac {2\ell +1}{2}}\int _{-1}^{1}f(x)P_{\ell }(x)\,dx.}
This completeness property underlies all the expansions discussed in this article, and is often stated in the form ∑ ℓ = 0 ∞ 2 ℓ + 1 2 P ℓ ( x ) P ℓ ( y ) = δ ( x − y ) , {\displaystyle \sum _{\ell =0}^{\infty }{\frac {2\ell +1}{2}}P_{\ell }(x)P_{\ell }(y)=\delta (x-y),} with −1 ≤ x ≤ 1 and −1 ≤ y ≤ 1 .
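Since the expansion coefficients are simple integrals, the mean convergence can be observed directly. The following sketch expands f ( x ) = | x | (an arbitrary example) in Legendre polynomials; legval and quad are standard NumPy/SciPy routines:

```python
import numpy as np
from numpy.polynomial.legendre import legval
from scipy.integrate import quad

f = abs                                   # expand f(x) = |x| on [-1, 1]
L = 8
coeffs = [(2*l + 1) / 2 * quad(lambda t: f(t) * legval(t, [0.0]*l + [1.0]),
                               -1, 1)[0]
          for l in range(L + 1)]          # a_l = (2l+1)/2 * int f(x) P_l(x) dx

xs = np.linspace(-1.0, 1.0, 5)
print(legval(xs, coeffs))                 # approaches |xs| as L grows
print(np.abs(xs))
```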
The Legendre polynomials were first introduced in 1782 by Adrien-Marie Legendre [ 5 ] as the coefficients in the expansion of the Newtonian potential 1 | x − x ′ | = 1 r 2 + r ′ 2 − 2 r r ′ cos γ = ∑ ℓ = 0 ∞ r ′ ℓ r ℓ + 1 P ℓ ( cos γ ) , {\displaystyle {\frac {1}{\left|\mathbf {x} -\mathbf {x} '\right|}}={\frac {1}{\sqrt {r^{2}+{r'}^{2}-2r{r'}\cos \gamma }}}=\sum _{\ell =0}^{\infty }{\frac {{r'}^{\ell }}{r^{\ell +1}}}P_{\ell }(\cos \gamma ),} where r and r ′ are the lengths of the vectors x and x ′ respectively and γ is the angle between those two vectors. The series converges when r > r ′ . The expression gives the gravitational potential associated to a point mass or the Coulomb potential associated to a point charge . The expansion using Legendre polynomials might be useful, for instance, when integrating this expression over a continuous mass or charge distribution.
Legendre polynomials occur in the solution of Laplace's equation of the static potential , ∇ 2 Φ( x ) = 0 , in a charge-free region of space, using the method of separation of variables , where the boundary conditions have axial symmetry (no dependence on an azimuthal angle ). Where ẑ is the axis of symmetry and θ is the angle between the position of the observer and the ẑ axis (the zenith angle), the solution for the potential will be Φ ( r , θ ) = ∑ ℓ = 0 ∞ ( A ℓ r ℓ + B ℓ r − ( ℓ + 1 ) ) P ℓ ( cos θ ) . {\displaystyle \Phi (r,\theta )=\sum _{\ell =0}^{\infty }\left(A_{\ell }r^{\ell }+B_{\ell }r^{-(\ell +1)}\right)P_{\ell }(\cos \theta )\,.}
A l and B l are to be determined according to the boundary condition of each problem. [ 6 ]
They also appear when solving the Schrödinger equation in three dimensions for a central force.
Legendre polynomials are also useful in expanding functions of the form (this is the same as before, written a little differently): 1 1 + η 2 − 2 η x = ∑ k = 0 ∞ η k P k ( x ) , {\displaystyle {\frac {1}{\sqrt {1+\eta ^{2}-2\eta x}}}=\sum _{k=0}^{\infty }\eta ^{k}P_{k}(x),} which arise naturally in multipole expansions . The left-hand side of the equation is the generating function for the Legendre polynomials.
As an example, the electric potential Φ( r , θ ) (in spherical coordinates ) due to a point charge located on the z -axis at z = a (see diagram right) varies as Φ ( r , θ ) ∝ 1 R = 1 r 2 + a 2 − 2 a r cos θ . {\displaystyle \Phi (r,\theta )\propto {\frac {1}{R}}={\frac {1}{\sqrt {r^{2}+a^{2}-2ar\cos \theta }}}.}
If the radius r of the observation point P is greater than a , the potential may be expanded in the Legendre polynomials Φ ( r , θ ) ∝ 1 r ∑ k = 0 ∞ ( a r ) k P k ( cos θ ) , {\displaystyle \Phi (r,\theta )\propto {\frac {1}{r}}\sum _{k=0}^{\infty }\left({\frac {a}{r}}\right)^{k}P_{k}(\cos \theta ),} where we have defined η = a / r < 1 and x = cos θ . This expansion is used to develop the normal multipole expansion .
Conversely, if the radius r of the observation point P is smaller than a , the potential may still be expanded in the Legendre polynomials as above, but with a and r exchanged. This expansion is the basis of interior multipole expansion .
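A quick numerical check of the exterior expansion ( r > a ): the partial sums of the Legendre series converge to the exact potential 1/ R . The values of a , r and θ below are arbitrary test inputs, and legval evaluates a Legendre series with the given coefficients:

```python
import numpy as np
from numpy.polynomial.legendre import legval

a, r, theta = 1.0, 3.0, 0.7                  # observation point with r > a
exact = 1.0 / np.sqrt(r**2 + a**2 - 2*a*r*np.cos(theta))

eta, x = a / r, np.cos(theta)
for kmax in (0, 2, 5, 10):
    coeffs = eta ** np.arange(kmax + 1)      # eta^k multiplies P_k(x)
    approx = legval(x, coeffs) / r           # (1/r) sum_k eta^k P_k(cos theta)
    print(kmax, approx, exact)               # approx -> exact as kmax grows
```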
The trigonometric functions cos nθ , also denoted as the Chebyshev polynomials T n (cos θ ) ≡ cos nθ , can also be multipole expanded by the Legendre polynomials P n (cos θ ) . The first several orders are as follows: T 0 ( cos θ ) = 1 = P 0 ( cos θ ) , T 1 ( cos θ ) = cos θ = P 1 ( cos θ ) , T 2 ( cos θ ) = cos 2 θ = 1 3 ( 4 P 2 ( cos θ ) − P 0 ( cos θ ) ) , T 3 ( cos θ ) = cos 3 θ = 1 5 ( 8 P 3 ( cos θ ) − 3 P 1 ( cos θ ) ) , T 4 ( cos θ ) = cos 4 θ = 1 105 ( 192 P 4 ( cos θ ) − 80 P 2 ( cos θ ) − 7 P 0 ( cos θ ) ) , T 5 ( cos θ ) = cos 5 θ = 1 63 ( 128 P 5 ( cos θ ) − 56 P 3 ( cos θ ) − 9 P 1 ( cos θ ) ) , T 6 ( cos θ ) = cos 6 θ = 1 1155 ( 2560 P 6 ( cos θ ) − 1152 P 4 ( cos θ ) − 220 P 2 ( cos θ ) − 33 P 0 ( cos θ ) ) . {\displaystyle {\begin{alignedat}{2}T_{0}(\cos \theta )&=1&&=P_{0}(\cos \theta ),\\[4pt]T_{1}(\cos \theta )&=\cos \theta &&=P_{1}(\cos \theta ),\\[4pt]T_{2}(\cos \theta )&=\cos 2\theta &&={\tfrac {1}{3}}{\bigl (}4P_{2}(\cos \theta )-P_{0}(\cos \theta ){\bigr )},\\[4pt]T_{3}(\cos \theta )&=\cos 3\theta &&={\tfrac {1}{5}}{\bigl (}8P_{3}(\cos \theta )-3P_{1}(\cos \theta ){\bigr )},\\[4pt]T_{4}(\cos \theta )&=\cos 4\theta &&={\tfrac {1}{105}}{\bigl (}192P_{4}(\cos \theta )-80P_{2}(\cos \theta )-7P_{0}(\cos \theta ){\bigr )},\\[4pt]T_{5}(\cos \theta )&=\cos 5\theta &&={\tfrac {1}{63}}{\bigl (}128P_{5}(\cos \theta )-56P_{3}(\cos \theta )-9P_{1}(\cos \theta ){\bigr )},\\[4pt]T_{6}(\cos \theta )&=\cos 6\theta &&={\tfrac {1}{1155}}{\bigl (}2560P_{6}(\cos \theta )-1152P_{4}(\cos \theta )-220P_{2}(\cos \theta )-33P_{0}(\cos \theta ){\bigr )}.\end{alignedat}}}
This can be summarized for n > 0 {\displaystyle n>0} as
T n ( x ) = 2 2 n − n ′ n ^ ! ∑ t = 0 n ^ ( n − 2 t + 1 / 2 ) ( n − t − 1 ) ! 2 2 t t ! ( n − 1 ) ! × ( − 1 ) ⋅ 1 ⋅ 3 ⋯ ( 2 t − 3 ) ( 1 + 2 n ′ ) ( 3 + 2 n ′ ) ⋯ ( 2 n − 2 t + 1 ) P n − 2 t ( x ) . {\displaystyle T_{n}(x)=2^{2n-n'}{\hat {n}}!\sum _{t=0}^{\hat {n}}(n-2t+1/2){\frac {(n-t-1)!}{2^{2t}t!(n-1)!}}\times {\frac {(-1)\cdot 1\cdot 3\cdots (2t-3)}{(1+2n')(3+2n')\cdots (2n-2t+1)}}P_{n-2t}(x).}
where n ^ ≡ ⌊ n / 2 ⌋ {\displaystyle {\hat {n}}\equiv \lfloor n/2\rfloor } and n ′ ≡ ⌊ ( n + 1 ) / 2 ⌋ {\displaystyle n'\equiv \lfloor (n+1)/2\rfloor } , and where the products with steps of two in the numerator and denominator are to be interpreted as 1 if they are empty, i.e., if the last factor is smaller than the first factor.
Another property is the expression for sin ( n + 1) θ , which is sin ( n + 1 ) θ sin θ = ∑ ℓ = 0 n P ℓ ( cos θ ) P n − ℓ ( cos θ ) . {\displaystyle {\frac {\sin(n+1)\theta }{\sin \theta }}=\sum _{\ell =0}^{n}P_{\ell }(\cos \theta )P_{n-\ell }(\cos \theta ).}
A recurrent neural network that contains a d -dimensional memory vector, m ∈ R d {\displaystyle \mathbf {m} \in \mathbb {R} ^{d}} , can be optimized such that its neural activities obey the linear time-invariant system given by the following state-space representation : θ m ˙ ( t ) = A m ( t ) + B u ( t ) , {\displaystyle \theta {\dot {\mathbf {m} }}(t)=A\mathbf {m} (t)+Bu(t),} A = [ a ] i j ∈ R d × d , a i j = ( 2 i + 1 ) { − 1 i < j ( − 1 ) i − j + 1 i ≥ j , B = [ b ] i ∈ R d × 1 , b i = ( 2 i + 1 ) ( − 1 ) i . {\displaystyle {\begin{aligned}A&=\left[a\right]_{ij}\in \mathbb {R} ^{d\times d}{\text{,}}\quad &&a_{ij}=\left(2i+1\right){\begin{cases}-1&i<j\\(-1)^{i-j+1}&i\geq j\end{cases}},\\B&=\left[b\right]_{i}\in \mathbb {R} ^{d\times 1}{\text{,}}\quad &&b_{i}=(2i+1)(-1)^{i}.\end{aligned}}}
In this case, the sliding window of u {\displaystyle u} across the past θ {\displaystyle \theta } units of time is best approximated by a linear combination of the first d {\displaystyle d} shifted Legendre polynomials, weighted together by the elements of m {\displaystyle \mathbf {m} } at time t {\displaystyle t} : u ( t − θ ′ ) ≈ ∑ ℓ = 0 d − 1 P ~ ℓ ( θ ′ θ ) m ℓ ( t ) , 0 ≤ θ ′ ≤ θ . {\displaystyle u(t-\theta ')\approx \sum _{\ell =0}^{d-1}{\widetilde {P}}_{\ell }\left({\frac {\theta '}{\theta }}\right)\,m_{\ell }(t),\quad 0\leq \theta '\leq \theta .}
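The matrices A and B above are easy to construct directly. A minimal NumPy sketch follows; the simple forward-Euler time step is used purely for illustration (practical implementations typically use more careful discretisations), and the function name is an illustrative choice:

```python
import numpy as np

def lmu_matrices(d):
    """State-space matrices of the Legendre memory unit, as defined above."""
    i = np.arange(d)[:, None]
    j = np.arange(d)[None, :]
    A = np.where(i < j, -1.0, (-1.0) ** (i - j + 1)) * (2 * i + 1)
    B = (2 * i + 1) * (-1.0) ** i              # column vector, shape (d, 1)
    return A, B

d, theta, dt = 6, 1.0, 1e-3
A, B = lmu_matrices(d)
m = np.zeros((d, 1))
for step in range(1000):                       # feed a constant input u = 1
    u = 1.0
    m += (dt / theta) * (A @ m + B * u)        # theta m'(t) = A m(t) + B u(t)
print(m.ravel())                               # m encodes the recent window of u
```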
When combined with deep learning methods, these networks can be trained to outperform long short-term memory units and related architectures, while using fewer computational resources. [ 7 ]
Legendre polynomials have definite parity. That is, they are even or odd , [ 8 ] according to P n ( − x ) = ( − 1 ) n P n ( x ) . {\displaystyle P_{n}(-x)=(-1)^{n}P_{n}(x)\,.}
Another useful property is ∫ − 1 1 P n ( x ) d x = 0 for n ≥ 1 , {\displaystyle \int _{-1}^{1}P_{n}(x)\,dx=0{\text{ for }}n\geq 1,} which follows from considering the orthogonality relation with P 0 ( x ) = 1 {\displaystyle P_{0}(x)=1} . It is convenient when a Legendre series ∑ i a i P i {\textstyle \sum _{i}a_{i}P_{i}} is used to approximate a function or experimental data: the average of the series over the interval [−1, 1] is simply given by the leading expansion coefficient a 0 {\displaystyle a_{0}} .
An antiderivative is [ 9 ]
∫ P n ( x ) d x = 1 2 n + 1 [ P n + 1 ( x ) − P n − 1 ( x ) ] , n ≥ 1. {\displaystyle \int P_{n}(x)dx={\frac {1}{2n+1}}[P_{n+1}(x)-P_{n-1}(x)],\quad n\geq 1.}
Since the differential equation and the orthogonality property are independent of scaling, the Legendre polynomials' definitions are "standardized" (sometimes called "normalization", but the actual norm is not 1) by being scaled so that P n ( 1 ) = 1 . {\displaystyle P_{n}(1)=1\,.}
The derivative at the end point is given by P n ′ ( 1 ) = n ( n + 1 ) 2 . {\displaystyle P_{n}'(1)={\frac {n(n+1)}{2}}\,.}
The product expansion is [ 10 ]
P m ( x ) P n ( x ) = ∑ r = 0 min ( m , n ) A r A m − r A n − r A m + n − r 2 m + 2 n − 4 r + 1 2 m + 2 n − 2 r + 1 P m + n − 2 r ( x ) {\displaystyle P_{m}(x)P_{n}(x)=\sum _{r=0}^{\min(m,n)}{\frac {A_{r}A_{m-r}A_{n-r}}{A_{m+n-r}}}{\frac {2m+2n-4r+1}{2m+2n-2r+1}}P_{m+n-2r}(x)}
where A r ≡ ( 2 r − 1 ) ! ! / r ! {\displaystyle A_{r}\equiv (2r-1)!!/r!} .
The Askey–Gasper inequality for Legendre polynomials reads ∑ j = 0 n P j ( x ) ≥ 0 for x ≥ − 1 . {\displaystyle \sum _{j=0}^{n}P_{j}(x)\geq 0\quad {\text{for }}\quad x\geq -1\,.}
The Legendre polynomials of a scalar product of unit vectors can be expanded with spherical harmonics using P ℓ ( r ⋅ r ′ ) = 4 π 2 ℓ + 1 ∑ m = − ℓ ℓ Y ℓ m ( θ , φ ) Y ℓ m ∗ ( θ ′ , φ ′ ) , {\displaystyle P_{\ell }\left(r\cdot r'\right)={\frac {4\pi }{2\ell +1}}\sum _{m=-\ell }^{\ell }Y_{\ell m}(\theta ,\varphi )Y_{\ell m}^{*}(\theta ',\varphi ')\,,} where the unit vectors r and r ′ have spherical coordinates ( θ , φ ) and ( θ ′, φ ′) , respectively.
The product of two Legendre polynomials [ 11 ] ∑ p = 0 ∞ t p P p ( cos θ 1 ) P p ( cos θ 2 ) = 2 π K ( 2 t sin θ 1 sin θ 2 t 2 − 2 t cos ( θ 1 + θ 2 ) + 1 ) t 2 − 2 t cos ( θ 1 + θ 2 ) + 1 , {\displaystyle \sum _{p=0}^{\infty }t^{p}P_{p}(\cos \theta _{1})P_{p}(\cos \theta _{2})={\frac {2}{\pi }}{\frac {\mathbf {K} \left(2{\sqrt {\frac {t\sin \theta _{1}\sin \theta _{2}}{t^{2}-2t\cos \left(\theta _{1}+\theta _{2}\right)+1}}}\right)}{\sqrt {t^{2}-2t\cos \left(\theta _{1}+\theta _{2}\right)+1}}}\,,} where K ( ⋅ ) {\displaystyle K(\cdot )} is the complete elliptic integral of the first kind .
The formulas of Dirichlet-Mehler: [ 12 ] [ 13 ] [ 14 ] : 86, Eq. 4.8.6, Eq. 4.8.7 [ 15 ] P n ( cos θ ) = 2 π ∫ 0 θ cos ( n + 1 2 ) ϕ ( 2 cos ϕ − 2 cos θ ) 1 2 d ϕ = 2 π ∫ θ π sin ( n + 1 2 ) ϕ ( 2 cos θ − 2 cos ϕ ) 1 2 d ϕ {\displaystyle P_{n}(\cos \theta )={\frac {2}{\pi }}\int _{0}^{\theta }{\frac {\cos \left(n+{\frac {1}{2}}\right)\phi }{(2\cos \phi -2\cos \theta )^{\frac {1}{2}}}}d\phi ={\frac {2}{\pi }}\int _{\theta }^{\pi }{\frac {\sin \left(n+{\frac {1}{2}}\right)\phi }{(2\cos \theta -2\cos \phi )^{\frac {1}{2}}}}d\phi } which has generalizations for associated Legendre polynomials. [ 16 ] [ 17 ]
The Fourier-Legendre series: [ 18 ] e i t x = ∑ n = 0 ∞ ( 2 n + 1 ) i n π 2 t J n + 1 2 ( t ) P n ( x ) {\displaystyle e^{itx}=\sum _{n=0}^{\infty }(2n+1)i^{n}{\sqrt {\frac {\pi }{2t}}}J_{n+{\frac {1}{2}}}(t)P_{n}(x)} where J {\displaystyle J} is the Bessel function of the first kind .
As discussed above, the Legendre polynomials obey the three-term recurrence relation known as Bonnet's recursion formula given by ( n + 1 ) P n + 1 ( x ) = ( 2 n + 1 ) x P n ( x ) − n P n − 1 ( x ) {\displaystyle (n+1)P_{n+1}(x)=(2n+1)xP_{n}(x)-nP_{n-1}(x)} and x 2 − 1 n d d x P n ( x ) = x P n ( x ) − P n − 1 ( x ) {\displaystyle {\frac {x^{2}-1}{n}}{\frac {d}{dx}}P_{n}(x)=xP_{n}(x)-P_{n-1}(x)} or, with the alternative expression, which also holds at the endpoints d d x P n + 1 ( x ) = ( n + 1 ) P n ( x ) + x d d x P n ( x ) . {\displaystyle {\frac {d}{dx}}P_{n+1}(x)=(n+1)P_{n}(x)+x{\frac {d}{dx}}P_{n}(x)\,.}
Useful for the integration of Legendre polynomials is ( 2 n + 1 ) P n ( x ) = d d x ( P n + 1 ( x ) − P n − 1 ( x ) ) . {\displaystyle (2n+1)P_{n}(x)={\frac {d}{dx}}{\bigl (}P_{n+1}(x)-P_{n-1}(x){\bigr )}\,.}
From the above one can see also that d d x P n + 1 ( x ) = ( 2 n + 1 ) P n ( x ) + ( 2 ( n − 2 ) + 1 ) P n − 2 ( x ) + ( 2 ( n − 4 ) + 1 ) P n − 4 ( x ) + ⋯ {\displaystyle {\frac {d}{dx}}P_{n+1}(x)=(2n+1)P_{n}(x)+{\bigl (}2(n-2)+1{\bigr )}P_{n-2}(x)+{\bigl (}2(n-4)+1{\bigr )}P_{n-4}(x)+\cdots } or equivalently d d x P n + 1 ( x ) = 2 P n ( x ) ‖ P n ‖ 2 + 2 P n − 2 ( x ) ‖ P n − 2 ‖ 2 + ⋯ {\displaystyle {\frac {d}{dx}}P_{n+1}(x)={\frac {2P_{n}(x)}{\left\|P_{n}\right\|^{2}}}+{\frac {2P_{n-2}(x)}{\left\|P_{n-2}\right\|^{2}}}+\cdots } where ‖ P n ‖ is the norm over the interval −1 ≤ x ≤ 1 ‖ P n ‖ = ∫ − 1 1 ( P n ( x ) ) 2 d x = 2 2 n + 1 . {\displaystyle \|P_{n}\|={\sqrt {\int _{-1}^{1}{\bigl (}P_{n}(x){\bigr )}^{2}\,dx}}={\sqrt {\frac {2}{2n+1}}}\,.} More generally, all orders of derivatives are expressible as a sum of Legendre polynomials: [ 19 ] d q d x q P q + 2 j ( x ) = 2 q − 1 ( q − 1 ) ! ∑ i = 0 j ( 4 i + 1 ) ( q + j − i − 1 ) ! Γ ( q + j + i + 1 2 ) ( j − i ) ! Γ ( j + i + 3 / 2 ) P 2 i ( x ) = 1 2 q − 2 ( q − 1 ) ! ∑ i = 0 j ( 4 i + 1 ) ( q + j − i − 1 ) ! ( 2 q + 2 j + 2 i − 1 ) ! ( j − i ) ! ( 2 j + 2 i + 2 ) ! ( j + i + 1 ) ! ( q + j + i − 1 ) ! P 2 i ( x ) d q d x q P q + 2 j + 1 ( x ) = 2 q − 1 ( q − 1 ) ! ∑ i = 0 j ( 4 i + 3 ) ( q + j − i − 1 ) ! Γ ( q + j + i + 3 / 2 ) ( j − i ) ! Γ ( j + i + 5 / 2 ) P 2 i + 1 ( x ) = 1 2 q − 2 ( q − 1 ) ! ∑ i = 0 j ( 4 i + 3 ) ( q + j − i − 1 ) ! ( 2 q + 2 j + 2 i + 1 ) ! ( j − i ) ! ( 2 j + 2 i + 4 ) ! ( j + i + 2 ) ! ( q + j + i ) ! P 2 i + 1 ( x ) {\displaystyle {\begin{aligned}&{\begin{aligned}&{\frac {d^{q}}{dx^{q}}}P_{q+2j}(x)={\frac {2^{q-1}}{(q-1)!}}\sum _{i=0}^{j}(4i+1){\frac {(q+j-i-1)!\Gamma \left(q+j+i+{\frac {1}{2}}\right)}{(j-i)!\Gamma (j+i+3/2)}}P_{2i}(x)\\&\quad ={\frac {1}{2^{q-2}(q-1)!}}\sum _{i=0}^{j}(4i+1){\frac {(q+j-i-1)!(2q+2j+2i-1)!}{(j-i)!(2j+2i+2)!}}{\frac {(j+i+1)!}{(q+j+i-1)!}}P_{2i}(x)\end{aligned}}\\&{\begin{aligned}&{\frac {d^{q}}{dx^{q}}}P_{q+2j+1}(x)={\frac {2^{q-1}}{(q-1)!}}\sum _{i=0}^{j}(4i+3){\frac {(q+j-i-1)!\Gamma (q+j+i+3/2)}{(j-i)!\Gamma (j+i+5/2)}}P_{2i+1}(x)\\&\quad ={\frac {1}{2^{q-2}(q-1)!}}\sum _{i=0}^{j}(4i+3){\frac {(q+j-i-1)!(2q+2j+2i+1)!}{(j-i)!(2j+2i+4)!}}{\frac {(j+i+2)!}{(q+j+i)!}}P_{2i+1}(x)\end{aligned}}\end{aligned}}}
Asymptotically, for ℓ → ∞ {\displaystyle \ell \to \infty } , the Legendre polynomials can be written as [ 14 ] : 194, Theorem 8.21.2 P ℓ ( cos θ ) = θ sin ( θ ) { J 0 [ ( ℓ + 1 2 ) θ ] − ( 1 θ − cot θ ) 8 ( ℓ + 1 2 ) J 1 [ ( ℓ + 1 2 ) θ ] } + O ( ℓ − 2 ) = 2 π ℓ sin ( θ ) cos [ ( ℓ + 1 2 ) θ − π 4 ] + O ( ℓ − 3 / 2 ) , θ ∈ ( 0 , π ) , {\displaystyle {\begin{aligned}P_{\ell }(\cos \theta )&={\sqrt {\frac {\theta }{\sin \left(\theta \right)}}}\left\{J_{0}{\left[\left(\ell +{\tfrac {1}{2}}\right)\theta \right]}-{\frac {\left({\frac {1}{\theta }}-\cot \theta \right)}{8(\ell +{\frac {1}{2}})}}J_{1}{\left[\left(\ell +{\tfrac {1}{2}}\right)\theta \right]}\right\}+{\mathcal {O}}\left(\ell ^{-2}\right)\\[1ex]&={\sqrt {\frac {2}{\pi \ell \sin \left(\theta \right)}}}\cos \left[\left(\ell +{\tfrac {1}{2}}\right)\theta -{\tfrac {\pi }{4}}\right]+{\mathcal {O}}\left(\ell ^{-3/2}\right),\quad \theta \in (0,\pi ),\end{aligned}}} and for arguments of magnitude greater than 1 [ 20 ] P ℓ ( cosh ξ ) = ξ sinh ξ I 0 ( ( ℓ + 1 2 ) ξ ) ( 1 + O ( ℓ − 1 ) ) , P ℓ ( 1 1 − e 2 ) = 1 2 π ℓ e ( 1 + e ) ℓ + 1 2 ( 1 − e ) ℓ 2 + O ( ℓ − 1 ) {\displaystyle {\begin{aligned}P_{\ell }\left(\cosh \xi \right)&={\sqrt {\frac {\xi }{\sinh \xi }}}I_{0}\left(\left(\ell +{\frac {1}{2}}\right)\xi \right)\left(1+{\mathcal {O}}\left(\ell ^{-1}\right)\right)\,,\\P_{\ell }\left({\frac {1}{\sqrt {1-e^{2}}}}\right)&={\frac {1}{\sqrt {2\pi \ell e}}}{\frac {(1+e)^{\frac {\ell +1}{2}}}{(1-e)^{\frac {\ell }{2}}}}+{\mathcal {O}}\left(\ell ^{-1}\right)\end{aligned}}} where J 0 , J 1 , and I 0 are Bessel functions .
All n {\displaystyle n} zeros of P n ( x ) {\displaystyle P_{n}(x)} are real, distinct from each other, and lie in the interval ( − 1 , 1 ) {\displaystyle (-1,1)} . Furthermore, if we regard them as dividing the interval [ − 1 , 1 ] {\displaystyle [-1,1]} into n + 1 {\displaystyle n+1} subintervals, each subinterval will contain exactly one zero of P n + 1 {\displaystyle P_{n+1}} . This is known as the interlacing property. Because of the parity property it is evident that if x k {\displaystyle x_{k}} is a zero of P n ( x ) {\displaystyle P_{n}(x)} , so is − x k {\displaystyle -x_{k}} . These zeros play an important role in numerical integration based on Gaussian quadrature . The specific quadrature based on the P n {\displaystyle P_{n}} 's is known as Gauss-Legendre quadrature .
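The connection with Gauss–Legendre quadrature can be seen numerically: the nodes returned by NumPy's leggauss are precisely the zeros of P n , and the resulting n -point rule is exact for polynomials of degree up to 2 n − 1:

```python
import numpy as np

n = 5
nodes, weights = np.polynomial.legendre.leggauss(n)
print(np.sort(nodes))            # the n zeros of P_n, all inside (-1, 1)

# A 5-point rule integrates x^8 exactly: int_{-1}^{1} x^8 dx = 2/9.
approx = np.sum(weights * nodes**8)
print(approx, 2.0 / 9.0)         # both ~0.22222
```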
The zeros of P n ( cos θ ) {\displaystyle P_{n}(\cos \theta )} are distributed nearly uniformly over the range of θ ∈ ( 0 , π ) {\displaystyle \theta \in (0,\pi )} , in the sense that there is one zero θ ∈ ( π ( k + 1 / 2 ) n + 1 / 2 , π ( k + 1 ) n + 1 / 2 ) {\displaystyle \theta \in \left({\frac {\pi (k+1/2)}{n+1/2}},{\frac {\pi (k+1)}{n+1/2}}\right)} per k = 0 , 1 , … , n − 1 {\displaystyle k=0,1,\dots ,n-1} . [ 21 ] This can be proved by looking at the first formula of Dirichlet-Mehler. [ 22 ]
From this property and the facts that P n ( ± 1 ) ≠ 0 {\displaystyle P_{n}(\pm 1)\neq 0} , it follows that P n ( x ) {\displaystyle P_{n}(x)} has n − 1 {\displaystyle n-1} local minima and maxima in ( − 1 , 1 ) {\displaystyle (-1,1)} . Equivalently, d P n ( x ) / d x {\displaystyle dP_{n}(x)/dx} has n − 1 {\displaystyle n-1} zeros in ( − 1 , 1 ) {\displaystyle (-1,1)} .
The parity and normalization implicate the values at the boundaries x = ± 1 {\displaystyle x=\pm 1} to be P n ( 1 ) = 1 , P n ( − 1 ) = ( − 1 ) n {\displaystyle P_{n}(1)=1\,,\quad P_{n}(-1)=(-1)^{n}} At the origin x = 0 {\displaystyle x=0} one can show that the values are given by P 2 n ( 0 ) = ( − 1 ) n 4 n ( 2 n n ) = ( − 1 ) n 2 2 n ( 2 n ) ! ( n ! ) 2 = ( − 1 ) n ( 2 n − 1 ) ! ! ( 2 n ) ! ! {\displaystyle P_{2n}(0)={\frac {(-1)^{n}}{4^{n}}}{\binom {2n}{n}}={\frac {(-1)^{n}}{2^{2n}}}{\frac {(2n)!}{\left(n!\right)^{2}}}=(-1)^{n}{\frac {(2n-1)!!}{(2n)!!}}} P 2 n + 1 ( 0 ) = 0 {\displaystyle P_{2n+1}(0)=0}
The shifted Legendre polynomials are defined as P ~ n ( x ) = P n ( 2 x − 1 ) . {\displaystyle {\widetilde {P}}_{n}(x)=P_{n}(2x-1)\,.} Here the "shifting" function x ↦ 2 x − 1 is an affine transformation that bijectively maps the interval [0, 1] to the interval [−1, 1] , implying that the polynomials P̃ n ( x ) are orthogonal on [0, 1] : ∫ 0 1 P ~ m ( x ) P ~ n ( x ) d x = 1 2 n + 1 δ m n . {\displaystyle \int _{0}^{1}{\widetilde {P}}_{m}(x){\widetilde {P}}_{n}(x)\,dx={\frac {1}{2n+1}}\delta _{mn}\,.}
An explicit expression for the shifted Legendre polynomials is given by P ~ n ( x ) = ( − 1 ) n ∑ k = 0 n ( n k ) ( n + k k ) ( − x ) k . {\displaystyle {\widetilde {P}}_{n}(x)=(-1)^{n}\sum _{k=0}^{n}{\binom {n}{k}}{\binom {n+k}{k}}(-x)^{k}\,.}
The analogue of Rodrigues' formula for the shifted Legendre polynomials is P ~ n ( x ) = 1 n ! d n d x n ( x 2 − x ) n . {\displaystyle {\widetilde {P}}_{n}(x)={\frac {1}{n!}}{\frac {d^{n}}{dx^{n}}}\left(x^{2}-x\right)^{n}\,.}
The first few shifted Legendre polynomials are: P ~ 0 ( x ) = 1 , P ~ 1 ( x ) = 2 x − 1 , P ~ 2 ( x ) = 6 x 2 − 6 x + 1 , P ~ 3 ( x ) = 20 x 3 − 30 x 2 + 12 x − 1. {\displaystyle {\widetilde {P}}_{0}(x)=1,\quad {\widetilde {P}}_{1}(x)=2x-1,\quad {\widetilde {P}}_{2}(x)=6x^{2}-6x+1,\quad {\widetilde {P}}_{3}(x)=20x^{3}-30x^{2}+12x-1.}
The Legendre rational functions are a sequence of orthogonal functions on [0, ∞). They are obtained by composing the Cayley transform with Legendre polynomials.
A rational Legendre function of degree n is defined as: R n ( x ) = 2 x + 1 P n ( x − 1 x + 1 ) . {\displaystyle R_{n}(x)={\frac {\sqrt {2}}{x+1}}\,P_{n}\left({\frac {x-1}{x+1}}\right)\,.}
They are eigenfunctions of the singular Sturm–Liouville problem : ( x + 1 ) d d x ( x d d x [ ( x + 1 ) v ( x ) ] ) + λ v ( x ) = 0 {\displaystyle \left(x+1\right){\frac {d}{dx}}\left(x{\frac {d}{dx}}\left[\left(x+1\right)v(x)\right]\right)+\lambda v(x)=0} with eigenvalues λ n = n ( n + 1 ) . {\displaystyle \lambda _{n}=n(n+1)\,.} | https://en.wikipedia.org/wiki/Legendre_polynomials |
In physics , the Leggett inequalities , [ 1 ] named for Anthony James Leggett , who derived them, are a related pair of mathematical expressions concerning the correlations of properties of entangled particles. (As published by Leggett, the inequalities were exemplified in terms of relative angles of elliptical and linear polarizations .)
They are fulfilled by a large class of physical theories based on particular non-local and realistic assumptions that may be considered plausible or intuitive according to common physical reasoning .
The Leggett inequalities are violated by quantum mechanical theory . The results of experimental tests in 2007 and 2010 have shown agreement with quantum mechanics rather than the Leggett inequalities. [ 2 ] [ 3 ] Given that experimental tests of Bell's inequalities have ruled out local realism in quantum mechanics, the violation of Leggett's inequalities is considered to have falsified realism in quantum mechanics. [ 4 ] In quantum mechanics "realism" means "notion that physical systems possess complete sets of definite values for various parameters prior to, and independent of, measurement". [ 5 ] | https://en.wikipedia.org/wiki/Leggett_inequality |
The Leggett–Garg inequality , [ 1 ] named for Anthony James Leggett and Anupam Garg , is a mathematical inequality fulfilled by all macrorealistic physical theories. Here, macrorealism (macroscopic realism) is a classical worldview defined by the conjunction of two postulates, of which the second actually has nothing to do with “macro-realism”: [ 1 ] macrorealism per se (a system with two or more macroscopically distinct states available to it is at all times in one of those states) and noninvasive measurability (it is possible, in principle, to determine the state of the system with arbitrarily small perturbation of its subsequent dynamics).
In quantum mechanics , the Leggett–Garg inequality is violated, meaning that the time evolution of a system cannot be understood classically. The situation is similar to the violation of Bell's inequalities in Bell test experiments , which plays an important role in understanding the nature of the Einstein–Podolsky–Rosen paradox . Here quantum entanglement plays the central role.
The simplest form of the Leggett–Garg inequality derives from examining a system that has only two possible states. These states have corresponding measurement values Q = ± 1 {\displaystyle Q=\pm 1} . The key here is that we have measurements at two different times, and one or more times between the first and last measurement. The simplest example is where the system is measured at three successive times t 1 < t 2 < t 3 {\displaystyle t_{1}<t_{2}<t_{3}} . Now suppose, for instance, that there is a perfect correlation C 13 = 1 {\displaystyle C_{13}=1} between times t 1 {\displaystyle t_{1}} and t 3 {\displaystyle t_{3}} . That is to say, for N realisations of the experiment, the temporal correlation reads C 13 = 1 N ∑ r = 1 N Q r ( t 1 ) Q r ( t 3 ) = 1. {\displaystyle C_{13}={\frac {1}{N}}\sum _{r=1}^{N}Q_{r}(t_{1})Q_{r}(t_{3})=1.}
We look at this case in some detail. What can be said about what happens at time t 2 {\displaystyle t_{2}} ? Well, it is possible that C 12 = C 23 = 1 {\displaystyle C_{12}=C_{23}=1} , so that if the value of Q {\displaystyle Q} at t 1 {\displaystyle t_{1}} is ± 1 {\displaystyle \pm 1} , then it is also ± 1 {\displaystyle \pm 1} for both times t 2 {\displaystyle t_{2}} and t 3 {\displaystyle t_{3}} .
It is also quite possible that C 12 = C 23 = − 1 {\displaystyle C_{12}=C_{23}=-1} , so that the value of Q {\displaystyle Q} at t 1 {\displaystyle t_{1}} is
flipped twice, and so has the same value at t 3 {\displaystyle t_{3}} as it did at t 1 {\displaystyle t_{1}} .
So, we can have both Q ( t 1 ) {\displaystyle Q(t_{1})} and Q ( t 2 ) {\displaystyle Q(t_{2})} anti-correlated as long as we have Q ( t 2 ) {\displaystyle Q(t_{2})} and Q ( t 3 ) {\displaystyle Q(t_{3})} anti-correlated.
Yet another possibility is that there is no correlation between Q ( t 1 ) {\displaystyle Q(t_{1})} and Q ( t 2 ) {\displaystyle Q(t_{2})} .
That is, we could have C 12 = C 23 = 0 {\displaystyle C_{12}=C_{23}=0} .
So, although it is known that if Q = ± 1 {\displaystyle Q=\pm 1} at t 1 {\displaystyle t_{1}} , it must also be ± 1 {\displaystyle \pm 1} at t 3 {\displaystyle t_{3}} ; the value at t 2 {\displaystyle t_{2}} may as well be determined by the toss of a coin.
We define K {\displaystyle K} as K = C 12 + C 23 − C 13 {\displaystyle K=C_{12}+C_{23}-C_{13}} .
In these three cases, we have K = 1 , − 3 , − 1 {\displaystyle K=1,-3,-1} respectively.
All that was for complete correlation between times t 1 {\displaystyle t_{1}} and t 3 {\displaystyle t_{3}} . In fact, for any correlation between these times K = C 12 + C 23 − C 13 ≤ 1 {\displaystyle K=C_{12}+C_{23}-C_{13}\leq 1} . To see this, we note that K = 1 N ∑ r = 1 N ( Q r ( t 1 ) Q r ( t 2 ) + Q r ( t 2 ) Q r ( t 3 ) − Q r ( t 1 ) Q r ( t 3 ) ) . {\displaystyle K={\frac {1}{N}}\sum _{r=1}^{N}{\bigl (}Q_{r}(t_{1})Q_{r}(t_{2})+Q_{r}(t_{2})Q_{r}(t_{3})-Q_{r}(t_{1})Q_{r}(t_{3}){\bigr )}.}
It is easily seen that for every realisation r {\displaystyle r} , the term in the parentheses must be less than or equal to unity, so that the result for the average is also less than (or equal to) unity. If we have four distinct times rather than three, we have K = C 12 + C 23 + C 34 − C 14 ≤ 2 {\displaystyle K=C_{12}+C_{23}+C_{34}-C_{14}\leq 2} , and so on. These are the Leggett–Garg inequalities. They express the relation between the temporal correlations of ⟨ Q ( start ) Q ( end ) ⟩ {\displaystyle \langle Q({\text{start}})Q({\text{end}})\rangle } and the correlations between successive times in going from the start to the end.
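The three-time bound can also be checked exhaustively, since each realisation assigns ±1 to each of the three times. A short Python sketch:

```python
from itertools import product

# All assignments of Q(t1), Q(t2), Q(t3) in {-1, +1}:
ks = [q1*q2 + q2*q3 - q1*q3 for q1, q2, q3 in product((-1, +1), repeat=3)]
print(max(ks), min(ks))   # 1 -3: every single realisation satisfies K <= 1
```

Since K is an average of per-realisation values that never exceed 1, the inequality holds for any statistical mixture of realisations.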
In the derivations above, it has been assumed that the quantity Q , representing the state of the system, always has a definite value (macrorealism per se) and that its measurement at a certain time does not change this value nor its subsequent evolution (noninvasive measurability). A violation of the Leggett–Garg inequality implies that at least one of these two assumptions fails.
One of the first proposed experiments for demonstrating a violation of macroscopic realism employs superconducting quantum interference devices. There, using Josephson junctions , one should be able to prepare macroscopic superpositions of left and right rotating macroscopically large electronic currents in a superconducting ring. Under sufficient suppression of decoherence one should be able to demonstrate a violation of the Leggett–Garg inequality. [ 2 ] However, some criticism has been raised concerning the nature of indistinguishable electrons in a Fermi sea. [ 3 ] [ 4 ]
A criticism of some other proposed experiments on the Leggett–Garg inequality is that they do not really show a violation of macrorealism because they are essentially about measuring spins of individual particles. [ 5 ] In 2015 Robens et al. [ 6 ] demonstrated an experimental violation of the Leggett–Garg inequality using superpositions of positions instead of spin with a massive particle. Both then and since, the caesium atoms employed in their experiment have remained the largest quantum objects used to experimentally test the Leggett–Garg inequality. [ 7 ]
The experiments of Robens et al. [ 6 ] as well as Knee et al. , [ 8 ] using ideal negative measurements, also avoid a second criticism (referred to as “clumsiness loophole” [ 9 ] ) that has been directed to previous experiments using measurement protocols that could be interpreted as invasive, thereby conflicting with postulate 2.
Several other experimental violations have been reported, including in 2016 with neutrino particles using the MINOS dataset. [ 10 ]
Brukner and Kofler have also demonstrated that quantum violations can be found for arbitrarily large macroscopic systems. As an alternative to quantum decoherence , Brukner and Kofler are proposing a solution of the quantum-to-classical transition in terms of coarse-grained quantum measurements under which usually no violation of the Leggett–Garg inequality can be seen anymore. [ 11 ] [ 12 ]
Experiments proposed by Mermin [ 13 ] and Braunstein and Mann [ 14 ] would be better for testing macroscopic realism, though it is cautioned that the experiments may be complex enough to admit unforeseen loopholes in the analysis. A detailed discussion of the subject can be found in the review by Emary et al. [ 15 ]
The four-term Leggett–Garg inequality can be seen to be similar to the CHSH inequality . Moreover, equalities were proposed by Jaeger et al. [ 16 ]
| https://en.wikipedia.org/wiki/Leggett–Garg_inequality |
Terminator: Dark Fate is a 2019 American science fiction action film directed by Tim Miller and written by David S. Goyer , Justin Rhodes , and Billy Ray , based on a story by James Cameron , Charles H. Eglee , Josh Friedman , Goyer, and Rhodes. It is the sixth installment in the Terminator franchise and a direct sequel to The Terminator (1984) and Terminator 2: Judgment Day (1991), ignoring the events depicted in Terminator 3: Rise of the Machines (2003), Terminator Salvation (2009), and Terminator Genisys (2015).
The film stars Linda Hamilton and Arnold Schwarzenegger reprising their roles as Sarah Connor and the T-800 Terminator respectively, and introduces Mackenzie Davis , Natalia Reyes , Gabriel Luna and Diego Boneta as new characters.
The film is set 25 years after the events of Terminator 2 , when the machines send an advanced Terminator (Luna) back in time to 2020 with instructions to kill Dani Ramos (Reyes), whose fate is connected to the future. The Resistance also sends Grace (Davis), an augmented soldier, back in time to defend Dani, who is also joined by Sarah Connor and Skynet's T-800. Principal photography took place from June to November 2018 in Hungary, Spain, and the United States.
Distributed by Paramount Pictures in the United States and Canada and 20th Century Fox internationally, the film was released theatrically in the United States on November 1, 2019. Terminator: Dark Fate received mixed reviews from critics, though it was considered an improvement over recent predecessors. However, the film grossed $261.1 million worldwide and lost $122.6 million, making it one of the biggest box-office bombs of all time .
In 1998, three years after destroying Cyberdyne Systems , [ a ] Sarah and John Connor have retired to Livingston , Guatemala . They are suddenly ambushed by a T-800 Terminator , one of several sent back through time by Skynet , which kills John despite Sarah's attempts to stop it.
In 2020, an advanced Terminator, the Rev-9 , is sent back in time to Mexico City to murder Dani Ramos, while cybernetically enhanced soldier Grace is sent from 2042 to protect her. The Rev-9, disguised as Dani's father, infiltrates the automobile assembly plant where she and her brother Diego work, but is thwarted by Grace, who escapes with the siblings. The Rev-9, using its ability to split into its cybernetic endoskeleton and shape-shifting liquid metal exterior, pursues the trio, killing Diego and cornering Grace and Dani. However, Sarah arrives and temporarily disables both forms of the entity using military-grade weaponry. Grace and Dani steal Sarah's car to escape, but because of her cybernetic enhancements, Grace requires medication to keep from falling unconscious. They raid a pharmacy for the medicine she needs before Sarah catches up and guides them to a motel.
At the motel, Sarah reveals that she found them because in the years since John's death she has received encrypted messages detailing the locations of arriving Terminators, each ending with "For John", allowing her to destroy them before they become threats. Grace notes that Skynet and John do not exist in her future, meaning Sarah succeeded in destroying the former after Cyberdyne went defunct. However, humanity's future is threatened by another AI called Legion, originally developed for cyberwarfare , which was built in Skynet's place. When Legion became a threat to humans, an attempt was made to neutralize it with nuclear weapons , resulting in a nuclear holocaust ; Legion then created a global network of machines to terminate humanity's survivors, who organized a resistance movement to counter its onslaughts. Dani's destiny is linked to their war against it.
Grace traces the source of Sarah's messages to Laredo, Texas . Barely evading the Rev-9 and the authorities while crossing the Mexico–United States border , they arrive at their source, where they discover the same T-800 that had killed John. Having fulfilled his mission and with Skynet no longer existing to give him further orders, the T-800 was left aimless. Over time and through his adaptability, he became self-aware , learned from humanity, and developed a conscience , taking the name "Carl" and adopting a human family. After learning how his actions affected Sarah and being able to detect the location of temporal displacements, Carl began to forewarn her of them to give her a purpose to make amends. Carl joins them against the Rev-9 and they prepare to destroy it, with Sarah begrudgingly agreeing to work together for Dani's sake. Anticipating the Rev-9's arrival, Carl bids his family farewell and tells them to escape.
The group obtains military-grade electromagnetic pulse (EMP) generators from Major Dean, an Air Force officer and acquaintance of Sarah's. The Rev-9 catches up with them, forcing them to steal a plane to escape, though the EMP generators are destroyed in the resulting shootout. During the flight, Grace reveals that Dani will become the founding commander of the future resistance. The Rev-9 then boards their airplane and temporarily subdues Carl, forcing the three women to parachute from the plane into a river near a hydroelectric plant, with Carl and the Rev-9 following close behind. Trapped, the group makes their stand inside the plant. In the ensuing battle, Carl and Grace force the Rev-9 into a spinning turbine, causing an explosion that critically damages the two Terminators. The severely damaged Rev-9 endoskeleton incapacitates Sarah, forcing Dani to confront it herself. A dying Grace tells Dani to use her power source to destroy it. Dani tries to fight the Rev-9 but is quickly overpowered. Carl reactivates himself and restrains it, allowing Dani to stab it with Grace's power source. He then drags himself and the Rev-9 over a ledge before the power core explodes, destroying them both.
Sometime later, Dani and Sarah watch a young Grace at a playground with her family, the former determined to avert Grace's death and Legion's rise, before driving off to prepare.
By December 2013, Skydance Productions was planning for Terminator Genisys to be the start of a new trilogy of films . [ 22 ] [ 23 ] The Genisys sequels were scheduled for release on May 19, 2017 and June 29, 2018. [ 24 ] [ 25 ] For the second film in the planned trilogy, actor Arnold Schwarzenegger was to reprise his role as the T-800 . [ 26 ] Terminator Genisys was produced by Skydance founder David Ellison and was released in 2015, but its disappointing box-office performance stalled the development of the planned trilogy. [ 27 ] [ 28 ] [ 29 ] [ 30 ] Dana Goldberg, the chief creative officer for Skydance, said in October 2015 that she "wouldn't say [the franchise is] on hold, so much as re-adjusting". According to Goldberg, despite Genisys ' disappointing domestic performance, the company was happy with its worldwide numbers and still intended to make new films. Production of a sequel would begin no earlier than 2016 because the company planned market research to determine its direction after Genisys . [ 29 ] The Genisys sequels were ultimately canceled. [ 31 ] [ 32 ]
Tim Miller and Ellison talked about Miller eventually directing a new Terminator film after completing Deadpool 2 (2018). [ 33 ] [ 34 ] When Miller left the Deadpool 2 project in October 2016, [ 35 ] he took on the Terminator film as his next project instead. [ 33 ] [ 34 ] At the request of Miller, [ 36 ] franchise creator James Cameron subsequently joined the project. Cameron had directed and co-written the first two Terminator films, [ 37 ] [ 38 ] and Miller, through his company Blur Studio , had previously worked with Cameron. [ 39 ] Ellison felt that Genisys could have been better, so he recruited Cameron as a fellow producer in hopes of creating a better film. [ 40 ] [ 41 ] Cameron was intrigued by Ellison's proposal to make a sequel to Terminator 2: Judgment Day (1991), ignoring the events of Terminator 3: Rise of the Machines (2003), Terminator Salvation (2009), and Terminator Genisys . [ 41 ] [ 42 ] Cameron said "we're pretending the other films were a bad dream. Or an alternate timeline , which is permissible in our multi-verse ." [ 33 ] Other filmmakers on the project had suggested making the film without Schwarzenegger, but Cameron disliked the idea as he and Schwarzenegger were friends. [ 43 ] Cameron agreed to produce the film on the condition that Schwarzenegger be involved. [ 44 ] [ 45 ] As producer, Cameron was involved in pre-production and script work, [ 41 ] and also provided his input on the project. [ 37 ] Miller felt that audiences had "lost hope" in the franchise following the last three films. He believed that Cameron's involvement would serve as a "seal of quality" which would convince fans that the franchise "was going to be handled at least in a way that the original filmmaker would want". [ 38 ]
Cameron was involved with the film as of January 2017 and Ellison was searching for a writer among science fiction authors with the intention that Miller direct. [ 46 ] Later in the month, Ellison said there would be an announcement regarding the future of the franchise before the end of the year, adding that it was going to be in a direction that would provide "the continuation of what the fans really wanted since T2 ". [ 47 ] In July 2017, Cameron said that he was working with Ellison to set up a trilogy of films and supervise them. The intention was for Schwarzenegger to be involved, but also to introduce new characters and "pass the baton". [ 48 ]
On September 12, 2017, Skydance Media confirmed that Tim Miller would direct the new Terminator film [ 49 ] which was initially scheduled for a theatrical release on July 26, 2019. [ 50 ] The film's budget was approximately $185–$196 million, [ 51 ] split roughly three ways between Skydance, Paramount Pictures and 20th Century Fox , [ 52 ] all of which were production companies for the film. [ 53 ] Chinese company Tencent Pictures joined the project as a co-financier in April 2018, [ 54 ] ultimately financing ten percent of the budget. [ 51 ] Tencent was a production company on the project [ 53 ] and also handled the film's distribution, marketing and merchandising in China. [ 54 ] TSG Entertainment and James Cameron 's Lightstorm Entertainment were also involved in the film's production. [ 53 ]
Before screenwriters were hired, Miller had asked that a group of novelists be consulted on how to reinvent the franchise. [ 55 ] Among the novelists were Joe Abercrombie , Neal Asher , Greg Bear , Warren Ellis and Neal Stephenson . [ 34 ] Abercrombie suggested the idea of a female character who is half human and half machine, forming the origins of the character Grace. [ 55 ] A human-machine character, Marcus Wright, was previously featured in Terminator Salvation , portrayed by Sam Worthington . [ 56 ]
The film's story was conceived by Miller, Cameron and Ellison, and a team of writers was hired to write the script. They included Charles H. Eglee ; David S. Goyer and his writing partner, Justin Rhodes ; and Josh Friedman , creator of the television series Terminator: The Sarah Connor Chronicles . [ 41 ] [ 57 ] [ 58 ] [ 34 ] Cameron and the writers watched the Terminator sequels that came after his initial films. They determined that the storylines of the later films were too complex when it came to time travel. [ 41 ] [ 42 ] Weeks were spent working on the story, which was eventually envisioned as a new Terminator film trilogy. [ 41 ] [ 59 ] [ 60 ] Goyer wrote a draft for the first film in the trilogy that would ultimately become Terminator: Dark Fate . [ 34 ]
Goyer moved on to other projects. [ 34 ] By November 2017, Billy Ray was brought in to polish the script. [ 57 ] Ray rewrote much of Goyer's draft. Miller wrote the film's action scenes, while Ray handled the characters. [ 34 ] Cameron had a list of action scenes, for no particular film, that he had wanted to shoot over the years. He gave this list to Miller, so he could work them into Terminator: Dark Fate . The list formed the basis for scenes involving a dam and a Humvee underwater. [ 61 ] As the start of filming approached, Cameron felt that the script needed improvement and made the changes himself. [ 41 ] The film's story credits were given to James Cameron, Charles Eglee, Josh Friedman, David Goyer and Justin Rhodes; screenplay by David Goyer, Justin Rhodes and Billy Ray. [ 62 ] Cameron said that he and Miller ultimately had many disagreements about the film, but he described it as being part of the creative process. [ 37 ] Among their disagreements was whether the human resistance would be winning or losing to Legion in the future. Miller wanted the humans to be losing, while Cameron felt differently. Miller said, "Legion is so powerful, the only way to beat it is going back in time and strangle it in the crib. Jim says, 'What's dramatic about the humans losing?' And I say, 'Well, What's dramatic about the humans winning and they just need to keep on winning?' I like a last stand. It's not his thing." Miller also had disagreements with Ellison. [ 63 ] [ 64 ]
Miller said that the destruction of Cyberdyne at the end of Terminator 2: Judgment Day is an event which would change the future "but no one knew how. And I don't think the movies that came after it really explored that in a clean way like I believe we are, with true consequences, and it makes perfect sense for Sarah to be the one to face those consequences since they were her choices to begin with." [ 65 ] One consequence would be the death of John Connor, who was initially meant to become the future leader of the human resistance against machines. [ 61 ] The decision to kill the John Connor character came from Cameron, who wanted to surprise audiences who had become invested in the character's mythology: "It's like, 'Let's just get that right off the table. Let's just pull the carpet out from underneath all of our assumptions of what a Terminator movie is going to be about. Let's just put a bullet in his head at a pizzeria in the first 45 seconds.'" [ 66 ] Cameron said that John's death serves as "a springboard for the story to show Sarah's ultimate trauma from which she only begins to recover right at the end of the new film. She's driven by hatred, by revenge. (...) Her badassery comes from a place of deep hurt and deep pain." [ 66 ]
Miller said that he and the other filmmakers did not find the decision to kill John controversial. Miller felt that Sarah Connor was best portrayed as an unhappy character and he said that John's death provided a reason for her to be that way. [ 61 ] Miller said of Sarah Connor: "Grief has made her want to be an emotionless killing machine. And at the end of the movie, she's allowing herself to care again, she comes back to humanity. Her shriveled heart has blossomed again. That was the journey". However, Miller did not want Sarah Connor to be an unpleasant and "unwatchable" character and said, "I think Sarah is tough, but it's not uncomfortable to watch." [ 34 ]
Cameron believed that removing John Connor would prevent the film from feeling like a retread of previous films. [ 67 ] Discarding John Connor allowed for new characters to be worked into the story. Miller said, "You can't have John be a 36-year-old accountant somewhere. And really, when you think about it, he could be sort of a pathetic figure as a man who had missed his moment in history and was relegated to this banal, ordinary existence". Describing the opening scene, Miller said, "You want to slap the audience in the face and say, 'Wake up. This is going to be different.' I feel like that accomplished that. I hate the violence of it. I hate the idea of a kid being shot, but the dramatic fuel that it gives the story is kind of undeniable." In the early stages of development, there was consideration given to the idea that Dani Ramos could be portrayed as John's daughter, or that she could have some other connection to the Connors. However, Miller disliked the idea that she would be related to them. [ 61 ] There were never plans to feature John Connor in any other scenes besides the opening. [ 66 ] Linda Hamilton was somewhat shocked by the decision to kill John Connor, which she believed would upset a lot of fans, but she also said she wanted the film series and its characters to evolve. [ 67 ] [ 68 ] She was pleased with the film's characters, feeling that earlier sequels to Terminator 2 lacked characters the audience would care about. [ 69 ]
Miller was dissatisfied with the final film's idea that Dani would send Grace to the past, saying, "We set up this whole [story] where Grace is kind of Dani's surrogate child and a mother sending her child to die for her is just...yeah, I had a different scene in mind." [ 70 ] Additionally, several endings were considered, including one where Sarah and Dani would bury Grace and another where Grace's body would be burned and sent down a river. Eventually, Miller suggested the idea that Dani would go to see the younger Grace. The ending playground scene was a late addition to the film. [ 70 ]
Cameron devised the idea of a T-800 Terminator that is "just out there in this kind of limbo" for more than 20 years after carrying out an order, becoming more human "in the sense that he's evaluating the moral consequences of things that he did, that he was ordered to do back in his early days, and really kind of developing a consciousness and a conscience". Cameron considered this iteration of the character to be more interesting than those featured in his first two films, saying, "We've seen the Terminator that was programmed to be bad; you've seen the one that was programmed to be good, to be a protector. But in both cases, neither one of them have free will." [ 37 ] Schwarzenegger enjoys interior decorating, so Cameron suggested that his T-800 character in the film have a drapery business. [ 71 ] [ 72 ] [ 73 ] Miller arranged the script's structure to have Schwarzenegger's character appear later in the story, to allow time for the three female lead characters to develop. [ 74 ]
By April 2017, Schwarzenegger had joined the project to reprise his role. [ 75 ] [ 76 ] That September, it was announced Hamilton would reprise her role as Sarah Connor, whom she previously portrayed in the first two films. [ 77 ] Hamilton had also briefly reprised the role for the 1996 theme park attraction T2-3D: Battle Across Time , [ 78 ] and provided her voice in an uncredited role for Terminator Salvation . [ 79 ] Because previous Terminator films did not do well with audiences, Miller felt it was necessary to have Hamilton reprise the role. [ 65 ] [ 80 ] Cameron, Ellison and Miller only wanted to bring back the Sarah Connor character if Hamilton would reprise the role. The film's storyline was devised first so the trio would have an idea to pitch to Hamilton. [ 41 ] Cameron said that he sent Hamilton a "long rambling email with a lot of reasons why she should do it and a lot of reasons why she shouldn't". Cameron's main reason why Hamilton should return was that people liked her in the role. [ 40 ] There was never a version of the film that excluded Hamilton and Miller said there was no backup plan in the event that she declined the role. [ 81 ]
After approximately six weeks, [ 82 ] Hamilton chose to sign on to the film, [ 65 ] which did not yet have a completed script for her to read; that was still being refined. [ 83 ] Initially, Hamilton was unsure if she wanted to reprise the role. [ 84 ] She had been semi-retired from acting, [ 85 ] and said, "I didn't want it to look like a shameless money grab. I am living this quiet, lovely life that doesn't involve being a celebrity, and you really have to think, do I really want to trade that in again for another 15 minutes ?" [ 82 ] Because so much time had passed since her last appearance as Sarah Connor, Hamilton had assumed that she would never reprise the role and she was surprised by the offer to do so. [ 65 ] Of her decision to return Hamilton said, "I was very pleased that all of the years had passed, because I could fill the years up with so much backstory and inner life that could power the character." [ 83 ]
Hamilton spent more than a year working with a fitness trainer to get into physical shape for the role. [ 83 ] [ 65 ] [ 86 ] Hamilton said she put 10 times more effort into her physique than she did for Terminator 2 . This included a regimen of supplements and bioidentical hormones , as well as training with Green Berets . [ 87 ] [ 88 ] She also took weapons training. [ 89 ] Commenting on Hamilton's role, Cameron said he liked the idea of an action film starring a 62-year-old actress. [ 90 ] [ 91 ] Hamilton chose to dye her hair gray for the film, as she wanted viewers to see her character as an old woman. [ 92 ] Hamilton disliked the physical training, [ 93 ] and she had suggested that her character be portrayed as a fat person so she would not have to train for the film, although the idea was rejected. [ 94 ] [ 89 ]
In March 2018, it was announced that Mackenzie Davis had been cast in the film. [ 6 ] Miller said of Davis, "I didn't just want a woman who could physically fit the role but emotionally as well. Mackenzie really wanted to do it; she came after the role. She worked harder than anybody." [ 55 ] After Davis was cast, she undertook physical training for the film's fight scenes. [ 95 ] [ 96 ] Schwarzenegger and Gabriel Luna also underwent physical training for the film. [ 97 ] Luna was first considered for a role in December 2017, when a four-month casting process began for him. [ 98 ]
The production team wanted to cast an 18-to-20-year-old woman as the new centerpiece of the story. [ 77 ] Hamilton rehearsed lines with several actresses who were auditioning for the role of Dani and she immediately felt that Natalia Reyes was the right choice. [ 99 ] When Reyes sent in an audition tape, all she knew about the project was that it was a "big American movie". She soon had a meeting with Miller through Skype , before coming to Los Angeles to audition with Hamilton. For her next audition, Reyes was flown to Dublin to audition with Davis, who was there shooting another film. The casting process lasted a month and a half for Reyes before she was finally cast. Afterwards, she went through physical training to prepare for the role. [ 100 ]
Because the film is partially set in Mexico City, the cast includes several Latino actors, [ 101 ] [ 102 ] including Reyes, Luna and Diego Boneta , who were cast as primary characters in April 2018. [ 8 ] Reyes said, "This movie is a reflection of Hollywood now. We are just changing these stereotypes and the ideas and the cliches of what a Latino should be." [ 101 ] Cameron watched all the audition tapes and gave his approval to the casting choices. [ 100 ] By June 2018, Jude Collie had been cast as the double for a young John Connor, with Brett Azar reprising his role from Genisys as the body double for a younger T-800. [ 18 ]
Cameron announced in July 2019 that Edward Furlong would reprise his role as John Connor from Terminator 2: Judgment Day . [ 20 ] Furlong later maintained that his role in the film was small, [ 103 ] and Miller regretted that Cameron had made such an announcement. [ 104 ] Furlong's likeness was used to recreate his younger face digitally using CGI. He also gave a performance through facial motion capture footage to de-age him that was added into the film. [ 66 ] [ 19 ] [ 104 ] For his performance, Furlong simultaneously watched footage of Collie during the film's opening scene and had to match his own performance with Collie's precisely. [ 105 ] Furlong is credited as "John Connor reference". [ 16 ] Furlong was disappointed by his small role, which was limited to one day of work. [ 17 ]
Production was initially intended to start in March 2018, but was delayed due to casting. It was then expected to start in May and end in November, with filming taking place in Hungary , the United Kingdom , Spain and Mexico . [ 106 ] In April 2018, the film's release date was delayed until November 2019. [ 107 ] Filming began in Spain on June 4, 2018, under the working title Terminator 6: Phoenix . [ 14 ] [ 108 ] [ 109 ] Filming subsequently moved to Hungary and the United States, [ 110 ] before concluding in November 2018. [ 111 ]
The film, like Cameron's initial Terminator films, is rated R , whereas the previous two films were rated PG-13. [ 112 ] [ 113 ] [ 41 ] Miller said the film is rated R because "the fans kind of demanded it, in a way", saying that "the DNA of Terminator" is an R-rated movie and that "to not do it R feels disingenuous to the source material". [ 45 ] [ 114 ] Initially, certain scenes were filmed in two ways—with and without R-rated violence and language. This gave the filmmakers an alternative in the event that the film's intended R rating should be reconsidered. The filmmakers eventually abandoned this method after deciding definitively on an R-rated film. [ 41 ] [ 59 ]
During filming, Cameron made further changes to the script to perfect the characters. In some cases, his script changes were submitted to Miller only a day prior to filming the scene. [ 41 ] Hamilton rejected certain actions and lines of dialogue that she felt were uncharacteristic for Sarah Connor. [ 83 ] [ 93 ] [ 115 ] Schwarzenegger also added and changed some of his own lines during filming. [ 92 ] Cameron did not visit the set, as he was busy filming his Avatar sequels . [ 41 ] He also did not want to interfere with Miller's directorial work. [ 37 ]
The first day of filming took place in Isleta del Moro , Almería , Spain. [ 116 ] [ 117 ] It involved the pivotal opening scene featuring the characters of the T-800, Sarah Connor and John Connor. The three characters were portrayed in the scene by body doubles and digital de-aging was later applied to give them a youthful appearance. The doubles wore special hoods that tracked their head movements, allowing their facial features to be replaced later by new motion capture facial footage recorded by Schwarzenegger, Hamilton and Furlong. [ 104 ] [ 92 ] [ 118 ]
During filming of the opening scene, Hamilton expressed dissatisfaction with the body double's portrayal, feeling that it did not accurately reflect the character. Hamilton advised the body double on how to portray the character for a more fierce response to the T-800 character. Hamilton was disappointed that she had no onscreen part in the scene and later said, "It wasn't me and it really hurt. I cried my eyes out when I got home." [ 92 ] [ 119 ] The film used more stuntwomen for Sarah Connor than Terminator 2 . Hamilton said she "really got a little crazy trying to micromanage" them to ensure that they moved the way her character should. For this reason, Hamilton performed some of her own stunts. [ 120 ]
Scenes that were set in Mexico were shot entirely in Spain, where filming lasted approximately 30 days. [ 121 ] Spain was chosen for budgetary reasons and because of safety concerns over drug cartel violence in Mexico. Filming locations included the Madrid neighborhoods of Pueblo Nuevo and Lavapiés , which stood in as Mexican towns. For these scenes, the film crew repainted cars to resemble taxis and also left old vehicles on the streets to suggest they were abandoned. [ 122 ] An artist was also hired to paint graffiti art to further give the location a Mexican appearance. Boneta, who was born and raised in Mexico City, was asked to meet with the film's art department leaders to ensure that the filming locations in Spain had an authentic Mexican look. [ 123 ] While filming in Spain, Luna coached several actors on how to speak Spanish with a Mexican accent. [ 101 ]
In July 2018, filming took place for two weeks in Catral , including the San Juan industrial estate. [ 124 ] Filming also took place in Cartagena , [ 125 ] and at the Aldeadávila Dam . [ 124 ] A combination of practical effects and CGI were used for a highway chase sequence in which the Rev-9 pursues Grace, Dani and Diego. [ 126 ] Sarah Connor's present-day introduction also takes place on the highway and Hamilton rehearsed the scene extensively before it was filmed. [ 127 ] Approximately seven freeway locations in Spain had been considered before settling on the final choice, [ 123 ] consisting of new roads leading to the then-unopened Región de Murcia International Airport . [ 124 ] [ 128 ]
The highway chase sequence required a crew of approximately 300 people, with many different film departments involved. A custom-built pod car, similar to a dune buggy , was built to haul a pickup truck during filming. This allowed Davis, Reyes and Boneta to act out their scenes in the truck while the driving was handled by a professional driver in the pod car. Cameras were attached to the pickup truck to film the actors while the vehicle was in motion. [ 123 ] One shot filmed at the San Juan industrial estate depicts the Rev-9 driving its plow truck through a wall, which was built specifically for the shot. [ 128 ] The highway chase was initially planned to be twice as long. The Rev-9 was to have killed a cop and stolen a motorcycle to continue its pursuit and the motorcycle would be shot at and destroyed. The Rev-9 would subsequently leap onto a truck and then onto Dani's vehicle. The extended sequence was previsualized, but Miller chose not to film it as the sequence was considered "crazy" enough already. Previously, Miller had wanted to film the motorcycle sequence for his 2016 film Deadpool . [ 61 ]
Filming moved to Hungary on July 19, 2018. [ 110 ] Filming locations there included Origo Film Studios in Budapest . [ 129 ] [ 130 ] [ 131 ] [ 132 ] Part of the film's C-5 plane sequence involves the characters floating in the fuselage in zero gravity. Miller spoke with pilots to do research into gravity and the plane's action scenes, which were difficult to choreograph because of the constant gravity changes depicted. [ 123 ] Hamilton said the film's script was the first one that she did not fully understand, because of the large amount of action. Animated previsualization aided the cast during such scenes. [ 92 ] [ 93 ] In Budapest, special effects supervisor Neil Corbould created the film's largest set piece: the fuselage of the C-5. The set was constructed on an 85-ton gimbal , the largest ever built. The set was capable of rotating 360 degrees and could tilt backwards and forwards at 10 degrees. It was powered by five 200-liter-per-minute hydraulic pumps , as well as more than a mile and a half of hydraulic hoses . A pit had to be dug in the concrete floor of the sound stage to accommodate the large set, which took approximately five months to design and another five months to build. The set was 60 feet long, half the length of a real C-5 fuselage and it contained a bluescreen at one end for post-production effects to be added in later. The rotating set helped to achieve the sense of gravity needed for the scene and the set also allowed the camera crew members to strap themselves inside. The plane set was padded for actors who shot scenes inside it. Foam replicas of military vehicles were also situated inside the plane with the actors. [ 123 ] [ 133 ]
Davis said shooting the film was "the hardest thing" she had ever done because of the physical requirements. [ 134 ] One scene depicts a Humvee falling out of the C-5 plane, with Grace having to open the vehicle's parachutes to land it safely. Davis was suspended with wires to perform the scene, which was filmed in Budapest. [ 135 ] An underwater action scene took weeks to shoot and involved immersing Hamilton and Reyes in a water tank. [ 92 ] The scene depicts Sarah and Dani inside the Humvee after it falls over the dam and into water. The scene was shot in a tank surrounded by a large bluescreen stage which depicted the exterior environment. For the scene, each day of shooting took place over 12-hour periods from the evening to the morning. Another scene depicts the T-800 and Rev-9 fighting underwater. [ 100 ] [ 136 ]
The film includes a scene where the characters are held in a detention center on the Mexico–United States border . Miller said it was not meant as a social commentary or political statement on immigrant issues related to the border, [ 34 ] [ 38 ] [ 137 ] stating that the scene was "just a natural evolution of the story". He noted: "I tried to walk a line there because it's a terrible situation, but I didn't want to vilify border guards. They're people doing a job. The system is the problem. And even the choice to do it really wasn't a statement. It really was a function of us putting the story's beginning in central Mexico and then traveling." [ 34 ] Miller was emotional while filming the scene because of its depiction of immigrants being held in a detention center. [ 92 ] [ 138 ] Luna said, "We don't make any overt political stances; we just show you what's happening in the world and you receive it however as you may." [ 138 ] Scenes at the detention center were filmed in July 2018, at an old Nokia factory in the Hungarian city of Komárom . [ 123 ] [ 139 ] [ 140 ]
In late July 2018, Schwarzenegger began filming scenes in Budapest. [ 141 ] In September 2018, filming took place at a Mercedes-Benz factory in Kecskemét . [ 110 ] Filming in the United States was scheduled to begin in mid-October. [ 110 ] Carl's cabin was built from scratch. While the filmmakers liked the surrounding scenery, they rejected a previous house that was built on the property for another production, so it was torn down to construct the new home. [ 123 ] Schwarzenegger completed filming on October 28, 2018. [ 142 ] Filming wrapped in early November 2018. [ 111 ]
Cameron, who also works as a film editor, was heavily involved in the editing of Terminator: Dark Fate . He saw a rough cut of the film in early 2019 and provided Miller with notes on how to improve it, feeling that it needed to be perfected. He said the film "transformed quite a bit" from the rough cut. [ 37 ] The initial cut of the film, known as an assembly cut , was two hours and 50 minutes. Miller's director's cut was closer to the film's final runtime. Three or four minutes were removed from the director's cut, including a few scenes. Some scenes were also trimmed, including the underwater fight and those on board the C-5 airplane. [ 143 ] In his director's cut, Miller said he removed "a lot of stuff" that Cameron thought was important. Miller also said that he and Cameron had many disagreements about lines of dialogue which Miller thought were "poetic and beautiful", while Cameron thought they were unimportant. Because of the lack of full control throughout the project, Miller said he would likely not work with Cameron again, although the two maintained a good relationship. [ 63 ] The final cut of the film runs for 128 minutes. [ 144 ]
At one point late in production, Miller considered placing the opening scene later in the film, when Sarah is in the motel room explaining John's death to Grace and Dani. However, Miller said this structure "really changed a whole lot of stuff in a negative way" and he ultimately decided to keep it as an opening scene, in order to start the film off by shocking the audience. [ 74 ] The opening scene was originally longer as it featured dialogue between Sarah and John. This was cut from the final film as Cameron and Miller believed that the visual effects did not hold up well when the characters spoke. [ 66 ] [ 104 ] Another deleted scene went into more detail on how Carl knew about other Terminators arriving from the future. The scene, written by Cameron, explained that Carl created a cell phone app to track the arrivals, which disrupt cell phone signals. The scene was removed because it was considered too humorous compared to the rest of the sequence, which has a serious tone as it involves Sarah meeting her son's killer. [ 70 ] A shot was deleted from Carl's final fight with the Rev-9 that depicted it ripping flesh off of Carl's arm. Miller said, "We had to walk the line between gross and horrific" and he described the arm skin as "hanging like a big piece of jerky", saying, "That's where we drew the line." [ 145 ]
The film contains 2,600 visual effects shots and was edited using Adobe Premiere Pro and Adobe After Effects . [ 146 ] The visual effects were provided by Industrial Light & Magic (ILM) and Scanline VFX , supervised by Alex Wang, David Seager and Arek Komorowski. Eric Barba was the production supervisor, with help from Blur Studio , Digital Domain , Method Studios , Unit Image, Rebellion VFX, Mammal Studios, Universal Production Partners (UPP), [ 147 ] Weta Digital , [ 148 ] Les Androïds Associés, The Third Floor, Inc. and Cantina Creative . [ 149 ] ILM was initially going to be the sole company working on visual effects, but others were brought on due to the amount of work that had to be done on the film. The Third Floor handled some of the previsualization. [ 150 ] Method Studios created visual effects for scenes involving the C-5 airplane and a helicopter crash. The company also created an establishing shot of a military base and several shots set during the border crossing. [ 151 ] Blur Studio handled scenes that depict Grace's future as a soldier. [ 123 ]
ILM handled the de-aging in the opening scene. [ 118 ] ILM's visual effects supervisor, Jeff White , said a lot of work went into the scene to ensure that the characters' faces looked realistic and had the same likenesses as Terminator 2 . [ 150 ] After seeing the digital head shots, Schwarzenegger provided guidance to the ILM team, which made subtle adjustments to perfect his character's facial movements. The ILM team also created the liquid metal effects of the Rev-9. The team studied time-lapse photography which depicted the growth of algae and fungus and this inspired the liquid metal movements. [ 152 ]
According to Cameron in February 2019, the film's working title was Terminator: Dark Fate . [ 153 ] This was confirmed as the film's official title the following month. [ 154 ]
Tom Holkenborg composed the film's score, reuniting with director Tim Miller after their collaboration on Deadpool . [ 155 ] Holkenborg recreated Brad Fiedel 's original " Terminator " theme while also introducing Latino elements to reflect the ethnicity of Dani Ramos. He used approximately 15 instruments while composing the score and also used the sound of an anvil and the banging of a washing machine, describing his score as being "way more aggressive" than Fiedel's. [ 156 ] The soundtrack was released digitally on November 1, 2019, by Paramount Music .
A first-look promotional image showing Hamilton, Davis and Reyes was released in August 2018. It was the subject of comments which criticized the absence of the Terminator and John Connor and received backlash for its focus on the female cast members. [ 157 ] [ 158 ] [ 159 ] A teaser trailer for the film was released on May 23, 2019, [ 160 ] that features a cover version of Björk 's " Hunter " performed by John Mark McMillan . [ 161 ] [ 162 ] The film's theatrical and international trailers were released on August 29, 2019. [ 163 ] [ 164 ] The trailers' release date marked the anniversary of the original Judgment Day date given in the second film. [ 165 ] Initially, the marketing campaign highlighted the return of Cameron and Hamilton. In the final months, the campaign focused more on the film's action and special effects. Promotional partners included Adobe Inc. and Ruffles . [ 166 ] In September 2019, Adobe and Paramount Pictures launched a contest for people to create their own remix version of the trailer using Adobe software and assets from the film. [ 167 ]
In early October 2019, brief footage of the film was shown during IMAX screenings of Joker . Miller and the cast went on a global press tour to promote the film and Hamilton attended a premiere event in Seoul on October 21, 2019. [ 166 ]
In the film, Schwarzenegger's character has a van which advertises "Carl's Draperies 888-512-1984" on the side of it. The number was an actual phone number which, when dialed, plays a recording of Schwarzenegger as Carl. The number references May 12, 1984, the date that Kyle Reese time-travels to in the first film. [ 168 ] [ 169 ]
Terminator: Dark Fate was theatrically released in Europe on October 23, 2019 [ 170 ] and was also theatrically released on November 1, 2019 by Paramount Pictures in the United States and Canada and 20th Century Fox internationally. [ 171 ] [ 172 ] On October 19, 2019, Alamo Drafthouse Cinema hosted surprise screenings of the film in 15 theaters, disguised as screenings of Terminator 2: Judgment Day (1991). [ 173 ] The premiere event for Terminator: Dark Fate in the United States was to be held on October 28, 2019 at TCL Chinese Theatre in Hollywood, Los Angeles , but it was canceled because of the nearby wildfires . [ 174 ] [ 175 ]
Terminator: Dark Fate was released digitally on January 14, 2020, before its home video releases on 4K Ultra HD, Blu-ray and DVD on January 28. [ 176 ] Several deleted scenes were included with the home video release, including one in which Sarah learns that Carl has informed Alicia of his past and his true nature as a killing machine. [ 177 ] In another scene, Sarah hijacks a man's vehicle on the highway after Grace and Dani steal hers. [ 178 ] Another scene depicts the characters being attacked by guards as they journey towards the border. [ 179 ] One deleted scene depicts Grace volunteering herself to an older Dani to send her to the past. [ 70 ] [ 180 ]
Terminator: Dark Fate grossed $62.3 million in the United States and Canada and $198.9 million in other territories, for a worldwide total of $261.1 million. [ 181 ] With a production budget of $185–196 million and an additional $80–100 million spent on marketing and distribution, early estimates indicated the film needed to earn over $450 million worldwide to break even. [ 51 ] [ 182 ] The film ended up losing Paramount, Skydance and other studios $122.6 million. [ 183 ] It was labeled a box-office bomb after its dismal opening weekend, [ 184 ] [ 185 ] and it finished as the second-biggest bomb of 2019. [ 183 ] As a result of the losses, sources close to Skydance said shortly after the release that there were no plans to continue the franchise. [ 186 ]
In the United States and Canada, Dark Fate was released at the same time as Harriet , Arctic Dogs and Motherless Brooklyn , and was initially projected to gross $40–47 million from 4,086 theaters in its opening weekend. [ 187 ] The film made $2.35 million from Thursday night previews, on a par with the $2.3 million that Genisys made from its Tuesday night previews in 2015, but after making just $10.6 million on its first day, weekend estimates were lowered to $27 million. It went on to debut to $29 million. Although it finished first at the box office, it was the lowest opening in the series since the original film (when accounting for inflation), which was blamed on the lukewarm critical reception, as well as the audience's lack of interest in another Terminator film. [ 51 ] The film made $10.8 million in its second weekend, dropping 63% and finishing fifth, and then $4.3 million in its third weekend, falling to 11th. [ 188 ] [ 189 ]
In Germany, the film started out with 132,500 viewers, placing it third on that week's charts. [ 190 ] In the weekend following its international debut, the film grossed $12.8 million from countries in Europe and Asia, considered a low start. [ 191 ] [ 192 ] The film was projected to gross $125 million globally during the first weekend of November 2019. [ 52 ] Instead, it only made $101.9 million (18% below projections), including $72.9 million overseas. As it did in the U.S., the film under-performed in China, where it opened to just $28.2 million, far below the $40–50 million estimates. [ 51 ] [ 193 ]
On the review aggregator website Rotten Tomatoes , 70% of 351 critics' reviews are positive, with an average rating of 6.2/10. The website's consensus reads: " Terminator: Dark Fate represents a significant upgrade over its immediate predecessors, even if it lacks the thrilling firepower of the franchise's best installments." [ 194 ] Metacritic , which uses a weighted average , assigned the film a score of 54 out of 100, based on 51 critics, indicating "mixed or average" reviews. [ 195 ] Audiences polled by CinemaScore gave the film an average grade of "B+" on an A+ to F scale, the same score as its three immediate predecessors, while those at PostTrak gave it an overall positive score of 78%, with 51% saying they would definitely recommend it. [ 51 ]
The Hollywood Reporter wrote that critics overall seemed "cautiously excited about Dark Fate , although there's a certain awkwardness about seeing repeated recommendations that it is 'easily the third-best' movie in the series". [ 196 ] William Bibbiani of TheWrap wrote that, "Whether Terminator: Dark Fate is the last chapter in this story or the first in an all-new franchise is, for now, irrelevant. The film works either way, bringing the tale of the first two films to a satisfying conclusion while reintroducing the classic storyline, in exciting new ways, to an excited new audience. It's a breathtaking blockbuster, and a welcome return to form." [ 197 ] Variety ' s Owen Gleiberman called the film "the first vital Terminator sequel since Terminator 2 " and wrote that " Terminator: Dark Fate is a movie designed to impress you with its scale and visual effects, but it's also a film that returns, in good and gratifying ways, to the smartly packaged low-down genre-thriller classicism that gave the original Terminator its kick." [ 53 ]
Joe Morgenstern of The Wall Street Journal gave the film a negative review, describing it as "cobbled together by dunces in a last-ditch effort to wring revenue from a moribund concept. The plot makes no sense—time travel as multiverse Dada. Worse still, it renders meaningless the struggles that gave the first two films of the franchise an epic dimension." [ 198 ] Jefferey M. Anderson of Common Sense Media gave the movie two out of five stars: "This sixth Terminator movie erases the events of the previous three (dud) sequels but winds up feeling half-erased itself. It's like a dull, pale, irrelevant carbon copy of a once glorious hit." [ 199 ] Christy Lemire of RogerEbert.com gave Dark Fate two out of four stars, arguing that it suffered from "empty fanservice" and that Hamilton "deserves better" as does her supporting female cast. [ 200 ] David Ehrlich of IndieWire praised Hamilton's performance and the movie's digital recreations of her, Furlong's and Schwarzenegger's younger likenesses, but concluded that "this painfully generic action movie proves that the Terminator franchise is obsolete". [ 201 ] Tasha Robinson of The Verge stated that some combat sequences "are staged clearly and cleanly", while others "are packed with CGI blurs and muddy action and are hard to follow in even the most basic 'who's where, and are they dead?' kind of way. And when Dark Fate does deign to explain what's going on, it delivers its exposition in a self-important, hushed, clumsy way, as if audiences should be astonished by the most basic plot revelations." [ 202 ]
Peter Bradshaw of The Guardian awarded it two stars out of five, stating "The Terminator franchise has come clanking robotically into view once again with its sixth film – it absolutely will not stop – not merely repeating itself but somehow repeating the repetitions." While he wrote that it was "good to see Hamilton getting a robust role", he added that "sadly, she has to concede badass superiority to Davis." He concluded by writing, "This sixth Terminator surely has to be the last. Yet the very nature of the Terminator story means that going round and round in existential circles comes with the territory." [ 203 ] Richard Roeper of the Chicago Sun-Times awarded the film two stars out of four, calling it a "boring retread" and "so derivative of Judgment Day ", although he welcomed the return of Linda Hamilton and praised an "impressively effective" Mackenzie Davis and the "winning screen presence" of Natalia Reyes. [ 204 ] Angie Han of Mashable found the film underwhelming and its title to be quite apt: " Dark Fate is too thinly sketched to be anything but pastiche. It feels like a Terminator movie spit out by a machine designed to make Terminator movies. A dark fate for the franchise, indeed." [ 205 ]
Regarding the film's mixed reception, Tim Miller believed that some audiences were predisposed to dislike the film after being disappointed by the last three films, adding that some audiences "hate it because it's the sixth movie, and Hollywood should be making original movies and not repeating franchises". [ 38 ] Miller later gave a more blunt assessment in 2022: "Terminator's an interesting movie to explore, but maybe we've explored it enough. I went in with the rock hard nerd belief that if I made a good movie that I wanted to see, it would do well. And I was wrong. It was one of those f**king Eureka moments in a bad way because the movie tanked." [ 206 ]
James Cameron reflected on Dark Fate in 2022: "I'm actually reasonably happy with the film ... I think the problem, and I'm going to wear this one, is that I refused to do it without Arnold ... And then Tim wanted Linda. I think what happened is I think the movie could have survived having Linda in it, I think it could have survived having Arnold in it, but when you put Linda and Arnold in it and then, you know, she's 60-something, he's 70-something, all of a sudden it wasn't your Terminator movie, it wasn't even your dad's Terminator movie, it was your granddad's Terminator movie. And we didn't see that ... it was just our own myopia. We kind of got a little high on our own supply and I think that's the lesson there." [ 207 ]
Schwarzenegger also gave a blunt assessment in a 2023 interview about the franchise: "The first three movies were great. Number four [ Salvation ] I was not in because I was governor. Then five [ Genisys ] and six [ Dark Fate ] didn't close the deal as far as I'm concerned. We knew that ahead of time because they were just not well written." [ 208 ]
The death of John Connor early in the film was criticized by both critics and fans. [ 209 ] [ 210 ] Fred Hawson of News.ABS-CBN.com wrote that "deciding to lose John Connor early on in this one made the emotional heart of the first two classic Terminator films stop beating as well." [ 211 ] Richard Roeper argued that killing John Connor ruined what the previous two films established: "Even though Dark Fate tosses aside the third, fourth and fifth entries in the series like a Terminator disposing of a hapless cop, it also undercuts the impact of the first film and the follow-up (which is one of the two or three greatest sequels of all time). First, they get rid of the John Connor character in almost casual fashion." [ 204 ]
Corey Plante of Inverse , who was critical of Furlong's portrayal of the character in Terminator 2 , nonetheless found his character's death off-putting: "The character at the focus of every previous Terminator movie—the same young boy I irrationally hated since I was a young boy myself—was dead. Needless to say, it rattled me." [ 212 ] He also found that replacing him with new heroes undermined the Connors' importance established in the previous films: "The future that made [Sarah Connor] important died with John, and now there's a new Terminator story with a new set of heroes that makes it seem like no matter how many times Skynet or its next iteration sends a murder robot back in time to kill someone, there will always be a new hero waiting to rise up." [ 212 ] Robert Yaniz Jr. of CheatSheet described the twist as unthinkable: "In an instant, the entire crux of the franchise—the human resistance led by John—is torn away." [ 213 ]
Matt Goldberg of Collider felt the opening did irreparable damage to the legacy of Terminator 2 by rendering it pointless: "Every sequel since has diminished the ending of Judgment Day because the story 'needs' to continue (because studios like money and can't leave well enough alone). But Terminator: Dark Fate may be the worst offender thus far as its prologue directly follows T2 and goes for shock value rather than considering what it means to continue the narrative." [ 214 ] Richard Trenholm of CNET felt the opening twist summed up everything wrong with Dark Fate : "The joy [of seeing the de-aged characters] instantly becomes cringeworthy, as this prologue undermines Terminator 2 by killing a major character in such a cursory fashion it just feels silly." [ 215 ] Ian Sandwell of Digital Spy suggested that the twist was not particularly important, given that in the other films, John Connor only exists to "motivate the other characters and sets the plot in motion" and that John's role as a future leader had already been rendered moot through the elimination of Skynet. [ 209 ]
About the controversial scene, Furlong also expressed his displeasure and hoped to reprise the role in full in a future film. [ 216 ] Linda Hamilton also voiced her opinion that the scene would upset fans, as she considered John to be the true main protagonist of the franchise. [ 217 ] Nick Stahl , who portrayed John in Terminator 3: Rise of the Machines , also expressed interest in reprising the role in a possible seventh film. [ 218 ]
Cathal Gunning of Screen Rant noted the similarity between the decision to kill off John Connor in the opening scene and the deaths of Newt and Corporal Dwayne Hicks in Alien 3 , a choice that was criticized by Cameron, who had directed the preceding film, Aliens . [ 219 ]
Terminator: Dark Fate received a nomination for Outstanding Special (Practical) Effects in a Photoreal or Animated Project at the 18th Visual Effects Society Awards . [ 220 ] At the 2020 Dragon Awards , the film received a nomination for Best Science Fiction or Fantasy Movie. [ 221 ] It was nominated in two categories at the 2021 Golden Trailer Awards : "Talk" (Create Advertising Group) for Best Action and "Non Stop" (Aspect Ratio) for Best Home Ent Action. [ 222 ] Terminator: Dark Fate received three Saturn Award nominations for Best Science Fiction Film , Best Supporting Actress (for Hamilton) and Best Special Effects in 2021 . [ 223 ] [ 224 ]
The 2019 video game Gears 5 allows the player to play as Sarah Connor (voiced by Hamilton), Grace, or a T-800 Terminator model. The game was released on September 6, 2019. [ 225 ] The T-800 model was later a downloadable playable character in Mortal Kombat 11 , using Schwarzenegger's likeness but voiced by Chris Cox rather than Schwarzenegger. The downloadable content was released on October 8, 2019. [ 226 ] [ 227 ] [ 228 ]
A mobile game , titled Terminator: Dark Fate – The Game , was released in October 2019. [ 229 ]
The PC game Terminator: Dark Fate – Defiance , developed by Slitherine Software in collaboration with Skydance, [ 230 ] was released on February 21, 2024. [ 231 ] It is a real-time tactics game that takes place ten years after Judgment Day , during the war between humans and Legion. [ 232 ] [ 233 ] The player assumes the role of Lieutenant Alex Church, a commander of the Founders, a group consisting of ex-U.S. military personnel. A multiplayer mode features two other playable factions: Movement (a human resistance organization ) and Legion. [ 231 ] [ 234 ] The game received "mixed or average" reviews according to Metacritic , with a score of 72 out of 100. [ 235 ] Alex Avard of Empire found the quality of Terminator games to be inconsistent, but wrote that Dark Fate – Defiance , with its "tightly designed" gameplay, "may have just helped to raise that historically low bar a little bit higher." [ 236 ] Jon Bolding of IGN criticized the game's difficulty. [ 237 ]
National Entertainment Collectibles Association released action figures based on the film and Chronicle Collectibles released an 18-inch T-800 statue. [ 238 ]
Plans for a new Terminator film trilogy were announced in July 2017. [ 48 ] While working on the story for Terminator: Dark Fate that year, Cameron and the writers envisioned the film as the first in the new trilogy. They also worked out the basic storylines for each planned film. [ 41 ] [ 59 ] [ 60 ] [ 239 ] In October 2019, Cameron said that sequels to Terminator: Dark Fate would further explore the relationship between humans and artificial intelligence, while stating that a resolution between the two feuding sides would be the ultimate outcome. [ 239 ] [ 240 ] That month, Schwarzenegger said that Cameron would write the Terminator: Dark Fate sequels and that Cameron would begin work on the next film in early 2020, for release in 2022. [ 241 ]
Although the events of Terminator: Dark Fate erase Schwarzenegger's T-800 character from existence, Cameron did not rule out the possibility of Schwarzenegger reprising the character, saying, "Look, if we make a ton of money with this film [ Dark Fate ] and the cards say that they like Arnold, I think Arnold can come back. I'm a writer. I can think of scenarios. We don't have a plan for that right now, let me put it that way." [ 37 ] Hamilton said in October 2019 that she would probably reprise her role for a sequel, [ 242 ] although she joked that she would fake her own death to avoid appearing in it, saying that making Terminator: Dark Fate "really was hard" because of the physical training she had to undergo. [ 243 ] [ 244 ] In April 2024, Hamilton confirmed that she was done with the role. [ 245 ]
Following the film's performance at the box office, sources close to Skydance told The Hollywood Reporter that there were no plans for further films. [ 186 ] In June 2020, star Mackenzie Davis said: "I really loved the movie and I'm so proud of what we did, but there wasn't a demand for it [at the box office] and to think that there'd be a demand for a seventh film is quite insane. You should just pay attention to what audiences want". [ 246 ] In December, Davis revealed that the seventh film would not have been a sequel to Dark Fate but a spin-off focusing on an alternate-timeline version of Grace set in the future war, similar to Terminator Salvation , and would not have featured Schwarzenegger. [ 247 ]
It was later reported in May 2023 that Cameron was developing a script for a Terminator reboot. [ 248 ] | https://en.wikipedia.org/wiki/Legion_(Terminator) |
In the New Testament , Legion ( Ancient Greek : λεγιών ) is a group of demons , particularly those in two of three versions of the exorcism of the Gerasene demoniac , an account in the synoptic Gospels of an incident in which Jesus of Nazareth performs an exorcism . Legion is a large collection of demons that share a single mind and will .
The earliest version of this story exists in the Gospel of Mark , described as taking place in "the country of the Gerasenes ". Jesus encounters a possessed man and calls on the demon to emerge, demanding to know its name – an important element of traditional exorcism practice. [ 1 ] He finds the man is possessed by a multitude of demons who give the collective name of "Legion". Fearing that Jesus will drive them out of the world and into the abyss, they beg him instead to cast them into a herd of pigs on a nearby hill, which he does. The pigs then rush into the sea and are drowned ( Mark 5:1–5:13 ).
This story is also in the other two Synoptic Gospels . The Gospel of Luke shortens the story but retains most of the details including the name ( Luke 8:26–8:33 ). The Gospel of Matthew shortens it more dramatically, changes the possessed man to two men (a particular stylistic device of this writer) and changes the location to "the country of the Gadarenes ". This is probably because the author was aware that Gerasa is actually around 50 km (31 mi) away from the Sea of Galilee —although Gadara is still 10 km (6.2 mi) away. In this version, the demons are unnamed [ 2 ] [ 3 ] ( Matthew 8:28–8:32 ).
According to Michael Willett Newheart, professor of New Testament Language and Literature at the Howard University School of Divinity in Washington, D.C., in a 2004 lecture, the author of the Gospel of Mark could well have expected readers to associate the name Legion with the Roman military formation , active in the area at the time (around 70 AD). [ 4 ] The intention may be to show that Jesus is stronger than the occupying force of the Romans. [ 5 ] The Biblical scholar Seyoon Kim , however, points out that the Latin legio was commonly used as a loanword in Hebrew and Aramaic to indicate an unspecified but large quantity. [ 6 ] In the New Testament text, it is used as a proper name, which is "saturated with meaning". [ 7 ] In this sense, it can refer both to the size and power of the occupying Roman army and to an uncounted or uncountable multitude of demonic spirits. It is the latter sense that has become the common understanding of the term as an adjective in modern English (whereas when used as a noun it indicates the Roman military number, between 3,000 and 6,000 infantry with cavalry; cf. "we are legion" and "we are a legion"). [ 8 ]
The legion , in biological classification , is a non-obligatory taxonomic rank within the Linnaean hierarchy sometimes used in zoology .
In zoological taxonomy , the legion is:
Legions may be grouped into superlegions or subdivided into sublegions , and these again into infralegions .
Legions and their super/sub/infra groups have been employed in some classifications of birds and mammals . Full use is made of all of these (along with cohorts and supercohorts ) in, for example, McKenna and Bell's classification of mammals . [ 1 ]
| https://en.wikipedia.org/wiki/Legion_(taxonomy) |
In statistics , the Lehmann–Scheffé theorem is a prominent statement, tying together the ideas of completeness, sufficiency, uniqueness, and best unbiased estimation. [ 1 ] The theorem states that any estimator that is unbiased for a given unknown quantity and that depends on the data only through a complete , sufficient statistic is the unique best unbiased estimator of that quantity. The Lehmann–Scheffé theorem is named after Erich Leo Lehmann and Henry Scheffé , given their two early papers. [ 2 ] [ 3 ]
If T is a complete sufficient statistic for θ and E[g(T)] = τ(θ), then g(T) is the uniformly minimum-variance unbiased estimator (UMVUE) of τ(θ).
Let X = (X_1, X_2, …, X_n) be a random sample from a distribution that has p.d.f. (or p.m.f. in the discrete case) f(x : θ), where θ ∈ Ω is a parameter in the parameter space. Suppose Y = u(X) is a sufficient statistic for θ, and let { f_Y(y : θ) : θ ∈ Ω } be a complete family. If φ is a function such that E[φ(Y)] = θ, then φ(Y) is the unique MVUE of θ.
By the Rao–Blackwell theorem , if Z is an unbiased estimator of θ, then φ(Y) := E[Z ∣ Y] defines an unbiased estimator of θ with the property that its variance is not greater than that of Z.
Now we show that this function is unique. Suppose W is another candidate MVUE estimator of θ. Then again ψ(Y) := E[W ∣ Y] defines an unbiased estimator of θ with the property that its variance is not greater than that of W. Then, since both φ(Y) and ψ(Y) are unbiased for θ,
E[φ(Y) − ψ(Y)] = 0 for every θ ∈ Ω.
Since { f_Y(y : θ) : θ ∈ Ω } is a complete family, this forces φ(Y) − ψ(Y) = 0 almost surely, that is, ψ(Y) = φ(Y),
and therefore the function φ is the unique function of Y with variance not greater than that of any other unbiased estimator. We conclude that φ(Y) is the MVUE.
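As a concrete instance of the theorem, consider an i.i.d. Exponential sample with mean θ: T = X_1 + ⋯ + X_n is a complete sufficient statistic, the sample mean T/n is unbiased for θ and is therefore the UMVUE, while the crude unbiased estimator X_1 is far noisier (its Rao–Blackwellization E[X_1 ∣ T] is exactly T/n). The following minimal Python sketch checks this numerically; the parameter values are arbitrary choices for illustration.

```python
import numpy as np

# Monte Carlo sketch: for Exponential(mean=theta) data, T = sum(X_i) is a
# complete sufficient statistic, so the unbiased estimator T/n (the sample
# mean) is the UMVUE of theta by the Lehmann-Scheffe theorem.  X_1 is also
# unbiased but has much larger variance.

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 10, 200_000

x = rng.exponential(scale=theta, size=(reps, n))
crude = x[:, 0]            # X_1: unbiased, variance ~ theta^2
umvue = x.mean(axis=1)     # T/n: unbiased, variance ~ theta^2 / n

print("E[X_1] ~", crude.mean(), "  Var[X_1] ~", crude.var())
print("E[T/n] ~", umvue.mean(), "  Var[T/n] ~", umvue.var())
```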
An example of an improvable Rao–Blackwell improvement, when using a minimal sufficient statistic that is not complete , was provided by Galili and Meilijson in 2016. [ 4 ] Let X_1, …, X_n be a random sample from a scale-uniform distribution X ∼ U((1 − k)θ, (1 + k)θ) with unknown mean E[X] = θ and known design parameter k ∈ (0, 1). In the search for "best" possible unbiased estimators for θ, it is natural to consider X_1 as an initial (crude) unbiased estimator for θ and then try to improve it. Since X_1 is not a function of T = (X_(1), X_(n)), the minimal sufficient statistic for θ (where X_(1) = min_i X_i and X_(n) = max_i X_i), it may be improved using the Rao–Blackwell theorem as follows:
However, the following unbiased estimator can be shown to have lower variance:
And in fact, it could be even further improved when using the following estimator:
The model is a scale model . Optimal equivariant estimators can then be derived for loss functions that are invariant. [ 5 ] | https://en.wikipedia.org/wiki/Lehmann–Scheffé_theorem |
Lehmer's conjecture , also known as Lehmer's Mahler measure problem, is a problem in number theory raised by Derrick Henry Lehmer . [ 1 ] The conjecture asserts that there is an absolute constant μ > 1 such that every polynomial with integer coefficients P(x) ∈ Z[x] satisfies one of the following properties: either the Mahler measure M(P(x)) of P(x) is equal to 1, or M(P(x)) ≥ μ.
There are a number of definitions of the Mahler measure, one of which is to factor P(x) over C as
P(x) = a_0 (x − α_1)(x − α_2) ⋯ (x − α_D),
and then set
M(P(x)) = |a_0| ∏_{i=1}^{D} max(1, |α_i|).
The smallest known Mahler measure (greater than 1) is that of "Lehmer's polynomial"
P(x) = x^10 + x^9 − x^7 − x^6 − x^5 − x^4 − x^3 + x + 1,
for which the Mahler measure is the Salem number [ 3 ]
M(P(x)) = 1.176280818…
It is widely believed that this example represents the true minimal value: that is, μ = 1.176280818 … {\displaystyle \mu =1.176280818\dots } in Lehmer's conjecture. [ 4 ] [ 5 ]
For the Mahler measure of a polynomial in one variable, Jensen's formula shows that if P(x) = a_0 (x − α_1)(x − α_2) ⋯ (x − α_D) then
(1/2π) ∫_0^{2π} log|P(e^{iθ})| dθ = log|a_0| + ∑_{i=1}^{D} log max(1, |α_i|) = log M(P(x)).
In this paragraph, write m(P) = log M(P(x)), which is also called the (logarithmic) Mahler measure .
If P has integer coefficients, this shows that M(P) is an algebraic number, so m(P) is the logarithm of an algebraic integer. It also shows that m(P) ≥ 0, and that if m(P) = 0 then P is a product of cyclotomic polynomials, i.e. monic polynomials all of whose roots are roots of unity, or a monomial, i.e. a power x^n for some n.
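Jensen's formula also gives a simple way to estimate M(P) numerically: average log|P| over the unit circle and exponentiate. The following sketch (an illustrative computation, not a rigorous one; a plain midpoint rule is assumed to be adequate here) applies this to Lehmer's polynomial and should reproduce the value 1.176280818… quoted above to a few decimal places:

```c
/* Numerical estimate of the Mahler measure of Lehmer's polynomial
 * x^10 + x^9 - x^7 - x^6 - x^5 - x^4 - x^3 + x + 1
 * via Jensen's formula: log M(P) = (1/2pi) * integral of log|P(e^{it})| dt. */
#include <stdio.h>
#include <math.h>
#include <complex.h>

int main(void) {
    const double coeff[11] = {1, 1, 0, -1, -1, -1, -1, -1, 0, 1, 1};  /* a_10 ... a_0 */
    const double PI = 3.14159265358979323846;
    const int N = 1000000;                    /* midpoint-rule sample count */
    double sum = 0.0;
    for (int j = 0; j < N; j++) {
        double t = 2.0 * PI * (j + 0.5) / N;
        double complex z = cexp(I * t);
        double complex p = 0.0;
        for (int i = 0; i < 11; i++)          /* Horner evaluation of P(z) */
            p = p * z + coeff[i];
        sum += log(cabs(p));
    }
    printf("M(P) is approximately %.6f\n", exp(sum / N));  /* close to 1.17628 */
    return 0;
}
```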
Lehmer noticed [ 1 ] [ 6 ] that m(P) = 0 is an important value in the study of the integer sequences Δ_n = Res(P(x), x^n − 1) = ∏_{i=1}^{D} (α_i^n − 1) for monic P. If P does not vanish on the circle then lim |Δ_n|^{1/n} = M(P). If P does vanish on the circle but not at any root of unity, then the same convergence holds by Baker's theorem (in fact an earlier result of Gelfond is sufficient for this, as pointed out by Lind in connection with his study of quasihyperbolic toral automorphisms [ 7 ] ). [ 8 ] As a result, Lehmer was led to ask whether, for every ε > 0, there is a polynomial with integer coefficients whose Mahler measure lies strictly between 1 and 1 + ε, or, equivalently, whether there is a smallest Mahler measure strictly greater than 1.
Some positive answers have been provided as follows, but Lehmer's conjecture is not yet completely proved and is still a question of much interest.
Let P ( x ) ∈ Z [ x ] {\displaystyle P(x)\in \mathbb {Z} [x]} be an irreducible monic polynomial of degree D {\displaystyle D} .
Smyth [ 9 ] proved that Lehmer's conjecture is true for all polynomials that are not reciprocal , i.e., all polynomials satisfying x D P ( x − 1 ) ≠ P ( x ) {\displaystyle x^{D}P(x^{-1})\neq P(x)} .
Blanksby and Montgomery [ 10 ] and Stewart [ 11 ] independently proved that there is an absolute constant C > 1 {\displaystyle C>1} such that either M ( P ( x ) ) = 1 {\displaystyle {\mathcal {M}}(P(x))=1} or [ 12 ]
Dobrowolski [ 13 ] improved this to
Dobrowolski obtained the value C ≥ 1/1200 and asymptotically C > 1-ε for all sufficiently large D . Voutier in 1996 obtained C ≥ 1/4 for D ≥ 2. [ 14 ]
Let E / K {\displaystyle E/K} be an elliptic curve defined over a number field K {\displaystyle K} , and let h ^ E : E ( K ¯ ) → R {\displaystyle {\hat {h}}_{E}:E({\bar {K}})\to \mathbb {R} } be the canonical height function. The canonical height is the analogue for elliptic curves of the function ( deg P ) − 1 log M ( P ( x ) ) {\displaystyle (\deg P)^{-1}\log {\mathcal {M}}(P(x))} . It has the property that h ^ E ( Q ) = 0 {\displaystyle {\hat {h}}_{E}(Q)=0} if and only if Q {\displaystyle Q} is a torsion point in E ( K ¯ ) {\displaystyle E({\bar {K}})} . The elliptic Lehmer conjecture asserts that there is a constant C ( E / K ) > 0 {\displaystyle C(E/K)>0} such that
where D = [ K ( Q ) : K ] {\displaystyle D=[K(Q):K]} . If the elliptic curve E has complex multiplication , then the analogue of Dobrowolski's result holds:
due to Laurent. [ 15 ] For arbitrary elliptic curves, the best known result is
due to Masser . [ 16 ] For elliptic curves with non-integral j-invariant , this has been improved to
by Hindry and Silverman . [ 17 ]
Stronger results are known for restricted classes of polynomials or algebraic numbers.
If P ( x ) is not reciprocal then
and this is clearly best possible. [ 18 ] If further all the coefficients of P are odd then [ 19 ]
For any algebraic number α , let M ( α ) {\displaystyle M(\alpha )} be the Mahler measure of the minimal polynomial P α {\displaystyle P_{\alpha }} of α . If the field Q ( α ) is a Galois extension of Q , then Lehmer's conjecture holds for P α {\displaystyle P_{\alpha }} . [ 19 ]
The measure-theoretic entropy of an ergodic automorphism of a compact metrizable abelian group is known to be given by the logarithmic Mahler measure of a polynomial with integer coefficients if it is finite. [ 20 ] As pointed out by Lind, this means that the set of possible values of the entropy of such actions is either all of ( 0 , ∞ ] {\displaystyle (0,\infty ]} or a countable set depending on the solution to Lehmer's problem. [ 21 ] Lind also showed that the infinite-dimensional torus either has ergodic automorphisms of finite positive entropy or only has automorphisms of infinite entropy depending on the solution to Lehmer's problem. Since an ergodic compact group automorphism is measurably isomorphic to a Bernoulli shift , and the Bernoulli shifts are classified up to measurable isomorphism by their entropy by Ornstein's theorem , this means that the moduli space of all ergodic compact group automorphisms up to measurable isomorphism is either countable or uncountable depending on the solution to Lehmer's problem. | https://en.wikipedia.org/wiki/Lehmer's_conjecture |
In mathematics and in particular in combinatorics , the Lehmer code is a particular way to encode each possible permutation of a sequence of n numbers. It is an instance of a scheme for numbering permutations and is an example of an inversion table.
The Lehmer code is named in reference to D. H. Lehmer , [ 1 ] but the code had been known since 1888 at least. [ 2 ]
The Lehmer code makes use of the fact that there are n !
permutations of a sequence of n numbers. If a permutation σ is specified by the sequence ( σ 1 , ..., σ n ) of its images of 1, ..., n , then it is encoded by a sequence of n numbers, but not all such sequences are valid since every number must be used only once. By contrast the encodings considered here choose the first number from a set of n values, the next number from a fixed set of n − 1 values, and so forth decreasing the number of possibilities until the last number for which only a single fixed value is allowed; every sequence of numbers chosen from these sets encodes a single permutation. While several encodings can be defined, the Lehmer code has several additional useful properties; it is the sequence L(σ) = (L(σ)_1, …, L(σ)_n), where L(σ)_i = #{ j > i : σ_j < σ_i };
in other words the term L ( σ ) i counts the number of terms in ( σ 1 , ..., σ n ) to the right of σ i that are smaller than it, a number between 0 and n − i , allowing for n + 1 − i different values.
A pair of indices ( i , j ) with i < j and σ i > σ j is called an inversion of σ , and L ( σ ) i counts the number of inversions ( i , j ) with i fixed and varying j . It follows that L ( σ ) 1 + L ( σ ) 2 + … + L ( σ ) n is the total number of inversions of σ , which is also the number of adjacent transpositions that are needed to transform the permutation into the identity permutation. Other properties of the Lehmer code include that the lexicographical order of the encodings of two permutations is the same as that of their sequences ( σ 1 , ..., σ n ), that any value 0 in the code represents a right-to-left minimum in the permutation (i.e., a σ i smaller than any σ j to its right), and a value n − i at position i similarly signifies a right-to-left maximum, and that the Lehmer code of σ coincides with the factorial number system representation of its position in the list of permutations of n in lexicographical order (numbering the positions starting from 0).
Variations of this encoding can be obtained by counting inversions ( i , j ) for fixed j rather than fixed i , by counting inversions with a fixed smaller value σ j rather than smaller index i , or by counting non-inversions rather than inversions; while this does not produce a fundamentally different type of encoding, some properties of the encoding will change correspondingly. In particular counting inversions with a fixed smaller value σ j gives the inversion table of σ , which can be seen to be the Lehmer code of the inverse permutation.
The usual way to prove that there are n ! different permutations of n objects is to observe that the first object can be chosen in n different ways, the next object in n − 1 different ways (because choosing the same number as the first is forbidden), the next in n − 2 different ways (because there are now 2 forbidden values), and so forth. Translating this freedom of choice at each step into a number, one obtains an encoding algorithm, one that finds the Lehmer code of a given permutation. One need not suppose the objects permuted to be numbers, but one needs a total ordering of the set of objects. Since the code numbers are to start from 0, the appropriate number to encode each object σ i by is the number of objects that were available at that point (so they do not occur before position i ), but which are smaller than the object σ i actually chosen. (Inevitably such objects must appear at some position j > i , and ( i , j ) will be an inversion, which shows that this number is indeed L ( σ ) i .)
This number to encode each object can be found by direct counting, in several ways (directly counting inversions, or correcting the total number of objects smaller than a given one, which is its sequence number starting from 0 in the set, by those that are unavailable at its position). Another method which is in-place, but not really more efficient, is to start with the permutation of {0, 1, ... n − 1 } obtained by representing each object by its mentioned sequence number, and then for each entry x , in order from left to right, correct the items to its right by subtracting 1 from all entries (still) greater than x (to reflect the fact that the object corresponding to x is no longer available). Concretely a Lehmer code for the permutation B,F,A,G,D,E,C of letters, ordered alphabetically, would first give the list of sequence numbers 1,5,0,6,3,4,2, which is successively transformed
1,5,0,6,3,4,2
1,4,0,5,2,3,1
1,4,0,4,2,3,1
1,4,0,3,1,2,0
1,4,0,3,1,2,0
1,4,0,3,1,1,0
1,4,0,3,1,1,0
where the final line, 1,4,0,3,1,1,0, is the Lehmer code (at each line one subtracts 1 from the larger entries to the right of the entry just processed to form the next line).
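A direct transcription of the counting definition, sketched in C (the permutation is assumed to be given by 0-based sequence numbers, as in the example above):

```c
/* Sketch: compute the Lehmer code of a permutation given as 0-based
 * sequence numbers by counting, for each position, the smaller entries
 * to its right. Illustrated on the B,F,A,G,D,E,C example. */
#include <stdio.h>

void lehmer_code(const int *perm, int n, int *code) {
    for (int i = 0; i < n; i++) {
        code[i] = 0;
        for (int j = i + 1; j < n; j++)
            if (perm[j] < perm[i])
                code[i]++;                       /* (i, j) is an inversion */
    }
}

int main(void) {
    int perm[7] = {1, 5, 0, 6, 3, 4, 2};         /* B,F,A,G,D,E,C as sequence numbers */
    int code[7];
    lehmer_code(perm, 7, code);
    for (int i = 0; i < 7; i++)
        printf("%d ", code[i]);                  /* prints 1 4 0 3 1 1 0 */
    printf("\n");
    return 0;
}
```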
For decoding a Lehmer code into a permutation of a given set, the latter procedure may be reversed: for each entry x , in order from right to left, correct the items to its right by adding 1 to all those (currently) greater than or equal to x ; finally interpret the resulting permutation of {0, 1, ... n − 1 } as sequence numbers (which amounts to adding 1 to each entry if a permutation of {1, 2, ... n } is sought). Alternatively the entries of the Lehmer code can be processed from left to right, and interpreted as a number determining the next choice of an element as indicated above; this requires maintaining a list of available elements, from which each chosen element is removed. In the example this would mean choosing element 1 from {A,B,C,D,E,F,G} (which is B) then element 4 from {A,C,D,E,F,G} (which is F), then element 0 from {A,C,D,E,G} (giving A) and so on, reconstructing the sequence B,F,A,G,D,E,C.
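The decoding procedure with an explicit list of remaining elements can be sketched in the same way (again an illustration, using the code 1,4,0,3,1,1,0 of the example):

```c
/* Sketch: decode a Lehmer code by repeatedly picking, and then removing,
 * the element with the given index from the list of still-available objects. */
#include <stdio.h>
#include <string.h>

int main(void) {
    char avail[] = "ABCDEFG";                    /* available objects, in order */
    int n = 7;
    int code[7] = {1, 4, 0, 3, 1, 1, 0};         /* Lehmer code from the example */
    char out[8] = {0};
    for (int i = 0; i < n; i++) {
        int rem = n - i;                         /* number of objects still available */
        out[i] = avail[code[i]];                 /* choose the code[i]-th remaining object */
        memmove(&avail[code[i]], &avail[code[i] + 1], rem - 1 - code[i]);
    }
    printf("%s\n", out);                         /* prints BFAGDEC */
    return 0;
}
```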
The Lehmer code defines a bijection from the symmetric group S n to the Cartesian product [ n ] × [ n − 1 ] × ⋯ × [ 2 ] × [ 1 ] {\displaystyle [n]\times [n-1]\times \cdots \times [2]\times [1]} , where [ k ] designates the k -element set { 0 , 1 , … , k − 1 } {\displaystyle \{0,1,\ldots ,k-1\}} . As a consequence, under the uniform distribution on S n , the component L ( σ ) i defines a uniformly distributed random variable on [ n + 1 − i ] , and these random variables are mutually independent , because they are projections on different factors of a Cartesian product .
Definition : In a sequence u = ( u k ) 1≤k≤n , there is a right-to-left minimum (resp. maximum ) at rank k if u k is strictly smaller (resp. strictly greater) than each element u i with i > k , i.e., to its right.
Let B(k) (resp. H(k) ) be the event "there is a right-to-left minimum (resp. maximum) at rank k ", i.e. B(k) is the set of permutations of S n which exhibit a right-to-left minimum (resp. maximum) at rank k . We clearly have
Thus the number N b (ω) (resp. N h (ω) ) of right-to-left minima (resp. maxima) of the permutation ω can be written as a sum of independent Bernoulli random variables, the k -th having parameter 1/k :
Indeed, as L(k) follows the uniform law on [ [ 1 , k ] ] , {\displaystyle \scriptstyle \ [\![1,k]\!],}
The generating function for the Bernoulli random variable 1 1 B ( k ) {\displaystyle 1\!\!1_{B(k)}} is
therefore the generating function of N b is
(using the rising factorial notation),
which allows us to recover the product formula for the generating function of the Stirling numbers of the first kind (unsigned).
This is an optimal stopping problem, a classic in decision theory, statistics and applied probability, in which a random permutation is gradually revealed through the first elements of its Lehmer code, and in which the goal is to stop exactly at the element k such that σ(k) = n, whereas the only available information (the first k values of the Lehmer code) is not sufficient to compute σ(k).
In less mathematical words: a series of n applicants are interviewed one after the other. The interviewer must hire the best applicant, but must make his decision (“Hire” or “Not hire”) on the spot, without interviewing the next applicant (and a fortiori without interviewing all applicants).
The interviewer thus knows the rank of the k -th applicant among the applicants already interviewed; therefore, at the moment of making his k -th decision, the interviewer knows only the first k elements of the Lehmer code, whereas he would need to know all of them to make a well-informed decision.
To determine the optimal strategies (i.e. the strategy maximizing the probability of a win), the statistical properties of the Lehmer code are crucial.
Allegedly, Johannes Kepler clearly exposed this secretary problem to a friend of his at a time when he was trying to make up his mind and choose one out of eleven prospective brides as his second wife. His first marriage had been an unhappy one, having been arranged without his being consulted, and he was thus very anxious to reach the right decision. [ 3 ]
Two similar vectors are in use. One of them is often called inversion vector, e.g. by Wolfram Alpha .
See also Inversion (discrete mathematics) § Inversion related vectors . | https://en.wikipedia.org/wiki/Lehmer_code |
The Lehmer random number generator [ 1 ] (named after D. H. Lehmer ), sometimes also referred to as the Park–Miller random number generator (after Stephen K. Park and Keith W. Miller ), is a type of linear congruential generator (LCG) that operates in the multiplicative group of integers modulo n . The general formula is
X_{k+1} = a · X_k mod m,
where the modulus m is a prime number or a power of a prime number , the multiplier a is an element of high multiplicative order modulo m (e.g., a primitive root modulo m ), and the seed X 0 is coprime to m .
Other names are multiplicative linear congruential generator (MLCG) [ 2 ] and multiplicative congruential generator (MCG) .
In 1988, Park and Miller [ 3 ] suggested a Lehmer RNG with particular parameters m = 2 31 − 1 = 2,147,483,647 (a Mersenne prime M 31 ) and a = 7 5 = 16,807 (a primitive root modulo M 31 ), now known as MINSTD . Although MINSTD was later criticized by Marsaglia and Sullivan (1993), [ 4 ] [ 5 ] it is still in use today (in particular, in CarbonLib and C++11 's minstd_rand0 ). Park, Miller and Stockmeyer responded to the criticism (1993), [ 6 ] saying:
Given the dynamic nature of the area, it is difficult for nonspecialists to make decisions about what generator to use. "Give me something I can understand, implement and port... it needn't be state-of-the-art, just make sure it's reasonably good and efficient." Our article and the associated minimal standard generator was an attempt to respond to this request. Five years later, we see no need to alter our response other than to suggest the use of the multiplier a = 48271 in place of 16807.
This revised constant is used in C++11 's minstd_rand random number generator.
The Sinclair ZX81 and its successors use the Lehmer RNG with parameters m = 2 16 + 1 = 65,537 (a Fermat prime F 4 ) and a = 75 (a primitive root modulo F 4 ). [ 7 ] [ 8 ] The CRAY random number generator RANF is a Lehmer RNG with the power-of-two modulus m = 2 48 and a = 44,485,709,377,909. [ 9 ] The GNU Scientific Library includes several random number generators of the Lehmer form, including MINSTD, RANF, and the infamous IBM random number generator RANDU . [ 9 ]
Most commonly, the modulus is chosen as a prime number, making the choice of a coprime seed trivial (any 0 < X 0 < m will do). This produces the best-quality output, but introduces some implementation complexity, and the range of the output is unlikely to match the desired application; converting to the desired range requires an additional multiplication.
Using a modulus m which is a power of two makes for a particularly convenient computer implementation, but comes at a cost: the period is at most m /4, and the lower bits have periods shorter than that. This is because the lowest k bits form a modulo-2 k generator all by themselves; the higher-order bits never affect lower-order bits. [ 10 ] The values X i are always odd (bit 0 never changes), bits 2 and 1 alternate (the lower 3 bits repeat with a period of 2), the lower 4 bits repeat with a period of 4, and so on. Therefore, the application using these random numbers must use the most significant bits; reducing to a smaller range using a modulo operation with an even modulus will produce disastrous results. [ 11 ]
To achieve this period, the multiplier must satisfy a ≡ ±3 (mod 8), [ 12 ] and the seed X 0 must be odd.
Using a composite modulus is possible, but the generator must be seeded with a value coprime to m , or the period will be greatly reduced. For example, a modulus of F 5 = 2 32 + 1 might seem attractive, as the outputs can be easily mapped to a 32-bit word 0 ≤ X i − 1 < 2 32 . However, a seed of X 0 = 6700417 (which divides 2 32 + 1) or any multiple would lead to an output with a period of only 640.
Another generator with a composite modulus is the one recommended by Nakazawa & Nakazawa: [ 13 ]
As both factors of the modulus are less than 2 32 , it is possible to maintain the state modulo each of the factors, and construct the output value using the Chinese remainder theorem , using no more than 64-bit intermediate arithmetic. [ 13 ] : 70
A more popular implementation for large periods is a combined linear congruential generator ; combining (e.g. by summing their outputs) several generators is equivalent to the output of a single generator whose modulus is the product of the component generators' moduli [ 14 ] and whose period is the least common multiple of the component periods. Although the periods will share a common divisor of 2, the moduli can be chosen so that 2 is the only common divisor and the resultant period is ( m 1 − 1)( m 2 − 1)···( m k − 1)/2^( k −1) . [ 2 ] : 744 One example of this is the Wichmann–Hill generator.
While the Lehmer RNG can be viewed as a particular case of the linear congruential generator with c = 0 , it is a special case that implies certain restrictions and properties. In particular, for the Lehmer RNG, the initial seed X 0 must be coprime to the modulus m , which is not required for LCGs in general. The choice of the modulus m and the multiplier a is also more restrictive for the Lehmer RNG. In contrast to LCG, the maximum period of the Lehmer RNG equals m − 1, and it is such when m is prime and a is a primitive root modulo m .
On the other hand, the discrete logarithms (to base a or any primitive root modulo m ) of X k in Z m {\displaystyle \mathbb {Z} _{m}} represent a linear congruential sequence modulo the Euler totient φ ( m ) {\displaystyle \varphi (m)} .
A prime modulus requires the computation of a double-width product and an explicit reduction step. If a modulus just less than a power of 2 is used (the Mersenne primes 2 31 − 1 and 2 61 − 1 are popular, as are 2 32 − 5 and 2 64 − 59), reduction modulo m = 2 e − d can be implemented more cheaply than a general double-width division using the identity 2 e ≡ d (mod m ) .
The basic reduction step divides the product into two e -bit parts, multiplies the high part by d , and adds them: ( ax mod 2 e ) + d ⌊ ax /2 e ⌋ . This can be followed by subtracting m until the result is in range. The number of subtractions is limited to ad / m , which can be easily limited to one if d is small and a < m / d is chosen. (This condition also ensures that d ⌊ ax /2 e ⌋ is a single-width product; if it is violated, a double-width product must be computed.)
When the modulus is a Mersenne prime ( d = 1), the procedure is particularly simple. Not only is multiplication by d trivial, but the conditional subtraction can be replaced by an unconditional shift and addition. To see this, note that the algorithm guarantees that x ≢ 0 (mod m ) , meaning that x = 0 and x = m are both impossible. This avoids the need to consider equivalent e -bit representations of the state; only values where the high bits are non-zero need reduction.
The low e bits of the product ax cannot represent a value larger than m , and the high bits will never hold a value greater than a − 1 ≤ m − 2. Thus the first reduction step produces a value at most m + a − 1 ≤ 2 m − 2 = 2 e +1 − 4. This is an ( e + 1)-bit number, which can be greater than m (i.e. might have bit e set), but the high half is at most 1, and if it is, the low e bits will be strictly less than m . Thus whether the high bit is 1 or 0, a second reduction step (addition of the halves) will never overflow e bits, and the sum will be the desired value.
If d > 1, conditional subtraction can also be avoided, but the procedure is more intricate. The fundamental challenge of a modulus like 2 32 − 5 lies in ensuring that we produce only one representation for values such as 1 ≡ 2 32 − 4. The solution is to temporarily add d , so that the range of possible values is d through 2 e − 1, and reduce values larger than e bits in a way that never generates representations less than d . Finally subtracting the temporary offset produces the desired value.
Begin by assuming that we have a partially reduced value y bounded so that 0 ≤ y < 2 m = 2 e +1 − 2 d . In this case, a single offset subtraction step will produce 0 ≤ y ′ = (( y + d ) mod 2 e ) + d ⌊ ( y + d )/2 e ⌋ − d < m . To see this, consider two cases:
(For the case of a Lehmer generator specifically, a zero state or its image y = m will never occur, so an offset of d − 1 will work just the same, if that is more convenient. This reduces the offset to 0 in the Mersenne prime case, when d = 1.)
Reducing a larger product ax to less than 2 m = 2 e +1 − 2 d can be done by one or more reduction steps without an offset.
If ad ≤ m , then one additional reduction step suffices. Since x < m , ax < am ≤ ( a − 1)2 e , and one reduction step converts this to at most 2 e − 1 + ( a − 1) d = m + ad − 1. This is within the limit of 2 m if ad − 1 < m , which is the initial assumption.
If ad > m , then it is possible for the first reduction step to produce a sum greater than 2 m = 2 e +1 − 2 d , which is too large for the final reduction step. (It also requires the multiplication by d to produce a product larger than e bits, as mentioned above.) However, as long as d 2 < 2 e , the first reduction will produce a value in the range required for the preceding case of two reduction steps to apply.
If a double-width product is not available, Schrage's method , [ 15 ] [ 16 ] also called the approximate factoring method, [ 17 ] may be used to compute ax mod m , but this comes at the cost:
While this technique is popular for portable implementations in high-level languages which lack double-width operations, [ 2 ] : 744 on modern computers division by a constant is usually implemented using double-width multiplication, so this technique should be avoided if efficiency is a concern. Even in high-level languages, if the multiplier a is limited to √ m , then the double-width product ax can be computed using two single-width multiplications, and reduced using the techniques described above.
To use Schrage's method, first factor m = qa + r , i.e. precompute the auxiliary constants r = m mod a and q = ⌊ m / a ⌋ = ( m − r )/ a . Then, each iteration, compute ax ≡ a ( x mod q ) − r ⌊ x / q ⌋ (mod m ) .
This equality holds because
so if we factor x = ( x mod q ) + q ⌊ x / q ⌋ , we get:
The reason it does not overflow is that both terms are less than m . Since x mod q < q ≤ m / a , the first term is strictly less than am / a = m and may be computed with a single-width product.
If a is chosen so that r ≤ q (and thus r / q ≤ 1), then the second term is also less than m : r ⌊ x / q ⌋ ≤ rx / q = x ( r / q ) ≤ x (1) = x < m . Thus, the difference lies in the range [1− m , m −1] and can be reduced to [0, m −1] with a single conditional add. [ 18 ]
This technique may be extended to allow a negative r (− q ≤ r < 0), changing the final reduction to a conditional subtract.
The technique may also be extended to allow larger a by applying it recursively. [ 17 ] : 102 Of the two terms subtracted to produce the final result, only the second ( r ⌊ x / q ⌋ ) risks overflow. But this is itself a modular multiplication by a compile-time constant r , and may be implemented by the same technique. Because each step, on average, halves the size of the multiplier (0 ≤ r < a , average value ( a −1)/2), this would appear to require one step per bit and be spectacularly inefficient. However, each step also divides x by an ever-increasing quotient q = ⌊ m / a ⌋ , and quickly a point is reached where the argument is 0 and the recursion may be terminated.
Using C code, the Park-Miller RNG can be written as follows:
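A minimal sketch consistent with this description (the original multiplier 16807 is used here; 48271 may be substituted as discussed above):

```c
#include <stdint.h>

static uint32_t state = 1;              /* seed: any value in 1 .. 2^31 - 2 */

/* One Park-Miller step: the 64-bit product avoids overflow before the
 * reduction modulo the Mersenne prime 2^31 - 1. */
uint32_t lcg_parkmiller(void) {
    state = (uint64_t)state * 16807 % 0x7fffffff;
    return state;
}
```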
This function can be called repeatedly to generate pseudorandom numbers, as long as the caller is careful to initialize the state to any number greater than zero and less than the modulus. In this implementation, 64-bit arithmetic is required; otherwise, the product of two 32-bit integers may overflow.
To avoid the 64-bit division, do the reduction by hand:
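A sketch of the same step with the reduction modulo the Mersenne prime 2^31 − 1 carried out by the shift-and-add folding described earlier (the multiplier 48271 is assumed here):

```c
#include <stdint.h>

static uint32_t state = 1;

/* Reduction "by hand": fold the 64-bit product around bit 31 twice,
 * using 2^31 = 1 (mod 2^31 - 1); no division or modulo operator needed. */
uint32_t lcg_parkmiller_fold(void) {
    uint64_t product = (uint64_t)state * 48271;
    uint32_t x = (uint32_t)(product & 0x7fffffff) + (uint32_t)(product >> 31);
    x = (x & 0x7fffffff) + (x >> 31);   /* second fold never overflows */
    return state = x;
}
```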
To use only 32-bit arithmetic, use Schrage's method:
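A sketch of Schrage's method with a = 16807; q and r are the precomputed quotient and remainder of m divided by a, as described above:

```c
#include <stdint.h>

static int32_t state = 1;

/* Schrage's method: computes (a * state) mod m without any intermediate
 * value exceeding 32 bits. Sketch with a = 16807, m = 2^31 - 1. */
int32_t lcg_parkmiller_schrage(void) {
    const int32_t a = 16807, m = 2147483647;
    const int32_t q = m / a;            /* 127773 */
    const int32_t r = m % a;            /*   2836 */
    int32_t hi = state / q;
    int32_t lo = state % q;
    int32_t t = a * lo - r * hi;        /* lies in (-m, m) and is nonzero */
    if (t < 0)
        t += m;
    return state = t;
}
```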
or use two 16×16-bit multiplies:
Another popular Lehmer generator uses the prime modulus 2 32 −5:
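A sketch of such a generator; the multiplier 279470273, taken from L'Ecuyer's tables for this modulus, is an assumption here rather than a value fixed by the surrounding text:

```c
#include <stdint.h>

static uint32_t state = 1;              /* seed: any value in 1 .. 2^32 - 6 */

/* Lehmer generator with the prime modulus 2^32 - 5 = 4294967291. */
uint32_t lehmer32(void) {
    return state = (uint64_t)state * 279470273u % 4294967291u;
}
```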
This can also be written without a 64-bit division:
Many other Lehmer generators have good properties. The following modulo-2 128 Lehmer generator requires 128-bit support from the compiler and uses a multiplier computed by L'Ecuyer. [ 19 ] It has a period of 2 126 :
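A sketch of such a generator using the GCC/Clang 128-bit integer extension; the 64-bit multiplier shown, attributed to L'Ecuyer's tables and widely used for this construction, is an assumption rather than a constant fixed by the surrounding text:

```c
#include <stdint.h>

/* Modulo-2^128 Lehmer generator: the state stays odd, and the upper
 * 64 bits of the 128-bit state are returned as output. */
static __uint128_t state128 = 1;        /* must be seeded with an odd value */

uint64_t lehmer64(void) {
    state128 *= 0xda942042e4dd58b5ULL;
    return (uint64_t)(state128 >> 64);
}
```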
The generator computes an odd 128-bit value and returns its upper 64 bits.
This generator passes BigCrush from TestU01 , but fails the TMFn test from PractRand . That test has been designed to catch exactly the defect of this type of generator: since the modulus is a power of 2, the period of the lowest bit in the output is only 2 62 , rather than 2 126 . Linear congruential generators with a power-of-2 modulus have a similar behavior.
The following core routine improves upon the speed of the above code for integer workloads (if the constant declaration is allowed to be optimized out of a calculation loop by the compiler):
However, because the multiplication is deferred, it is not suitable for hashing, since the first call simply returns the upper 64 bits of the seed state. | https://en.wikipedia.org/wiki/Lehmer_random_number_generator |
The Lehmstedt–Tanasescu reaction is a method in organic chemistry for the organic synthesis of acridone derivatives ( 3 ) from a 2-nitro benzaldehyde ( 1 ) and an arene compound ( 2 ): [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ]
The reaction is named after two chemists who devoted part of their careers to research into this synthetic method, the German chemist Kurt Lehmstedt and the Romanian chemist Ion Tănăsescu . Variations of the reaction name include Lehmsted–Tănăsescu reaction , Lehmsted–Tănăsescu acridone synthesis and Lehmsted–Tanasescu acridone synthesis .
In the first step of the reaction mechanism the precursor molecule 2-nitrobenzaldehyde 4 is protonated, often by sulfuric acid , to intermediate 5 , followed by an electrophilic attack on benzene (other arenes can be used as well). The resulting benzhydrol 6 cyclizes to 7 and finally to compound 8 . Treatment of this intermediate with nitrous acid ( sodium nitrite and sulfuric acid) leads to the N- nitroso acridone 11 via intermediates 9 and 10 . The N-nitroso group is removed by an acid in the final step. The procedure is an example of a one-pot synthesis . | https://en.wikipedia.org/wiki/Lehmstedt–Tanasescu_reaction
In the manufacture of float glass , a lehr oven is a long kiln with an end-to-end temperature gradient , which is used for annealing newly made glass objects that are transported through the temperature gradient either on rollers or on a conveyor belt . The annealing renders glass into a stronger material with fewer internal stresses , and with a lower probability of breaking. [ 1 ]
The rapid cooling of molten glass results in an uneven temperature distribution throughout the material. This temperature differential results in mechanical stresses throughout the molten glass, which may be sufficient to cause the material to crack as it cools to ambient temperature or to make it susceptible to cracking during later use, either spontaneously or due to mechanical or thermal shock . To prevent such material weaknesses, objects made from molten glass are annealed by gradual cooling in a lehr oven, from the annealing point, a temperature just below the solidification temperature of the glass. [ 1 ] In the process of annealing glass, the temperature is first equalised by holding or "soaking" the glass at the annealing point for a period of time that depends on the maximum thickness of the glass. The glass is then slowly cooled at a rate that depends upon the maximum thickness of the glass, ranging from tens of degrees Celsius per hour (for thin slabs of glass) to fractions of a degree Celsius per hour (for thick slabs of glass). [ 2 ]
| https://en.wikipedia.org/wiki/Lehr_(glassmaking)
In calculus , Leibniz's notation , named in honor of the 17th-century German philosopher and mathematician Gottfried Wilhelm Leibniz , uses the symbols dx and dy to represent infinitely small (or infinitesimal ) increments of x and y , respectively, just as Δ x and Δ y represent finite increments of x and y , respectively. [ 1 ]
Consider y as a function of a variable x , or y = f ( x ) . If this is the case, then the derivative of y with respect to x , which later came to be viewed as the limit
lim_{Δx → 0} Δy/Δx = lim_{Δx → 0} ( f ( x + Δx ) − f ( x ))/Δx ,
was, according to Leibniz, the quotient of an infinitesimal increment of y by an infinitesimal increment of x , or
dy / dx = f ′( x ),
where the right hand side is Joseph-Louis Lagrange's notation for the derivative of f at x . The infinitesimal increments are called differentials . Related to this is the integral in which the infinitesimal increments are summed (e.g. to compute lengths, areas and volumes as sums of tiny pieces), for which Leibniz also supplied a closely related notation involving the same differentials, a notation whose efficiency proved decisive in the development of continental European mathematics.
Leibniz's concept of infinitesimals, long considered to be too imprecise to be used as a foundation of calculus, was eventually replaced by rigorous concepts developed by Weierstrass and others in the 19th century. Consequently, Leibniz's quotient notation was re-interpreted to stand for the limit of the modern definition. However, in many instances, the symbol did seem to act as an actual quotient would and its usefulness kept it popular even in the face of several competing notations. Several different formalisms were developed in the 20th century that can give rigorous meaning to notions of infinitesimals and infinitesimal displacements, including nonstandard analysis , tangent space , O notation and others.
The derivatives and integrals of calculus can be packaged into the modern theory of differential forms , in which the derivative is genuinely a ratio of two differentials, and the integral likewise behaves in exact accordance with Leibniz notation. However, this requires that derivative and integral first be defined by other means, and as such expresses the self-consistency and computational efficacy of the Leibniz notation rather than giving it a new foundation.
The Newton–Leibniz approach to infinitesimal calculus was introduced in the 17th century. While Newton worked with fluxions and fluents, Leibniz based his approach on generalizations of sums and differences. [ 2 ] Leibniz adapted the integral symbol ∫ {\displaystyle \textstyle \int } from the initial elongated s of the Latin word ſ umma ("sum") as written at the time. Viewing differences as the inverse operation of summation, [ 3 ] he used the symbol d , the first letter of the Latin differentia , to indicate this inverse operation. [ 2 ] Leibniz was fastidious about notation, having spent years experimenting, adjusting, rejecting and corresponding with other mathematicians about them. [ 4 ] Notations he used for the differential of y ranged successively from ω , l , and y / d until he finally settled on dy . [ 5 ] His integral sign first appeared publicly in the article " De Geometria Recondita et analysi indivisibilium atque infinitorum " ("On a hidden geometry and analysis of indivisibles and infinites"), published in Acta Eruditorum in June 1686, [ 6 ] [ 7 ] but he had been using it in private manuscripts at least since 1675. [ 8 ] [ 9 ] [ 10 ] Leibniz first used dx in the article " Nova Methodus pro Maximis et Minimis " also published in Acta Eruditorum in 1684. [ 11 ] While the symbol dx / dy does appear in private manuscripts of 1675, [ 12 ] [ 13 ] it does not appear in this form in either of the above-mentioned published works. Leibniz did, however, use forms such as dy ad dx and dy : dx in print. [ 11 ]
At the end of the 19th century, Weierstrass's followers ceased to take Leibniz's notation for derivatives and integrals literally. That is, mathematicians felt that the concept of infinitesimals contained logical contradictions in its development. A number of 19th century mathematicians (Weierstrass and others) found logically rigorous ways to treat derivatives and integrals without infinitesimals using limits as shown above, while Cauchy exploited both infinitesimals and limits (see Cours d'Analyse ). Nonetheless, Leibniz's notation is still in general use. Although the notation need not be taken literally, it is usually simpler than alternatives when the technique of separation of variables is used in the solution of differential equations. In physical applications, one may for example regard f ( x ) as measured in meters per second, and d x in seconds, so that f ( x ) d x is in meters, and so is the value of its definite integral. In that way the Leibniz notation is in harmony with dimensional analysis .
Suppose a dependent variable y represents a function f of an independent variable x , that is,
y = f ( x ) .
Then the derivative of the function f , in Leibniz's notation for differentiation , can be written as
dy / dx , df / dx , or (d/ dx ) f ( x ) .
The Leibniz expression, also, at times, written dy / dx , is one of several notations used for derivatives and derived functions. A common alternative is Lagrange's notation
f ′( x ) .
Another alternative is Newton's notation , often used for derivatives with respect to time (like velocity ), which requires placing a dot over the dependent variable (in this case, x ):
ẋ .
Lagrange's " prime " notation is especially useful in discussions of derived functions and has the advantage of having a natural way of denoting the value of the derived function at a specific value. However, the Leibniz notation has other virtues that have kept it popular through the years.
In its modern interpretation, the expression dy / dx should not be read as the division of two quantities dx and dy (as Leibniz had envisioned it); rather, the whole expression should be seen as a single symbol that is shorthand for
lim_{Δ x → 0} Δ y /Δ x
(note Δ vs. d , where Δ indicates a finite difference).
The expression may also be thought of as the application of the differential operator d / dx (again, a single symbol) to y , regarded as a function of x . This operator is written D in Euler's notation . Leibniz did not use this form, but his use of the symbol d corresponds fairly closely to this modern concept.
While there is traditionally no division implied by the notation (but see Nonstandard analysis ), the division-like notation is useful since in many situations, the derivative operator does behave like a division, making some results about derivatives easy to obtain and remember. [ 14 ] This notation owes its longevity to the fact that it seems to reach to the very heart of the geometrical and mechanical applications of the calculus. [ 15 ]
If y = f ( x ) , the n th derivative of f in Leibniz notation is given by, [ 16 ]
d^n y / dx^n .
This notation, for the second derivative , is obtained by using d / dx as an operator in the following way, [ 16 ]
d^2 y / dx^2 = (d/ dx )( dy / dx ) .
A third derivative, which might be written as d^3 y / dx^3 , can be obtained from (d/ dx )( d^2 y / dx^2 ) .
Similarly, the higher derivatives may be obtained inductively.
While it is possible, with carefully chosen definitions, to interpret dy / dx as a quotient of differentials , this should not be done with the higher order forms. [ 17 ] However, an alternative Leibniz notation for differentiation for higher orders allows for this.
This notation was, however, not used by Leibniz. In print he did not use multi-tiered notation nor numerical exponents (before 1695). To write x 3 for instance, he would write xxx , as was common in his time. The square of a differential, as it might appear in an arc length formula for instance, was written as dxdx . However, Leibniz did use his d notation as we would today use operators, namely he would write a second derivative as ddy and a third derivative as dddy . In 1695 Leibniz started to write d 2 ⋅ x and d 3 ⋅ x for ddx and dddx respectively, but l'Hôpital , in his textbook on calculus written around the same time, used Leibniz's original forms. [ 18 ]
Leibniz introduced the integral symbol for integration [ 19 ] (or "antidifferentiation") now commonly used today: ∫ {\displaystyle \displaystyle \int }
The notation was introduced in 1675 in his private writings; [ 20 ] [ 21 ] it first appeared publicly in the article " De Geometria Recondita et analysi indivisibilium atque infinitorum " (On a hidden geometry and analysis of indivisibles and infinites), published in Acta Eruditorum in June 1686. [ 22 ] [ 23 ] The symbol was based on the ſ ( long s ) character and was chosen because Leibniz thought of the integral as an infinite sum of infinitesimal summands .
One reason that Leibniz's notations in calculus have endured so long is that they permit the easy recall of the appropriate formulas used for differentiation and integration. For instance, the chain rule —suppose that the function g is differentiable at x and y = f ( u ) is differentiable at u = g ( x ) . Then the composite function y = f ( g ( x )) is differentiable at x and its derivative can be expressed in Leibniz notation as, [ 24 ]
dy / dx = ( dy / du ) · ( du / dx ) .
This can be generalized to deal with the composites of several appropriately defined and related functions, u 1 , u 2 , ..., u n and would be expressed as,
dy / dx = ( dy / du 1 ) · ( du 1 / du 2 ) ⋯ ( du n / dx ) .
Also, the integration by substitution formula may be expressed by [ 25 ]
∫ y dx = ∫ y ( dx / du ) du ,
where x is thought of as a function of a new variable u and the function y on the left is expressed in terms of x while on the right it is expressed in terms of u .
If y = f ( x ) where f is a differentiable function that is invertible , the derivative of the inverse function, if it exists, can be given by, [ 26 ]
dx / dy = 1 / ( dy / dx ) ,
where the parentheses are added to emphasize the fact that the derivative is not a fraction.
However, when solving differential equations, it is easy to think of the dy s and dx s as separable. One of the simplest types of differential equations is [ 27 ]
M ( x ) + N ( y ) dy / dx = 0 ,
where M and N are continuous functions. Solving (implicitly) such an equation can be done by examining the equation in its differential form ,
M ( x ) dx + N ( y ) dy = 0 ,
and integrating to obtain
∫ M ( x ) dx + ∫ N ( y ) dy = C , where C is an arbitrary constant.
Rewriting, when possible, a differential equation into this form and applying the above argument is known as the separation of variables technique for solving such equations.
In each of these instances the Leibniz notation for a derivative appears to act like a fraction, even though, in its modern interpretation, it isn't one.
In the 1960s, building upon earlier work by Edwin Hewitt and Jerzy Łoś , Abraham Robinson developed mathematical explanations for Leibniz's infinitesimals that were acceptable by contemporary standards of rigor, and developed nonstandard analysis based on these ideas. Robinson's methods are used by only a minority of mathematicians. Jerome Keisler wrote a first-year calculus textbook, Elementary calculus: an infinitesimal approach , based on Robinson's approach.
From the point of view of modern infinitesimal theory, Δ x is an infinitesimal x -increment, Δ y is the corresponding y -increment, and the derivative is the standard part of the infinitesimal ratio:
f ′( x ) = st( Δ y / Δ x ) .
Then one sets d x = Δ x {\displaystyle dx=\Delta x} , d y = f ′ ( x ) d x {\displaystyle dy=f'(x)dx} , so that by definition, f ′ ( x ) {\displaystyle f'(x)} is the ratio of dy by dx .
Similarly, although most mathematicians now view an integral
∫ f ( x ) dx
as a limit
lim_{Δ x → 0} ∑_ i f ( x i ) Δ x ,
where Δ x is the width of an interval containing x i , Leibniz viewed it as the sum (the integral sign denoted summation for him) of infinitely many infinitesimal quantities f ( x ) dx . From the viewpoint of nonstandard analysis, it is correct to view the integral as the standard part of such an infinite sum.
The trade-off needed to gain the precision of these concepts is that the set of real numbers must be extended to the set of hyperreal numbers .
Leibniz experimented with many different notations in various areas of mathematics. He felt that good notation was fundamental in the pursuit of mathematics. In a letter to l'Hôpital in 1693 he says: [ 28 ]
One of the secrets of analysis consists in the characteristic, that is, in the art of skilful employment of the available signs, and you will observe, Sir, by the small enclosure [on determinants] that Vieta and Descartes have not known all the mysteries.
He refined his criteria for good notation over time and came to realize the value of "adopting symbolisms which could be set up in a line like ordinary type, without the need of widening the spaces between lines to make room for symbols with sprawling parts." [ 29 ] For instance, in his early works he heavily used a vinculum to indicate grouping of symbols, but later he introduced the idea of using pairs of parentheses for this purpose, thus appeasing the typesetters who no longer had to widen the spaces between lines on a page and making the pages look more attractive. [ 30 ]
Many of the over 200 new symbols introduced by Leibniz are still in use today. [ 31 ] Besides the differentials dx , dy and the integral sign ( ∫ ) already mentioned, he also introduced the colon (:) for division, the middle dot (⋅) for multiplication, the geometric signs for similar (~) and congruence (≅), the use of Recorde's equal sign (=) for proportions (replacing Oughtred's :: notation) and the double-suffix notation for determinants. [ 28 ] | https://en.wikipedia.org/wiki/Leibniz's_notation
The Gottfried Wilhelm Leibniz Prize ( German : Förderpreis für deutsche Wissenschaftler im Gottfried Wilhelm Leibniz-Programm der Deutschen Forschungsgemeinschaft ), or Leibniz Prize , is awarded by the German Research Foundation to "exceptional scientists and academics for their outstanding achievements in the field of research". [ 1 ] Since 1986, up to ten prizes have been awarded annually to individuals or research groups working at a research institution in Germany or at a German research institution abroad. [ 2 ] It is considered the most important research award in Germany.
The prize is named after the German polymath and philosopher Gottfried Wilhelm Leibniz (1646–1716). It is one of the highest endowed research prizes in Germany with a maximum of €2.5 million per award. [ 2 ] Past prize winners include [ 3 ] Stefan Hell (2008), Gerd Faltings (1996), Peter Gruss (1994), Svante Pääbo (1992), Theodor W. Hänsch (1989), Erwin Neher (1987), Bert Sakmann (1987), Jürgen Habermas (1986), Hartmut Michel (1986), and Christiane Nüsslein-Volhard (1986).
2025: [ 4 ]
2024:
2023:
2022:
2021: [ 7 ]
2020: [ 8 ]
2019: [ 9 ]
2018: [ 10 ]
2017: [ 11 ]
2016: [ 12 ]
2015:
2014:
2013:
2012:
2011:
2010:
2009:
2008:
2007:
2006:
2005:
2004:
2003:
2002:
2001:
2000:
1999:
1998:
1997:
1996:
1995:
1994:
1993:
1992:
1991:
1990:
1989:
1988:
1987:
1986: | https://en.wikipedia.org/wiki/Leibniz_Prize |
In algebra , the Leibniz formula , named in honor of Gottfried Leibniz , expresses the determinant of a square matrix in terms of permutations of the matrix elements. If A is an n × n matrix, where a_{ij} is the entry in the i -th row and j -th column of A , the formula is [ 1 ]
det( A ) = ∑_{σ ∈ S_n} sgn(σ) a_{1σ(1)} a_{2σ(2)} ⋯ a_{nσ(n)} ,
where sgn {\displaystyle \operatorname {sgn} } is the sign function of permutations in the permutation group S n {\displaystyle S_{n}} , which returns + 1 {\displaystyle +1} and − 1 {\displaystyle -1} for even and odd permutations , respectively.
Another common notation used for the formula is in terms of the Levi-Civita symbol and makes use of the Einstein summation notation , where it becomes
det( A ) = ε_{i_1 ⋯ i_n} a_{1 i_1} ⋯ a_{n i_n} ,
which may be more familiar to physicists.
Directly evaluating the Leibniz formula from the definition requires Ω ( n ! ⋅ n ) {\displaystyle \Omega (n!\cdot n)} operations in general—that is, a number of operations asymptotically proportional to n {\displaystyle n} factorial —because n ! {\displaystyle n!} is the number of order- n {\displaystyle n} permutations. This is impractically difficult for even relatively small n {\displaystyle n} . Instead, the determinant can be evaluated in O ( n 3 ) {\displaystyle O(n^{3})} operations by forming the LU decomposition A = L U {\displaystyle A=LU} (typically via Gaussian elimination or similar methods), in which case det A = det L ⋅ det U {\displaystyle \det A=\det L\cdot \det U} and the determinants of the triangular matrices L {\displaystyle L} and U {\displaystyle U} are simply the products of their diagonal entries. (In practical applications of numerical linear algebra, however, explicit computation of the determinant is rarely required.) See, for example, Trefethen & Bau (1997) . The determinant can also be evaluated in fewer than O ( n 3 ) {\displaystyle O(n^{3})} operations by reducing the problem to matrix multiplication , but most such algorithms are not practical.
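As a concrete illustration of the cost, a direct evaluation of the Leibniz formula can be sketched as follows (practical only for very small n; the 3 × 3 matrix used here is an arbitrary example):

```c
/* Sketch: evaluates the Leibniz formula directly by summing over all n!
 * permutations (practical only for small n). The 3x3 example below should
 * give the same value as any other determinant method. */
#include <stdio.h>

#define N 3

static double det_leibniz(const double a[N][N]) {
    int perm[N];
    for (int i = 0; i < N; i++) perm[i] = i;
    double det = 0.0;
    for (;;) {
        /* sign of the permutation = (-1)^(number of inversions) */
        int inversions = 0;
        for (int i = 0; i < N; i++)
            for (int j = i + 1; j < N; j++)
                if (perm[i] > perm[j]) inversions++;
        double term = (inversions % 2 == 0) ? 1.0 : -1.0;
        for (int i = 0; i < N; i++)
            term *= a[i][perm[i]];        /* product a_{1,sigma(1)} ... a_{n,sigma(n)} */
        det += term;
        /* advance perm to the next permutation in lexicographic order */
        int i = N - 2;
        while (i >= 0 && perm[i] >= perm[i + 1]) i--;
        if (i < 0) break;
        int j = N - 1;
        while (perm[j] <= perm[i]) j--;
        int t = perm[i]; perm[i] = perm[j]; perm[j] = t;
        for (int l = i + 1, r = N - 1; l < r; l++, r--) {
            t = perm[l]; perm[l] = perm[r]; perm[r] = t;
        }
    }
    return det;
}

int main(void) {
    double a[N][N] = {{2, -1, 0}, {-1, 2, -1}, {0, -1, 2}};
    printf("det = %g\n", det_leibniz(a));   /* prints 4 */
    return 0;
}
```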
Theorem. There exists exactly one function F : M n ( K ) → K {\displaystyle F:M_{n}(\mathbb {K} )\rightarrow \mathbb {K} } which is alternating multilinear w.r.t. columns and such that F ( I ) = 1 {\displaystyle F(I)=1} .
Proof.
Uniqueness: Let F {\displaystyle F} be such a function, and let A = ( a i j ) i = 1 , … , n j = 1 , … , n {\displaystyle A=(a_{i}^{j})_{i=1,\dots ,n}^{j=1,\dots ,n}} be an n × n {\displaystyle n\times n} matrix. Call A j {\displaystyle A^{j}} the j {\displaystyle j} -th column of A {\displaystyle A} , i.e. A j = ( a i j ) i = 1 , … , n {\displaystyle A^{j}=(a_{i}^{j})_{i=1,\dots ,n}} , so that A = ( A 1 , … , A n ) . {\displaystyle A=\left(A^{1},\dots ,A^{n}\right).}
Also, let E k {\displaystyle E^{k}} denote the k {\displaystyle k} -th column vector of the identity matrix.
Now one writes each of the A j {\displaystyle A^{j}} 's in terms of the E k {\displaystyle E^{k}} , i.e.
As F {\displaystyle F} is multilinear, one has
From alternation it follows that any term with repeated indices is zero. The sum can therefore be restricted to tuples with non-repeating indices, i.e. permutations:
Because F is alternating, the columns E {\displaystyle E} can be swapped until it becomes the identity. The sign function sgn ( σ ) {\displaystyle \operatorname {sgn}(\sigma )} is defined to count the number of swaps necessary and account for the resulting sign change. One finally gets:
as F ( I ) {\displaystyle F(I)} is required to be equal to 1 {\displaystyle 1} .
Therefore no function besides the function defined by the Leibniz Formula can be a multilinear alternating function with F ( I ) = 1 {\displaystyle F\left(I\right)=1} .
Existence: We now show that F, where F is the function defined by the Leibniz formula, has these three properties.
Multilinear :
Alternating :
For any σ ∈ S n {\displaystyle \sigma \in S_{n}} let σ ′ {\displaystyle \sigma '} be the tuple equal to σ {\displaystyle \sigma } with the j 1 {\displaystyle j_{1}} and j 2 {\displaystyle j_{2}} indices switched.
Thus if A j 1 = A j 2 {\displaystyle A^{j_{1}}=A^{j_{2}}} then F ( … , A j 1 , … , A j 2 , … ) = 0 {\displaystyle F(\dots ,A^{j_{1}},\dots ,A^{j_{2}},\dots )=0} .
Finally, F ( I ) = 1 {\displaystyle F(I)=1} :
Thus the only alternating multilinear functions with F ( I ) = 1 {\displaystyle F(I)=1} are restricted to the function defined by the Leibniz formula, and it in fact also has these three properties. Hence the determinant can be defined as the only function det : M n ( K ) → K {\displaystyle \det :M_{n}(\mathbb {K} )\rightarrow \mathbb {K} } with these three properties. | https://en.wikipedia.org/wiki/Leibniz_formula_for_determinants |
In mathematics , the Leibniz formula for π , named after Gottfried Wilhelm Leibniz , states that π 4 = 1 − 1 3 + 1 5 − 1 7 + 1 9 − ⋯ = ∑ k = 0 ∞ ( − 1 ) k 2 k + 1 , {\displaystyle {\frac {\pi }{4}}=1-{\frac {1}{3}}+{\frac {1}{5}}-{\frac {1}{7}}+{\frac {1}{9}}-\cdots =\sum _{k=0}^{\infty }{\frac {(-1)^{k}}{2k+1}},}
an alternating series .
It is sometimes called the Madhava–Leibniz series as it was first discovered by the Indian mathematician Madhava of Sangamagrama or his followers in the 14th–15th century (see Madhava series ), [ 1 ] and was later independently rediscovered by James Gregory in 1671 and Leibniz in 1673. [ 2 ] The Taylor series for the inverse tangent function, often called Gregory's series , is arctan x = x − x 3 3 + x 5 5 − x 7 7 + ⋯ = ∑ k = 0 ∞ ( − 1 ) k x 2 k + 1 2 k + 1 . {\displaystyle \arctan x=x-{\frac {x^{3}}{3}}+{\frac {x^{5}}{5}}-{\frac {x^{7}}{7}}+\cdots =\sum _{k=0}^{\infty }{\frac {(-1)^{k}x^{2k+1}}{2k+1}}.}
The Leibniz formula is the special case arctan 1 = 1 4 π . {\textstyle \arctan 1={\tfrac {1}{4}}\pi .} [ 3 ]
It also is the Dirichlet L -series of the non-principal Dirichlet character of modulus 4 evaluated at s = 1 , {\displaystyle s=1,} and therefore the value β (1) of the Dirichlet beta function .
π 4 = arctan ( 1 ) = ∫ 0 1 1 1 + x 2 d x = ∫ 0 1 ( ∑ k = 0 n ( − 1 ) k x 2 k + ( − 1 ) n + 1 x 2 n + 2 1 + x 2 ) d x = ( ∑ k = 0 n ( − 1 ) k 2 k + 1 ) + ( − 1 ) n + 1 ( ∫ 0 1 x 2 n + 2 1 + x 2 d x ) {\displaystyle {\begin{aligned}{\frac {\pi }{4}}&=\arctan(1)\\&=\int _{0}^{1}{\frac {1}{1+x^{2}}}\,dx\\[8pt]&=\int _{0}^{1}\left(\sum _{k=0}^{n}(-1)^{k}x^{2k}+{\frac {(-1)^{n+1}\,x^{2n+2}}{1+x^{2}}}\right)\,dx\\[8pt]&=\left(\sum _{k=0}^{n}{\frac {(-1)^{k}}{2k+1}}\right)+(-1)^{n+1}\left(\int _{0}^{1}{\frac {x^{2n+2}}{1+x^{2}}}\,dx\right)\end{aligned}}}
Considering only the integral in the last term, we have: 0 ≤ ∫ 0 1 x 2 n + 2 1 + x 2 d x ≤ ∫ 0 1 x 2 n + 2 d x = 1 2 n + 3 → 0 as n → ∞ . {\displaystyle 0\leq \int _{0}^{1}{\frac {x^{2n+2}}{1+x^{2}}}\,dx\leq \int _{0}^{1}x^{2n+2}\,dx={\frac {1}{2n+3}}\;\rightarrow 0{\text{ as }}n\rightarrow \infty .}
Therefore, by the squeeze theorem , as n → ∞ , we are left with the Leibniz series: π 4 = ∑ k = 0 ∞ ( − 1 ) k 2 k + 1 {\displaystyle {\frac {\pi }{4}}=\sum _{k=0}^{\infty }{\frac {(-1)^{k}}{2k+1}}}
Let f ( z ) = ∑ n = 0 ∞ ( − 1 ) n 2 n + 1 z 2 n + 1 {\displaystyle f(z)=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{2n+1}}z^{2n+1}} , when | z | < 1 {\displaystyle |z|<1} , the series ∑ k = 0 ∞ ( − 1 ) k z 2 k {\displaystyle \sum _{k=0}^{\infty }(-1)^{k}z^{2k}} converges uniformly, then arctan ( z ) = ∫ 0 z 1 1 + t 2 d t = ∑ n = 0 ∞ ( − 1 ) n 2 n + 1 z 2 n + 1 = f ( z ) ( | z | < 1 ) . {\displaystyle \arctan(z)=\int _{0}^{z}{\frac {1}{1+t^{2}}}dt=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{2n+1}}z^{2n+1}=f(z)\ (|z|<1).}
Therefore, it remains to show that f ( z ) approaches f (1) as z → 1. The series ∑_{n=0}^{∞} (−1)^n/(2 n + 1) converges by the Leibniz (alternating series) test , and f ( z ) approaches f (1) from within the Stolz angle, so by Abel's theorem the limit of f ( z ) as z → 1 equals f (1), which completes the proof.
Leibniz's formula converges extremely slowly: it exhibits sublinear convergence . Calculating π to 10 correct decimal places using direct summation of the series requires precisely five billion terms because 4/(2 k + 1) < 10^−10 for k > 2 × 10^10 − 1/2 (one needs to apply the Calabrese error bound ). To get 4 correct decimal places (error of 0.00005) one needs 5000 terms. [ 4 ] Error bounds even better than those of Calabrese or Johnsonbaugh are available. [ 5 ]
However, the Leibniz formula can be used to calculate π to high precision (hundreds of digits or more) using various convergence acceleration techniques. For example, the Shanks transformation , Euler transform or Van Wijngaarden transformation , which are general methods for alternating series, can be applied effectively to the partial sums of the Leibniz series. Further, combining terms pairwise gives the non-alternating series π 4 = ∑ n = 0 ∞ ( 1 4 n + 1 − 1 4 n + 3 ) = ∑ n = 0 ∞ 2 ( 4 n + 1 ) ( 4 n + 3 ) {\displaystyle {\frac {\pi }{4}}=\sum _{n=0}^{\infty }\left({\frac {1}{4n+1}}-{\frac {1}{4n+3}}\right)=\sum _{n=0}^{\infty }{\frac {2}{(4n+1)(4n+3)}}}
which can be evaluated to high precision from a small number of terms using Richardson extrapolation or the Euler–Maclaurin formula . This series can also be transformed into an integral by means of the Abel–Plana formula and evaluated using techniques for numerical integration .
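A minimal numerical illustration of how much even the crudest acceleration helps (this sketch simply averages two consecutive partial sums, the first step of the Euler transform, and is not one of the more powerful methods named above):

```c
/* Sketch: partial sums of the Leibniz series for pi/4 versus the simplest
 * acceleration step, averaging two consecutive partial sums. */
#include <stdio.h>
#include <math.h>

int main(void) {
    const int n = 5000;                      /* number of terms, as in the text */
    const double pi4 = atan(1.0);            /* reference value of pi/4 */
    double s = 0.0, s_prev = 0.0;
    for (int k = 0; k < n; k++) {
        s_prev = s;
        s += (k % 2 == 0 ? 1.0 : -1.0) / (2 * k + 1);
    }
    printf("error of S_n             = %.3e\n", fabs(s - pi4));                  /* about 5e-5 */
    printf("error of (S_n+S_{n-1})/2 = %.3e\n", fabs(0.5 * (s + s_prev) - pi4)); /* about 5e-9 */
    return 0;
}
```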
If the series is truncated at the right time, the decimal expansion of the approximation will agree with that of π for many more digits, except for isolated digits or digit groups. For example, taking five million terms yields 3.141592 4 _ 5358979323846 4 _ 643383279502 7 _ 841971693993 873 _ 058... {\displaystyle 3.141592{\underline {4}}5358979323846{\underline {4}}643383279502{\underline {7}}841971693993{\underline {873}}058...}
where the underlined digits are wrong. The errors can in fact be predicted; they are generated by the Euler numbers E n according to the asymptotic formula π 2 − 2 ∑ k = 1 N / 2 ( − 1 ) k − 1 2 k − 1 ∼ ∑ m = 0 ∞ E 2 m N 2 m + 1 {\displaystyle {\frac {\pi }{2}}-2\sum _{k=1}^{N/2}{\frac {(-1)^{k-1}}{2k-1}}\sim \sum _{m=0}^{\infty }{\frac {E_{2m}}{N^{2m+1}}}}
where N is an integer divisible by 4. If N is chosen to be a power of ten, each term in the right sum becomes a finite decimal fraction. The formula is a special case of the Euler–Boole summation formula for alternating series, providing yet another example of a convergence acceleration technique that can be applied to the Leibniz series. In 1992, Jonathan Borwein and Mark Limber used the first thousand Euler numbers to calculate π to 5,263 decimal places with the Leibniz formula. [ 6 ]
The Leibniz formula can be interpreted as a Dirichlet series using the unique non-principal Dirichlet character modulo 4. As with other Dirichlet series, this allows the infinite sum to be converted to an infinite product with one term for each prime number . Such a product is called an Euler product . It is: π 4 = ( ∏ p ≡ 1 ( mod 4 ) p p − 1 ) ( ∏ p ≡ 3 ( mod 4 ) p p + 1 ) = 3 4 ⋅ 5 4 ⋅ 7 8 ⋅ 11 12 ⋅ 13 12 ⋅ 17 16 ⋅ 19 20 ⋅ 23 24 ⋅ 29 28 ⋯ {\displaystyle {\begin{aligned}{\frac {\pi }{4}}&={\biggl (}\prod _{p\,\equiv \,1\ ({\text{mod}}\ 4)}{\frac {p}{p-1}}{\biggr )}{\biggl (}\prod _{p\,\equiv \,3\ ({\text{mod}}\ 4)}{\frac {p}{p+1}}{\biggr )}\\[7mu]&={\frac {3}{4}}\cdot {\frac {5}{4}}\cdot {\frac {7}{8}}\cdot {\frac {11}{12}}\cdot {\frac {13}{12}}\cdot {\frac {17}{16}}\cdot {\frac {19}{20}}\cdot {\frac {23}{24}}\cdot {\frac {29}{28}}\cdots \end{aligned}}} In this product, each term is a superparticular ratio , each numerator is an odd prime number, and each denominator is the nearest multiple of 4 to the numerator. [ 7 ] The product is conditionally convergent; its terms must be taken in order of increasing p . | https://en.wikipedia.org/wiki/Leibniz_formula_for_π |
In calculus , the Leibniz integral rule for differentiation under the integral sign, named after Gottfried Wilhelm Leibniz , states that for an integral of the form ∫ a ( x ) b ( x ) f ( x , t ) d t , {\displaystyle \int _{a(x)}^{b(x)}f(x,t)\,dt,} where − ∞ < a ( x ) , b ( x ) < ∞ {\displaystyle -\infty <a(x),b(x)<\infty } and the integrands are functions dependent on x , {\displaystyle x,} the derivative of this integral is expressible as d d x ( ∫ a ( x ) b ( x ) f ( x , t ) d t ) = f ( x , b ( x ) ) ⋅ d d x b ( x ) − f ( x , a ( x ) ) ⋅ d d x a ( x ) + ∫ a ( x ) b ( x ) ∂ ∂ x f ( x , t ) d t {\displaystyle {\begin{aligned}&{\frac {d}{dx}}\left(\int _{a(x)}^{b(x)}f(x,t)\,dt\right)\\&=f{\big (}x,b(x){\big )}\cdot {\frac {d}{dx}}b(x)-f{\big (}x,a(x){\big )}\cdot {\frac {d}{dx}}a(x)+\int _{a(x)}^{b(x)}{\frac {\partial }{\partial x}}f(x,t)\,dt\end{aligned}}} where the partial derivative ∂ ∂ x {\displaystyle {\tfrac {\partial }{\partial x}}} indicates that inside the integral, only the variation of f ( x , t ) {\displaystyle f(x,t)} with x {\displaystyle x} is considered in taking the derivative. [ 1 ]
In the special case where the functions a ( x ) {\displaystyle a(x)} and b ( x ) {\displaystyle b(x)} are constants a ( x ) = a {\displaystyle a(x)=a} and b ( x ) = b {\displaystyle b(x)=b} with values that do not depend on x , {\displaystyle x,} this simplifies to: d d x ( ∫ a b f ( x , t ) d t ) = ∫ a b ∂ ∂ x f ( x , t ) d t . {\displaystyle {\frac {d}{dx}}\left(\int _{a}^{b}f(x,t)\,dt\right)=\int _{a}^{b}{\frac {\partial }{\partial x}}f(x,t)\,dt.}
If a ( x ) = a {\displaystyle a(x)=a} is constant and b ( x ) = x {\displaystyle b(x)=x} , which is another common situation (for example, in the proof of Cauchy's repeated integration formula ), the Leibniz integral rule becomes: d d x ( ∫ a x f ( x , t ) d t ) = f ( x , x ) + ∫ a x ∂ ∂ x f ( x , t ) d t , {\displaystyle {\frac {d}{dx}}\left(\int _{a}^{x}f(x,t)\,dt\right)=f{\big (}x,x{\big )}+\int _{a}^{x}{\frac {\partial }{\partial x}}f(x,t)\,dt,}
This important result may, under certain conditions, be used to interchange the integral and partial differential operators , and is particularly useful in the differentiation of integral transforms . An example of such is the moment generating function in probability theory , a variation of the Laplace transform , which can be differentiated to generate the moments of a random variable . Whether Leibniz's integral rule applies is essentially a question about the interchange of limits .
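As a quick sanity check of the rule, the following sketch (Python with NumPy and SciPy; the integrand sin(xt), the limits a(x) = x and b(x) = x², and the evaluation point x = 1.3 are arbitrary choices made for this illustration) compares a central finite-difference derivative of the integral with the right-hand side of the formula:

```python
import numpy as np
from scipy.integrate import quad

# Arbitrary illustrative integrand and limits of integration.
f  = lambda x, t: np.sin(x * t)
fx = lambda x, t: t * np.cos(x * t)          # partial derivative of f with respect to x
a, da = (lambda x: x), (lambda x: 1.0)
b, db = (lambda x: x ** 2), (lambda x: 2.0 * x)

def G(x):
    """The parameter-dependent integral with variable limits."""
    return quad(lambda t: f(x, t), a(x), b(x))[0]

def leibniz_rhs(x):
    """Right-hand side of the Leibniz integral rule."""
    boundary = f(x, b(x)) * db(x) - f(x, a(x)) * da(x)
    interior = quad(lambda t: fx(x, t), a(x), b(x))[0]
    return boundary + interior

x0, h = 1.3, 1e-6
print((G(x0 + h) - G(x0 - h)) / (2 * h))     # finite-difference derivative
print(leibniz_rhs(x0))                       # value given by the rule; the two agree closely
```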
Theorem — Let f ( x , t ) {\displaystyle f(x,t)} be a function such that both f ( x , t ) {\displaystyle f(x,t)} and its partial derivative f x ( x , t ) {\displaystyle f_{x}(x,t)} are continuous in t {\displaystyle t} and x {\displaystyle x} in some region of the x t {\displaystyle xt} -plane, including a ( x ) ≤ t ≤ b ( x ) , {\displaystyle a(x)\leq t\leq b(x),} x 0 ≤ x ≤ x 1 . {\displaystyle x_{0}\leq x\leq x_{1}.} Also suppose that the functions a ( x ) {\displaystyle a(x)} and b ( x ) {\displaystyle b(x)} are both continuous and both have continuous derivatives for x 0 ≤ x ≤ x 1 . {\displaystyle x_{0}\leq x\leq x_{1}.} Then, for x 0 ≤ x ≤ x 1 , {\displaystyle x_{0}\leq x\leq x_{1},} d d x ( ∫ a ( x ) b ( x ) f ( x , t ) d t ) = f ( x , b ( x ) ) ⋅ d d x b ( x ) − f ( x , a ( x ) ) ⋅ d d x a ( x ) + ∫ a ( x ) b ( x ) ∂ ∂ x f ( x , t ) d t . {\displaystyle {\frac {d}{dx}}\left(\int _{a(x)}^{b(x)}f(x,t)\,dt\right)=f{\big (}x,b(x){\big )}\cdot {\frac {d}{dx}}b(x)-f{\big (}x,a(x){\big )}\cdot {\frac {d}{dx}}a(x)+\int _{a(x)}^{b(x)}{\frac {\partial }{\partial x}}f(x,t)\,dt.}
The right hand side may also be written using Lagrange's notation as: f ( x , b ( x ) ) b ′ ( x ) − f ( x , a ( x ) ) a ′ ( x ) + ∫ a ( x ) b ( x ) f x ( x , t ) d t . {\textstyle f(x,b(x))\,b^{\prime }(x)-f(x,a(x))\,a^{\prime }(x)+\displaystyle \int _{a(x)}^{b(x)}f_{x}(x,t)\,dt.}
Stronger versions of the theorem only require that the partial derivative exist almost everywhere , and not that it be continuous. [ 2 ] This formula is the general form of the Leibniz integral rule and can be derived using the fundamental theorem of calculus . The (first) fundamental theorem of calculus is just the particular case of the above formula where a ( x ) = a ∈ R {\displaystyle a(x)=a\in \mathbb {R} } is constant, b ( x ) = x , {\displaystyle b(x)=x,} and f ( x , t ) = f ( t ) {\displaystyle f(x,t)=f(t)} does not depend on x . {\displaystyle x.}
If both upper and lower limits are taken as constants, then the formula takes the shape of an operator equation: I t ∂ x = ∂ x I t {\displaystyle {\mathcal {I}}_{t}\partial _{x}=\partial _{x}{\mathcal {I}}_{t}} where ∂ x {\displaystyle \partial _{x}} is the partial derivative with respect to x {\displaystyle x} and I t {\displaystyle {\mathcal {I}}_{t}} is the integral operator with respect to t {\displaystyle t} over a fixed interval . That is, it is related to the symmetry of second derivatives , but involving integrals as well as derivatives. This case is also known as the Leibniz integral rule.
The following three basic theorems on the interchange of limits are essentially equivalent: the interchange of a derivative and an integral (differentiation under the integral sign, i.e., the Leibniz integral rule); the interchange of the order of partial derivatives (the symmetry of second derivatives); and the interchange of the order of integration (Fubini's theorem).
A Leibniz integral rule for a two dimensional surface moving in three dimensional space is [ 3 ] [ 4 ]
d d t ∬ Σ ( t ) F ( r , t ) ⋅ d A = ∬ Σ ( t ) ( F t ( r , t ) + [ ∇ ⋅ F ( r , t ) ] v ) ⋅ d A − ∮ ∂ Σ ( t ) [ v × F ( r , t ) ] ⋅ d s , {\displaystyle {\frac {d}{dt}}\iint _{\Sigma (t)}\mathbf {F} (\mathbf {r} ,t)\cdot d\mathbf {A} =\iint _{\Sigma (t)}\left(\mathbf {F} _{t}(\mathbf {r} ,t)+\left[\nabla \cdot \mathbf {F} (\mathbf {r} ,t)\right]\mathbf {v} \right)\cdot d\mathbf {A} -\oint _{\partial \Sigma (t)}\left[\mathbf {v} \times \mathbf {F} (\mathbf {r} ,t)\right]\cdot d\mathbf {s} ,}
where F ( r , t ) is a vector field at the spatial position r at time t , F t denotes its partial derivative with respect to t , Σ( t ) is the moving surface bounded by the closed curve ∂Σ( t ), d A is a vector element of the surface, d s is a vector element of the boundary curve, and v is the velocity of motion of the surface.
The Leibniz integral rule can be extended to multidimensional integrals. In two and three dimensions, this rule is better known from the field of fluid dynamics as the Reynolds transport theorem : d d t ∫ D ( t ) F ( x , t ) d V = ∫ D ( t ) ∂ ∂ t F ( x , t ) d V + ∫ ∂ D ( t ) F ( x , t ) v b ⋅ d Σ , {\displaystyle {\frac {d}{dt}}\int _{D(t)}F(\mathbf {x} ,t)\,dV=\int _{D(t)}{\frac {\partial }{\partial t}}F(\mathbf {x} ,t)\,dV+\int _{\partial D(t)}F(\mathbf {x} ,t)\mathbf {v} _{b}\cdot d\mathbf {\Sigma } ,}
where F ( x , t ) {\displaystyle F(\mathbf {x} ,t)} is a scalar function, D ( t ) and ∂ D ( t ) denote a time-varying connected region of R 3 and its boundary, respectively, v b {\displaystyle \mathbf {v} _{b}} is the Eulerian velocity of the boundary (see Lagrangian and Eulerian coordinates ) and d Σ = n dS is the vector surface element, with n the outward unit normal and dS the scalar area element.
The general statement of the Leibniz integral rule requires concepts from differential geometry , specifically differential forms , exterior derivatives , wedge products and interior products . With those tools, the Leibniz integral rule in n dimensions is [ 4 ] d d t ∫ Ω ( t ) ω = ∫ Ω ( t ) i v ( d x ω ) + ∫ ∂ Ω ( t ) i v ω + ∫ Ω ( t ) ω ˙ , {\displaystyle {\frac {d}{dt}}\int _{\Omega (t)}\omega =\int _{\Omega (t)}i_{\mathbf {v} }(d_{x}\omega )+\int _{\partial \Omega (t)}i_{\mathbf {v} }\omega +\int _{\Omega (t)}{\dot {\omega }},} where Ω( t ) is a time-varying domain of integration, ω is a p -form, v = ∂ x ∂ t {\displaystyle \mathbf {v} ={\frac {\partial \mathbf {x} }{\partial t}}} is the vector field of the velocity, i v {\displaystyle i_{\mathbf {v} }} denotes the interior product with v {\displaystyle \mathbf {v} } , d x ω is the exterior derivative of ω with respect to the space variables only and ω ˙ {\displaystyle {\dot {\omega }}} is the time derivative of ω .
The above formula can be deduced directly from the fact that the Lie derivative interacts nicely with integration of differential forms d d t ∫ Ω ( t ) ω = ∫ Ω ( t ) L Ψ ω , {\displaystyle {\frac {d}{dt}}\int _{\Omega (t)}\omega =\int _{\Omega (t)}{\mathcal {L}}_{\Psi }\omega ,} for the spacetime manifold M = R × R 3 {\displaystyle M=\mathbb {R} \times \mathbb {R} ^{3}} , where the spacetime exterior derivative of ω {\displaystyle \omega } is d ω = d t ∧ ω ˙ + d x ω {\displaystyle d\omega =dt\wedge {\dot {\omega }}+d_{x}\omega } and the surface Ω ( t ) {\displaystyle \Omega (t)} has spacetime velocity field Ψ = ∂ ∂ t + v {\displaystyle \Psi ={\frac {\partial }{\partial t}}+\mathbf {v} } .
Since ω {\displaystyle \omega } has only spatial components, the Lie derivative can be simplified using Cartan's magic formula , to L Ψ ω = L v ω + L ∂ ∂ t ω = i v d ω + d i v ω + i ∂ ∂ t d ω = i v d x ω + d i v ω + ω ˙ {\displaystyle {\mathcal {L}}_{\Psi }\omega ={\mathcal {L}}_{\mathbf {v} }\omega +{\mathcal {L}}_{\frac {\partial }{\partial t}}\omega =i_{\mathbf {v} }d\omega +di_{\mathbf {v} }\omega +i_{\frac {\partial }{\partial t}}d\omega =i_{\mathbf {v} }d_{x}\omega +di_{\mathbf {v} }\omega +{\dot {\omega }}} which, after integrating over Ω ( t ) {\displaystyle \Omega (t)} and using generalized Stokes' theorem on the second term, reduces to the three desired terms.
Let X {\displaystyle X} be an open subset of R {\displaystyle \mathbf {R} } , and Ω {\displaystyle \Omega } be a measure space . Suppose f : X × Ω → R {\displaystyle f\colon X\times \Omega \to \mathbf {R} } satisfies the following conditions: [ 5 ] [ 6 ] [ 2 ] (1) f ( x , ω ) {\displaystyle f(x,\omega )} is a Lebesgue-integrable function of ω {\displaystyle \omega } for each x ∈ X {\displaystyle x\in X} ; (2) for almost all ω ∈ Ω {\displaystyle \omega \in \Omega } , the partial derivative f x ( x , ω ) {\displaystyle f_{x}(x,\omega )} exists for all x ∈ X {\displaystyle x\in X} ; (3) there is an integrable function θ : Ω → R {\displaystyle \theta \colon \Omega \to \mathbf {R} } such that | f x ( x , ω ) | ≤ θ ( ω ) {\displaystyle |f_{x}(x,\omega )|\leq \theta (\omega )} for all x ∈ X {\displaystyle x\in X} and almost every ω ∈ Ω {\displaystyle \omega \in \Omega } .
Then, for all x ∈ X {\displaystyle x\in X} , d d x ∫ Ω f ( x , ω ) d ω = ∫ Ω f x ( x , ω ) d ω . {\displaystyle {\frac {d}{dx}}\int _{\Omega }f(x,\omega )\,d\omega =\int _{\Omega }f_{x}(x,\omega )\,d\omega .}
The proof relies on the dominated convergence theorem and the mean value theorem (details below).
We first prove the case of constant limits of integration a and b .
We use Fubini's theorem to change the order of integration. For every x and h , such that h > 0 and both x and x + h are within [ x 0 , x 1 ] , we have: ∫ x x + h ∫ a b f x ( x , t ) d t d x = ∫ a b ∫ x x + h f x ( x , t ) d x d t = ∫ a b ( f ( x + h , t ) − f ( x , t ) ) d t = ∫ a b f ( x + h , t ) d t − ∫ a b f ( x , t ) d t {\displaystyle {\begin{aligned}\int _{x}^{x+h}\int _{a}^{b}f_{x}(x,t)\,dt\,dx&=\int _{a}^{b}\int _{x}^{x+h}f_{x}(x,t)\,dx\,dt\\[2ex]&=\int _{a}^{b}\left(f(x+h,t)-f(x,t)\right)\,dt\\[2ex]&=\int _{a}^{b}f(x+h,t)\,dt-\int _{a}^{b}f(x,t)\,dt\end{aligned}}}
Note that the integrals at hand are well defined since f x ( x , t ) {\displaystyle f_{x}(x,t)} is continuous on the closed rectangle [ x 0 , x 1 ] × [ a , b ] {\displaystyle [x_{0},x_{1}]\times [a,b]} and thus also uniformly continuous there; consequently its integrals with respect to either t or x are continuous functions of the other variable and are themselves integrable (essentially because for uniformly continuous functions one may pass the limit through the integration sign, as elaborated below).
Therefore: ∫ a b f ( x + h , t ) d t − ∫ a b f ( x , t ) d t h = 1 h ∫ x x + h ∫ a b f x ( x , t ) d t d x = F ( x + h ) − F ( x ) h {\displaystyle {\begin{aligned}{\frac {\int _{a}^{b}f(x+h,t)\,dt-\int _{a}^{b}f(x,t)\,dt}{h}}&={\frac {1}{h}}\int _{x}^{x+h}\int _{a}^{b}f_{x}(x,t)\,dt\,dx\\[2ex]&={\frac {F(x+h)-F(x)}{h}}\end{aligned}}}
Where we have defined: F ( u ) := ∫ x 0 u ∫ a b f x ( x , t ) d t d x {\displaystyle F(u):=\int _{x_{0}}^{u}\int _{a}^{b}f_{x}(x,t)\,dt\,dx} (we may replace x 0 here by any other point between x 0 and x )
F is differentiable with derivative ∫ a b f x ( x , t ) d t {\textstyle \int _{a}^{b}f_{x}(x,t)\,dt} , so we can take the limit where h approaches zero. For the left hand side this limit is: d d x ∫ a b f ( x , t ) d t {\displaystyle {\frac {d}{dx}}\int _{a}^{b}f(x,t)\,dt}
For the right hand side, we get: F ′ ( x ) = ∫ a b f x ( x , t ) d t {\displaystyle F'(x)=\int _{a}^{b}f_{x}(x,t)\,dt} And we thus prove the desired result: d d x ∫ a b f ( x , t ) d t = ∫ a b f x ( x , t ) d t {\displaystyle {\frac {d}{dx}}\int _{a}^{b}f(x,t)\,dt=\int _{a}^{b}f_{x}(x,t)\,dt}
If the integrals at hand are Lebesgue integrals , we may use the bounded convergence theorem (valid for these integrals, but not for Riemann integrals ) in order to show that the limit can be passed through the integral sign.
Note that this proof is weaker in the sense that it only shows that f x ( x , t ) is Lebesgue integrable, but not that it is Riemann integrable. In the former (stronger) proof, if f ( x , t ) is Riemann integrable, then so is f x ( x , t ) (and thus is obviously also Lebesgue integrable).
Let u ( x ) = ∫ a b f ( x , t ) d t . {\displaystyle u(x)=\int _{a}^{b}f(x,t)\,dt.} ( 1 )
By the definition of the derivative, u ′ ( x ) = lim h → 0 u ( x + h ) − u ( x ) h . {\displaystyle u'(x)=\lim _{h\to 0}{\frac {u(x+h)-u(x)}{h}}.} ( 2 )
Substitute equation ( 1 ) into equation ( 2 ). The difference of two integrals equals the integral of the difference, and 1/ h is a constant, so u ′ ( x ) = lim h → 0 ∫ a b f ( x + h , t ) d t − ∫ a b f ( x , t ) d t h = lim h → 0 ∫ a b ( f ( x + h , t ) − f ( x , t ) ) d t h = lim h → 0 ∫ a b f ( x + h , t ) − f ( x , t ) h d t . {\displaystyle {\begin{aligned}u'(x)&=\lim _{h\to 0}{\frac {\int _{a}^{b}f(x+h,t)\,dt-\int _{a}^{b}f(x,t)\,dt}{h}}\\&=\lim _{h\to 0}{\frac {\int _{a}^{b}\left(f(x+h,t)-f(x,t)\right)\,dt}{h}}\\&=\lim _{h\to 0}\int _{a}^{b}{\frac {f(x+h,t)-f(x,t)}{h}}\,dt.\end{aligned}}}
We now show that the limit can be passed through the integral sign.
We claim that the passage of the limit under the integral sign is valid by the bounded convergence theorem (a corollary of the dominated convergence theorem ). For each δ > 0, consider the difference quotient f δ ( x , t ) = f ( x + δ , t ) − f ( x , t ) δ . {\displaystyle f_{\delta }(x,t)={\frac {f(x+\delta ,t)-f(x,t)}{\delta }}.} For t fixed, the mean value theorem implies there exists z in the interval [ x , x + δ ] such that f δ ( x , t ) = f x ( z , t ) . {\displaystyle f_{\delta }(x,t)=f_{x}(z,t).} Continuity of f x ( x , t ) and compactness of the domain together imply that f x ( x , t ) is bounded. The above application of the mean value theorem therefore gives a uniform (independent of t {\displaystyle t} ) bound on f δ ( x , t ) {\displaystyle f_{\delta }(x,t)} . The difference quotients converge pointwise to the partial derivative f x by the assumption that the partial derivative exists.
The above argument shows that for every sequence { δ n } → 0, the sequence { f δ n ( x , t ) } {\displaystyle \{f_{\delta _{n}}(x,t)\}} is uniformly bounded and converges pointwise to f x . The bounded convergence theorem states that if a sequence of functions on a set of finite measure is uniformly bounded and converges pointwise, then passage of the limit under the integral is valid. In particular, the limit and integral may be exchanged for every sequence { δ n } → 0. Therefore, the limit as δ → 0 may be passed through the integral sign.
If instead we only know that there is an integrable function θ : Ω → R {\displaystyle \theta \colon \Omega \to \mathbf {R} } such that | f x ( x , ω ) | ≤ θ ( ω ) {\displaystyle |f_{x}(x,\omega )|\leq \theta (\omega )} , then | f δ ( x , ω ) | = | f x ( z , ω ) | ≤ θ ( ω ) {\displaystyle |f_{\delta }(x,\omega )|=|f_{x}(z,\omega )|\leq \theta (\omega )} and the dominated convergence theorem allows us to move the limit inside of the integral.
For a continuous real valued function g of one real variable , and real valued differentiable functions f 1 {\displaystyle f_{1}} and f 2 {\displaystyle f_{2}} of one real variable, d d x ( ∫ f 1 ( x ) f 2 ( x ) g ( t ) d t ) = g ( f 2 ( x ) ) f 2 ′ ( x ) − g ( f 1 ( x ) ) f 1 ′ ( x ) . {\displaystyle {\frac {d}{dx}}\left(\int _{f_{1}(x)}^{f_{2}(x)}g(t)\,dt\right)=g\left(f_{2}(x)\right){f_{2}'(x)}-g\left(f_{1}(x)\right){f_{1}'(x)}.}
This follows from the chain rule and the First Fundamental Theorem of Calculus . Define G ( x ) = ∫ f 1 ( x ) f 2 ( x ) g ( t ) d t , {\displaystyle G(x)=\int _{f_{1}(x)}^{f_{2}(x)}g(t)\,dt,} and Γ ( x ) = ∫ 0 x g ( t ) d t . {\displaystyle \Gamma (x)=\int _{0}^{x}g(t)\,dt.} (The lower limit just has to be some number in the domain of g {\displaystyle g} )
Then, G ( x ) {\displaystyle G(x)} can be written as a composition : G ( x ) = ( Γ ∘ f 2 ) ( x ) − ( Γ ∘ f 1 ) ( x ) {\displaystyle G(x)=(\Gamma \circ f_{2})(x)-(\Gamma \circ f_{1})(x)} . The Chain Rule then implies that G ′ ( x ) = Γ ′ ( f 2 ( x ) ) f 2 ′ ( x ) − Γ ′ ( f 1 ( x ) ) f 1 ′ ( x ) . {\displaystyle G'(x)=\Gamma '\left(f_{2}(x)\right)f_{2}'(x)-\Gamma '\left(f_{1}(x)\right)f_{1}'(x).} By the First Fundamental Theorem of Calculus , Γ ′ ( x ) = g ( x ) {\displaystyle \Gamma '(x)=g(x)} . Therefore, substituting this result above, we get the desired equation: G ′ ( x ) = g ( f 2 ( x ) ) f 2 ′ ( x ) − g ( f 1 ( x ) ) f 1 ′ ( x ) . {\displaystyle G'(x)=g\left(f_{2}(x)\right){f_{2}'(x)}-g\left(f_{1}(x)\right){f_{1}'(x)}.}
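A symbolic spot-check of this identity for one concrete choice of functions (Python with SymPy; g(t) = cos t, f1(x) = x² and f2(x) = sin x are arbitrary illustrative choices):

```python
import sympy as sp

x, t = sp.symbols('x t')

# Arbitrary illustrative choices of g, f1 and f2.
g  = sp.cos(t)
f1 = x ** 2
f2 = sp.sin(x)

lhs = sp.diff(sp.integrate(g, (t, f1, f2)), x)                 # d/dx of the integral
rhs = g.subs(t, f2) * sp.diff(f2, x) - g.subs(t, f1) * sp.diff(f1, x)

print(sp.simplify(lhs - rhs))   # prints 0, confirming the identity for this example
```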
Note: This form can be particularly useful if the expression to be differentiated is of the form: ∫ f 1 ( x ) f 2 ( x ) h ( x ) g ( t ) d t {\displaystyle \int _{f_{1}(x)}^{f_{2}(x)}h(x)\,g(t)\,dt} Because h ( x ) {\displaystyle h(x)} does not depend on the limits of integration, it may be moved out from under the integral sign, and the above form may be used with the Product rule , i.e., d d x ( ∫ f 1 ( x ) f 2 ( x ) h ( x ) g ( t ) d t ) = d d x ( h ( x ) ∫ f 1 ( x ) f 2 ( x ) g ( t ) d t ) = h ′ ( x ) ∫ f 1 ( x ) f 2 ( x ) g ( t ) d t + h ( x ) d d x ( ∫ f 1 ( x ) f 2 ( x ) g ( t ) d t ) {\displaystyle {\begin{aligned}{\frac {d}{dx}}\left(\int _{f_{1}(x)}^{f_{2}(x)}h(x)g(t)\,dt\right)&={\frac {d}{dx}}\left(h(x)\int _{f_{1}(x)}^{f_{2}(x)}g(t)\,dt\right)\\&=h'(x)\int _{f_{1}(x)}^{f_{2}(x)}g(t)\,dt+h(x){\frac {d}{dx}}\left(\int _{f_{1}(x)}^{f_{2}(x)}g(t)\,dt\right)\end{aligned}}}
Set φ ( α ) = ∫ a b f ( x , α ) d x , {\displaystyle \varphi (\alpha )=\int _{a}^{b}f(x,\alpha )\,dx,} where a and b are functions of α that exhibit increments Δ a and Δ b , respectively, when α is increased by Δ α . Then, Δ φ = φ ( α + Δ α ) − φ ( α ) = ∫ a + Δ a b + Δ b f ( x , α + Δ α ) d x − ∫ a b f ( x , α ) d x = ∫ a + Δ a a f ( x , α + Δ α ) d x + ∫ a b f ( x , α + Δ α ) d x + ∫ b b + Δ b f ( x , α + Δ α ) d x − ∫ a b f ( x , α ) d x = − ∫ a a + Δ a f ( x , α + Δ α ) d x + ∫ a b [ f ( x , α + Δ α ) − f ( x , α ) ] d x + ∫ b b + Δ b f ( x , α + Δ α ) d x . {\displaystyle {\begin{aligned}\Delta \varphi &=\varphi (\alpha +\Delta \alpha )-\varphi (\alpha )\\[4pt]&=\int _{a+\Delta a}^{b+\Delta b}f(x,\alpha +\Delta \alpha )\,dx-\int _{a}^{b}f(x,\alpha )\,dx\\[4pt]&=\int _{a+\Delta a}^{a}f(x,\alpha +\Delta \alpha )\,dx+\int _{a}^{b}f(x,\alpha +\Delta \alpha )\,dx+\int _{b}^{b+\Delta b}f(x,\alpha +\Delta \alpha )\,dx-\int _{a}^{b}f(x,\alpha )\,dx\\[4pt]&=-\int _{a}^{a+\Delta a}f(x,\alpha +\Delta \alpha )\,dx+\int _{a}^{b}[f(x,\alpha +\Delta \alpha )-f(x,\alpha )]\,dx+\int _{b}^{b+\Delta b}f(x,\alpha +\Delta \alpha )\,dx.\end{aligned}}}
A form of the mean value theorem , ∫ a b f ( x ) d x = ( b − a ) f ( ξ ) {\textstyle \int _{a}^{b}f(x)\,dx=(b-a)f(\xi )} , where a < ξ < b , may be applied to the first and last integrals of the formula for Δ φ above, resulting in Δ φ = − Δ a f ( ξ 1 , α + Δ α ) + ∫ a b [ f ( x , α + Δ α ) − f ( x , α ) ] d x + Δ b f ( ξ 2 , α + Δ α ) . {\displaystyle \Delta \varphi =-\Delta af(\xi _{1},\alpha +\Delta \alpha )+\int _{a}^{b}[f(x,\alpha +\Delta \alpha )-f(x,\alpha )]\,dx+\Delta bf(\xi _{2},\alpha +\Delta \alpha ).}
Divide by Δ α and let Δ α → 0. Notice ξ 1 → a and ξ 2 → b . We may pass the limit through the integral sign: lim Δ α → 0 ∫ a b f ( x , α + Δ α ) − f ( x , α ) Δ α d x = ∫ a b ∂ ∂ α f ( x , α ) d x , {\displaystyle \lim _{\Delta \alpha \to 0}\int _{a}^{b}{\frac {f(x,\alpha +\Delta \alpha )-f(x,\alpha )}{\Delta \alpha }}\,dx=\int _{a}^{b}{\frac {\partial }{\partial \alpha }}f(x,\alpha )\,dx,} again by the bounded convergence theorem. This yields the general form of the Leibniz integral rule, d φ d α = ∫ a b ∂ ∂ α f ( x , α ) d x + f ( b , α ) d b d α − f ( a , α ) d a d α . {\displaystyle {\frac {d\varphi }{d\alpha }}=\int _{a}^{b}{\frac {\partial }{\partial \alpha }}f(x,\alpha )\,dx+f(b,\alpha ){\frac {db}{d\alpha }}-f(a,\alpha ){\frac {da}{d\alpha }}.}
The general form of Leibniz's Integral Rule with variable limits can be derived as a consequence of the basic form of Leibniz's Integral Rule, the multivariable chain rule , and the first fundamental theorem of calculus . Suppose f {\displaystyle f} is defined in a rectangle in the x − t {\displaystyle x-t} plane, for x ∈ [ x 1 , x 2 ] {\displaystyle x\in [x_{1},x_{2}]} and t ∈ [ t 1 , t 2 ] {\displaystyle t\in [t_{1},t_{2}]} . Also, assume f {\displaystyle f} and the partial derivative ∂ f ∂ x {\textstyle {\frac {\partial f}{\partial x}}} are both continuous functions on this rectangle. Suppose a , b {\displaystyle a,b} are differentiable real valued functions defined on [ x 1 , x 2 ] {\displaystyle [x_{1},x_{2}]} , with values in [ t 1 , t 2 ] {\displaystyle [t_{1},t_{2}]} (i.e. for every x ∈ [ x 1 , x 2 ] , a ( x ) , b ( x ) ∈ [ t 1 , t 2 ] {\displaystyle x\in [x_{1},x_{2}],a(x),b(x)\in [t_{1},t_{2}]} ). Now, set F ( x , y ) = ∫ t 1 y f ( x , t ) d t , for x ∈ [ x 1 , x 2 ] and y ∈ [ t 1 , t 2 ] {\displaystyle F(x,y)=\int _{t_{1}}^{y}f(x,t)\,dt,\qquad {\text{for}}~x\in [x_{1},x_{2}]~{\text{and}}~y\in [t_{1},t_{2}]} and G ( x ) = ∫ a ( x ) b ( x ) f ( x , t ) d t , for x ∈ [ x 1 , x 2 ] {\displaystyle G(x)=\int _{a(x)}^{b(x)}f(x,t)\,dt,\quad {\text{for}}~x\in [x_{1},x_{2}]}
Then, by properties of definite Integrals , we can write G ( x ) = ∫ t 1 b ( x ) f ( x , t ) d t − ∫ t 1 a ( x ) f ( x , t ) d t = F ( x , b ( x ) ) − F ( x , a ( x ) ) {\displaystyle G(x)=\int _{t_{1}}^{b(x)}f(x,t)\,dt-\int _{t_{1}}^{a(x)}f(x,t)\,dt=F(x,b(x))-F(x,a(x))}
Since the functions F , a , b {\displaystyle F,a,b} are all differentiable (see the remark at the end of the proof), by the multivariable chain rule , it follows that G {\displaystyle G} is differentiable, and its derivative is given by the formula: G ′ ( x ) = ( ∂ F ∂ x ( x , b ( x ) ) + ∂ F ∂ y ( x , b ( x ) ) b ′ ( x ) ) − ( ∂ F ∂ x ( x , a ( x ) ) + ∂ F ∂ y ( x , a ( x ) ) a ′ ( x ) ) {\displaystyle G'(x)=\left({\frac {\partial F}{\partial x}}(x,b(x))+{\frac {\partial F}{\partial y}}(x,b(x))b'(x)\right)-\left({\frac {\partial F}{\partial x}}(x,a(x))+{\frac {\partial F}{\partial y}}(x,a(x))a'(x)\right)} Now, note that for every x ∈ [ x 1 , x 2 ] {\displaystyle x\in [x_{1},x_{2}]} , and for every y ∈ [ t 1 , t 2 ] {\displaystyle y\in [t_{1},t_{2}]} , we have that ∂ F ∂ x ( x , y ) = ∫ t 1 y ∂ f ∂ x ( x , t ) d t {\textstyle {\frac {\partial F}{\partial x}}(x,y)=\int _{t_{1}}^{y}{\frac {\partial f}{\partial x}}(x,t)\,dt} , because when taking the partial derivative with respect to x {\displaystyle x} of F {\displaystyle F} , we are keeping y {\displaystyle y} fixed in the expression ∫ t 1 y f ( x , t ) d t {\textstyle \int _{t_{1}}^{y}f(x,t)\,dt} ; thus the basic form of Leibniz's Integral Rule with constant limits of integration applies. Next, by the first fundamental theorem of calculus , we have that ∂ F ∂ y ( x , y ) = f ( x , y ) {\textstyle {\frac {\partial F}{\partial y}}(x,y)=f(x,y)} ; because when taking the partial derivative with respect to y {\displaystyle y} of F {\displaystyle F} , the first variable x {\displaystyle x} is fixed, so the fundamental theorem can indeed be applied.
Substituting these results into the equation for G ′ ( x ) {\displaystyle G'(x)} above gives: G ′ ( x ) = ( ∫ t 1 b ( x ) ∂ f ∂ x ( x , t ) d t + f ( x , b ( x ) ) b ′ ( x ) ) − ( ∫ t 1 a ( x ) ∂ f ∂ x ( x , t ) d t + f ( x , a ( x ) ) a ′ ( x ) ) = f ( x , b ( x ) ) b ′ ( x ) − f ( x , a ( x ) ) a ′ ( x ) + ∫ a ( x ) b ( x ) ∂ f ∂ x ( x , t ) d t , {\displaystyle {\begin{aligned}G'(x)&=\left(\int _{t_{1}}^{b(x)}{\frac {\partial f}{\partial x}}(x,t)\,dt+f(x,b(x))b'(x)\right)-\left(\int _{t_{1}}^{a(x)}{\dfrac {\partial f}{\partial x}}(x,t)\,dt+f(x,a(x))a'(x)\right)\\[2pt]&=f(x,b(x))b'(x)-f(x,a(x))a'(x)+\int _{a(x)}^{b(x)}{\frac {\partial f}{\partial x}}(x,t)\,dt,\end{aligned}}} as desired.
There is a technical point in the proof above which is worth noting: applying the Chain Rule to G {\displaystyle G} requires that F {\displaystyle F} already be differentiable . This is where we use our assumptions about f {\displaystyle f} . As mentioned above, the partial derivatives of F {\displaystyle F} are given by the formulas ∂ F ∂ x ( x , y ) = ∫ t 1 y ∂ f ∂ x ( x , t ) d t {\textstyle {\frac {\partial F}{\partial x}}(x,y)=\int _{t_{1}}^{y}{\frac {\partial f}{\partial x}}(x,t)\,dt} and ∂ F ∂ y ( x , y ) = f ( x , y ) {\textstyle {\frac {\partial F}{\partial y}}(x,y)=f(x,y)} . Since ∂ f ∂ x {\textstyle {\dfrac {\partial f}{\partial x}}} is continuous, its integral is also a continuous function, [ 7 ] and since f {\displaystyle f} is also continuous, these two results show that both the partial derivatives of F {\displaystyle F} are continuous. Since continuity of partial derivatives implies differentiability of the function, [ 8 ] F {\displaystyle F} is indeed differentiable.
At time t the surface Σ in Figure 1 contains a set of points arranged about a centroid C ( t ) {\displaystyle \mathbf {C} (t)} . The function F ( r , t ) {\displaystyle \mathbf {F} (\mathbf {r} ,t)} can be written as F ( C ( t ) + r − C ( t ) , t ) = F ( C ( t ) + I , t ) , {\displaystyle \mathbf {F} (\mathbf {C} (t)+\mathbf {r} -\mathbf {C} (t),t)=\mathbf {F} (\mathbf {C} (t)+\mathbf {I} ,t),} with I {\displaystyle \mathbf {I} } independent of time. Variables are shifted to a new frame of reference attached to the moving surface, with origin at C ( t ) {\displaystyle \mathbf {C} (t)} . For a rigidly translating surface, the limits of integration are then independent of time, so: d d t ( ∬ Σ ( t ) d A r ⋅ F ( r , t ) ) = ∬ Σ d A I ⋅ d d t F ( C ( t ) + I , t ) , {\displaystyle {\frac {d}{dt}}\left(\iint _{\Sigma (t)}d\mathbf {A} _{\mathbf {r} }\cdot \mathbf {F} (\mathbf {r} ,t)\right)=\iint _{\Sigma }d\mathbf {A} _{\mathbf {I} }\cdot {\frac {d}{dt}}\mathbf {F} (\mathbf {C} (t)+\mathbf {I} ,t),} where the limits of integration confining the integral to the region Σ no longer are time dependent so differentiation passes through the integration to act on the integrand only: d d t F ( C ( t ) + I , t ) = F t ( C ( t ) + I , t ) + v ⋅ ∇ F ( C ( t ) + I , t ) = F t ( r , t ) + v ⋅ ∇ F ( r , t ) , {\displaystyle {\frac {d}{dt}}\mathbf {F} (\mathbf {C} (t)+\mathbf {I} ,t)=\mathbf {F} _{t}(\mathbf {C} (t)+\mathbf {I} ,t)+\mathbf {v\cdot \nabla F} (\mathbf {C} (t)+\mathbf {I} ,t)=\mathbf {F} _{t}(\mathbf {r} ,t)+\mathbf {v} \cdot \nabla \mathbf {F} (\mathbf {r} ,t),} with the velocity of motion of the surface defined by v = d d t C ( t ) . {\displaystyle \mathbf {v} ={\frac {d}{dt}}\mathbf {C} (t).}
This equation expresses the material derivative of the field, that is, the derivative with respect to a coordinate system attached to the moving surface. Having found the derivative, variables can be switched back to the original frame of reference. We notice that (see article on curl ) ∇ × ( v × F ) = ( ∇ ⋅ F + F ⋅ ∇ ) v − ( ∇ ⋅ v + v ⋅ ∇ ) F , {\displaystyle \nabla \times \left(\mathbf {v} \times \mathbf {F} \right)=(\nabla \cdot \mathbf {F} +\mathbf {F} \cdot \nabla )\mathbf {v} -(\nabla \cdot \mathbf {v} +\mathbf {v} \cdot \nabla )\mathbf {F} ,} and that Stokes theorem equates the surface integral of the curl over Σ with a line integral over ∂Σ : d d t ( ∬ Σ ( t ) F ( r , t ) ⋅ d A ) = ∬ Σ ( t ) ( F t ( r , t ) + ( F ⋅ ∇ ) v + ( ∇ ⋅ F ) v − ( ∇ ⋅ v ) F ) ⋅ d A − ∮ ∂ Σ ( t ) ( v × F ) ⋅ d s . {\displaystyle {\frac {d}{dt}}\left(\iint _{\Sigma (t)}\mathbf {F} (\mathbf {r} ,t)\cdot d\mathbf {A} \right)=\iint _{\Sigma (t)}{\big (}\mathbf {F} _{t}(\mathbf {r} ,t)+\left(\mathbf {F\cdot \nabla } \right)\mathbf {v} +\left(\nabla \cdot \mathbf {F} \right)\mathbf {v} -(\nabla \cdot \mathbf {v} )\mathbf {F} {\big )}\cdot d\mathbf {A} -\oint _{\partial \Sigma (t)}\left(\mathbf {v} \times \mathbf {F} \right)\cdot d\mathbf {s} .}
The sign of the line integral is based on the right-hand rule for the choice of direction of line element d s . To establish this sign, for example, suppose the field F points in the positive z -direction, and the surface Σ is a portion of the xy -plane with perimeter ∂Σ. We adopt the normal to Σ to be in the positive z -direction. Positive traversal of ∂Σ is then counterclockwise (right-hand rule with thumb along z -axis). Then the integral on the left-hand side determines a positive flux of F through Σ. Suppose Σ translates in the positive x -direction at velocity v . An element of the boundary of Σ parallel to the y -axis, say d s , sweeps out an area v t × d s in time t . If we integrate around the boundary ∂Σ in a counterclockwise sense, v t × d s points in the negative z -direction on the left side of ∂Σ (where d s points downward), and in the positive z -direction on the right side of ∂Σ (where d s points upward), which makes sense because Σ is moving to the right, adding area on the right and losing it on the left. On that basis, the flux of F is increasing on the right of ∂Σ and decreasing on the left. However, the dot product v × F ⋅ d s = − F × v ⋅ d s = − F ⋅ v × d s . Consequently, the sign of the line integral is taken as negative.
If v is a constant, d d t ∬ Σ ( t ) F ( r , t ) ⋅ d A = ∬ Σ ( t ) ( F t ( r , t ) + ( ∇ ⋅ F ) v ) ⋅ d A − ∮ ∂ Σ ( t ) ( v × F ) ⋅ d s , {\displaystyle {\frac {d}{dt}}\iint _{\Sigma (t)}\mathbf {F} (\mathbf {r} ,t)\cdot d\mathbf {A} =\iint _{\Sigma (t)}{\big (}\mathbf {F} _{t}(\mathbf {r} ,t)+\left(\nabla \cdot \mathbf {F} \right)\mathbf {v} {\big )}\cdot d\mathbf {A} -\oint _{\partial \Sigma (t)}\left(\mathbf {v} \times \mathbf {F} \right)\cdot \,d\mathbf {s} ,} which is the quoted result. This proof does not consider the possibility of the surface deforming as it moves.
Lemma. One has: ∂ ∂ b ( ∫ a b f ( x ) d x ) = f ( b ) , ∂ ∂ a ( ∫ a b f ( x ) d x ) = − f ( a ) . {\displaystyle {\frac {\partial }{\partial b}}\left(\int _{a}^{b}f(x)\,dx\right)=f(b),\qquad {\frac {\partial }{\partial a}}\left(\int _{a}^{b}f(x)\,dx\right)=-f(a).}
Proof. From the proof of the fundamental theorem of calculus ,
∂ ∂ b ( ∫ a b f ( x ) d x ) = lim Δ b → 0 1 Δ b ( ∫ a b + Δ b f ( x ) d x − ∫ a b f ( x ) d x ) = lim Δ b → 0 1 Δ b ( ∫ a b f ( x ) d x + ∫ b b + Δ b f ( x ) d x − ∫ a b f ( x ) d x ) = lim Δ b → 0 1 Δ b ∫ b b + Δ b f ( x ) d x = lim Δ b → 0 1 Δ b [ f ( b ) Δ b + O ( Δ b 2 ) ] = f ( b ) , {\displaystyle {\begin{aligned}{\frac {\partial }{\partial b}}\left(\int _{a}^{b}f(x)\,dx\right)&=\lim _{\Delta b\to 0}{\frac {1}{\Delta b}}\left(\int _{a}^{b+\Delta b}f(x)\,dx-\int _{a}^{b}f(x)\,dx\right)\\[1ex]&=\lim _{\Delta b\to 0}{\frac {1}{\Delta b}}\left(\int _{a}^{b}f(x)\,dx+\int _{b}^{b+\Delta b}f(x)\,dx-\int _{a}^{b}f(x)\,dx\right)\\[1ex]&=\lim _{\Delta b\to 0}{\frac {1}{\Delta b}}\int _{b}^{b+\Delta b}f(x)\,dx\\[1ex]&=\lim _{\Delta b\to 0}{\frac {1}{\Delta b}}\left[f(b)\Delta b+O\left(\Delta b^{2}\right)\right]\\[1ex]&=f(b),\end{aligned}}} and ∂ ∂ a ( ∫ a b f ( x ) d x ) = lim Δ a → 0 1 Δ a [ ∫ a + Δ a b f ( x ) d x − ∫ a b f ( x ) d x ] = lim Δ a → 0 1 Δ a ∫ a + Δ a a f ( x ) d x = lim Δ a → 0 1 Δ a [ − f ( a ) Δ a + O ( Δ a 2 ) ] = − f ( a ) . {\displaystyle {\begin{aligned}{\frac {\partial }{\partial a}}\left(\int _{a}^{b}f(x)\,dx\right)&=\lim _{\Delta a\to 0}{\frac {1}{\Delta a}}\left[\int _{a+\Delta a}^{b}f(x)\,dx-\int _{a}^{b}f(x)\,dx\right]\\[6pt]&=\lim _{\Delta a\to 0}{\frac {1}{\Delta a}}\int _{a+\Delta a}^{a}f(x)\,dx\\[6pt]&=\lim _{\Delta a\to 0}{\frac {1}{\Delta a}}\left[-f(a)\Delta a+O\left(\Delta a^{2}\right)\right]\\[6pt]&=-f(a).\end{aligned}}}
Suppose a and b are constant, and that f ( x ) involves a parameter α which is constant in the integration but may vary to form different integrals. Assume that f ( x , α ) is a continuous function of x and α in the compact set {( x , α ) : α 0 ≤ α ≤ α 1 and a ≤ x ≤ b }, and that the partial derivative f α ( x , α ) exists and is continuous. If one defines: φ ( α ) = ∫ a b f ( x , α ) d x , {\displaystyle \varphi (\alpha )=\int _{a}^{b}f(x,\alpha )\,dx,} then φ {\displaystyle \varphi } may be differentiated with respect to α by differentiating under the integral sign, i.e., d φ d α = ∫ a b ∂ ∂ α f ( x , α ) d x . {\displaystyle {\frac {d\varphi }{d\alpha }}=\int _{a}^{b}{\frac {\partial }{\partial \alpha }}f(x,\alpha )\,dx.}
Since f ( x , α ) is continuous on the compact set described above, the Heine–Cantor theorem implies that it is uniformly continuous there. In other words, for any ε > 0 there exists Δ α such that for all values of x in [ a , b ], | f ( x , α + Δ α ) − f ( x , α ) | < ε . {\displaystyle |f(x,\alpha +\Delta \alpha )-f(x,\alpha )|<\varepsilon .}
On the other hand, Δ φ = φ ( α + Δ α ) − φ ( α ) = ∫ a b f ( x , α + Δ α ) d x − ∫ a b f ( x , α ) d x = ∫ a b ( f ( x , α + Δ α ) − f ( x , α ) ) d x ≤ ε ( b − a ) . {\displaystyle {\begin{aligned}\Delta \varphi &=\varphi (\alpha +\Delta \alpha )-\varphi (\alpha )\\[6pt]&=\int _{a}^{b}f(x,\alpha +\Delta \alpha )\,dx-\int _{a}^{b}f(x,\alpha )\,dx\\[6pt]&=\int _{a}^{b}\left(f(x,\alpha +\Delta \alpha )-f(x,\alpha )\right)\,dx\\[6pt]&\leq \varepsilon (b-a).\end{aligned}}}
Hence φ ( α ) is a continuous function.
Similarly if ∂ ∂ α f ( x , α ) {\displaystyle {\frac {\partial }{\partial \alpha }}f(x,\alpha )} exists and is continuous, then for all ε > 0 there exists Δ α such that: ∀ x ∈ [ a , b ] , | f ( x , α + Δ α ) − f ( x , α ) Δ α − ∂ f ∂ α | < ε . {\displaystyle \forall x\in [a,b],\quad \left|{\frac {f(x,\alpha +\Delta \alpha )-f(x,\alpha )}{\Delta \alpha }}-{\frac {\partial f}{\partial \alpha }}\right|<\varepsilon .}
Therefore, Δ φ Δ α = ∫ a b f ( x , α + Δ α ) − f ( x , α ) Δ α d x = ∫ a b ∂ f ( x , α ) ∂ α d x + R , {\displaystyle {\frac {\Delta \varphi }{\Delta \alpha }}=\int _{a}^{b}{\frac {f(x,\alpha +\Delta \alpha )-f(x,\alpha )}{\Delta \alpha }}\,dx=\int _{a}^{b}{\frac {\partial f(x,\alpha )}{\partial \alpha }}\,dx+R,} where | R | < ∫ a b ε d x = ε ( b − a ) . {\displaystyle |R|<\int _{a}^{b}\varepsilon \,dx=\varepsilon (b-a).}
Now, ε → 0 as Δ α → 0, so lim Δ α → 0 Δ φ Δ α = d φ d α = ∫ a b ∂ ∂ α f ( x , α ) d x . {\displaystyle \lim _{{\Delta \alpha }\to 0}{\frac {\Delta \varphi }{\Delta \alpha }}={\frac {d\varphi }{d\alpha }}=\int _{a}^{b}{\frac {\partial }{\partial \alpha }}f(x,\alpha )\,dx.}
This is the formula we set out to prove.
Now, suppose ∫ a b f ( x , α ) d x = φ ( α ) , {\displaystyle \int _{a}^{b}f(x,\alpha )\,dx=\varphi (\alpha ),} where a and b are functions of α which take increments Δ a and Δ b , respectively, when α is increased by Δ α . Then, Δ φ = φ ( α + Δ α ) − φ ( α ) = ∫ a + Δ a b + Δ b f ( x , α + Δ α ) d x − ∫ a b f ( x , α ) d x = ∫ a + Δ a a f ( x , α + Δ α ) d x + ∫ a b f ( x , α + Δ α ) d x + ∫ b b + Δ b f ( x , α + Δ α ) d x − ∫ a b f ( x , α ) d x = − ∫ a a + Δ a f ( x , α + Δ α ) d x + ∫ a b [ f ( x , α + Δ α ) − f ( x , α ) ] d x + ∫ b b + Δ b f ( x , α + Δ α ) d x . {\displaystyle {\begin{aligned}\Delta \varphi &=\varphi (\alpha +\Delta \alpha )-\varphi (\alpha )\\[6pt]&=\int _{a+\Delta a}^{b+\Delta b}f(x,\alpha +\Delta \alpha )\,dx-\int _{a}^{b}f(x,\alpha )\,dx\\[6pt]&=\int _{a+\Delta a}^{a}f(x,\alpha +\Delta \alpha )\,dx+\int _{a}^{b}f(x,\alpha +\Delta \alpha )\,dx+\int _{b}^{b+\Delta b}f(x,\alpha +\Delta \alpha )\,dx-\int _{a}^{b}f(x,\alpha )\,dx\\[6pt]&=-\int _{a}^{a+\Delta a}f(x,\alpha +\Delta \alpha )\,dx+\int _{a}^{b}[f(x,\alpha +\Delta \alpha )-f(x,\alpha )]\,dx+\int _{b}^{b+\Delta b}f(x,\alpha +\Delta \alpha )\,dx.\end{aligned}}}
A form of the mean value theorem , ∫ a b f ( x ) d x = ( b − a ) f ( ξ ) , {\textstyle \int _{a}^{b}f(x)\,dx=(b-a)f(\xi ),} where a < ξ < b , can be applied to the first and last integrals of the formula for Δ φ above, resulting in Δ φ = − Δ a f ( ξ 1 , α + Δ α ) + ∫ a b [ f ( x , α + Δ α ) − f ( x , α ) ] d x + Δ b f ( ξ 2 , α + Δ α ) . {\displaystyle \Delta \varphi =-\Delta a\,f(\xi _{1},\alpha +\Delta \alpha )+\int _{a}^{b}[f(x,\alpha +\Delta \alpha )-f(x,\alpha )]\,dx+\Delta b\,f(\xi _{2},\alpha +\Delta \alpha ).}
Dividing by Δ α , letting Δ α → 0, noticing ξ 1 → a and ξ 2 → b and using the above derivation for d φ d α = ∫ a b ∂ ∂ α f ( x , α ) d x {\displaystyle {\frac {d\varphi }{d\alpha }}=\int _{a}^{b}{\frac {\partial }{\partial \alpha }}f(x,\alpha )\,dx} yields d φ d α = ∫ a b ∂ ∂ α f ( x , α ) d x + f ( b , α ) ∂ b ∂ α − f ( a , α ) ∂ a ∂ α . {\displaystyle {\frac {d\varphi }{d\alpha }}=\int _{a}^{b}{\frac {\partial }{\partial \alpha }}f(x,\alpha )\,dx+f(b,\alpha ){\frac {\partial b}{\partial \alpha }}-f(a,\alpha ){\frac {\partial a}{\partial \alpha }}.}
This is the general form of the Leibniz integral rule.
Consider the function φ ( α ) = ∫ 0 1 α x 2 + α 2 d x . {\displaystyle \varphi (\alpha )=\int _{0}^{1}{\frac {\alpha }{x^{2}+\alpha ^{2}}}\,dx.}
The function under the integral sign is not continuous at the point ( x , α ) = ( 0 , 0 ) {\displaystyle (x,\alpha )=(0,0)} , and the function φ ( α ) {\displaystyle \varphi (\alpha )} has a discontinuity at α = 0 {\displaystyle \alpha =0} because φ ( α ) {\displaystyle \varphi (\alpha )} approaches ± π / 2 {\displaystyle \pm \pi /2} as α → 0 ± {\displaystyle \alpha \to 0^{\pm }} .
If we differentiate φ ( α ) {\displaystyle \varphi (\alpha )} with respect to α {\displaystyle \alpha } under the integral sign, we get d d α φ ( α ) = ∫ 0 1 ∂ ∂ α ( α x 2 + α 2 ) d x = ∫ 0 1 x 2 − α 2 ( x 2 + α 2 ) 2 d x = − x x 2 + α 2 | 0 1 = − 1 1 + α 2 , {\displaystyle {\frac {d}{d\alpha }}\varphi (\alpha )=\int _{0}^{1}{\frac {\partial }{\partial \alpha }}\left({\frac {\alpha }{x^{2}+\alpha ^{2}}}\right)\,dx=\int _{0}^{1}{\frac {x^{2}-\alpha ^{2}}{(x^{2}+\alpha ^{2})^{2}}}dx=\left.-{\frac {x}{x^{2}+\alpha ^{2}}}\right|_{0}^{1}=-{\frac {1}{1+\alpha ^{2}}},} for α ≠ 0 {\displaystyle \alpha \neq 0} . This may be integrated (with respect to α {\displaystyle \alpha } ) to find φ ( α ) = { 0 , α = 0 , − arctan ( α ) + π 2 , α ≠ 0. {\displaystyle \varphi (\alpha )={\begin{cases}0,&\alpha =0,\\-\arctan({\alpha })+{\frac {\pi }{2}},&\alpha \neq 0.\end{cases}}}
An example with variable limits: d d x ∫ sin x cos x cosh t 2 d t = cosh ( cos 2 x ) d d x ( cos x ) − cosh ( sin 2 x ) d d x ( sin x ) + ∫ sin x cos x ∂ ∂ x ( cosh t 2 ) d t = cosh ( cos 2 x ) ( − sin x ) − cosh ( sin 2 x ) ( cos x ) + 0 = − cosh ( cos 2 x ) sin x − cosh ( sin 2 x ) cos x . {\displaystyle {\begin{aligned}{\frac {d}{dx}}\int _{\sin x}^{\cos x}\cosh t^{2}\,dt&=\cosh \left(\cos ^{2}x\right){\frac {d}{dx}}(\cos x)-\cosh \left(\sin ^{2}x\right){\frac {d}{dx}}(\sin x)+\int _{\sin x}^{\cos x}{\frac {\partial }{\partial x}}(\cosh t^{2})\,dt\\[6pt]&=\cosh(\cos ^{2}x)(-\sin x)-\cosh(\sin ^{2}x)(\cos x)+0\\[6pt]&=-\cosh(\cos ^{2}x)\sin x-\cosh(\sin ^{2}x)\cos x.\end{aligned}}}
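The computation can be verified numerically; the sketch below (Python with NumPy and SciPy; the evaluation point x = 0.7 is arbitrary) compares a finite-difference derivative of the integral with the closed-form expression just obtained:

```python
import numpy as np
from scipy.integrate import quad

def G(x):
    """Integral of cosh(t^2) from sin(x) to cos(x)."""
    return quad(lambda t: np.cosh(t ** 2), np.sin(x), np.cos(x))[0]

def closed_form(x):
    """Derivative predicted by the Leibniz integral rule."""
    return -np.cosh(np.cos(x) ** 2) * np.sin(x) - np.cosh(np.sin(x) ** 2) * np.cos(x)

x0, h = 0.7, 1e-6
print((G(x0 + h) - G(x0 - h)) / (2 * h))    # finite-difference derivative
print(closed_form(x0))                      # closed form; the two values agree closely
```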
The formula d d x ( ∫ a ( x ) b ( x ) f ( x , t ) d t ) = f ( x , b ( x ) ) ⋅ d d x b ( x ) − f ( x , a ( x ) ) ⋅ d d x a ( x ) + ∫ a ( x ) b ( x ) ∂ ∂ x f ( x , t ) d t {\displaystyle {\frac {d}{dx}}\left(\int _{a(x)}^{b(x)}f(x,t)\,dt\right)=f{\big (}x,b(x){\big )}\cdot {\frac {d}{dx}}b(x)-f{\big (}x,a(x){\big )}\cdot {\frac {d}{dx}}a(x)+\int _{a(x)}^{b(x)}{\frac {\partial }{\partial x}}f(x,t)\,dt} can be of use when evaluating certain definite integrals. When used in this context, the Leibniz integral rule for differentiating under the integral sign is also known as Feynman's trick/technique for integration.
Consider φ ( α ) = ∫ 0 π ln ( 1 − 2 α cos ( x ) + α 2 ) d x , | α | ≠ 1. {\displaystyle \varphi (\alpha )=\int _{0}^{\pi }\ln \left(1-2\alpha \cos(x)+\alpha ^{2}\right)\,dx,\qquad |\alpha |\neq 1.}
Now, d d α φ ( α ) = ∫ 0 π − 2 cos ( x ) + 2 α 1 − 2 α cos ( x ) + α 2 d x = 1 α ∫ 0 π ( 1 − 1 − α 2 1 − 2 α cos ( x ) + α 2 ) d x = π α − 2 α { arctan ( 1 + α 1 − α tan ( x 2 ) ) } | 0 π . {\displaystyle {\begin{aligned}{\frac {d}{d\alpha }}\varphi (\alpha )&=\int _{0}^{\pi }{\frac {-2\cos(x)+2\alpha }{1-2\alpha \cos(x)+\alpha ^{2}}}dx\\[6pt]&={\frac {1}{\alpha }}\int _{0}^{\pi }\left(1-{\frac {1-\alpha ^{2}}{1-2\alpha \cos(x)+\alpha ^{2}}}\right)dx\\[6pt]&=\left.{\frac {\pi }{\alpha }}-{\frac {2}{\alpha }}\left\{\arctan \left({\frac {1+\alpha }{1-\alpha }}\tan \left({\frac {x}{2}}\right)\right)\right\}\right|_{0}^{\pi }.\end{aligned}}}
As x {\displaystyle x} varies from 0 {\displaystyle 0} to π {\displaystyle \pi } , we have { 1 + α 1 − α tan ( x 2 ) ≥ 0 , | α | < 1 , 1 + α 1 − α tan ( x 2 ) ≤ 0 , | α | > 1. {\displaystyle {\begin{cases}{\frac {1+\alpha }{1-\alpha }}\tan \left({\frac {x}{2}}\right)\geq 0,&|\alpha |<1,\\{\frac {1+\alpha }{1-\alpha }}\tan \left({\frac {x}{2}}\right)\leq 0,&|\alpha |>1.\end{cases}}}
Hence, arctan ( 1 + α 1 − α tan ( x 2 ) ) | 0 π = { π 2 , | α | < 1 , − π 2 , | α | > 1. {\displaystyle \left.\arctan \left({\frac {1+\alpha }{1-\alpha }}\tan \left({\frac {x}{2}}\right)\right)\right|_{0}^{\pi }={\begin{cases}{\frac {\pi }{2}},&|\alpha |<1,\\-{\frac {\pi }{2}},&|\alpha |>1.\end{cases}}}
Therefore,
d d α φ ( α ) = { 0 , | α | < 1 , 2 π α , | α | > 1. {\displaystyle {\frac {d}{d\alpha }}\varphi (\alpha )={\begin{cases}0,&|\alpha |<1,\\{\frac {2\pi }{\alpha }},&|\alpha |>1.\end{cases}}}
Integrating both sides with respect to α {\displaystyle \alpha } , we get: φ ( α ) = { C 1 , | α | < 1 , 2 π ln | α | + C 2 , | α | > 1. {\displaystyle \varphi (\alpha )={\begin{cases}C_{1},&|\alpha |<1,\\2\pi \ln |\alpha |+C_{2},&|\alpha |>1.\end{cases}}}
C 1 = 0 {\displaystyle C_{1}=0} follows from evaluating φ ( 0 ) {\displaystyle \varphi (0)} : φ ( 0 ) = ∫ 0 π ln ( 1 ) d x = ∫ 0 π 0 d x = 0. {\displaystyle \varphi (0)=\int _{0}^{\pi }\ln(1)\,dx=\int _{0}^{\pi }0\,dx=0.}
To determine C 2 {\displaystyle C_{2}} in the same manner, we should need to substitute in a value of α {\displaystyle \alpha } greater than 1 in φ ( α ) {\displaystyle \varphi (\alpha )} . This is somewhat inconvenient. Instead, we substitute α = 1 β {\textstyle \alpha ={\frac {1}{\beta }}} , where | β | < 1 {\displaystyle |\beta |<1} . Then, φ ( α ) = ∫ 0 π ( ln ( 1 − 2 β cos ( x ) + β 2 ) − 2 ln | β | ) d x = ∫ 0 π ln ( 1 − 2 β cos ( x ) + β 2 ) d x − ∫ 0 π 2 ln | β | d x = 0 − 2 π ln | β | = 2 π ln | α | . {\displaystyle {\begin{aligned}\varphi (\alpha )&=\int _{0}^{\pi }\left(\ln \left(1-2\beta \cos(x)+\beta ^{2}\right)-2\ln |\beta |\right)dx\\[6pt]&=\int _{0}^{\pi }\ln \left(1-2\beta \cos(x)+\beta ^{2}\right)\,dx-\int _{0}^{\pi }2\ln |\beta |dx\\[6pt]&=0-2\pi \ln |\beta |\\[6pt]&=2\pi \ln |\alpha |.\end{aligned}}}
Therefore, C 2 = 0 {\displaystyle C_{2}=0}
The definition of φ ( α ) {\displaystyle \varphi (\alpha )} is now complete: φ ( α ) = { 0 , | α | < 1 , 2 π ln | α | , | α | > 1. {\displaystyle \varphi (\alpha )={\begin{cases}0,&|\alpha |<1,\\2\pi \ln |\alpha |,&|\alpha |>1.\end{cases}}}
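A numerical spot-check of this piecewise result (Python with NumPy and SciPy; the sample values of α, chosen on both sides of |α| = 1, are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

def phi(alpha):
    """Numerical evaluation of the integral defining phi(alpha)."""
    return quad(lambda x: np.log(1 - 2 * alpha * np.cos(x) + alpha ** 2), 0, np.pi)[0]

for alpha in (0.3, -0.8, 2.0, -5.0):
    expected = 0.0 if abs(alpha) < 1 else 2 * np.pi * np.log(abs(alpha))
    print(f"alpha = {alpha:5.1f}   numerical = {phi(alpha): .10f}   expected = {expected: .10f}")
```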
The foregoing discussion, of course, does not apply when α = ± 1 {\displaystyle \alpha =\pm 1} , since the conditions for differentiability are not met.
I = ∫ 0 π / 2 1 ( a cos 2 x + b sin 2 x ) 2 d x , a , b > 0. {\displaystyle I=\int _{0}^{\pi /2}{\frac {1}{\left(a\cos ^{2}x+b\sin ^{2}x\right)^{2}}}\,dx,\qquad a,b>0.}
First we calculate: J = ∫ 0 π / 2 1 a cos 2 x + b sin 2 x d x = ∫ 0 π / 2 1 cos 2 x a + b sin 2 x cos 2 x d x = ∫ 0 π / 2 sec 2 x a + b tan 2 x d x = 1 b ∫ 0 π / 2 1 ( a b ) 2 + tan 2 x d ( tan x ) = 1 a b arctan ( b a tan x ) | 0 π / 2 = π 2 a b . {\displaystyle {\begin{aligned}J&=\int _{0}^{\pi /2}{\frac {1}{a\cos ^{2}x+b\sin ^{2}x}}dx\\[6pt]&=\int _{0}^{\pi /2}{\frac {\frac {1}{\cos ^{2}x}}{a+b{\frac {\sin ^{2}x}{\cos ^{2}x}}}}dx\\[6pt]&=\int _{0}^{\pi /2}{\frac {\sec ^{2}x}{a+b\tan ^{2}x}}dx\\[6pt]&={\frac {1}{b}}\int _{0}^{\pi /2}{\frac {1}{\left({\sqrt {\frac {a}{b}}}\right)^{2}+\tan ^{2}x}}\,d(\tan x)\\[6pt]&=\left.{\frac {1}{\sqrt {ab}}}\arctan \left({\sqrt {\frac {b}{a}}}\tan x\right)\right|_{0}^{\pi /2}\\[6pt]&={\frac {\pi }{2{\sqrt {ab}}}}.\end{aligned}}}
The limits of integration being independent of a {\displaystyle a} , we have: ∂ J ∂ a = − ∫ 0 π / 2 cos 2 x ( a cos 2 x + b sin 2 x ) 2 d x {\displaystyle {\frac {\partial J}{\partial a}}=-\int _{0}^{\pi /2}{\frac {\cos ^{2}x}{\left(a\cos ^{2}x+b\sin ^{2}x\right)^{2}}}\,dx}
On the other hand: ∂ J ∂ a = ∂ ∂ a ( π 2 a b ) = − π 4 a 3 b . {\displaystyle {\frac {\partial J}{\partial a}}={\frac {\partial }{\partial a}}\left({\frac {\pi }{2{\sqrt {ab}}}}\right)=-{\frac {\pi }{4{\sqrt {a^{3}b}}}}.}
Equating these two relations then yields ∫ 0 π / 2 cos 2 x ( a cos 2 x + b sin 2 x ) 2 d x = π 4 a 3 b . {\displaystyle \int _{0}^{\pi /2}{\frac {\cos ^{2}x}{\left(a\cos ^{2}x+b\sin ^{2}x\right)^{2}}}\,dx={\frac {\pi }{4{\sqrt {a^{3}b}}}}.}
In a similar fashion, pursuing ∂ J ∂ b {\displaystyle {\frac {\partial J}{\partial b}}} yields ∫ 0 π / 2 sin 2 x ( a cos 2 x + b sin 2 x ) 2 d x = π 4 a b 3 . {\displaystyle \int _{0}^{\pi /2}{\frac {\sin ^{2}x}{\left(a\cos ^{2}x+b\sin ^{2}x\right)^{2}}}\,dx={\frac {\pi }{4{\sqrt {ab^{3}}}}}.}
Adding the two results then produces I = ∫ 0 π / 2 1 ( a cos 2 x + b sin 2 x ) 2 d x = π 4 a b ( 1 a + 1 b ) , {\displaystyle I=\int _{0}^{\pi /2}{\frac {1}{\left(a\cos ^{2}x+b\sin ^{2}x\right)^{2}}}\,dx={\frac {\pi }{4{\sqrt {ab}}}}\left({\frac {1}{a}}+{\frac {1}{b}}\right),} which computes I {\displaystyle I} as desired.
This derivation may be generalized. Note that if we define I n = ∫ 0 π / 2 1 ( a cos 2 x + b sin 2 x ) n d x , {\displaystyle I_{n}=\int _{0}^{\pi /2}{\frac {1}{\left(a\cos ^{2}x+b\sin ^{2}x\right)^{n}}}\,dx,} it can easily be shown that ( 1 − n ) I n = ∂ I n − 1 ∂ a + ∂ I n − 1 ∂ b {\displaystyle (1-n)I_{n}={\frac {\partial I_{n-1}}{\partial a}}+{\frac {\partial I_{n-1}}{\partial b}}}
Given I 1 {\displaystyle I_{1}} , this integral reduction formula can be used to compute all of the values of I n {\displaystyle I_{n}} for n > 1 {\displaystyle n>1} . Integrals like I {\displaystyle I} and J {\displaystyle J} may also be handled using the Weierstrass substitution .
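Returning to the closed form for I obtained above, a numerical check (Python with NumPy and SciPy; a = 2 and b = 5 are arbitrary positive parameters):

```python
import numpy as np
from scipy.integrate import quad

def I_numeric(a, b):
    """Direct numerical evaluation of the integral I."""
    return quad(lambda x: 1.0 / (a * np.cos(x) ** 2 + b * np.sin(x) ** 2) ** 2,
                0, np.pi / 2)[0]

def I_closed(a, b):
    """Closed form obtained by differentiating J under the integral sign."""
    return np.pi / (4 * np.sqrt(a * b)) * (1 / a + 1 / b)

a, b = 2.0, 5.0
print(I_numeric(a, b), I_closed(a, b))    # the two values agree closely
```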
Here, we consider the integral I ( α ) = ∫ 0 π / 2 ln ( 1 + cos α cos x ) cos x d x , 0 < α < π . {\displaystyle I(\alpha )=\int _{0}^{\pi /2}{\frac {\ln(1+\cos \alpha \cos x)}{\cos x}}\,dx,\qquad 0<\alpha <\pi .}
Differentiating under the integral with respect to α {\displaystyle \alpha } , we have d d α I ( α ) = ∫ 0 π / 2 ∂ ∂ α ( ln ( 1 + cos α cos x ) cos x ) d x = − ∫ 0 π / 2 sin α 1 + cos α cos x d x = − ∫ 0 π / 2 sin α ( cos 2 x 2 + sin 2 x 2 ) + cos α ( cos 2 x 2 − sin 2 x 2 ) d x = − sin α 1 − cos α ∫ 0 π / 2 1 cos 2 x 2 1 1 + cos α 1 − cos α + tan 2 x 2 d x = − 2 sin α 1 − cos α ∫ 0 π / 2 1 2 sec 2 x 2 2 cos 2 α 2 2 sin 2 α 2 + tan 2 x 2 d x = − 2 ( 2 sin α 2 cos α 2 ) 2 sin 2 α 2 ∫ 0 π / 2 1 cot 2 α 2 + tan 2 x 2 d ( tan x 2 ) = − 2 cot α 2 ∫ 0 π / 2 1 cot 2 α 2 + tan 2 x 2 d ( tan x 2 ) = − 2 arctan ( tan α 2 tan x 2 ) | 0 π / 2 = − α . {\displaystyle {\begin{aligned}{\frac {d}{d\alpha }}I(\alpha )&=\int _{0}^{\pi /2}{\frac {\partial }{\partial \alpha }}\left({\frac {\ln(1+\cos \alpha \cos x)}{\cos x}}\right)\,dx\\[6pt]&=-\int _{0}^{\pi /2}{\frac {\sin \alpha }{1+\cos \alpha \cos x}}\,dx\\&=-\int _{0}^{\pi /2}{\frac {\sin \alpha }{\left(\cos ^{2}{\frac {x}{2}}+\sin ^{2}{\frac {x}{2}}\right)+\cos \alpha \left(\cos ^{2}{\frac {x}{2}}-\sin ^{2}{\frac {x}{2}}\right)}}\,dx\\[6pt]&=-{\frac {\sin \alpha }{1-\cos \alpha }}\int _{0}^{\pi /2}{\frac {1}{\cos ^{2}{\frac {x}{2}}}}{\frac {1}{{\frac {1+\cos \alpha }{1-\cos \alpha }}+\tan ^{2}{\frac {x}{2}}}}\,dx\\[6pt]&=-{\frac {2\sin \alpha }{1-\cos \alpha }}\int _{0}^{\pi /2}{\frac {{\frac {1}{2}}\sec ^{2}{\frac {x}{2}}}{{\frac {2\cos ^{2}{\frac {\alpha }{2}}}{2\sin ^{2}{\frac {\alpha }{2}}}}+\tan ^{2}{\frac {x}{2}}}}\,dx\\[6pt]&=-{\frac {2\left(2\sin {\frac {\alpha }{2}}\cos {\frac {\alpha }{2}}\right)}{2\sin ^{2}{\frac {\alpha }{2}}}}\int _{0}^{\pi /2}{\frac {1}{\cot ^{2}{\frac {\alpha }{2}}+\tan ^{2}{\frac {x}{2}}}}\,d\left(\tan {\frac {x}{2}}\right)\\[6pt]&=-2\cot {\frac {\alpha }{2}}\int _{0}^{\pi /2}{\frac {1}{\cot ^{2}{\frac {\alpha }{2}}+\tan ^{2}{\frac {x}{2}}}}\,d\left(\tan {\frac {x}{2}}\right)\\[6pt]&=-2\arctan \left(\tan {\frac {\alpha }{2}}\tan {\frac {x}{2}}\right){\bigg |}_{0}^{\pi /2}\\[6pt]&=-\alpha .\end{aligned}}}
Therefore: I ( α ) = C − α 2 2 . {\displaystyle I(\alpha )=C-{\frac {\alpha ^{2}}{2}}.}
But I ( π 2 ) = 0 {\textstyle I{\left({\frac {\pi }{2}}\right)}=0} by definition so C = π 2 8 {\textstyle C={\frac {\pi ^{2}}{8}}} and I ( α ) = π 2 8 − α 2 2 . {\displaystyle I(\alpha )={\frac {\pi ^{2}}{8}}-{\frac {\alpha ^{2}}{2}}.}
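This result, too, is easy to confirm numerically (Python with NumPy and SciPy; the sample values of α in (0, π) are arbitrary, and log1p is used only so the integrand is evaluated accurately near x = π/2):

```python
import numpy as np
from scipy.integrate import quad

def I_numeric(alpha):
    """Direct numerical evaluation of I(alpha)."""
    return quad(lambda x: np.log1p(np.cos(alpha) * np.cos(x)) / np.cos(x),
                0, np.pi / 2)[0]

def I_closed(alpha):
    """Closed form pi^2/8 - alpha^2/2 obtained above."""
    return np.pi ** 2 / 8 - alpha ** 2 / 2

for alpha in (0.5, 1.0, 2.5):
    print(alpha, I_numeric(alpha), I_closed(alpha))
```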
Here, we consider the integral ∫ 0 2 π e cos θ cos ( sin θ ) d θ . {\displaystyle \int _{0}^{2\pi }e^{\cos \theta }\cos(\sin \theta )\,d\theta .}
We introduce a new variable φ and rewrite the integral as f ( φ ) = ∫ 0 2 π e φ cos θ cos ( φ sin θ ) d θ . {\displaystyle f(\varphi )=\int _{0}^{2\pi }e^{\varphi \cos \theta }\cos(\varphi \sin \theta )\,d\theta .}
When φ = 1 this equals the original integral. However, this more general integral may be differentiated with respect to φ {\displaystyle \varphi } : d f d φ = ∫ 0 2 π ∂ ∂ φ [ e φ cos θ cos ( φ sin θ ) ] d θ = ∫ 0 2 π e φ cos θ [ cos θ cos ( φ sin θ ) − sin θ sin ( φ sin θ ) ] d θ . {\displaystyle {\frac {df}{d\varphi }}=\int _{0}^{2\pi }{\frac {\partial }{\partial \varphi }}\left[e^{\varphi \cos \theta }\cos(\varphi \sin \theta )\right]d\theta =\int _{0}^{2\pi }e^{\varphi \cos \theta }\left[\cos \theta \cos(\varphi \sin \theta )-\sin \theta \sin(\varphi \sin \theta )\right]d\theta .}
Now, fix φ , and consider the vector field on R 2 {\displaystyle \mathbb {R} ^{2}} defined by F ( x , y ) = ( F 1 ( x , y ) , F 2 ( x , y ) ) := ( e φ x sin ( φ y ) , e φ x cos ( φ y ) ) {\displaystyle \mathbf {F} (x,y)=(F_{1}(x,y),F_{2}(x,y)):=(e^{\varphi x}\sin(\varphi y),e^{\varphi x}\cos(\varphi y))} . Further, choose the positive oriented parameterization of the unit circle S 1 {\displaystyle S^{1}} given by r : [ 0 , 2 π ) → R 2 {\displaystyle \mathbf {r} \colon [0,2\pi )\to \mathbb {R} ^{2}} , r ( θ ) := ( cos θ , sin θ ) {\displaystyle \mathbf {r} (\theta ):=(\cos \theta ,\sin \theta )} , so that r ′ ( t ) = ( − sin θ , cos θ ) {\displaystyle \mathbf {r} '(t)=(-\sin \theta ,\cos \theta )} . Then the final integral above is precisely ∫ 0 2 π e φ cos θ [ cos θ cos ( φ sin θ ) − sin θ sin ( φ sin θ ) ] d θ = ∫ 0 2 π [ e φ cos θ sin ( φ sin θ ) e φ cos θ cos ( φ sin θ ) ] ⋅ [ − sin θ − cos θ ] d θ = ∫ 0 2 π F ( r ( θ ) ) ⋅ r ′ ( θ ) d θ = ∮ S 1 F ( r ) ⋅ d r = ∮ S 1 F 1 d x + F 2 d y , {\displaystyle {\begin{aligned}&\int _{0}^{2\pi }e^{\varphi \cos \theta }\left[\cos \theta \cos(\varphi \sin \theta )-\sin \theta \sin(\varphi \sin \theta )\right]d\theta \\[6pt]={}&\int _{0}^{2\pi }{\begin{bmatrix}e^{\varphi \cos \theta }\sin(\varphi \sin \theta )\\e^{\varphi \cos \theta }\cos(\varphi \sin \theta )\end{bmatrix}}\cdot {\begin{bmatrix}-\sin \theta \\{\hphantom {-}}\cos \theta \end{bmatrix}}\,d\theta \\[6pt]={}&\int _{0}^{2\pi }\mathbf {F} (\mathbf {r} (\theta ))\cdot \mathbf {r} '(\theta )\,d\theta \\[6pt]={}&\oint _{S^{1}}\mathbf {F} (\mathbf {r} )\cdot d\mathbf {r} =\oint _{S^{1}}F_{1}\,dx+F_{2}\,dy,\end{aligned}}} the line integral of F {\displaystyle \mathbf {F} } over S 1 {\displaystyle S^{1}} . By Green's Theorem , this equals the double integral ∬ D ∂ F 2 ∂ x − ∂ F 1 ∂ y d A , {\displaystyle \iint _{D}{\frac {\partial F_{2}}{\partial x}}-{\frac {\partial F_{1}}{\partial y}}\,dA,} where D {\displaystyle D} is the closed unit disc . Its integrand is identically 0, so d f / d φ {\displaystyle df/d\varphi } is likewise identically zero. This implies that f ( φ ) is constant. The constant may be determined by evaluating f {\displaystyle f} at φ = 0 {\displaystyle \varphi =0} : f ( 0 ) = ∫ 0 2 π 1 d θ = 2 π . {\displaystyle f(0)=\int _{0}^{2\pi }1\,d\theta =2\pi .}
Therefore, the original integral also equals 2 π {\displaystyle 2\pi } .
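A direct numerical evaluation confirms the value (Python with NumPy and SciPy):

```python
import numpy as np
from scipy.integrate import quad

value = quad(lambda t: np.exp(np.cos(t)) * np.cos(np.sin(t)), 0, 2 * np.pi)[0]
print(value, 2 * np.pi)    # both print 6.283185307...
```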
There are innumerable other integrals that can be solved using the technique of differentiation under the integral sign. For example, in each of the following cases, the original integral may be replaced by a similar integral having a new parameter α {\displaystyle \alpha } : ∫ 0 ∞ sin x x d x → ∫ 0 ∞ e − α x sin x x d x , ∫ 0 π / 2 x tan x d x → ∫ 0 π / 2 tan − 1 ( α tan x ) tan x d x , ∫ 0 ∞ ln ( 1 + x 2 ) 1 + x 2 d x → ∫ 0 ∞ ln ( 1 + α 2 x 2 ) 1 + x 2 d x ∫ 0 1 x − 1 ln x d x → ∫ 0 1 x α − 1 ln x d x . {\displaystyle {\begin{aligned}\int _{0}^{\infty }{\frac {\sin x}{x}}\,dx&\to \int _{0}^{\infty }e^{-\alpha x}{\frac {\sin x}{x}}dx,\\[6pt]\int _{0}^{\pi /2}{\frac {x}{\tan x}}\,dx&\to \int _{0}^{\pi /2}{\frac {\tan ^{-1}(\alpha \tan x)}{\tan x}}dx,\\[6pt]\int _{0}^{\infty }{\frac {\ln(1+x^{2})}{1+x^{2}}}\,dx&\to \int _{0}^{\infty }{\frac {\ln(1+\alpha ^{2}x^{2})}{1+x^{2}}}dx\\[6pt]\int _{0}^{1}{\frac {x-1}{\ln x}}\,dx&\to \int _{0}^{1}{\frac {x^{\alpha }-1}{\ln x}}dx.\end{aligned}}}
The first integral, the Dirichlet integral , is absolutely convergent for positive α but only conditionally convergent when α = 0 {\displaystyle \alpha =0} . Therefore, differentiation under the integral sign is easy to justify when α > 0 {\displaystyle \alpha >0} , but proving that the resulting formula remains valid when α = 0 {\displaystyle \alpha =0} requires some careful work.
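For α > 0, differentiating the first parametrized integral with respect to α gives −1/(1 + α²) (the elementary Laplace transform of sin x, with a sign change), and since the integral tends to 0 as α → ∞, its value is π/2 − arctan α. A numerical spot-check of that value (Python with NumPy and SciPy; the sample values of α are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

def F_numeric(alpha):
    """Numerical evaluation of the damped Dirichlet integral."""
    return quad(lambda x: np.exp(-alpha * x) * np.sin(x) / x, 0, np.inf)[0]

def F_closed(alpha):
    """Value pi/2 - arctan(alpha), obtained by integrating F'(alpha) = -1/(1 + alpha^2)."""
    return np.pi / 2 - np.arctan(alpha)

for alpha in (0.5, 1.0, 3.0):
    print(alpha, F_numeric(alpha), F_closed(alpha))
```

Letting α → 0⁺ in the closed form recovers the classical value π/2 of the Dirichlet integral, which is the delicate step alluded to above.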
The measure-theoretic version of differentiation under the integral sign also applies to summation (finite or infinite) by interpreting summation as counting measure . An example of an application is the fact that power series are differentiable in their radius of convergence. [ citation needed ]
The Leibniz integral rule is used in the derivation of the Euler-Lagrange equation in variational calculus .
Differentiation under the integral sign is mentioned in the late physicist Richard Feynman 's best-selling memoir Surely You're Joking, Mr. Feynman! in the chapter "A Different Box of Tools". He describes learning it, while in high school , from an old text, Advanced Calculus (1926), by Frederick S. Woods (who was a professor of mathematics in the Massachusetts Institute of Technology ). The technique was not often taught when Feynman later received his formal education in calculus , but using this technique, Feynman was able to solve otherwise difficult integration problems upon his arrival at graduate school at Princeton University :
One thing I never did learn was contour integration . I had learned to do integrals by various methods shown in a book that my high school physics teacher Mr. Bader had given me. One day he told me to stay after class. "Feynman," he said, "you talk too much and you make too much noise. I know why. You're bored. So I'm going to give you a book. You go up there in the back, in the corner, and study this book, and when you know everything that's in this book, you can talk again." So every physics class, I paid no attention to what was going on with Pascal's Law, or whatever they were doing. I was up in the back with this book: "Advanced Calculus" , by Woods. Bader knew I had studied "Calculus for the Practical Man" a little bit, so he gave me the real works—it was for a junior or senior course in college. It had Fourier series , Bessel functions , determinants , elliptic functions —all kinds of wonderful stuff that I didn't know anything about. That book also showed how to differentiate parameters under the integral sign—it's a certain operation. It turns out that's not taught very much in the universities; they don't emphasize it. But I caught on how to use that method, and I used that one damn tool again and again. So because I was self-taught using that book, I had peculiar methods of doing integrals. The result was, when guys at MIT or Princeton had trouble doing a certain integral, it was because they couldn't do it with the standard methods they had learned in school. If it was contour integration, they would have found it; if it was a simple series expansion, they would have found it. Then I come along and try differentiating under the integral sign, and often it worked. So I got a great reputation for doing integrals, only because my box of tools was different from everybody else's, and they had tried all their tools on it before giving the problem to me. | https://en.wikipedia.org/wiki/Leibniz_integral_rule |
The Leibniz–Clarke correspondence was a scientific , theological and philosophical debate conducted in an exchange of letters between the German thinker Gottfried Wilhelm Leibniz and Samuel Clarke , an English supporter of Isaac Newton during the years 1715 and 1716. The exchange began because of a letter Leibniz wrote to Caroline of Ansbach , in which he remarked that Newtonian physics was detrimental to natural theology . Eager to defend the Newtonian view, Clarke responded, and the correspondence continued until the death of Leibniz in 1716. [ 1 ]
Although a variety of subjects are touched on in the letters, the main interest for modern readers is in the dispute between the absolute theory of space favoured by Newton and Clarke, and Leibniz's relational approach. Also important is the conflict between Clarke's and Leibniz's opinions on free will and whether God must create the best of all possible worlds . [ 2 ]
Leibniz had published only one book on moral matters, the Théodicée (1710), and his more metaphysical views had never been set out at sufficient length, so the collected letters were met with interest by contemporaries. The priority dispute between Leibniz and Newton over the calculus was still fresh in the public's mind, and it was taken as a matter of course that it was Newton himself who stood behind Clarke's replies.
The Leibniz-Clarke letters were first published under Clarke's name in the year following Leibniz's death. [ 3 ] Clarke wrote a preface, took care of the translation from French, and added notes and some of his own writing. In 1720 Pierre Desmaizeaux published a similar volume in a French translation, [ 4 ] including quotes from Newton's work. It is quite certain that for both editions the opinion of Newton himself was sought, leaving Leibniz at a disadvantage. [ 5 ] However, the German translation of the correspondence published by Kohler, also in 1720, [ 6 ] contained a reply to Clarke's last letter, which Leibniz had not been able to answer before his death. The letters have been reprinted in most collections of Leibniz's works and regularly published in stand-alone editions. [ 7 ] | https://en.wikipedia.org/wiki/Leibniz–Clarke_correspondence |
In the history of calculus , the calculus controversy ( German : Prioritätsstreit , lit. 'priority dispute') was an argument between mathematicians Isaac Newton and Gottfried Wilhelm Leibniz over who had first discovered calculus . The question was a major intellectual controversy, beginning in 1699 and reaching its peak in 1712. Leibniz had published his work on calculus first, but Newton's supporters accused Leibniz of plagiarizing Newton's unpublished ideas. The modern consensus is that the two men independently developed their ideas. Their creation of calculus has been called "the greatest advance in mathematics that had taken place since the time of Archimedes ." [ 1 ]
Newton stated he had begun working on a form of calculus (which he called " The Method of Fluxions and Infinite Series ") in 1666, at the age of 23, but did not publish it until 1737 as a minor annotation in the back of one of his works decades later (a relevant Newton manuscript of October 1666 is now published among his mathematical papers [ 2 ] ). [ 3 ] Gottfried Leibniz began working on his variant of calculus in 1674, and in 1684 published his first paper employing it, " Nova Methodus pro Maximis et Minimis ". L'Hôpital published a text on Leibniz's calculus in 1696 (in which he recognized that Newton's Principia of 1687 was "nearly all about this calculus"). Meanwhile, Newton, though he explained his (geometrical) form of calculus in Section I of Book I of the Principia of 1687, [ 4 ] did not explain his eventual fluxional notation for the calculus [ 5 ] in print until 1693 (in part) and 1704 (in full).
The prevailing opinion in the 18th century was against Leibniz (in Britain, not in the German-speaking world). Today, the consensus is that Leibniz and Newton independently invented and described calculus in Europe in the 17th century; their work was more than just a "synthesis of previously distinct pieces of mathematical technique, but it was certainly this in part". [ 6 ]
It was certainly Isaac Newton who first devised a new infinitesimal calculus and elaborated it into a widely extensible algorithm, whose potentialities he fully understood; of equal certainty, differential and integral calculus , the fount of great developments flowing continuously from 1684 to the present day, was created independently by Gottfried Leibniz.
One author has identified the dispute as being about "profoundly different" methods:
Despite ... points of resemblance, the methods [of Newton and Leibniz] are profoundly different, so making the priority row a nonsense.
On the other hand, other authors have emphasized the equivalences and mutual translatability of the methods: here N Guicciardini (2003) appears to confirm L'Hôpital (1696) (already cited):
the Newtonian and Leibnizian schools shared a common mathematical method. They adopted two algorithms, the analytical method of fluxions, and the differential and integral calculus, which were translatable one into the other.
In the 17th century the question of scientific priority was of great importance to scientists; however, during this period, scientific journals had just begun to appear, and the generally accepted mechanism for fixing priority when publishing information about discoveries had not yet been formed. Among the methods used by scientists were anagrams , sealed envelopes placed in a safe place, correspondence with other scientists, or a private message. A letter to the founder of the French Academy of Sciences , Marin Mersenne , for a French scientist, or to the secretary of the Royal Society of London , Henry Oldenburg , for an English one, had essentially the status of a published article. The discoverer could "time-stamp" the moment of his discovery, and prove that he knew of it at the point the letter was sealed, and had not copied it from anything subsequently published; nevertheless, where an idea was subsequently published in conjunction with its use in a particularly valuable context, this might take priority over an earlier discoverer's work, which had no obvious application. Further, a mathematician's claim could be undermined by counter-claims that he had not truly invented an idea, but merely improved on someone else's idea, an improvement that required little skill, and was based on facts that were already known. [ 8 ]
A series of high-profile disputes about scientific priority in the 17th century—the era that the American science historian D. Meli called "the golden age of the mud-slinging priority disputes"—is associated with Leibniz . The first of them occurred at the beginning of 1673, during his first visit to London, when in the presence of the famous mathematician John Pell he presented his method of approximating series by differences . To Pell's remark that this discovery had already been made by François Regnaud and published in 1670 in Lyon by Gabriel Mouton , Leibniz answered the next day. [ 9 ] [ 10 ] In a letter to Oldenburg, he wrote that, having looked at Mouton's book, he acknowledged Pell was right, but offered his draft notes, which contained nuances not found by Regnaud and Mouton. Thus the integrity of Leibniz was demonstrated, but the episode was recalled against him later. [ 11 ] [ 12 ] On the same visit to London, Leibniz found himself in the opposite position. On 1 February 1673, at a meeting of the Royal Society of London, he demonstrated his mechanical calculator . The curator of experiments of the Society, Robert Hooke , carefully examined the device and even removed the back cover. A few days later, in the absence of Leibniz, Hooke criticized the German scientist's machine, saying that he could make a simpler model. Leibniz, on learning of this, returned to Paris, categorically rejected Hooke's claim in a letter to Oldenburg, and formulated principles of correct scientific behaviour: "We know that respectable and modest people, when they think of something consistent with what someone else has already discovered, prefer to ascribe their own improvements and additions to the discoverer, so as not to arouse suspicions of intellectual dishonesty; the desire for true generosity should guide them, instead of the lying thirst for dishonest profit." To illustrate the proper behaviour, Leibniz gives the example of Nicolas-Claude Fabri de Peiresc and Pierre Gassendi , who performed astronomical observations similar to those made earlier by Galileo Galilei and Johannes Hevelius , respectively. On learning that they had not made their discoveries first, the French scientists passed on their data to the discoverers. [ 13 ]
Newton's approach to the priority problem can be illustrated by the example of the discovery of the inverse-square law as applied to the dynamics of bodies moving under the influence of gravity . Based on an analysis of Kepler's laws and his own calculations, Robert Hooke made the assumption that motion under such conditions should occur along orbits similar to ellipses . Unable to rigorously prove this claim, he reported it to Newton. Without further entering into correspondence with Hooke, Newton solved this problem, as well as its inverse, proving that the law of inverse squares follows from the ellipticity of the orbits. This discovery was set forth in his famous work Philosophiæ Naturalis Principia Mathematica without mentioning Hooke. At the insistence of the astronomer Edmund Halley , to whom the manuscript was handed over for editing and publication, a phrase was included in the text stating that the agreement of Kepler's first law with the inverse-square law had been "independently approved by Wren , Hooke and Halley." [ 14 ]
According to the remark of Vladimir Arnold , Newton, choosing between refusal to publish his discoveries and constant struggle for priority, chose both of them. [ 15 ]
By the time of Newton and Leibniz, European mathematicians had already made a significant contribution to the formation of the ideas of mathematical analysis. The Dutchman Simon Stevin (1548–1620), the Italian Luca Valerio (1553–1618), and the German Johannes Kepler (1571–1630) were engaged in the development of the ancient " method of exhaustion " for calculating areas and volumes. The latter's ideas apparently influenced – directly or through Galileo Galilei – the " method of indivisibles " developed by Bonaventura Cavalieri (1598–1647). [ 16 ]
The last years of Leibniz's life, 1710–1716, were embittered by a long controversy with John Keill , Newton, and others, over whether Leibniz had discovered calculus independently of Newton, or whether he had merely invented another notation for ideas that were fundamentally Newton's. No participant doubted that Newton had already developed his method of fluxions when Leibniz began working on the differential calculus, yet there was seemingly no proof beyond Newton's word. He had published a calculation of a tangent with the note: "This is only a special case of a general method whereby I can calculate curves and determine maxima, minima, and centers of gravity." How this was done he explained to a pupil a full 20 years later, when Leibniz's articles were already widely read. Newton's manuscripts came to light only after his death.
The infinitesimal calculus can be expressed either in the notation of fluxions or in that of differentials , or, as noted above, it was also expressed by Newton in geometrical form, as in the Principia of 1687. Newton employed fluxions as early as 1666, but did not publish an account of his notation until 1693. The earliest use of differentials in Leibniz's notebooks may be traced to 1675. He employed this notation in a 1677 letter to Newton. The differential notation also appeared in Leibniz's memoir of 1684.
The claim that Leibniz invented the calculus independently of Newton rests on the basis that Leibniz:
According to Leibniz's detractors, the fact that Leibniz's claim went unchallenged for some years is immaterial. To rebut this case it is sufficient to show that he:
No attempt was made to rebut #4, which was not known at the time, but which provides the strongest of the evidence that Leibniz came to the calculus independently from Newton. This evidence, however, is still questionable based on the discovery, during the inquest and after, that Leibniz both back-dated and changed fundamentals of his "original" notes, not only in this intellectual conflict, but in several others. [ 18 ] He also published "anonymous" slanders of Newton regarding their controversy, which he initially tried to claim he had not authored. [ 18 ]
If good faith is nevertheless assumed, however, Leibniz's notes as presented to the inquest came first to integration , which he saw as a generalization of the summation of infinite series, whereas Newton began from derivatives. However, to view the development of calculus as entirely independent between the work of Newton and Leibniz misses that both had some knowledge of the methods of the other (though Newton did develop most fundamentals before Leibniz began) and worked together on a few aspects, in particular power series , as is shown in a letter to Henry Oldenburg dated 24 October 1676, where Newton remarks that Leibniz had developed a number of methods, one of which was new to him. [ 19 ] Both Leibniz and Newton could see the other was far along towards inventing calculus (Leibniz in particular mentions it) but only Leibniz was prodded thereby into publication.
That Leibniz saw some of Newton's manuscripts had always been likely. In 1849, C. I. Gerhardt , while going through Leibniz's manuscripts, found extracts from Newton's De Analysi per Equationes Numero Terminorum Infinitas (published in 1704 as part of the De Quadratura Curvarum but also previously circulated among mathematicians starting with Newton giving a copy to Isaac Barrow in 1669 and Barrow sending it to John Collins [ 20 ] ) in Leibniz's handwriting, the existence of which had been previously unsuspected, along with notes re-expressing the content of these extracts in Leibniz's differential notation. Hence when these extracts were made becomes all-important. It is known that a copy of Newton's manuscript had been sent to Ehrenfried Walther von Tschirnhaus in May 1675, a time when he and Leibniz were collaborating; it is not impossible that these extracts were made then. It is also possible that they may have been made in 1676, when Leibniz discussed analysis by infinite series with Collins and Oldenburg. It is probable that they would have then shown him Newton's manuscript on the subject, a copy of which one or both of them surely possessed. On the other hand, it may be supposed that Leibniz made the extracts from the printed copy in or after 1704. Shortly before his death, Leibniz admitted in a letter to Abbé Antonio Schinella Conti , that in 1676 Collins had shown him some of Newton's papers, but Leibniz also implied that they were of little or no value. Presumably he was referring to Newton's letters of 13 June and 24 October 1676, and to the letter of 10 December 1672, on the method of tangents , extracts from which accompanied the letter of 13 June.
Whether Leibniz made use of the manuscript from which he had copied extracts, or whether he had previously invented the calculus, are questions on which no direct evidence is available at present. It is, however, worth noting that the unpublished Portsmouth Papers show that when Newton entered into the dispute in 1711, he picked this manuscript as the one which had likely fallen into Leibniz's hands. At that time there was no direct evidence that Leibniz had seen Newton's manuscript before it was printed in 1704; hence Newton's conjecture was not published. But Gerhardt's discovery of a copy made by Leibniz appears to confirm its accuracy. Those who question Leibniz's good faith allege that to a man of his ability, the manuscript, especially if supplemented by the letter of 10 December 1672, sufficed to give him a clue as to the methods of the calculus. Since Newton's work at issue did employ the fluxional notation, anyone building on that work would have to invent a notation, but some deny this.
The quarrel was a retrospective affair. In 1696, already some years later than the events that became the subject of the quarrel, the position still looked potentially peaceful: Newton and Leibniz had each made limited acknowledgements of the other's work, and L'Hôpital's 1696 book about the calculus from a Leibnizian point of view had also acknowledged Newton's published work of the 1680s as "nearly all about this calculus" (" presque tout de ce calcul "), while expressing preference for the convenience of Leibniz's notation . [ 5 ]
At first, there was no reason to suspect Leibniz's good faith. In 1699, Nicolas Fatio de Duillier , a Swiss mathematician known for his work on the zodiacal light problem, publicly accused Leibniz of plagiarizing Newton, [ 21 ] although he privately had accused Leibniz of plagiarism twice in letters to Christiaan Huygens in 1692. [ 22 ] It was not until the 1704 publication of an anonymous review of Newton's tract on quadrature , which implied Newton had borrowed the idea of the fluxional calculus from Leibniz, that any responsible mathematician doubted Leibniz had invented the calculus independently of Newton. With respect to the review of Newton's quadrature work, all admit that there was no justification or authority for the statements made therein, which were rightly attributed to Leibniz. But the subsequent discussion led to a critical examination of the whole question, and doubts emerged: "Had Leibniz derived the fundamental idea of the calculus from Newton?" The case against Leibniz, as it appeared to Newton's friends, was summed up in the Commercium Epistolicum of 1712, which referenced all the allegations. This document was thoroughly worked over by Newton.
No such summary (with facts, dates, and references) of the case for Leibniz was issued by his friends; but Johann Bernoulli attempted to indirectly weaken the evidence by attacking the personal character of Newton in a letter dated 7 June 1713. When pressed for an explanation, Bernoulli most solemnly denied having written the letter. In accepting the denial, Newton added in a private letter to Bernoulli the following remarks, giving his claimed reasons for taking part in the controversy. He said, "I have never grasped at fame among foreign nations, but I am very desirous to preserve my character for honesty, which the author of that epistle, as if by the authority of a great judge, had endeavoured to wrest from me. Now that I am old, I have little pleasure in mathematical studies, and I have never tried to propagate my opinions over the world, but I have rather taken care not to involve myself in disputes on account of them."
Leibniz explained his silence as follows, in a letter to Conti dated 9 April 1716:
In order to respond point by point to all the work published against me, I would have to go into much minutiae that occurred thirty, forty years ago, of which I remember little: I would have to search my old letters, of which many are lost. Moreover, in most cases, I did not keep a copy, and when I did, the copy is buried in a great heap of papers, which I could sort through only with time and patience. I have enjoyed little leisure, being so weighted down of late with occupations of a totally different nature.
In any event, a bias favouring Newton tainted the whole affair from the outset. The Royal Society , of which Isaac Newton was president at the time, set up a committee to pronounce on the priority dispute, in response to a letter it had received from Leibniz. That committee never asked Leibniz to give his version of the events. The report of the committee, finding in favour of Newton, was written and published as "Commercium Epistolicum" (mentioned above) by Newton early in 1713. But Leibniz did not see it until the autumn of 1714.
Leibniz never agreed to acknowledge Newton's priority in inventing calculus. He attempted to write his own version of the history of differential calculus, but, as with his history of the rulers of Braunschweig, he never completed it. [ 23 ] At the end of 1715, Leibniz accepted Johann Bernoulli 's offer to organize another mathematical competition, in which the different approaches had to prove their worth. This time the problem was taken from the area later called the calculus of variations – it was required to construct a tangent line to a family of curves. A letter stating the problem was written on 25 November and transmitted to Newton in London through Abate Conti . The problem was formulated in unclear terms, and only later did it become evident that a general solution was required, not a particular one as Newton had understood. After the British side published their solution, Leibniz published his, which was more general, and thus formally won the competition. [ 24 ] For his part, Newton stubbornly sought to destroy his opponent. Not having achieved this with the "Report", he continued his research, spending hundreds of hours on it. His next study, entitled "Observations upon the preceding Epistle", was prompted by a letter Leibniz wrote to Conti in March 1716 criticizing Newton's philosophical views; no new facts were presented in this document. [ 25 ] | https://en.wikipedia.org/wiki/Leibniz–Newton_calculus_controversy |
The Leiden Bio Science Park (LBSP) is the largest life sciences cluster in the Netherlands [ 1 ] and ranks in the top five of the most successful science parks in Europe. [ 2 ] It is part of Leiden and Oegstgeest and focuses on companies and universities in the Biotechnology sector.
The park comprises 110 hectares (270 acres) with over 215 organisations, including 150 Life Sciences & Health companies. [ 3 ] The park is located mostly in Leiden and lies between Wassenaarseweg on the north and the Plesmanlaan on the south.
The park focuses mostly on the use of biotechnology for medical and biopharmaceutical applications.
The LBSP was founded in 1984 in the Leeuwenhoek area west of Leiden Central Station , between the Faculty of Science of the Leiden University and the former Academic Medical Hospital Leiden, now known as the Leiden University Medical Center (LUMC). The municipality decided that this area should primarily be focused on biotechnology.
In 2005, the foundation Leiden Life meets Science was founded by Leiden University, the municipality of Leiden , the LUMC, the Netherlands Organisation for Applied Scientific Research (TNO), the Naturalis Biodiversity Center , the Chamber of Commerce , the province of South Holland , the University of Applied Sciences Leiden , and the ROC Leiden, with the purpose of growing the park in size and quality.
| https://en.wikipedia.org/wiki/Leiden_Bio_Science_Park |
Leiden Classical [ 1 ] was a volunteer computing project run by the Theoretical Chemistry Department of the Leiden Institute of Chemistry at Leiden University . Leiden Classical used the BOINC system, and enabled scientists or science students to submit their own test simulations of various molecules and atoms in a classical mechanics environment. ClassicalDynamics is a program (together with a library) written entirely in C++. The library is covered by the LGPL license and the main program is covered by the GPL . [ 2 ] The project shut down on June 5, 2018. [ 3 ]
Participation was possible via the BOINC manager. Using this software, one could create an account in the project and then submit a model of a dynamical system to be run as a simulation. Several kinds of models were possible, ranging from interactions between molecules to planetary systems.
To create a personal calculation, a user's model had to have six defined variables:
| https://en.wikipedia.org/wiki/Leiden_Classical |
The Leidenfrost effect or film boiling is a physical phenomenon in which a liquid, close to a solid surface of another body that is significantly hotter than the liquid's boiling point , produces an insulating vapor layer that keeps the liquid from boiling rapidly. Because of this repulsive force, a droplet hovers over the surface, rather than making physical contact with it. The effect is named after the German doctor Johann Gottlob Leidenfrost , who described it in A Tract About Some Qualities of Common Water . [ 1 ]
This is most commonly seen when cooking , when drops of water are sprinkled onto a hot pan. If the pan's temperature is at or above the Leidenfrost point, which is approximately 193 °C (379 °F) for water, the water skitters across the pan and takes longer to evaporate than it would if it had been sprinkled onto a cooler pan.
The effect can be seen as drops of water are sprinkled onto a pan at various times as it heats up. Initially, as the temperature of the pan is just below 100 °C (212 °F), the water flattens out and slowly evaporates, or if the temperature of the pan is well below 100 °C (212 °F), the water stays liquid. As the temperature of the pan rises above 100 °C (212 °F), the water droplets hiss when touching the pan, and these droplets evaporate quickly. When the temperature exceeds the Leidenfrost point, the Leidenfrost effect appears. On contact with the pan, the water droplets bunch up into small balls of water and skitter around, lasting much longer than when the temperature of the pan was lower. This effect works until a much higher temperature causes any further drops of water to evaporate too quickly to cause this effect.
The effect happens because, at temperatures at or above the Leidenfrost point, the bottom part of the water droplet vaporizes immediately on contact with the hot pan. The resulting gas suspends the rest of the water droplet just above it, preventing any further direct contact between the liquid water and the hot pan. As steam has much poorer thermal conductivity than the metal pan, further heat transfer between the pan and the droplet is slowed down dramatically. This also results in the drop being able to skid around the pan on the layer of gas just under it.
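The dramatic drop in heat transfer can be put on a rough numerical footing with a simple conduction estimate, q″ = k·ΔT/d, across the vapor layer. The sketch below is illustrative only; the film thickness and property values are assumptions, not figures from this article or its references.

```python
# Order-of-magnitude estimate of conduction across a Leidenfrost vapor film,
# q'' = k * dT / d, compared with the same gap filled with liquid water.
# All numbers below are illustrative assumptions.
k_vapor = 0.033     # W/(m*K), thermal conductivity of steam near 200 degC (approx.)
k_water = 0.68      # W/(m*K), thermal conductivity of liquid water (approx.)
dT      = 150.0     # K, assumed pan-to-droplet temperature difference
d_film  = 1e-4      # m, assumed ~0.1 mm vapor layer thickness

q_film   = k_vapor * dT / d_film   # heat flux across the insulating vapor blanket
q_direct = k_water * dT / d_film   # the same gap filled with liquid, for contrast

print(f"through vapor film: {q_film/1e3:6.1f} kW/m^2")
print(f"liquid-filled gap : {q_direct/1e3:6.1f} kW/m^2 (~{q_direct/q_film:.0f}x larger)")
```

Even with these crude numbers the vapor blanket cuts the conductive flux by roughly a factor of twenty, which is why the droplet survives so much longer above the Leidenfrost point.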
The temperature at which the Leidenfrost effect appears is difficult to predict. Even if the volume of the drop of liquid stays the same, the Leidenfrost point may be quite different, with a complicated dependence on the properties of the surface, as well as any impurities in the liquid. Some research has been conducted into a theoretical model of the system, but it is quite complicated. [ 2 ]
The effect was also described by the Victorian steam boiler designer, William Fairbairn , in reference to its effect on massively reducing heat transfer from a hot iron surface to water, such as within a boiler. In a pair of lectures on boiler design, [ 3 ] he cited the work of Pierre Hippolyte Boutigny (1798–1884) and Professor Bowman of King's College, London , in studying this. A drop of water that was vaporized almost immediately at 168 °C (334 °F) persisted for 152 seconds at 202 °C (396 °F). Lower temperatures in a boiler firebox might evaporate water more quickly as a result; compare Mpemba effect . An alternative approach was to increase the temperature beyond the Leidenfrost point. Fairbairn considered this, too, and may have been contemplating the flash steam boiler , but considered the technical aspects insurmountable for the time.
The Leidenfrost point may also be taken to be the temperature for which the hovering droplet lasts longest. [ 4 ]
It has been demonstrated that it is possible to stabilize the Leidenfrost vapor layer of water by exploiting superhydrophobic surfaces. In this case, once the vapor layer is established, cooling never collapses the layer, and no nucleate boiling occurs; the layer instead slowly relaxes until the surface is cooled. [ 5 ]
Droplets of different liquids with different boiling temperatures will also exhibit a Leidenfrost effect with respect to each other and repel each other. [ 6 ]
The Leidenfrost effect has been used for the development of high sensitivity ambient mass spectrometry. Under the influence of the Leidenfrost condition, the levitating droplet does not release molecules, and the molecules are enriched inside the droplet. At the last moment of droplet evaporation, all the enriched molecules release in a short time period and thereby increase the sensitivity. [ 7 ]
A heat engine based on the Leidenfrost effect has been prototyped; it has the advantage of extremely low friction. [ 8 ]
The effect also applies when the surface is at room temperature but the liquid is cryogenic , allowing liquid nitrogen droplets to harmlessly roll off exposed skin. [ 9 ] Conversely, the inverse Leidenfrost effect lets drops of relatively warm liquid levitate on a bath of liquid nitrogen. [ 10 ]
The Leidenfrost point signifies the onset of stable film boiling. It represents the point on the boiling curve where the heat flux is at the minimum and the surface is completely covered by a vapor blanket. Heat transfer from the surface to the liquid occurs by conduction and radiation through the vapour. In 1756, Leidenfrost observed that water droplets supported by the vapor film slowly evaporate as they move about on the hot surface. As the surface temperature is increased, radiation through the vapor film becomes more significant and the heat flux increases with increasing excess temperature.
The minimum heat flux for a large horizontal plate can be derived from Zuber's equation, [ 4 ]
{\displaystyle \left.{\frac {q}{A}}\right|_{\min }=C\,h_{fg}\,\rho _{v}\left[{\frac {\sigma \,g\,(\rho _{L}-\rho _{v})}{(\rho _{L}+\rho _{v})^{2}}}\right]^{1/4}}
where the properties are evaluated at the saturation temperature. Zuber's constant, C , is approximately 0.09 for most fluids at moderate pressures.
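To make the expression concrete, the short sketch below evaluates it for saturated water at atmospheric pressure. The property values are standard steam-table approximations supplied here as assumptions, not figures quoted by the article.

```python
# Zuber's minimum heat flux for saturated water at 1 atm (approximate properties).
g     = 9.81       # m/s^2
C     = 0.09       # Zuber's constant, as stated above
rho_L = 958.0      # kg/m^3, saturated liquid water at 100 degC (approx.)
rho_v = 0.60       # kg/m^3, saturated steam at 100 degC (approx.)
h_fg  = 2.257e6    # J/kg, latent heat of vaporization (approx.)
sigma = 0.0589     # N/m, surface tension at 100 degC (approx.)

q_min = C * h_fg * rho_v * (sigma * g * (rho_L - rho_v) / (rho_L + rho_v)**2) ** 0.25
print(f"minimum film-boiling heat flux ~ {q_min/1e3:.0f} kW/m^2")   # roughly 19 kW/m^2
```

The result, on the order of 19 kW/m², is the magnitude commonly quoted for water, and it gives a feel for how little heat crosses the vapor blanket at the Leidenfrost point compared with nucleate boiling.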
The heat transfer coefficient may be approximated using Bromley's equation, [ 4 ]
{\displaystyle h=C\left[{\frac {k_{v}^{3}\,\rho _{v}\,g\,(\rho _{L}-\rho _{v})\left(h_{fg}+0.4\,c_{pv}(T_{s}-T_{sat})\right)}{D_{o}\,\mu _{v}\,(T_{s}-T_{sat})}}\right]^{1/4}}
where D_o is the outside diameter of the tube. The correlation constant C is 0.62 for horizontal cylinders and vertical plates, and 0.67 for spheres. Vapor properties are evaluated at the film temperature.
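As an illustration of how the correlation is used, the sketch below estimates the film-boiling coefficient for a small horizontal tube in saturated water. Every property value (vapor conductivity, viscosity, density and specific heat at an assumed film temperature, the tube diameter, and the wall superheat) is an illustrative assumption rather than data from the article.

```python
# Bromley film-boiling coefficient for a horizontal cylinder (illustrative values).
g      = 9.81       # m/s^2
C      = 0.62       # horizontal-cylinder constant, as stated above
k_v    = 0.033      # W/(m*K), steam conductivity at an assumed ~200 degC film temperature
rho_v  = 0.46       # kg/m^3, steam density at the film temperature (assumed)
rho_L  = 958.0      # kg/m^3, saturated liquid water (approx.)
mu_v   = 1.6e-5     # Pa*s, steam viscosity at the film temperature (assumed)
c_pv   = 2.0e3      # J/(kg*K), steam specific heat (assumed)
h_fg   = 2.257e6    # J/kg, latent heat (approx.)
D_o    = 0.01       # m, tube outside diameter (assumed)
dT     = 200.0      # K, wall superheat T_s - T_sat (assumed)

h_prime = h_fg + 0.4 * c_pv * dT                 # latent heat corrected for vapor superheat
h_conv  = C * (k_v**3 * rho_v * g * (rho_L - rho_v) * h_prime
               / (D_o * mu_v * dT)) ** 0.25
print(f"film-boiling coefficient ~ {h_conv:.0f} W/(m^2*K)")   # a few hundred W/(m^2*K)
```

Coefficients of a few hundred W/(m²·K) are typical of film boiling, far below nucleate-boiling values, which is again the signature of the insulating vapor layer.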
For stable film boiling on a horizontal surface, Berenson has modified Bromley's equation to yield, [ 11 ]
{\displaystyle h=0.425\left[{\frac {k_{vf}^{3}\,\rho _{vf}\,g\,(\rho _{L}-\rho _{v})\left(h_{fg}+0.4\,c_{pv}(T_{s}-T_{sat})\right)}{\mu _{vf}\,(T_{s}-T_{sat})\,{\sqrt {\sigma /{\bigl (}g(\rho _{L}-\rho _{v}){\bigr )}}}}}\right]^{1/4}}
For vertical tubes, Hsu and Westwater have correlated the following equation, [ 11 ]
{\displaystyle h\left[{\frac {\mu _{v}^{2}}{g\,\rho _{v}\,(\rho _{L}-\rho _{v})\,k_{v}^{3}}}\right]^{1/3}=0.0020\left[{\frac {4m}{\pi D_{v}\mu _{v}}}\right]^{0.6}}
where m is the mass flow rate in lb_m/hr at the upper end of the tube.
At excess temperatures above that at the minimum heat flux, the contribution of radiation becomes appreciable, and it becomes dominant at high excess temperatures. The total heat transfer coefficient is thus a combination of the two. Bromley has suggested the following equations for film boiling from the outer surface of horizontal tubes:
{\displaystyle h^{4/3}=h_{conv}^{4/3}+h_{rad}\,h^{1/3}}
If h_rad < h_conv ,
{\displaystyle h=h_{conv}+{\tfrac {3}{4}}\,h_{rad}}
The effective radiation coefficient, h_rad , can be expressed as
{\displaystyle h_{rad}={\frac {\varepsilon \sigma \left(T_{s}^{4}-T_{sat}^{4}\right)}{T_{s}-T_{sat}}}}
where ε is the emissivity of the solid and σ is the Stefan–Boltzmann constant.
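Because the combined relation above is implicit in h, it is usually solved iteratively. The sketch below shows one way to do this with a simple fixed-point loop and compares the answer with the simplified form h = h_conv + (3/4)·h_rad; the two input coefficients are placeholders chosen only to demonstrate the procedure.

```python
# Solving Bromley's implicit combination h^(4/3) = h_conv^(4/3) + h_rad * h^(1/3)
# by fixed-point iteration, then checking the simplified rule of thumb.
# The input coefficients are illustrative placeholders, not data from the article.
h_conv = 300.0   # W/(m^2*K), film-boiling convective coefficient (assumed)
h_rad  = 60.0    # W/(m^2*K), effective radiation coefficient (assumed)

h = h_conv + h_rad                     # starting guess
for _ in range(50):                    # converges in a handful of iterations
    h = (h_conv**(4/3) + h_rad * h**(1/3)) ** 0.75

print(f"iterated total coefficient     : {h:.1f} W/(m^2*K)")
print(f"simplified h_conv + 3/4 * h_rad: {h_conv + 0.75 * h_rad:.1f} W/(m^2*K)")
```

With radiation much smaller than convection, the iterated value (about 346 W/(m²·K) for these inputs) lands very close to the simplified estimate of 345, which is why the shortcut is acceptable whenever h_rad < h_conv.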
The equation for the pressure field in the vapor region between the droplet and the solid surface can be solved for using the standard momentum and continuity equations using a Boundary layer model . In this model for the sake of simplicity in solving, a linear temperature profile and a parabolic velocity profile are assumed within the vapor phase . The heat transfer within the vapor phase is assumed to be through conduction . With these approximations, the Navier–Stokes equations can be solved to get the pressure field. [ 12 ]
The Leidenfrost temperature is a property of a given solid–liquid pair. The temperature of the solid surface beyond which the liquid undergoes the Leidenfrost phenomenon is termed the Leidenfrost temperature. Calculation of the Leidenfrost temperature involves the calculation of the minimum film boiling temperature of a fluid. Berenson [ 13 ] obtained a relation for the minimum film boiling temperature from minimum heat flux arguments. While the equation for the minimum film boiling temperature, which can be found in the reference above, is quite complex, the features of it can be understood from a physical perspective. One critical parameter to consider is the surface tension . The proportional relationship between the minimum film boiling temperature and surface tension is to be expected, since fluids with higher surface tension need higher quantities of heat flux for the onset of nucleate boiling . Since film boiling occurs after nucleate boiling, the minimum temperature for film boiling should have a proportional dependence on the surface tension.
Henry developed a model for Leidenfrost phenomenon which includes transient wetting and microlayer evaporation. [ 14 ] Since the Leidenfrost phenomenon is a special case of film boiling, the Leidenfrost temperature is related to the minimum film boiling temperature via a relation which factors in the properties of the solid being used. While the Leidenfrost temperature is not directly related to the surface tension of the fluid, it is indirectly dependent on it through the film boiling temperature. For fluids with similar thermophysical properties, the one with higher surface tension usually has a higher Leidenfrost temperature. A related phenomenon was observed during the solidification of paraffin wax droplets, which develop a characteristic apple-like shape with a central dimple due to the combined effects of gravity, viscosity increase, and heat conduction through a mushy zone. [ 15 ]
For example, for a saturated water–copper interface, the Leidenfrost temperature is 257 °C (495 °F). The Leidenfrost temperatures for glycerol and common alcohols are significantly smaller because of their lower surface tension values (density and viscosity differences are also contributing factors.)
Non-volatile materials were discovered in 2015 to also exhibit a 'reactive Leidenfrost effect', whereby solid particles were observed to float above hot surfaces and skitter around erratically. [ 16 ] Detailed characterization of the reactive Leidenfrost effect was completed for small particles of cellulose (~0.5 mm) on high temperature polished surfaces by high speed photography. Cellulose was shown to decompose to short-chain oligomers which melt and wet smooth surfaces with increasing heat transfer associated with increasing surface temperature. Above 675 °C (1,247 °F), cellulose was observed to exhibit transition boiling with violent bubbling and associated reduction in heat transfer. Liftoff of the cellulose droplet (depicted at the right) was observed to occur above about 750 °C (1,380 °F), associated with a dramatic reduction in heat transfer. [ 16 ]
High speed photography of the reactive Leidenfrost effect of cellulose on porous surfaces (macroporous alumina ) was also shown to suppress the reactive Leidenfrost effect and enhance overall heat transfer rates to the particle from the surface. The new phenomenon of a 'reactive Leidenfrost (RL) effect' was characterized by a dimensionless quantity, (φ RL = τ conv /τ rxn ), which relates the time constant of solid particle heat transfer to the time constant of particle reaction, with the reactive Leidenfrost effect occurring for 10 −1 < φ RL < 10 +1 . The reactive Leidenfrost effect with cellulose will occur in numerous high temperature applications with carbohydrate polymers, including biomass conversion to biofuels , preparation and cooking of food, and tobacco use. [ 16 ]
The Leidenfrost effect has also been used as a means to promote chemical change of various organic liquids through their conversion by thermal decomposition into various products. Examples include decomposition of ethanol, [ 17 ] diethyl carbonate, [ 18 ] and glycerol. [ 19 ]
In Jules Verne 's 1876 book Michael Strogoff , the protagonist is saved from being blinded with a hot blade by evaporating tears. [ 20 ]
In the 2009 season 7 finale of MythBusters , " Mini Myth Mayhem ", the team demonstrated that a person can wet their hand and briefly dip it into molten lead without injury, using the Leidenfrost effect as the scientific basis. [ 21 ] | https://en.wikipedia.org/wiki/Leidenfrost_effect |
In atmospheric chemistry , the Leighton relationship is an equation that determines the concentration of tropospheric ozone in areas polluted by the presence of nitrogen oxides . Ozone in the troposphere is primarily produced through the photolysis of nitrogen dioxide by photons with wavelengths (λ) less than 420 nanometers , [ 1 ] which are able to reach the lowest levels of the atmosphere , through the following mechanism: [ 2 ] : pg. 22
The symbol M represents a "third body", an unspecified molecular species that must interact with the reactants in order to carry away energy from the exothermic reaction. The 3 P designation on the atomic O species is the term symbol for its electronic state, indicating that it is in a spin triplet state , which is the ground electronic state of atomic O. This series of reactions creates a null cycle , in which there is no net production or loss of any species involved. Since O( 3 P) is very reactive and O 2 is abundant, O( 3 P) can be assumed to be in steady state , and thus an equation linking the concentrations of the species involved can be derived, giving the Leighton relationship: [ 2 ] [ 3 ]
This equation shows how production of ozone is directly related to the solar intensity, and hence to the zenith angle , due to the reliance on photolysis of NO 2 . The yield of ozone will therefore be greatest during the day, especially at noon and during the summer season. This relationship also demonstrates how high concentrations of both ozone and nitric oxide are unfeasible. [ 4 ] However, NO can react with peroxyl radicals to produce NO 2 without loss of ozone:
thus providing another pathway to allow for the buildup of ozone by breaking the above null cycle.
This relationship is named after Philip Leighton, author of the 1961 book Photochemistry of Air Pollution , in recognition of his contributions in the understanding of tropospheric chemistry. [ 2 ] : pg. 22 Computer models of atmospheric chemistry utilize the Leighton relationship to minimize complexity by deducing the concentration of one of ozone, nitrogen dioxide, and nitric oxide when the concentrations of the other two are known. [ 1 ] | https://en.wikipedia.org/wiki/Leighton_relationship |
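A back-of-the-envelope use of the relationship is sketched below: the photostationary-state ozone level follows from the NO2 photolysis frequency, the NO + O3 rate constant, and the ambient NO2/NO ratio. The numerical values are typical midday literature magnitudes, included here only as assumptions to show the arithmetic.

```python
# Photostationary-state (Leighton) estimate of ozone:
#   [O3] ~ j_NO2 * [NO2] / (k_NO_O3 * [NO])
# All input values are typical illustrative magnitudes, not data from this article.
j_no2       = 8.0e-3    # s^-1, NO2 photolysis frequency near local noon (assumed)
k_no_o3     = 1.8e-14   # cm^3 molecule^-1 s^-1, NO + O3 rate constant near 298 K (approx.)
no2_over_no = 4.0       # assumed ambient [NO2]/[NO] ratio

o3  = j_no2 * no2_over_no / k_no_o3         # ozone number density, molecules per cm^3
air = 2.46e19                               # air number density at 298 K, 1 atm
print(f"[O3] ~ {o3:.1e} molecules/cm^3 (~{o3 / air * 1e9:.0f} ppb)")
```

Values of several tens of ppb are indeed what such NO2/NO ratios produce in polluted daytime air, and the linear dependence on j_NO2 makes the midday and summertime maxima described above explicit.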
The Leimgruber–Batcho indole synthesis is a series of organic reactions that produce indoles from o-nitro toluenes 1 . [1] [2] [3] The first step is the formation of an enamine 2 using N,N-dimethylformamide dimethyl acetal and pyrrolidine . [4] The desired indole 3 is then formed in a second step by reductive cyclisation.
In the above scheme, the reductive cyclisation is effected by Raney nickel and hydrazine . Palladium-on-carbon and hydrogen , stannous chloride , sodium hydrosulfite [5] , or iron in acetic acid [6] are also effective reducing agents .
In the initial enamine formation, dimethylamine (a gas) is displaced by pyrrolidine from the dimethylformamide dimethylacetal, producing a more reactive reagent . The mildly acidic hydrogens of the methyl group in the nitrotoluene can be deprotonated under the basic conditions, and the resultant carbanion attacks to produce the enamine shown, with loss of methanol . The sequence can also be performed without pyrrolidine, via the N,N-dimethyl enamine, though reaction times may be much longer in some cases. In the second step the nitro group is reduced to -NH 2 using hydrogen and a Raney nickel catalyst, followed by cyclisation then elimination of the pyrrolidine. The hydrogen is often generated in situ by the spontaneous decomposition of hydrazine hydrate to H 2 and N 2 in the presence of the nickel.
The reaction is a good example of a process that was widely used in industry before any procedures were published in the mainstream scientific literature. Many indoles are pharmacologically active , so a good indole synthesis is important for the pharmaceutical industry . The process has become a popular alternative to the Fischer indole synthesis because many starting ortho-nitrotoluenes are commercially available or easily made. In addition, the reactions proceed in high chemical yield under mild conditions.
The intermediate enamines are electronically related to push–pull olefins , having an electron-withdrawing nitro group conjugated to an electron-donating group. The extended conjugation means that these compounds are usually an intense red color.
The reductive cyclization of dinitrostyrenes ( 2 ) has proven itself effective when other more common methods have failed. [7]
Most of the standard reduction methods listed above are successful with this reaction. | https://en.wikipedia.org/wiki/Leimgruber–Batcho_indole_synthesis |
Leiv Kristen Sydnes (born 9 July 1948) is a Norwegian chemist, specializing in organic chemistry .
He was born in Haugesund , and took his education at the University of Oslo . He has the dr.philos. degree from 1978. He was hired as an associate professor at the University of Tromsø in 1978, and was later promoted to professor. In 1993 he moved to the University of Bergen . He presided over the Norwegian Chemical Society from 1992 to 1996 and the International Union of Pure and Applied Chemistry (IUPAC) from 2004 to 2005. [ 1 ] He is a member of the Norwegian Academy of Science and Letters [ 2 ] and the Norwegian Academy of Technological Sciences . [ 3 ]
Sydnes stood for election as rector of the University of Bergen in 2005, but lost the election to Sigmund Grønmo . In 2009 he applied for the position as rector of the Norwegian University of Science and Technology ; [ 4 ] here the rectors are hired rather than elected. [ 5 ]
| https://en.wikipedia.org/wiki/Leiv_Kristen_Sydnes |
The lek paradox is a conundrum in evolutionary biology that addresses the persistence of genetic variation in male traits within lek mating systems, despite strong sexual selection through female choice . This paradox arises from the expectation that consistent female preference for particular male traits should erode genetic diversity, theoretically leading to a loss of the benefits of choice. The lek paradox challenges our understanding of how genetic variation is maintained in populations subject to intense sexual selection, particularly in species where males provide only genes to their offspring. Several hypotheses have been proposed to resolve this paradox, including the handicap principle , condition-dependent trait expression, and parasite resistance models.
A lek is a type of mating system characterized by the aggregation of male animals for the purpose of competitive courtship displays . This behavior, known as lekking , is a strategy employed by various species to attract females for mating. Leks are most commonly observed in birds , but also occur in other vertebrates such as some bony fish , amphibians , reptiles , and mammals , as well as certain arthropods including crustaceans and insects . In a typical lek, males gather in a specific area to perform elaborate displays, while females visit to select a mate. This system is notable for its strong female choice in mate selection and the absence of male parental care. Leks can be classified as classical (where male territories are in close proximity) or exploded (with more widely separated territories).
The lek paradox is the conundrum of how additive or beneficial genetic variation is maintained in lek mating species in the face of consistent sexual selection based on female preferences. While many studies have attempted to explain how the lek paradox fits into Darwinian theory , the paradox remains. Persistent female choice for particular male trait values should erode genetic diversity in male traits and thereby remove the benefits of choice, yet choice persists. [ 1 ] This paradox can be somewhat alleviated by the occurrence of mutations introducing potential differences, as well as the possibility that traits of interest have more or less favorable recessive alleles .
The basis of the lek paradox is continuous genetic variation in spite of strong female preference for certain traits. There are two conditions in which the lek paradox arises. The first is that males contribute only genes and the second is that female preference does not affect fecundity. [ 2 ] Female choice should lead to directional runaway selection , resulting in a greater prevalence for the selected traits. Stronger selection should lead to impaired survival, as it decreases genetic variance and ensures that more offspring have similar traits. [ 3 ] However, lekking species do not exhibit runaway selection.
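The expectation that persistent choice should erode variation can be made vivid with a toy quantitative-genetics simulation, sketched below. The model (a purely additive trait, truncation-style female choice, no mutation) and all parameter values are illustrative assumptions, not a description of any study cited in this article.

```python
# Toy illustration of additive genetic variance eroding under persistent
# directional mate choice (the expectation behind the lek paradox).
import random

N, LOCI, GENERATIONS = 500, 50, 30
# Each male carries 0/1 alleles at LOCI loci; his display trait is their sum.
males = [[random.randint(0, 1) for _ in range(LOCI)] for _ in range(N)]

def variance(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

for gen in range(GENERATIONS):
    traits = [sum(m) for m in males]
    if gen % 10 == 0:
        print(f"generation {gen:2d}: trait variance = {variance(traits):.2f}")
    # 'Female choice': only the top fifth of males by trait value sire offspring.
    cutoff = sorted(traits, reverse=True)[N // 5]
    sires = [m for m in males if sum(m) >= cutoff]
    # Offspring draw each locus from a randomly chosen successful sire (no mutation).
    males = [[random.choice(sires)[i] for i in range(LOCI)] for _ in range(N)]

print(f"final trait variance = {variance([sum(m) for m in males]):.2f}")
```

Run for a few dozen generations, the trait variance falls steadily toward zero, which is exactly the outcome, fixation of the preferred trait, that lekking species conspicuously fail to show.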
In a lekking reproductive system, what male sexual characteristics can signal to females is limited, as the males provide no resources to females or parental care to their offspring. [ 4 ] This implies that a female gains indirect benefits from her choice in the form of "good genes" for her offspring. [ 5 ] Hypothetically, in choosing a male that excels at courtship displays, females gain genes for their offspring that increase their survival or reproductive fitness .
Amotz Zahavi declared that male sexual characteristics only convey useful information to the females if these traits confer a handicap on the male. [ 6 ] Otherwise, males could simply cheat: if the courtship displays have a neutral effect on survival, males could all perform equally and it would signify nothing to the females. But if the courtship display is somehow deleterious to the male’s survival—such as increased predator risk or time and energy expenditure—it becomes a test by which females can assess male quality. Under the handicap principle , males who excel at the courtship displays prove that they are of better quality and genotype, as they have already withstood the costs to having these traits. [ 6 ] Resolutions have been formed to explain why strong female mate choice does not lead to runaway selection. The handicap principle describes how costly male ornaments provide females with information about the male’s inheritable fitness. [ 7 ] The handicap principle may be a resolution to the lek paradox, for if females select for the condition of male ornaments, then their offspring have better fitness .
One potential resolution to the lek paradox is Rowe and Houle's theory of condition-dependent expression of male sexually selected traits. Similar to the handicap principle, Rowe and Houle argue that sexually selected traits depend on physical condition. Condition, in turn, summarizes a large number of genetic loci, including those involved in metabolism , muscular mass, nutrition, etc. Rowe and Houle claim that condition dependence maintains genetic variation in the face of persistent female choice, as the male trait is correlated with abundant genetic variation in condition. [ 5 ] This is the genic capture hypothesis, which describes how a significant amount of the genome is involved in shaping the traits that are sexually selected. [ 4 ] There are two criteria in the genic capture hypothesis: the first is that sexually selected traits are dependent upon condition and the second is that general condition is attributable to high genetic variance. [ 5 ]
Genetic variation in condition-dependent traits may be further maintained through mutations and environmental effects. Genotypes may be more effective in developing condition dependent sexual characteristics in different environments, while mutations may be deleterious in one environment and advantageous in another. [ 4 ] Thus genetic variance remains in populations through gene flow across environments or generation overlap. According to the genic capture hypothesis, female selection does not deplete the genetic variance, as sexual selection operates on condition dependence traits, thereby accumulating genetic variance within the selected for trait. [ 5 ] Therefore, females are actually selecting for high genetic variance.
In an alternate but non-exclusionary hypothesis, W. D. Hamilton and M. Zuk proposed that successful development of sexually selected traits signal resistance to parasites . [ 8 ] Parasites can significantly stress their hosts so that they are unable to develop sexually selected traits as well as healthy males. According to this theory, a male who vigorously displays demonstrates that he has parasite-resistant genes to the females. In support of this theory, Hamilton and Zuk found that male sexual ornaments were significantly correlated with levels of incidence of six blood diseases in North American passerine bird species. The Hamilton and Zuk model addresses the lek paradox, arguing that the cycles of co-adaptation between host and parasite resist a stable equilibrium point. Hosts continue to evolve resistance to parasites and parasites continue to bypass resistant mechanisms, continuously generating genetic variation. [ 8 ] The genic capture and parasite resistance hypotheses could logically co-occur in the same population.
One resolution to the lek paradox involves female preferences and how preference alone does not cause a drastic enough directional selection to diminish the genetic variance in fitness. [ 9 ] Another conclusion is that the preferred trait is not naturally selected for or against and the trait is maintained because it implies increased attractiveness to the male. [ 2 ] Thus, there may be no paradox. | https://en.wikipedia.org/wiki/Lek_paradox |
Leland C. Clark Jr. (December 4, 1918 – September 25, 2005) was an American biochemist born in Rochester, New York. [ 1 ] He is most well known as the inventor of the Clark electrode , a device used for measuring oxygen in blood, water and other liquids. [ 2 ] Clark is considered the "father of biosensors ", and the modern-day glucose sensor used daily by millions of diabetics is based on his research. He conducted pioneering research on heart-lung machines in the 1940s and '50s and was holder of more than 25 patents. Although he developed a fluorocarbon -based liquid that could be breathed successfully by mice in place of air, his lifelong goal of developing artificial blood remained unfulfilled at the time of his death. He is the inventor of Oxycyte , a third-generation perfluorocarbon (PFC) therapeutic oxygen carrier designed to enhance oxygen delivery to damaged tissues. [ 3 ]
Clark received his B.S. degree in chemistry from Antioch College in 1941 and his Ph.D. in biochemistry and physiology from the University of Rochester in 1944. Clark began his professional career as an assistant professor of biochemistry at his alma mater, Antioch College, in Yellow Springs, Ohio. When he left Antioch in 1958, he was head of the department. From 1955 to 1958, he held a simultaneous appointment at the University of Cincinnati College of Medicine as a Senior Research Associate in Pediatrics and Surgery. In 1958, Clark moved to Alabama to join the Department of Surgery, University of Alabama Medical College as an associate professor of biochemistry. He later became professor of biochemistry in the same department.
In 1962, he invented the first biosensor with Champ Lyons. [ 4 ] [ 5 ] Clark later became professor of research pediatrics at the Cincinnati Children’s Hospital Research Foundation in 1968 and remained there until he retired in 1991. Afterwards, he helped to found the company Synthetic Blood International, now known as Oxygen Biotherapeutics, Inc., which markets his invention Oxycyte .
Other Clark inventions were put into production and marketed by Yellow Springs Instrument Company. [ 6 ]
He was a founding member of the Editorial Board of the scientific journal Biosensors & Bioelectronics in 1985.
Clark was known as "Lee" to his friends. He met Eleanor Wyckoff while an undergraduate student at Antioch and they were married in 1939. She assisted him in his research throughout his career. They had four daughters. [ 7 ]
Dr. Clark received the following recognition for his work:
National Research Council Fellowship (1941).
NIH Research Career Award (1962).
Distinguished Lecturer Award, American College of Chest Physicians (1975).
Honorary Doctor of Science, University of Rochester School of Medicine and Dentistry (1984).
Horace Mann Award for Service to Humanity, Antioch College (1984).
Heyrovsky Award in Recognition of the Invention of the Membrane-Covered Polarographic Oxygen Electrode (1985).
American Association for Clinical Chemistry Award for Outstanding Contributions to Clinical Chemistry (1989).
American Heart Association Samuel Kaplan Visionary Award (1991).
Enshrinement into the Engineering and Science Hall of Fame (1991).
Pharmacia Biosensor’s Sensational Contributions to the Advancement of Biosensor Technology Award (1992).
Daniel Drake Award for Outstanding Achievements in Research, University of Cincinnati College of Medicine (1993).
Elected to the National Academy of Engineering (1995).
National Academy of Engineering Fritz J. and Dolores H. Russ Prize (2005). | https://en.wikipedia.org/wiki/Leland_Clark |
The Lely method , also known as the Lely process or Lely technique , is a crystal growth technology used for producing silicon carbide crystals for the semiconductor industry . The patent for this method was filed in the Netherlands in 1954 and in the United States in 1955 by Jan Anthony Lely of Philips Electronics . [ 1 ] The patent was subsequently granted on 30 September 1958, then was refined by D. R. Hamilton et al. in 1960, and by V. P. Novikov and V. I. Ionov in 1968. [ 2 ]
The Lely method produces bulk silicon carbide crystals through the process of sublimation . Silicon carbide powder is loaded into a graphite crucible , which is purged with argon gas and heated to approximately 2,500 °C (4,530 °F). The silicon carbide near the outer walls of the crucible sublimes and is deposited on a graphite rod near the center of the crucible, which is at a lower temperature. [ 2 ]
Several modified versions of the Lely process exist, most commonly the silicon carbide is heated from the bottom end rather than the walls of the crucible, and deposited on the lid. Other modifications include varying the temperature, temperature gradient , argon pressure, and geometry of the system. Typically, an induction furnace is used to achieve the required temperatures of 1,800–2,600 °C (3,270–4,710 °F). [ 2 ] : 195
| https://en.wikipedia.org/wiki/Lely_method |
Lem (also called BRITE-PL ) is the first Polish governmental artificial satellite . [ a ] It was launched in November 2013 as part of the Bright-star Target Explorer (BRITE) programme. The spacecraft was launched aboard a Dnepr rocket . Named after the Polish science fiction writer Stanisław Lem , it is an optical astronomy spacecraft built by the Space Research Centre of the Polish Academy of Sciences and operated by Centrum Astronomiczne im. Mikołaja Kopernika PAN; one of two Polish contributions to the BRITE constellation along with the Heweliusz satellite.
Lem is the first Polish scientific satellite, and the second Polish satellite (after PW-Sat ) ever launched. Along with Heweliusz , TUGSAT-1 , UniBRITE-1 and BRITE-Toronto , it is one of a constellation of six nanosatellites of the BRIght-star Target Explorer project, operated by a consortium of universities from Canada, Austria and Poland. [ 2 ]
Lem was developed and manufactured by the Space Research Centre of the Polish Academy of Sciences in 2011, [ 3 ] based around the Generic Nanosatellite Bus , and had a mass at launch of 7 kilograms or 15 pounds (plus another 7 kg for the XPOD separation system). [ 4 ] The satellite is used, along with four other operating spacecraft, [ b ] to conduct photometric observations of stars with an apparent magnitude brighter than 4.0 as seen from Earth. [ 6 ] Lem was one of two Polish BRITE satellites launched, along with the Heweliusz spacecraft. Four more satellites—two Austrian and two Canadian—were launched at different dates.
Lem observes the stars in the blue colour range , whereas Heweliusz does so in red . This two-colour capability allows geometrical and thermal effects to be separated in the analysis of the observed phenomena. None of the much larger satellites, such as MOST and CoRoT , has this colour option, which is crucial in the diagnosis of the internal structure of stars. [ 7 ] Lem photometrically measures low-level oscillations and temperature variations in stars brighter than visual magnitude 4.0, with unprecedented precision and temporal coverage not achievable through ground-based methods. [ 4 ]
The Lem satellite was launched from the Russian Yasny air base aboard a Dnepr rocket through the BRITE-PL Project satellite launch programme, established in 2009 by the Space Research Centre of the Polish Academy of Sciences and the Nicolaus Copernicus Astronomical Centre of the Polish Academy of Sciences in cooperation with the University of Toronto. [ 8 ] The launch was subcontracted to the Russian Ministry of Defence, which launched the satellite along with 33 others. The launch took place at 07:10 ( UTC ) on 21 November 2013, and the rocket deployed all of its payloads successfully. [ 9 ] | https://en.wikipedia.org/wiki/Lem_(satellite) |
Lemaître coordinates are a particular set of coordinates for the Schwarzschild metric —a spherically symmetric solution to the Einstein field equations in vacuum—introduced by Georges Lemaître in 1932. [ 1 ] Changing from Schwarzschild to Lemaître coordinates removes the coordinate singularity at the Schwarzschild radius .
The original Schwarzschild coordinate expression of the Schwarzschild metric, in natural units ( c = G = 1 ), is given as d s 2 = ( 1 − r s r ) d t 2 − d r 2 1 − r s r − r 2 ( d θ 2 + sin 2 θ d ϕ 2 ) , {\displaystyle ds^{2}=\left(1-{\frac {r_{s}}{r}}\right)dt^{2}-{\frac {dr^{2}}{1-{\frac {r_{s}}{r}}}}-r^{2}\left(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}\right),}
where r s = 2 M {\displaystyle r_{s}=2M} is the Schwarzschild radius of the central mass and θ , ϕ {\displaystyle \theta ,\phi } are the standard angular coordinates.
This metric has a coordinate singularity at the Schwarzschild radius r = r s {\displaystyle r=r_{s}} .
Georges Lemaître was the first to show that this is not a real physical singularity but simply a manifestation of the fact that the static Schwarzschild coordinates cannot be realized with material bodies inside the Schwarzschild radius. Indeed, inside the Schwarzschild radius everything falls towards the centre and it is impossible for a physical body to keep a constant radius.
A transformation of the Schwarzschild coordinate system from { t , r } {\displaystyle \{t,r\}} to the new coordinates { τ , ρ } , {\displaystyle \{\tau ,\rho \},} d τ = d t + r s / r 1 − r s / r d r , d ρ = d t + r / r s 1 − r s / r d r {\displaystyle d\tau =dt+{\frac {\sqrt {r_{s}/r}}{1-r_{s}/r}}\,dr\,,\qquad d\rho =dt+{\frac {\sqrt {r/r_{s}}}{1-r_{s}/r}}\,dr}
(the numerator and denominator are switched inside the square roots), leads to the Lemaître coordinate expression of the metric, d s 2 = d τ 2 − r s r d ρ 2 − r 2 ( d θ 2 + sin 2 θ d ϕ 2 ) , {\displaystyle ds^{2}=d\tau ^{2}-{\frac {r_{s}}{r}}\,d\rho ^{2}-r^{2}\left(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}\right),}
where r = [ 3 2 ( ρ − τ ) ] 2 / 3 r s 1 / 3 . {\displaystyle r=\left[{\frac {3}{2}}(\rho -\tau )\right]^{2/3}r_{s}^{1/3}.}
The metric in Lemaître coordinates is non-singular at the Schwarzschild radius r = r s {\displaystyle r=r_{s}} . This corresponds to the point 3 2 ( ρ − τ ) = r s {\displaystyle {\frac {3}{2}}(\rho -\tau )=r_{s}} . There remains a genuine gravitational singularity at the center, where ρ − τ = 0 {\displaystyle \rho -\tau =0} , which cannot be removed by a coordinate change.
The time coordinate used in the Lemaître coordinates is identical to the "raindrop" time coordinate used in the Gullstrand–Painlevé coordinates . The other three coordinates (the radial and angular coordinates r , θ , ϕ {\displaystyle r,\theta ,\phi } ) of the Gullstrand–Painlevé chart are identical to those of the Schwarzschild chart. That is, Gullstrand–Painlevé applies one coordinate transform to go from the Schwarzschild time t {\displaystyle t} to the raindrop coordinate t r = τ {\displaystyle t_{r}=\tau } . Lemaître then applies a second coordinate transform to the radial component, so as to get rid of the off-diagonal entry in the Gullstrand–Painlevé chart.
The notation τ {\displaystyle \tau } used in this article for the time coordinate should not be confused with the proper time . It is true that τ {\displaystyle \tau } gives the proper time for radially infalling observers; it does not give the proper time for observers traveling along other geodesics.
The trajectories with ρ constant are timelike geodesics with τ the proper time along these geodesics. They represent the motion of freely falling particles which start out with zero velocity at infinity. At any point their speed is just equal to the escape velocity from that point.
The Lemaître coordinate system is synchronous , that is, the global time coordinate of the metric defines the proper time of co-moving observers. The radially falling bodies reach the Schwarzschild radius and the centre within finite proper time.
Radial null geodesics correspond to d s 2 = 0 {\displaystyle ds^{2}=0} , which have solutions d τ = ± β d ρ {\displaystyle d\tau =\pm \beta d\rho } . Here, β {\displaystyle \beta } is just a short-hand for β = r s r = [ 3 2 r s ( ρ − τ ) ] − 1 / 3 . {\displaystyle \beta ={\sqrt {\frac {r_{s}}{r}}}=\left[{\frac {3}{2r_{s}}}(\rho -\tau )\right]^{-1/3}.}
The two signs correspond to outward-moving and inward-moving light rays, respectively. Re-expressing this in terms of the coordinate r {\displaystyle r} gives d r d τ = ± 1 − r s r . {\displaystyle {\frac {dr}{d\tau }}=\pm 1-{\sqrt {\frac {r_{s}}{r}}}.}
Note that d r < 0 {\displaystyle dr<0} when r < r s {\displaystyle r<r_{s}} . This is interpreted as saying that no signal can escape from inside the Schwarzschild radius: light rays emitted radially, whether inwards or outwards, end up at the origin as the proper time τ {\displaystyle \tau } increases.
The Lemaître coordinate chart is not geodesically complete . This can be seen by tracing outward-moving radial null geodesics backwards in time. The outward-moving geodesics correspond to the plus sign in the above. Selecting a starting point r > r s {\displaystyle r>r_{s}} at τ = 0 {\displaystyle \tau =0} , the above equation integrates to r → + ∞ {\displaystyle r\to +\infty } as τ → + ∞ {\displaystyle \tau \to +\infty } . Going backwards in proper time, one has r → r s {\displaystyle r\to r_{s}} as τ → − ∞ {\displaystyle \tau \to -\infty } . Starting at r < r s {\displaystyle r<r_{s}} and integrating forward, one arrives at r = 0 {\displaystyle r=0} in finite proper time. Going backwards, one has, once again that r → r s {\displaystyle r\to r_{s}} as τ → − ∞ {\displaystyle \tau \to -\infty } . Thus, one concludes that, although the metric is non-singular at r = r s {\displaystyle r=r_{s}} , all outward-traveling geodesics extend to r = r s {\displaystyle r=r_{s}} as τ → − ∞ {\displaystyle \tau \to -\infty } . | https://en.wikipedia.org/wiki/Lemaître_coordinates |
In physics, the Lemaître–Tolman metric , also known as the Lemaître–Tolman–Bondi metric or the Tolman metric , is a Lorentzian metric based on an exact solution of Einstein's field equations ; it describes an isotropic and expanding (or contracting) universe which is not homogeneous , [ 1 ] [ 2 ] and is thus used in cosmology as an alternative to the standard Friedmann–Lemaître–Robertson–Walker metric to model the expansion of the universe . [ 3 ] [ 4 ] [ 5 ] It has also been used to model a universe which has a fractal distribution of matter to explain the accelerating expansion of the universe . [ 6 ] It was first found by Georges Lemaître in 1933 [ 7 ] and Richard Tolman in 1934 [ 1 ] and later investigated by Hermann Bondi in 1947. [ 8 ]
In a synchronous reference system where g 00 = 1 {\displaystyle g_{00}=1} and g 0 α = 0 {\displaystyle g_{0\alpha }=0} , the time coordinate x 0 = t {\displaystyle x^{0}=t} (we set G = c = 1 {\displaystyle G=c=1} ) is also the proper time τ = g 00 x 0 {\displaystyle \tau ={\sqrt {g_{00}}}x^{0}} and clocks at all points can be synchronized. For a dust-like medium where the pressure is zero, dust particles move freely, i.e., along geodesics, and thus the synchronous frame is also a comoving frame wherein the components of the four-velocity u i = d x i / d s {\displaystyle u^{i}=dx^{i}/ds} are u 0 = 1 , u α = 0 {\displaystyle u^{0}=1,\,u^{\alpha }=0} . The solution of the field equations yields [ 9 ]
where r {\displaystyle r} is the radius or luminosity distance in the sense that the surface area of a sphere with radius r {\displaystyle r} is 4 π r 2 {\displaystyle 4\pi r^{2}} and R {\displaystyle R} is just interpreted as the Lagrangian coordinate and
subjected to the conditions 1 + f > 0 {\displaystyle 1+f>0} and F > 0 {\displaystyle F>0} , where f ( R ) {\displaystyle f(R)} and F ( R ) {\displaystyle F(R)} are arbitrary functions, ρ {\displaystyle \rho } is the matter density and primes denote differentiation with respect to R {\displaystyle R} . We can also assume F ′ > 0 {\displaystyle F'>0} and r ′ > 0 {\displaystyle r'>0} , which excludes cases in which material particles cross one another during their motion. To each particle there corresponds a value of R {\displaystyle R} ; the function r ( τ , R ) {\displaystyle r(\tau ,R)} and its time derivative provide, respectively, its law of motion and its radial velocity. An interesting property of the solution described above is that, when f ( R ) {\displaystyle f(R)} and F ( R ) {\displaystyle F(R)} are plotted as functions of R {\displaystyle R} , their form in the range R ∈ [ 0 , R 0 ] {\displaystyle R\in [0,R_{0}]} is independent of their form for R > R 0 {\displaystyle R>R_{0}} . This prediction is evidently similar to the Newtonian theory. The total mass within the sphere R = R 0 {\displaystyle R=R_{0}} is given by
which implies that Schwarzschild radius is given by r s = 2 m = F ( R 0 ) {\displaystyle r_{s}=2m=F(R_{0})} .
The function r ( τ , R ) {\displaystyle r(\tau ,R)} can be obtained upon integration and is given in a parametric form with a parameter η {\displaystyle \eta } with three possibilities,
where τ 0 ( R ) {\displaystyle \tau _{0}(R)} emerges as another arbitrary function. However, we know that a centrally symmetric matter distribution can be described by at most two functions, namely its density distribution and the radial velocity of the matter. This means that of the three functions f , F , τ 0 {\displaystyle f,F,\tau _{0}} , only two are independent. In fact, since no particular choice has yet been made for the Lagrangian coordinate R {\displaystyle R} , which can still be subjected to an arbitrary transformation, only two of these functions are truly arbitrary. [ 10 ] For the dust-like medium, there exists another solution where r = r ( τ ) {\displaystyle r=r(\tau )} is independent of R {\displaystyle R} , although such a solution does not correspond to the collapse of a finite body of matter. [ 11 ]
When F = r s = {\displaystyle F=r_{s}=} const., ρ = 0 {\displaystyle \rho =0} and therefore the solution corresponds to empty space with a point mass located at the center. Further by setting f = 0 {\displaystyle f=0} and τ 0 = R {\displaystyle \tau _{0}=R} , the solution reduces to Schwarzschild solution expressed in Lemaître coordinates .
The gravitational collapse occurs when τ {\displaystyle \tau } reaches τ 0 ( R ) {\displaystyle \tau _{0}(R)} with τ 0 ′ > 0 {\displaystyle \tau _{0}'>0} . The moment τ = τ 0 ( R ) {\displaystyle \tau =\tau _{0}(R)} corresponds to the arrival of matter denoted by its Lagrangian coordinate R {\displaystyle R} to the center. In all three cases, as τ → τ 0 ( R ) {\displaystyle \tau \rightarrow \tau _{0}(R)} , the asymptotic behaviors are given by
in which the first two relations indicate that, in the comoving frame, all radial distances tend to infinity and tangential distances approach zero like τ − τ 0 {\displaystyle \tau -\tau _{0}} , whereas the third relation shows that the matter density increases like 1 / ( τ 0 − τ ) . {\displaystyle 1/(\tau _{0}-\tau ).} In the special case τ 0 ( R ) = {\displaystyle \tau _{0}(R)=} constant, where the time of collapse of all the material particles is the same, the asymptotic behaviours are different,
Here both the tangential and radial distances go to zero like ( τ 0 − τ ) 2 / 3 {\displaystyle (\tau _{0}-\tau )^{2/3}} , whereas the matter density increases like 1 / ( τ 0 − τ ) 2 . {\displaystyle 1/(\tau _{0}-\tau )^{2}.} | https://en.wikipedia.org/wiki/Lemaître–Tolman_metric |
The Lemberg Medal , named after Max Rudolf Lemberg , the first president of the Australian Society for Biochemistry and Molecular Biology (ASBMB), is awarded annually to a scientist who has been a member for five or more years and who has "demonstrated excellence in Biochemistry and Molecular Biology and who has made significant contributions to the scientific community". [ 1 ] The winner presents the Lemberg Lecture at the following ASBMB annual conference. [ 2 ]
Source: Lemberg Medallists, Australian Society for Biochemistry and Molecular Biology [ 3 ] | https://en.wikipedia.org/wiki/Lemberg_Medal |
The Lemieux–Johnson or Malaprade–Lemieux–Johnson oxidation is a chemical reaction in which an olefin undergoes oxidative cleavage to form two aldehyde or ketone units. The reaction is named after its inventors, Raymond Urgel Lemieux and William Summer Johnson , who published it in 1956. [ 1 ] The reaction proceeds in a two-step manner, beginning with dihydroxylation of the alkene by osmium tetroxide , followed by a Malaprade reaction to cleave the diol using periodate . [ 2 ] Periodate also serves to regenerate the osmium tetroxide. This means only a catalytic amount of the osmium reagent is needed and also that the two consecutive reactions can be performed as a single tandem reaction process. The Lemieux–Johnson reaction ceases at the aldehyde stage of oxidation and therefore produces the same results as ozonolysis .
The classical Lemieux–Johnson oxidation often generates many side products, resulting in low reaction yields; however, the addition of non-nucleophilic bases, such as 2,6-lutidine , can improve on this. [ 3 ] OsO 4 may be replaced with a number of other osmium compounds. [ 4 ] [ 5 ] Periodate may also be replaced with other oxidising agents, such as oxone . [ 6 ]
The development of the Lemieux–Johnson oxidation was preceded by an analogous process, developed by Lemieux and Ernst Von Rudloff (sometimes called the Lemieux-Von Rudloff reaction), which used an aqueous solution of sodium periodate with a low (catalytic) concentration of potassium permanganate . [ 7 ] This mixture became known as Lemieux reagent [ 8 ] [ 9 ] and has been used to determine the position of double bonds and for preparing carbonyl compounds. [ 10 ] Unlike the Lemieux–Johnson oxidation, which normally stops at the aldehyde, this older method could continue to give a mixture of aldehydes and carboxylic acids. [ 1 ] | https://en.wikipedia.org/wiki/Lemieux–Johnson_oxidation |
The Lemke–Howson algorithm is an algorithm that computes a Nash equilibrium of a bimatrix game , named after its inventors, Carlton E. Lemke and J. T. Howson . [ 1 ] It is said to be "the best known among the combinatorial algorithms for finding a Nash equilibrium", [ 2 ] although more recently the Porter-Nudelman-Shoham algorithm [ 3 ] has outperformed it on a number of benchmarks. [ 4 ] [ 5 ]
The input to the algorithm is a 2-player game G . Here, G is represented by two m × n game matrices A and B , containing the payoffs for players 1 and 2 respectively, who have m and n pure strategies respectively. In the following, one assumes that all payoffs are positive. (By rescaling, any game can be transformed into a strategically equivalent game with positive payoffs.)
G has two corresponding polytopes (called the best-response polytopes ) P 1 and P 2 , in m dimensions and n dimensions respectively, defined as follows:
Here, P 1 represents the set of unnormalized probability distributions over player 1's m pure strategies, such that player 2's expected payoff is at most 1. The first m constraints require the probabilities to be non-negative, and the other n constraints require each of the n pure strategies of player 2 to have an expected payoff of at most 1. P 2 has a similar meaning, reversing the roles of the players.
Each vertex v of P 1 is associated with a set of labels from the set {1,..., m + n } as follows. For i ∈ {1, ..., m }, vertex v gets the label i if x i = 0 at vertex v .
For j ∈ {1, ..., n } , vertex v gets the label m + j if B 1 , j x 1 + ⋯ + B m , j x m = 1. {\displaystyle B_{1,j}x_{1}+\dots +B_{m,j}x_{m}=1.} Assuming that P 1 is nondegenerate, each vertex is incident to m facets of P 1 and has m labels. Note that the origin, which is a vertex of P 1 , has the labels {1, ..., m } .
Each vertex w of P 2 is associated with a set of labels from the set {1, ..., m + n } as follows. For j ∈ {1, ..., n } , vertex w gets the label m + j if x m + j = 0 at vertex w . For i ∈ {1, ..., m } , vertex w gets the label i if A i , 1 x m + 1 + ⋯ + A i , n x m + n = 1. {\displaystyle A_{i,1}x_{m+1}+\dots +A_{i,n}x_{m+n}=1.} Assuming that P 2 is nondegenerate, each vertex is incident to n facets of P 2 and has n labels. Note that the origin, which is a vertex of P 2 , has the labels { m + 1, ..., m + n } .
Consider pairs of vertices ( v , w ) , v ∈ P 1 , w ∈ P 2 . The pairs of vertices ( v , w ) is said to be completely labeled if the sets associated with v and w contain all labels {1, ..., m + n } . Note that if v and w are the origins of R m and R n respectively, then ( v , w ) is completely labeled. The pairs of vertices ( v , w ) is said to be almost completely labeled (with respect to some missing label g ) if the sets associated with v and w contain all labels in {1, ..., m + n } other than g . Note that in this case, there will be a duplicate label that is associated with both v and w .
A pivot operation consists of taking some pair ( v , w ) and replacing v with some
vertex adjacent to v in P 1 , or alternatively replacing w with some vertex adjacent to w in P 2 . This has the effect (in the case that v is replaced) of replacing some label of v with some other label. The replaced label is said to be dropped . Given any label of v , it is possible to drop that label by moving to a vertex adjacent to v that does not contain the hyperplane associated with that label.
The algorithm starts at the completely labeled pair ( v , w ) consisting of the pair of origins. An arbitrary label g is dropped via a pivot operation, taking us to an almost completely labeled pair ( v ′ , w ′ ) . Any almost completely labeled pair admits two pivot operations corresponding to dropping one or other copy of its duplicated label, and each of these operations may result in another almost completely labeled pair, or a completely labeled pair. Eventually, the algorithm finds a
completely labeled pair ( v * , w * ) , which is not the origin. ( v * , w * ) corresponds to a pair of unnormalised probability distributions in which every strategy i of player 1 either pays that player 1, or pays less than 1 and is played with probability 0 by that player (and a similar observation holds for player 2). Normalizing these values to probability distributions, one has a Nash equilibrium (whose payoffs to the players are the inverses of the normalization factors).
The algorithm can find at most n + m different Nash equilibria. Any choice of initially-dropped label determines the equilibrium that is eventually found by the algorithm.
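As an illustration of how the choice of initially dropped label selects an equilibrium, the following sketch uses the third-party Python library nashpy, which provides an implementation of the Lemke–Howson algorithm; the library name, its Game class and the lemke_howson method with its initial_dropped_label parameter are assumptions about that package's API rather than part of the algorithm itself.

import numpy as np
import nashpy as nash

# A small bimatrix game: A holds player 1's payoffs, B holds player 2's payoffs;
# both players have two pure strategies, so there are m + n = 4 labels.
A = np.array([[3, 1], [0, 2]])
B = np.array([[2, 1], [0, 3]])
game = nash.Game(A, B)

# Dropping each label in turn; different labels may lead to different
# Nash equilibria, but each run terminates at some equilibrium.
for label in range(A.shape[0] + A.shape[1]):
    sigma_row, sigma_col = game.lemke_howson(initial_dropped_label=label)
    print(label, sigma_row, sigma_col)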
The Lemke–Howson algorithm is equivalent to the following homotopy -based approach. Modify G by selecting an arbitrary pure strategy g , and giving the player who owns that strategy a large payment B for playing it. In the modified game, the strategy g is played with probability 1, and the other player plays his best response to g with probability 1. Consider the continuum of games in which B is continuously reduced to 0. There exists a path of Nash equilibria connecting the unique equilibrium of the modified game to an equilibrium of G . The pure strategy g chosen to receive the bonus B corresponds to the initially dropped label. [ 6 ] While the algorithm is efficient in practice, in the worst case the number of pivot operations may need to be exponential in the number of pure strategies in the game. [ 7 ] Subsequently, it has been shown that it is PSPACE-complete to find any of the solutions
that can be obtained with the Lemke–Howson algorithm. [ 8 ] | https://en.wikipedia.org/wiki/Lemke–Howson_algorithm |
In mathematics and other fields, [ a ] a lemma ( pl. : lemmas or lemmata ) is a generally minor, proven proposition which is used to prove a larger statement. For that reason, it is also known as a "helping theorem " or an "auxiliary theorem". [ 3 ] [ 4 ] In many cases, a lemma derives its importance from the theorem it aims to prove ; however, a lemma can also turn out to be more important than originally thought. [ 5 ]
From the Ancient Greek λῆμμα (perfect passive εἴλημμαι), meaning something received or taken, and thus something taken for granted in an argument. [ 6 ]
There is no formal distinction between a lemma and a theorem , only one of intention (see Theorem terminology ). However, a lemma can be considered a minor result whose sole purpose is to help prove a more substantial theorem – a step in the direction of proof. [ 5 ]
Some powerful results in mathematics are known as lemmas, first named for their originally minor purpose. These include, among others:
While these results originally seemed too simple or too technical to warrant independent interest, they have eventually turned out to be central to the theories in which they occur.
This article incorporates material from Lemma on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License . | https://en.wikipedia.org/wiki/Lemma_(mathematics) |
In algebraic geometry , a lemniscate ( / l ɛ m ˈ n ɪ s k ɪ t / or / ˈ l ɛ m n ɪ s ˌ k eɪ t , - k ɪ t / ) [ 1 ] is any of several figure-eight or ∞ -shaped curves . [ 2 ] [ 3 ] The word comes from the Latin lēmniscātus , meaning "decorated with ribbons", [ 4 ] from the Greek λημνίσκος ( lēmnískos ), meaning "ribbon", [ 3 ] [ 5 ] [ 6 ] [ 7 ] or which alternatively may refer to the wool from which the ribbons were made. [ 2 ]
Curves that have been called a lemniscate include three quartic plane curves : the hippopede or lemniscate of Booth , the lemniscate of Bernoulli , and the lemniscate of Gerono . The hippopede was studied by Proclus (5th century), but the term "lemniscate" was not used until the work of Jacob Bernoulli in the late 17th century.
The consideration of curves with a figure-eight shape can be traced back to Proclus , a Greek Neoplatonist philosopher and mathematician who lived in the 5th century AD. Proclus considered the cross-sections of a torus by a plane parallel to the axis of the torus. As he observed, for most such sections the cross section consists of either one or two ovals; however, when the plane is tangent to the inner surface of the torus, the cross-section takes on a figure-eight shape, which Proclus called a horse fetter (a device for holding two feet of a horse together), or "hippopede" in Greek. [ 8 ] The name "lemniscate of Booth" for this curve dates to its study by the 19th-century mathematician James Booth . [ 2 ]
The lemniscate may be defined as an algebraic curve , the zero set of the quartic polynomial ( x 2 + y 2 ) 2 − c x 2 − d y 2 {\displaystyle (x^{2}+y^{2})^{2}-cx^{2}-dy^{2}} when the parameter d is negative (or zero for the special case where the lemniscate becomes a pair of externally tangent circles). For positive values of d one instead obtains the oval of Booth .
In 1680, Cassini studied a family of curves, now called the Cassini oval , defined as follows: the locus of all points, the product of whose distances from two fixed points, the curves' foci , is a constant. Under very particular circumstances (when the half-distance between the points is equal to the square root of the constant) this gives rise to a lemniscate.
In 1694, Johann Bernoulli studied the lemniscate case of the Cassini oval, now known as the lemniscate of Bernoulli (shown above), in connection with a problem of " isochrones " that had been posed earlier by Leibniz . Like the hippopede, it is an algebraic curve, the zero set of the polynomial ( x 2 + y 2 ) 2 − a 2 ( x 2 − y 2 ) {\displaystyle (x^{2}+y^{2})^{2}-a^{2}(x^{2}-y^{2})} . Bernoulli's brother Jacob Bernoulli also studied the same curve in the same year, and gave it its name, the lemniscate. [ 9 ] It may also be defined geometrically as the locus of points whose product of distances from two foci equals the square of half the interfocal distance. [ 10 ] It is a special case of the hippopede (lemniscate of Booth), with d = − c {\displaystyle d=-c} , and may be formed as a cross-section of a torus whose inner hole and circular cross-sections have the same diameter as each other. [ 2 ] The lemniscatic elliptic functions are analogues of trigonometric functions for the lemniscate of Bernoulli, and the lemniscate constants arise in evaluating the arc length of this lemniscate.
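The lemniscate of Bernoulli is easy to draw from a standard trigonometric parametrization of the curve (x² + y²)² = a²(x² − y²); the short Python sketch below is only an illustration and assumes the NumPy and Matplotlib packages are available.

import numpy as np
import matplotlib.pyplot as plt

a = 1.0
t = np.linspace(0.0, 2.0 * np.pi, 1000)
# Standard parametrization of (x^2 + y^2)^2 = a^2 (x^2 - y^2).
x = a * np.cos(t) / (1.0 + np.sin(t) ** 2)
y = a * np.sin(t) * np.cos(t) / (1.0 + np.sin(t) ** 2)

plt.plot(x, y)
plt.axis("equal")   # keep the figure-eight from being distorted
plt.title("Lemniscate of Bernoulli, a = 1")
plt.show()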
Another lemniscate, the lemniscate of Gerono or lemniscate of Huygens, is the zero set of the quartic polynomial y 2 − x 2 ( a 2 − x 2 ) {\displaystyle y^{2}-x^{2}(a^{2}-x^{2})} . [ 12 ] [ 13 ] Viviani's curve , a three-dimensional curve formed by intersecting a sphere with a cylinder, also has a figure eight shape, and has the lemniscate of Gerono as its planar projection. [ 14 ]
Other figure-eight shaped algebraic curves include | https://en.wikipedia.org/wiki/Lemniscate |
The Lempel–Ziv complexity is a measure that was first presented in the article On the Complexity of Finite Sequences (IEEE Trans. On IT-22,1 1976), by two Israeli computer scientists, Abraham Lempel and Jacob Ziv . This complexity measure is related to Kolmogorov complexity , but the only function it uses is the recursive copy (i.e., the shallow copy).
The underlying mechanism in this complexity measure is the starting point for some algorithms for lossless data compression , like LZ77, LZ78 and LZW . Even though it is based on an elementary principle of word copying, this complexity measure is not too restrictive in the sense that it satisfies the main qualities expected of such a measure: sequences with a certain regularity do not have too large a complexity, and the complexity grows as the sequence grows in length and irregularity.
The Lempel–Ziv complexity can be used to measure the repetitiveness of binary sequences and text, like song lyrics or prose. Fractal dimension estimates of real-world data have also been shown to correlate with Lempel–Ziv complexity. [ 1 ] [ 2 ]
Let S be a binary sequence, of length n, for which we have to compute the Lempel–Ziv complexity, denoted C(S). The sequence is read from the left.
Imagine you have a delimiting line, which can be moved along the sequence during the calculation. At first, this line is set just after the first symbol, at the beginning of the sequence. This initial position is called position 1, from which we have to move it to a position 2, which is considered the initial position for the next step (and so on). Starting from position 1, we move the delimiter as far as possible to the right, so that the sub-word between position 1 and the delimiter position is a sub-word of the sequence that starts before position 1 of the delimiter.
As soon as the delimiter is set on a position where this condition is not met, we stop, move the delimiter to this position, and start again by marking this position as a new initial position (i.e., position 1). Keep iterating until the end of the sequence. The Lempel–Ziv complexity corresponds to the number of iterations needed to finish this procedure.
Said differently, the Lempel–Ziv complexity is the number of different sub-strings (or sub-words) encountered as the binary sequence is viewed as a stream (from left to right).
The method proposed by Lempel and Ziv uses three notions: reproducibility, producibility and the exhaustive history of a sequence, which are defined below.
Let S be a binary sequence of length n (i.e., n {\displaystyle n} symbols taking value 0 or 1). Let S ( i , j ) {\displaystyle S(i,j)} , with 1 ≤ i , j ≤ n {\displaystyle 1\leq i,j\leq n} , be the sub-word of S {\displaystyle S} from index i to index j (if j < i , S ( i , j ) {\displaystyle j<i,S(i,j)} is the empty string). The length n of S is denoted l ( S ) {\displaystyle l(S)} , and a sequence Q {\displaystyle Q} is said to be a fixed prefix of S {\displaystyle S} if:
∃ j < l ( S ) , s.t. S ( 1 , j ) = Q . {\displaystyle \exists j<{l(S),{\text{ s.t. }}S(1,j)=Q.}}
On the one hand, a sequence S of length n is said to be reproducible from its prefix S(1,j) when S(j+1,n) is a sub-word of S(1,j). This is denoted S(1,j)→S.
Said differently, S is reproducible from its prefix S(1,j) if the rest of the sequence, S(j+1,n), is nothing but a copy of another sub-word (starting at an index i < j+1) of S(1,n−1).
To prove that the sequence S can be reproduced by one of its prefix S(1,j), you have to show that:
∃ p ≤ j , s.t. S ( j + 1 , n ) = S ( p , l ( S ( j + 1 , n ) ) + p − 1 ) {\displaystyle \exists p\leq j,{\text{ s.t. }}S(j+1,n)=S(p,l(S(j+1,n))+p-1)}
On the other hand, producibility is defined from reproducibility: a sequence S is producible from its prefix S(1,j) if S(1,n−1) is reproducible from S(1,j). This is denoted S(1,j)⇒S. Said differently, S(j+1,n−1) has to be a copy of another sub-word of S(1,n−2). The last symbol of S can be a new symbol (but need not be), possibly leading to the production of a new sub-word (hence the term producibility).
From the definition of producibility, the empty string Λ = S(1,0) ⇒ S(1,1). So by a recursive production process, at step i we have S(1,h i ) ⇒ S(1,h i+1 ), so we can build S from its prefixes. And as S(1,i) ⇒ S(1,i+1) (with h i+1 = h i + 1) is always true, this production process of S takes at most n = l(S) steps. Let m, 1 ≤ m ≤ l ( S ) {\displaystyle 1\leq {\text{m}}\leq l(S)} , be the number of steps necessary for this production process of S. Then S can be written in a decomposed form, called the history of S and denoted H(S), defined like this:
H ( S ) = S ( 1 , h 1 ) S ( h 1 + 1 , h 2 ) ⋯ S ( h m − 1 + 1 , h m ) {\displaystyle H(S)=S(1,h_{1})S(h_{1}+1,h_{2})\dotsm S(h_{m-1}+1,h_{m})} H i ( S ) = S ( h i − 1 + 1 , h i ) , i = 1 , 2 ⋯ m , where h 0 = 0 , h 1 = 1 , h m = l ( S ) , is called component of H ( S ) . {\displaystyle H_{i}(S)=S(h_{i-1}+1,h_{i}),i=1,2\dotsm m,{\text{where}}\;h_{0}=0,h_{1}=1,h_{m}=l(S),{{\text{ is called component of }}H(S)}.}
A component of S, H i (S), is said to be exhaustive if S(1,h i ) is the longest sequence produced by S(1,h i−1 ) (i.e., S(1,h i−1 ) ⇒ S(1,h i )), but such that S(1,h i − 1) does not produce S(1,h i ), denoted S ( 1 , h i − 1 ) ↛ S ( 1 , h i ) {\displaystyle S(1,h_{i}-1)\nrightarrow S(1,h_{i})} . The index p which allows the longest production is called the pointer.
The history of S is said to be exhaustive if all its components are exhaustive, except possibly the last one. From the definition, one can show that any sequence S has only one exhaustive history, and that this history is the one with the smallest number of components among all the possible histories of S. Finally, the number of components of this unique exhaustive history of S is called the Lempel–Ziv complexity of S.
Fortunately, there exists a very efficient method for computing this complexity, in a linear number of operations ( O ( n ) {\displaystyle {\mathcal {O}}(n)} for n = l ( S ) {\displaystyle n=l(S)} , the length of the sequence S).
A formal description of this method is given by the following algorithm : | https://en.wikipedia.org/wiki/Lempel–Ziv_complexity |
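One common linear-time formulation of this procedure is the Kaspar–Schuster style of counting, sketched below in Python; the variable names and the handling of the trailing (possibly non-exhaustive) component are illustrative choices, not part of the original presentation.

def lz_complexity(s):
    """Number of components in the Lempel–Ziv (1976) exhaustive history of s."""
    n = len(s)
    if n == 0:
        return 0
    c = 1            # the first symbol always forms the first component
    parsed = 1       # index where the current component starts
    if parsed >= n:
        return c
    j, k, k_max = 0, 1, 1   # candidate copy start, current and best copy length
    while True:
        if parsed + k - 1 >= n:
            # Ran off the end while still copying: the tail is the last component.
            c += 1
            break
        if s[j + k - 1] == s[parsed + k - 1]:
            # The current component still copies an earlier sub-word; extend it.
            k += 1
        else:
            # Copy starting at j failed; remember its length and try the next start.
            k_max = max(k_max, k)
            j += 1
            k = 1
            if j == parsed:
                # No earlier starting point reproduces the component: close it.
                c += 1
                parsed += k_max
                if parsed >= n:
                    break
                j, k, k_max = 0, 1, 1
    return c

# Example: the decomposition 0 . 001 . 10 . 100 . 1000 . 101 has six components.
print(lz_complexity("0001101001000101"))   # prints 6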
Lempel–Ziv–Oberhumer ( LZO ) is a lossless data compression algorithm that is focused on decompression speed. [ 1 ]
The original "lzop" implementation, released in 1996, was developed by Markus Franz Xaver Johannes Oberhumer, based on earlier algorithms by Abraham Lempel and Jacob Ziv . The LZO library implements a number of algorithms with the following characteristics:
LZO supports overlapping compression and in-place decompression. As a block compression algorithm, it compresses and decompresses blocks of data. Block size must be the same for compression and decompression. LZO compresses a block of data into matches (a sliding dictionary) and runs of non-matching literals to produce good results on highly redundant data and deals acceptably with non-compressible data, only expanding incompressible data by a maximum of 1/64 of the original size when measured over a block size of at least 1 kB. [ 2 ]
The reference implementation is written in ANSI C , and it has been made available as free software under the GNU General Public License . The copyright for the code is owned by Markus F. X. J. Oberhumer. It was originally published in 1996. Oberhumer has also written a command-line frontend called lzop .
Versions of LZO are available for the Perl , Python and Java languages. Various LZO implementations are reported to work under AIX , Atari TOS (Atari ST), ConvexOS, IRIX , Linux , Mac OS , Nintendo 64 , Palm OS , PlayStation , Solaris , SunOS , VxWorks , Wii , and Win32 .
FFmpeg's libavutil library includes its own implementation of LZO [ 3 ] as a possible method for lossless video compression. FFmpeg's implementation of the decompressor is also used in OpenConnect in order to support LZO-compressed ESP packets sent by Juniper Networks and Pulse Secure VPN servers. [ 4 ]
The Linux kernel uses its LZO implementation in some of its features: | https://en.wikipedia.org/wiki/Lempel–Ziv–Oberhumer |
Lempel–Ziv–Stac ( LZS , or Stac compression or Stacker compression [ 1 ] ) is a lossless data compression algorithm that uses a combination of the LZ77 sliding-window compression algorithm and fixed Huffman coding . It was originally developed by Stac Electronics for tape compression, and subsequently adapted for hard disk compression and sold as the Stacker disk compression software. It was later specified as a compression algorithm for various network protocols. LZS is specified in the Cisco IOS stack.
LZS compression is standardized as an INCITS (previously ANSI) standard. [ 2 ]
LZS compression is specified for various Internet protocols:
LZS compression and decompression uses an LZ77 type algorithm. It uses the last 2 KB of uncompressed data as a sliding-window dictionary.
An LZS compressor looks for matches between the data to be compressed and the last 2 KB of data. If it finds a match, it encodes an offset/length reference to the dictionary. If no match is found, the next data byte is encoded as a "literal" byte. The compressed data stream ends with an end-marker.
Data is encoded into a stream of variable-bit-width tokens.
A literal byte is encoded as a '0' bit followed by the 8 bits of the byte.
An offset/length reference is encoded as a '1' bit followed by the encoded offset, followed by the encoded length. One exceptional encoding is an end marker, described below.
An offset can have a minimum value of 1 and a maximum value of 2047. A value of 1 refers to the most recent byte in the history buffer, immediately preceding the next data byte to be processed. An offset is encoded as:
A length is encoded as:
N is the integer result of (length + 7) / 15, and xxxx is length − (N×15 − 7).
An end marker is encoded as the 9-bit token 110000000. Following the end marker, 0 to 7 extra '0' bits are appended as needed, to pad the stream to the next byte boundary.
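The following Python sketch illustrates the offset and length bit layouts; the 7-bit/11-bit offset split and the short-length code table are taken from the RFC 1967 description of LZS and are assumptions of this sketch.

def encode_offset(offset):
    """Bit string for an LZS offset (1..2047), per the RFC 1967 description."""
    if not 1 <= offset <= 2047:
        raise ValueError("offset out of range")
    if offset < 128:
        return "1" + format(offset, "07b")   # short form: flag '1' + 7 bits
    return "0" + format(offset, "011b")      # long form: flag '0' + 11 bits

def encode_length(length):
    """Bit string for an LZS match length (length >= 2)."""
    short = {2: "00", 3: "01", 4: "10", 5: "1100", 6: "1101", 7: "1110"}
    if length in short:
        return short[length]
    n = (length + 7) // 15            # number of all-ones nibbles
    xxxx = length - (n * 15 - 7)      # final 4-bit remainder (0..14)
    return "1111" * n + format(xxxx, "04b")

# A reference token is a '1' flag, then the offset bits, then the length bits.
print("1" + encode_offset(20) + encode_length(8))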
Stac Electronics' spin-off Hifn has held several patents for LZS compression. [ 3 ] [ 4 ] These patents lapsed due to non-payment of fees and attempts to reinstate them in 2007 failed.
In 1993–94, Stac Electronics successfully sued Microsoft for infringement of LZS patents in the DoubleSpace disk compression program included with MS-DOS 6.0 . [ 5 ] | https://en.wikipedia.org/wiki/Lempel–Ziv–Stac |
Lempel–Ziv–Storer–Szymanski ( LZSS ) is a lossless data compression algorithm , a derivative of LZ77 , that was created in 1982 by James A. Storer and Thomas Szymanski . LZSS was described in the article "Data compression via textual substitution", published in the Journal of the ACM (1982, pp. 928–951). [ 1 ]
LZSS is a dictionary coding technique. It attempts to replace a string of symbols with a reference to a dictionary location of the same string.
The main difference between LZ77 and LZSS is that in LZ77 the dictionary reference could actually be longer than the string it was replacing. In LZSS, such references are omitted if the length is less than the "break even" point. Furthermore, LZSS uses one-bit flags to indicate whether the next chunk of data is a literal (byte) or a reference to an offset/length pair.
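The idea can be sketched in a few lines of Python; this is a deliberately naive, unoptimized illustration (real implementations pack the flag bits and use hash chains or trees to find matches), and the window size and break-even threshold chosen here are arbitrary.

def lzss_encode(data, window=2048, min_match=3, max_match=18):
    """Greedy LZSS: emit ('L', byte) literals and ('P', offset, length) pairs."""
    out = []
    i = 0
    while i < len(data):
        best_len, best_off = 0, 0
        # Search the sliding window for the longest match starting at i.
        for j in range(max(0, i - window), i):
            length = 0
            while (length < max_match and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= min_match:          # only worth a pointer past break-even
            out.append(("P", best_off, best_len))
            i += best_len
        else:
            out.append(("L", data[i]))     # short matches stay literal
            i += 1
    return out

print(lzss_encode("I am Sam. Sam I am."))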
Here is the beginning of Dr. Seuss's Green Eggs and Ham , with character numbers at the beginning of lines for convenience. Green Eggs and Ham is a good example to illustrate LZSS compression because the book itself only contains 50 unique words, despite having a word count of 170. [ 2 ] Thus, words are repeated, however not in succession.
This text takes 177 bytes in uncompressed form. Assuming a break even point of 2 bytes (and thus 2 byte pointer/offset pairs), and one byte newlines, this text compressed with LZSS becomes 95 bytes long:
Note: this does not include the 11 bytes of flags indicating whether the next chunk of text is a pointer or a literal. Adding it, the text becomes 106 bytes long, which is still shorter than the original 177 bytes.
Many popular archivers like ARJ , RAR , ZOO , LHarc use LZSS rather than LZ77 as the primary compression algorithm; the encoding of literal characters and of length-distance pairs varies, with the most common option being Huffman coding . Most implementations stem from a public domain 1989 code by Haruhiko Okumura . [ 3 ] [ 4 ] Version 4 of the Allegro library can encode and decode an LZSS format, [ 5 ] but the feature was cut from version 5. The Game Boy Advance BIOS can decode a slightly modified LZSS format. [ 6 ] Apple's macOS uses LZSS as one of the compression methods for kernel code. [ 7 ] | https://en.wikipedia.org/wiki/Lempel–Ziv–Storer–Szymanski |
Lempel–Ziv–Welch ( LZW ) is a universal lossless data compression algorithm created by Abraham Lempel , Jacob Ziv , and Terry Welch . It was published by Welch in 1984 as an improved implementation of the LZ78 algorithm published by Lempel and Ziv in 1978. The algorithm is simple to implement and has the potential for very high throughput in hardware implementations. [ 1 ] It is the algorithm of the Unix file compression utility compress and is used in the GIF image format.
The scenario described by Welch's 1984 paper [ 1 ] encodes sequences of 8-bit data as fixed-length 12-bit codes. The codes from 0 to 255 represent 1-character sequences consisting of the corresponding 8-bit character, and the codes 256 through 4095 are created in a dictionary for sequences encountered in the data as it is encoded. At each stage in compression, input bytes are gathered into a sequence until the next character would make a sequence with no code yet in the dictionary. The code for the sequence (without that character) is added to the output, and a new code (for the sequence with that character) is added to the dictionary.
The idea was quickly adapted to other situations. In an image based on a color table, for example, the natural character alphabet is the set of color table indexes, and in the 1980s, many images had small color tables (on the order of 16 colors). For such a reduced alphabet, the full 12-bit codes yielded poor compression unless the image was large, so the idea of a variable-width code was introduced: codes typically start one bit wider than the symbols being encoded, and once all codes of the current width have been used, the code width increases by 1 bit, up to some prescribed maximum (typically 12 bits). When the maximum code value is reached, encoding proceeds using the existing table, but new codes are not generated for addition to the table.
Further refinements include reserving a code to indicate that the code table should be cleared and restored to its initial state (a "clear code", typically the first value immediately after the values for the individual alphabet characters), and a code to indicate the end of data (a "stop code", typically one greater than the clear code). The clear code lets the table be reinitialized after it fills up, which lets the encoding adapt to changing patterns in the input data. Smart encoders can monitor the compression efficiency and clear the table whenever the existing table no longer matches the input well.
Since codes are added in a manner determined by the data, the decoder mimics building the table as it sees the resulting codes. It is critical that the encoder and decoder agree on the variety of LZW used: the size of the alphabet, the maximum table size (and code width), whether variable-width encoding is used, initial code size, and whether to use the clear and stop codes (and what values they have). Most formats that employ LZW build this information into the format specification or provide explicit fields for them in a compression header for the data.
A high-level view of the encoding algorithm is shown here:
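In outline, the procedure can be written as a minimal Python sketch; it assumes an 8-bit input alphabet, uses unbounded integer codes, and ignores variable-width packing and the optional clear and stop codes.

def lzw_encode(data):
    """Encode a string into a list of LZW codes."""
    dictionary = {chr(i): i for i in range(256)}   # all 1-character sequences
    next_code = 256
    w = ""
    out = []
    for ch in data:
        wc = w + ch
        if wc in dictionary:
            w = wc                       # keep extending the current sequence
        else:
            out.append(dictionary[w])    # emit the code for the known prefix
            dictionary[wc] = next_code   # register the new, longer sequence
            next_code += 1
            w = ch                       # restart from the mismatching character
    if w:
        out.append(dictionary[w])        # flush whatever is still buffered
    return out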
A dictionary is initialized to contain the single-character strings corresponding to all the possible input characters (and nothing else except the clear and stop codes if they're being used). The algorithm works by scanning through the input string for successively longer substrings until it finds one that is not in the dictionary. When such a string is found, the index for the string without the last character (i.e., the longest substring that is in the dictionary) is retrieved from the dictionary and sent to output, and the new string (including the last character) is added to the dictionary with the next available code. The last input character is then used as the next starting point to scan for substrings.
In this way, successively longer strings are registered in the dictionary and available for subsequent encoding as single output values. The algorithm works best on data with repeated patterns, so the initial parts of a message see little compression. As the message grows, however, the compression ratio tends asymptotically to the maximum (i.e., the compression factor or ratio improves on an increasing curve, and not linearly, approaching a theoretical maximum inside a limited time period rather than over infinite time). [ 2 ]
A high-level view of the decoding algorithm is shown here:
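A matching decoder sketch in Python is shown below; it mirrors the encoder above and includes the one special case (a code that is not yet in the decoder's table) discussed later in this section.

def lzw_decode(codes):
    """Decode a list of LZW codes produced by lzw_encode."""
    if not codes:
        return ""
    dictionary = {i: chr(i) for i in range(256)}
    next_code = 256
    w = dictionary[codes[0]]
    result = [w]
    for code in codes[1:]:
        if code in dictionary:
            entry = dictionary[code]
        elif code == next_code:
            # The code was only just created by the encoder (the cScSc case):
            # its expansion is the previous string plus its own first character.
            entry = w + w[0]
        else:
            raise ValueError("invalid LZW code: %d" % code)
        result.append(entry)
        dictionary[next_code] = w + entry[0]   # mirror the encoder's new entry
        next_code += 1
        w = entry
    return "".join(result)

# Round trip using the encoder sketch above.
assert lzw_decode(lzw_encode("TOBEORNOTTOBEORTOBEORNOT")) == "TOBEORNOTTOBEORTOBEORNOT"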
The decoding algorithm works by reading a value from the encoded input and outputting the corresponding string from the dictionary. However, the full dictionary is not needed, only the initial dictionary that contains single-character strings (and that is usually hard coded in the program, instead of sent with the encoded data). Instead, the full dictionary is rebuilt during the decoding process the following way: after decoding a value and outputting a string, the decoder concatenates it with the first character of the next decoded string (or the first character of current string, if the next one can't be decoded; since if the next value is unknown, then it must be the value added to the dictionary in this iteration, and so its first character is the same as the first character of the current string), and updates the dictionary with the new string. The decoder then proceeds to the next input (which was already read in the previous iteration) and processes it as before, and so on until it has exhausted the input stream.
If variable-width codes are being used, the encoder and decoder must be careful to change the width at the same points in the encoded data so they don't disagree on boundaries between individual codes in the stream. In the standard version, the encoder increases the width from p to p + 1 when a sequence ω + s is encountered that is not in the table (so that a code must be added for it) but the next available code in the table is 2 p (the first code requiring p + 1 bits). The encoder emits the code for ω at width p (since that code does not require p + 1 bits), and then increases the code width so that the next code emitted is p + 1 bits wide.
The decoder is always one code behind the encoder in building the table, so when it sees the code for ω, it generates an entry for code 2 p − 1. Since this is the point where the encoder increases the code width, the decoder must increase the width here as well—at the point where it generates the largest code that fits in p bits.
Unfortunately, some early implementations of the encoding algorithm increase the code width and then emit ω at the new width instead of the old width, so that to the decoder it looks like the width changes one code too early. This is called "early change"; it caused so much confusion that Adobe now allows both versions in PDF files, but includes an explicit flag in the header of each LZW-compressed stream to indicate whether early change is being used. Of the graphics file formats that support LZW compression, TIFF uses early change, while GIF and most others don't.
When the table is cleared in response to a clear code, both encoder and decoder change the code width after the clear code back to the initial code width, starting with the code immediately following the clear code.
Since the codes emitted typically do not fall on byte boundaries, the encoder and decoder must agree on how codes are packed into bytes. The two common methods are LSB-first (" least significant bit first") and MSB-first (" most significant bit first"). In LSB-first packing, the first code is aligned so that the least significant bit of the code falls in the least significant bit of the first stream byte, and if the code has more than 8 bits, the high-order bits left over are aligned with the least significant bits of the next byte; further codes are packed with LSB going into the least significant bit not yet used in the current stream byte, proceeding into further bytes as necessary. MSB-first packing aligns the first code so that its most significant bit falls in the MSB of the first stream byte, with overflow aligned with the MSB of the next byte; further codes are written with MSB going into the most significant bit not yet used in the current stream byte.
GIF files use LSB-first packing order. TIFF files and PDF files use MSB-first packing order.
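As an illustration of LSB-first packing, the following Python sketch packs fixed-width codes into bytes; real GIF streams additionally vary the code width as described above, which this sketch deliberately omits.

def pack_lsb_first(codes, width):
    """Pack fixed-width codes into bytes, least significant bit first."""
    out = bytearray()
    acc = 0       # bit accumulator
    nbits = 0     # number of valid bits currently held in the accumulator
    for code in codes:
        acc |= code << nbits        # new code lands above the bits already queued
        nbits += width
        while nbits >= 8:
            out.append(acc & 0xFF)  # emit the lowest 8 bits first
            acc >>= 8
            nbits -= 8
    if nbits:
        out.append(acc & 0xFF)      # final partial byte, padded with zero bits
    return bytes(out)

print(pack_lsb_first([1, 2, 3], 5).hex())   # three 5-bit codes pack into two bytes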
The following example illustrates the LZW algorithm in action, showing the status of the output and the dictionary at every stage, both in encoding and decoding the data. This example has been constructed to give reasonable compression on a very short message. In real text data, repetition is generally less pronounced, so longer input streams are typically necessary before the compression builds up efficiency.
The plaintext to be encoded (from an alphabet using only the capital letters) is: TOBEORNOTTOBEORTOBEORNOT#
There are 26 symbols in the plaintext alphabet (the capital letters A through Z ). # is used to represent a stop code: a code outside the plaintext alphabet that triggers special handling. We arbitrarily assign these the values 1 through 26 for the letters, and 0 for the stop code '#'. (Most flavors of LZW would put the stop code after the data alphabet, but nothing in the basic algorithm requires that. The encoder and decoder only have to agree what value it has.)
A computer renders these as strings of bits . Five-bit codes are needed to give sufficient combinations to encompass this set of 27 values. The dictionary is initialized with these 27 values. As the dictionary grows, the codes must grow in width to accommodate the additional entries. A 5-bit code gives 2 5 = 32 possible combinations of bits, so when the 33rd dictionary word is created, the algorithm must switch at that point from 5-bit strings to 6-bit strings (for all code values, including those previously output with only five bits). Note that since the all-zero code 00000 is used, and is labeled "0", the 33rd dictionary entry is labeled 32 . (Previously generated output is not affected by the code-width change, but once a 6-bit value is generated in the dictionary, it could conceivably be the next code emitted, so the width for subsequent output shifts to 6 bits to accommodate that.)
The initial dictionary, then, consists of the following entries:
Buffer input characters in a sequence ω until ω + next character is not in the dictionary. Emit the code for ω, and add ω + next character to the dictionary. Start buffering again with the next character. (The string to be encoded is "TOBEORNOTTOBEORTOBEORNOT#".)
Using LZW has saved 29 bits out of 125, reducing the message by more than 23%. If the message were longer, then the dictionary words would begin to represent longer and longer sections of text, sending repeated words very compactly.
To decode an LZW-compressed archive, one needs to know in advance the initial dictionary used, but additional entries can be reconstructed as they are always simply concatenations of previous entries.
At each stage, the decoder receives a code X; it looks X up in the table and outputs the sequence χ it codes, and it conjectures χ + ? as the entry the encoder just added – because the encoder emitted X for χ precisely because χ + ? was not in the table, and the encoder goes ahead and adds it. But what is the missing letter? It is the first letter in the sequence coded by the next code Z that the decoder receives. So the decoder looks up Z, decodes it into the sequence ω and takes the first letter z and tacks it onto the end of χ as the next dictionary entry.
This works as long as the codes received are in the decoder's dictionary, so that they can be decoded into sequences. What happens if the decoder receives a code Z that is not yet in its dictionary? Since the decoder is always just one code behind the encoder, Z can be in the encoder's dictionary only if the encoder just generated it, when emitting the previous code X for χ. Thus Z codes some ω that is χ + ?, and the decoder can determine the unknown character as follows: the sequence ω coded by Z begins with χ, so the unknown character ? must be the first character of χ itself. The decoder therefore outputs χ followed by its own first character, and adds that string to its dictionary.
This situation occurs whenever the encoder encounters input of the form cScSc , where c is a single character, S is a string and cS is already in the dictionary, but cSc is not. The encoder emits the code for cS , putting a new code for cSc into the dictionary. Next it sees cSc in the input (starting at the second c of cScSc ) and emits the new code it just inserted. The argument above shows that whenever the decoder receives a code not in its dictionary, the situation must look like this.
Although input of form cScSc might seem unlikely, this pattern is fairly common when the input stream is characterized by significant repetition. In particular, long strings of a single character (which are common in the kinds of images LZW is often used to encode) repeatedly generate patterns of this sort.
The simple scheme described above focuses on the LZW algorithm itself. Many applications apply further encoding to the sequence of output symbols. Some package the coded stream as printable characters using some form of binary-to-text encoding ; this increases the encoded length and decreases the compression rate. Conversely, increased compression can often be achieved with an adaptive entropy encoder . Such a coder estimates the probability distribution for the value of the next symbol, based on the observed frequencies of values so far. A standard entropy encoding such as Huffman coding or arithmetic coding then uses shorter codes for values with higher probabilities.
LZW compression became the first widely used universal data compression method on computers. A large English text file can typically be compressed via LZW to about half its original size.
LZW was used in the public-domain program compress , which became a more or less standard utility in Unix systems around 1986. It has since disappeared from many distributions, both because it infringed the LZW patent and because gzip produced better compression ratios using the LZ77-based DEFLATE algorithm, but as of 2008 at least FreeBSD includes both compress and uncompress as a part of the distribution. Several other popular compression utilities also used LZW or closely related methods.
LZW became very widely used when it became part of the GIF image format in 1987. It may also (optionally) be used in TIFF and PDF files. (Although LZW is available in Adobe Acrobat software, Acrobat by default uses DEFLATE for most text and color-table-based image data in PDF files.)
Various patents have been issued in the United States and other countries for LZW and similar algorithms. LZ78 was covered by U.S. patent 4,464,650 by Lempel, Ziv, Cohn, and Eastman, assigned to Sperry Corporation , later Unisys Corporation, filed on August 10, 1981. Two US patents were issued for the LZW algorithm: U.S. patent 4,814,746 by Victor S. Miller and Mark N. Wegman and assigned to IBM , originally filed on June 1, 1983, and U.S. patent 4,558,302 by Welch, assigned to Sperry Corporation, later Unisys Corporation, filed on June 20, 1983.
In addition to the above patents, Welch's 1983 patent also includes citations to several other patents that influenced it, including two 1980 Japanese patents ( JP9343880A and JP17790880A ) from NEC 's Jun Kanatsu, U.S. patent 4,021,782 (1974) from John S. Hoerning, U.S. patent 4,366,551 (1977) from Klaus E. Holtz, and a 1981 German patent ( DE19813118676 ) from Karl Eckhart Heinz. [ 3 ]
In 1993–94, and again in 1999, Unisys Corporation received widespread condemnation when it attempted to enforce licensing fees for LZW in GIF images. The 1993–1994 Unisys-CompuServe controversy ( CompuServe being the creator of the GIF format) prompted a Usenet comp.graphics discussion Thoughts on a GIF-replacement file format , which in turn fostered an email exchange that eventually culminated in the creation of the patent-unencumbered Portable Network Graphics (PNG) file format in 1995.
Unisys's US patent on the LZW algorithm expired on June 20, 2003, [ 4 ] 20 years after it had been filed. Patents that had been filed in the United Kingdom, France, Germany, Italy, Japan and Canada all expired in 2004, [ 4 ] likewise 20 years after they had been filed. | https://en.wikipedia.org/wiki/Lempel–Ziv–Welch |
Lenarviricota is a phylum of RNA viruses that includes all positive-strand RNA viruses that infect prokaryotes . Some members also infect eukaryotes . Most of these viruses do not have capsids , except for the genus Ourmiavirus . [ 1 ] The name of the group is a syllabic abbreviation of the names of founding member families " Le viviridae and Nar naviridae " with the suffix -viricota , denoting a virus phylum . [ 2 ]
Lenarviricota is the first branch of RNA viruses to emerge, since they are the most basal branch. [ 1 ] Most of its members, the leviviruses (class Leviviricetes ), only infect prokaryotes, and their known level of diversity has grown dramatically in recent years, which suggests that the RNA viruses may be more widespread in prokaryotes than previously believed. [ 3 ]
It has been suggested that the origin of Lenarviricota may predate that of the last universal common ancestor (LUCA). [ 3 ] Lenarviricota viruses appear to have arisen from a primordial RdRP of the RNA-protein world that gave rise to leviviruses (class Leviviricetes ). [ 4 ] It has also been suggested that the retroelements of cellular life ( group II introns and retrotransposons ) evolved from a shared ancestor with Lenarviricota . [ 5 ]
The eukaryotic RNA viruses without capsids, Mitoviridae , Narnaviridae and Botourmiaviridae , arose from the leviviruses with the loss of the capsid during the time that eukaryogenesis occurred, when the bacterial endosymbiont became the mitochondria . The genus Ourmiavirus arose by recombination between a non-capsid botourmiavirus and a virus from the family Tombusviridae , which inherited its capsid proteins. [ 1 ] [ 4 ]
The following classes are recognized: [ 6 ] | https://en.wikipedia.org/wiki/Lenarviricota |
In the mathematical theory of probability, Lenglart's inequality was proved by Èrik Lenglart in 1977. [ 1 ] Later slight modifications are also called Lenglart's inequality.
Let X be a non-negative right-continuous F t {\displaystyle {\mathcal {F}}_{t}} - adapted process and let G be a non-negative right-continuous non-decreasing predictable process such that E [ X ( τ ) ∣ F 0 ] ≤ E [ G ( τ ) ∣ F 0 ] < ∞ {\displaystyle \mathbb {E} [X(\tau )\mid {\mathcal {F}}_{0}]\leq \mathbb {E} [G(\tau )\mid {\mathcal {F}}_{0}]<\infty } for any bounded stopping time τ {\displaystyle \tau } . Then | https://en.wikipedia.org/wiki/Lenglart's_inequality |
Length is a measure of distance . In the International System of Quantities , length is a quantity with dimension distance. In most systems of measurement a base unit for length is chosen, from which all other units are derived. In the International System of Units (SI) system, the base unit for length is the metre . [ 1 ]
Length is commonly understood to mean the most extended dimension of a fixed object. [ 1 ] However, this is not always the case and may depend on the position the object is in.
Various terms for the length of a fixed object are used, and these include height , which is vertical length or vertical extent, width, breadth, and depth. Height is used when there is a base from which vertical measurements can be taken. Width and breadth usually refer to a shorter dimension than length . Depth is used for the measure of a third dimension . [ 2 ]
Length is the measure of one spatial dimension, whereas area is a measure of two dimensions (length squared) and volume is a measure of three dimensions (length cubed).
Measurement has been important ever since humans settled from nomadic lifestyles and started using building materials, occupying land and trading with neighbours. As trade between different places increased, the need for standard units of length increased. Later, as society became more technologically oriented, much higher accuracies of measurement came to be required in an increasingly diverse set of fields, from micro-electronics to interplanetary ranging. [ 3 ]
Under Einstein 's special relativity , length can no longer be thought of as being constant in all reference frames . Thus a ruler that is one metre long in one frame of reference will not be one metre long in a reference frame that is moving relative to the first frame. This means the length of an object varies depending on the speed of the observer.
In Euclidean geometry, length is measured along straight lines unless otherwise specified and refers to segments on them. Pythagoras's theorem relating the length of the sides of a right triangle is one of many applications in Euclidean geometry. Length may also be measured along other types of curves and is referred to as arclength .
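As a minimal illustration (not part of the source article), the following Python sketch computes the straight-line (Euclidean) length of a segment and approximates an arclength by summing successive chord lengths; the sampling of the curve is an arbitrary choice.

```python
import math

def segment_length(p, q):
    """Euclidean length of the straight segment between points p and q."""
    return math.dist(p, q)  # Python >= 3.8

def polyline_arclength(points):
    """Arclength of a curve approximated by summing successive chord lengths."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

# Pythagoras: a right triangle with legs 3 and 4 has hypotenuse 5.
print(segment_length((0.0, 0.0), (3.0, 4.0)))  # 5.0

# Quarter of a unit circle, sampled at 1001 points: arclength ~ pi/2.
pts = [(math.cos(k * math.pi / 2000), math.sin(k * math.pi / 2000))
       for k in range(1001)]
print(polyline_arclength(pts))  # ~1.5707963
```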
In a triangle , the length of an altitude , a line segment drawn from a vertex perpendicular to the side not passing through the vertex (referred to as a base of the triangle), is called the height of the triangle.
The area of a rectangle is defined to be length × width of the rectangle. If a long thin rectangle is stood up on its short side then its area could also be described as its height × width.
The volume of a solid rectangular box (such as a plank of wood ) is often described as length × height × depth.
The perimeter of a polygon is the sum of the lengths of its sides .
The circumference of a circular disk is the length of the boundary (a circle ) of that disk.
In other geometries, length may be measured along possibly curved paths, called geodesics . The Riemannian geometry used in general relativity is an example of such a geometry. In spherical geometry , length is measured along the great circles on the sphere and the distance between two points on the sphere is the shorter of the two lengths on the great circle, which is determined by the plane through the two points and the center of the sphere.
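For the spherical case, the shorter great-circle arc can be computed with the haversine formula. A minimal Python sketch follows; the Earth-like radius is an illustrative assumption, not part of the geometry itself.

```python
import math

def great_circle_distance(lat1, lon1, lat2, lon2, radius=6371.0):
    """Shorter great-circle arc between two points on a sphere (haversine).

    Angles are in degrees; the radius defaults to a mean Earth radius in km
    (an assumption chosen purely for illustration).
    """
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return radius * 2 * math.asin(math.sqrt(a))

# Two antipodal points are half a circumference apart: pi * R.
print(great_circle_distance(0, 0, 0, 180))  # ~20015.1 km
```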
In an unweighted graph , the length of a cycle , path , or walk is the number of edges it uses. [ 4 ] In a weighted graph , it may instead be the sum of the weights of the edges that it uses. [ 5 ]
Length is used to define the shortest path , girth (shortest cycle length), and longest path between two vertices in a graph.
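A minimal Python sketch of both conventions; the tiny example graph is hypothetical.

```python
def path_length(path, weights=None):
    """Length of a walk given as a vertex sequence.

    Unweighted: the number of edges used. Weighted: the sum of edge weights,
    where `weights` maps an ordered vertex pair (u, v) to its weight.
    """
    edges = list(zip(path, path[1:]))
    if weights is None:
        return len(edges)
    return sum(weights[(u, v)] for u, v in edges)

w = {("a", "b"): 2.0, ("b", "c"): 0.5}
print(path_length(["a", "b", "c"]))     # 2 (edges)
print(path_length(["a", "b", "c"], w))  # 2.5 (sum of weights)
```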
In measure theory, length is most often generalized to arbitrary subsets of R n {\displaystyle \mathbb {R} ^{n}} via the Lebesgue measure . In the one-dimensional case, the Lebesgue outer measure of a set is defined in terms of the lengths of open intervals. Concretely, the length of an open interval is first defined as ℓ ( ( a , b ) ) = b − a {\displaystyle \ell ((a,b))=b-a} ,
so that the Lebesgue outer measure μ ∗ ( E ) {\displaystyle \mu ^{*}(E)} of a general set E {\displaystyle E} may then be defined as [ 6 ] μ ∗ ( E ) = inf { ∑ k = 1 ∞ ℓ ( I k ) : E ⊆ ⋃ k = 1 ∞ I k } {\displaystyle \mu ^{*}(E)=\inf \left\{\sum _{k=1}^{\infty }\ell (I_{k}):E\subseteq \bigcup _{k=1}^{\infty }I_{k}\right\}} , where the infimum is taken over all countable covers of E {\displaystyle E} by open intervals I k {\displaystyle I_{k}} .
In the physical sciences and engineering, when one speaks of units of length , the word length is synonymous with distance . There are several units that are used to measure length. Historically, units of length may have been derived from the lengths of human body parts, the distance travelled in a number of paces, the distance between landmarks or places on the Earth, or arbitrarily on the length of some common object.
In the International System of Units (SI), the base unit of length is the metre (symbol, m), now defined in terms of the speed of light (about 300 million metres per second ). The millimetre (mm), centimetre (cm) and the kilometre (km), derived from the metre, are also commonly used units. In U.S. customary units , English or imperial system of units , commonly used units of length are the inch (in), the foot (ft), the yard (yd), and the mile (mi). A unit of length used in navigation is the nautical mile (nmi). [ 7 ]
1 mile = 1.609344 km
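Such conversions are multiplications by fixed, internationally agreed factors. A minimal Python sketch; the table lists only the units named in this section.

```python
# Lengths expressed in metres; all factors below are exact by definition.
METRE_PER_UNIT = {
    "mm": 1e-3, "cm": 1e-2, "m": 1.0, "km": 1e3,
    "in": 0.0254, "ft": 0.3048, "yd": 0.9144,
    "mi": 1609.344, "nmi": 1852.0,
}

def convert(value, src, dst):
    """Convert a length between any two units in the table."""
    return value * METRE_PER_UNIT[src] / METRE_PER_UNIT[dst]

print(convert(1, "mi", "km"))  # 1.609344
```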
Units used to denote distances in the vastness of space, as in astronomy , are much longer than those typically used on Earth (metre or kilometre) and include the astronomical unit (au), the light-year , and the parsec (pc).
Units used to denote sub-atomic distances, as in nuclear physics , are much smaller than the millimetre. Examples include the fermi (fm). | https://en.wikipedia.org/wiki/Length |
In computational chemistry , molecular physics , and physical chemistry , the Lennard-Jones potential (also termed the LJ potential or 12-6 potential ; named for John Lennard-Jones ) is an intermolecular pair potential . Out of all the intermolecular potentials , the Lennard-Jones potential is probably the one that has been the most extensively studied. [ 1 ] [ 2 ] It is considered an archetype model for simple yet realistic intermolecular interactions . The Lennard-Jones potential is often used as a building block in molecular models (a.k.a. force fields ) for more complex substances. [ 3 ] Many studies of the idealized "Lennard-Jones substance" use the potential to understand the physical nature of matter.
The Lennard-Jones potential is a simple model that still manages to describe the essential features of interactions between simple atoms and molecules: Two interacting particles repel each other at very close distance, attract each other at moderate distance, and eventually stop interacting at infinite distance, as shown in the Figure. The Lennard-Jones potential is a pair potential, i.e. no three- or multi-body interactions are covered by the potential. [ 3 ] [ 4 ]
The general Lennard-Jones potential combines a repulsive potential, 1 / r n {\displaystyle 1/r^{n}} , with an attractive potential, − 1 / r m {\displaystyle -1/r^{m}} , using empirically determined coefficients A n {\displaystyle A_{n}} and B m {\displaystyle B_{m}} : [ 5 ] [ 6 ] V LJ ( r ) = A n r n − B m r m . {\displaystyle V_{\text{LJ}}(r)={\frac {A_{n}}{r^{n}}}-{\frac {B_{m}}{r^{m}}}.} In his 1931 review [ 5 ] Lennard-Jones suggested using m = 6 {\displaystyle m=6} to match the London dispersion force and n = 12 {\displaystyle n=12} based on matching experimental data. [ 1 ] Setting A n = 4 ε σ 12 {\displaystyle A_{n}=4\varepsilon \sigma ^{12}} and B m = 4 ε σ 6 {\displaystyle B_{m}=4\varepsilon \sigma ^{6}} gives the widely used Lennard-Jones 12-6 potential: [ 7 ] V LJ ( r ) = 4 ε [ ( σ r ) 12 − ( σ r ) 6 ] , {\displaystyle V_{\text{LJ}}(r)=4\varepsilon \left[\left({\frac {\sigma }{r}}\right)^{12}-\left({\frac {\sigma }{r}}\right)^{6}\right],} where r is the distance between two interacting particles, ε is the depth of the potential well , and σ is the distance at which the particle-particle potential energy V is zero. The Lennard-Jones 12-6 potential has its minimum at a distance of r = r m i n = 2 1 / 6 σ , {\displaystyle r=r_{\rm {min}}=2^{1/6}\sigma ,} where the potential energy has the value V = − ε . {\displaystyle V=-\varepsilon .}
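The zero crossing and the minimum of the 12-6 potential can be checked numerically. A minimal Python sketch, in reduced units with ε = σ = 1 (illustrative only, not taken from any particular simulation code):

```python
def v_lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones 12-6 potential, as in the equation above."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

r_min = 2.0 ** (1.0 / 6.0)  # location of the minimum for sigma = 1
print(v_lj(1.0))    # 0.0: the potential crosses zero at r = sigma
print(v_lj(r_min))  # -1.0: well depth -eps at r_min = 2**(1/6) * sigma
```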
The Lennard-Jones potential is usually the standard choice for the development of theories for matter (especially soft-matter) as well as for the development and testing of computational methods and algorithms.
Numerous intermolecular potentials have been proposed in the past for the modeling of simple soft repulsive and attractive interactions between spherically symmetric particles, i.e. the general shape shown in the Figure. Examples of other potentials are the Morse potential , the Mie potential , [ 8 ] the Buckingham potential and the Tang–Toennies potential. [ 9 ] While some of these may be more suited to modelling real fluids , [ 10 ] the simplicity of the Lennard-Jones potential, as well as its often surprising ability to accurately capture real fluid behavior, has historically made it the pair-potential of greatest general importance. [ 11 ]
In 1924, the year that Lennard-Jones received his PhD from Cambridge University , he published [ 6 ] [ 12 ] a series of landmark papers on the pair potentials that would ultimately be named for him. [ 2 ] [ 3 ] [ 13 ] [ 1 ] In these papers he adjusted the parameters of the potential and then used the result in a model of gas viscosity, seeking a set of values consistent with experiment. His initial results suggested a repulsive n = 13.5 {\displaystyle n=13.5} and an attractive m = 3 {\displaystyle m=3} .
Before Lennard-Jones, back in 1903, Gustav Mie had worked on effective field theories; Eduard Grüneisen built on Mie's work for solids, showing that n > m {\displaystyle n>m} and m > 3 {\displaystyle m>3} are required for solids. As a result of this work, the Lennard-Jones potential is sometimes called the Mie–Grüneisen potential in solid-state physics . [ 3 ]
In 1930, after the discovery of quantum mechanics , Fritz London showed that quantum theory predicts the long-range attractive force should have m = 6 {\displaystyle m=6} . In 1931, Lennard-Jones applied this form of the potential to describe many properties of fluids, setting the stage for many subsequent studies. [ 1 ]
Dimensionless reduced units can be defined based on the Lennard-Jones potential parameters, which is convenient for molecular simulations. From a numerical point of view, the advantages of this unit system include computing values which are closer to unity, using simplified equations and being able to easily scale the results. [ 14 ] [ 15 ] This reduced units system requires the specification of the size parameter σ {\displaystyle \sigma } and the energy parameter ε {\displaystyle \varepsilon } of the Lennard-Jones potential and the mass of the particle m {\displaystyle m} . All physical properties can be converted straightforwardly taking the respective dimension into account, see table. The reduced units are often abbreviated and indicated by an asterisk.
In general, reduced units can also be built up on other molecular interaction potentials that consist of a length parameter and an energy parameter.
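As an illustration of such a reduced unit system, the following Python sketch converts a few quantities; the argon-like values of σ, ε and the particle mass are assumptions chosen purely for demonstration.

```python
# Reduced (dimensionless) Lennard-Jones units, built from sigma, eps and m.
K_B = 1.380649e-23        # J/K (exact SI value)
SIGMA = 3.4e-10           # m   (argon-like, assumed)
EPS = 120.0 * K_B         # J   (eps/kB = 120 K, assumed)
MASS = 6.6335e-26         # kg  (argon atomic mass, assumed)

def t_reduced(T):     return K_B * T / EPS        # T* = kB*T/eps
def rho_reduced(rho): return rho * SIGMA ** 3     # rho* = rho*sigma^3 (number density)
def p_reduced(p):     return p * SIGMA ** 3 / EPS # p* = p*sigma^3/eps
def time_reduced(t):  return t / (SIGMA * (MASS / EPS) ** 0.5)  # t* = t/(sigma*sqrt(m/eps))

print(t_reduced(150.0))  # 1.25 for the assumed eps/kB = 120 K
```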
The Lennard-Jones potential, cf. Eq. (1) and the Figure at the top, has an infinite range; only when this infinite range is fully taken into account is the 'true' and 'full' Lennard-Jones potential examined. When evaluating an observable of an ensemble of particles interacting by the Lennard-Jones potential using molecular simulations, the interactions can only be evaluated explicitly up to a certain distance – simply due to the fact that the number of particles will always be finite. The maximum distance applied in a simulation is usually referred to as the 'cut-off' radius r c {\displaystyle r_{\mathrm {c} }} (because the Lennard-Jones potential is radially symmetric). To obtain thermophysical properties (both macroscopic and microscopic) of the 'true' and 'full' Lennard-Jones (LJ) potential, the contribution of the potential beyond the cut-off radius has to be accounted for.
Different correction schemes have been developed to account for the influence of the long-range interactions in simulations and to sustain a sufficiently good approximation of the 'full' potential. [ 16 ] [ 14 ] They are based on simplifying assumptions regarding the structure of the fluid. For simple cases, such as in studies of the equilibrium of homogeneous fluids, simple correction terms yield excellent results. In other cases, such as in studies of inhomogeneous systems with different phases, accounting for the long-range interactions is more tedious. These corrections are usually referred to as 'long-range corrections'. For most properties, simple analytical expressions are known and well established. For a given observable X {\displaystyle X} , the 'corrected' simulation result X c o r r {\displaystyle X_{\mathrm {corr} }} is then simply computed from the actually sampled value X s a m p l e d {\displaystyle X_{\mathrm {sampled} }} and the long-range correction value X l r c {\displaystyle X_{\mathrm {lrc} }} , e.g. for the internal energy U c o r r = U s a m p l e d + U l r c {\displaystyle U_{\mathrm {corr} }=U_{\mathrm {sampled} }+U_{\mathrm {lrc} }} . [ 14 ] The hypothetical true value of the observable of the Lennard-Jones potential at truly infinite cut-off distance (thermodynamic limit) X t r u e {\displaystyle X_{\mathrm {true} }} can in general only be estimated.
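For a homogeneous fluid, a commonly used analytic long-range ('tail') correction assumes the pair correlation function is unity beyond the cut-off. A minimal Python sketch under that assumption, in reduced units (the standard textbook expressions, shown here for illustration):

```python
import math

def u_tail_per_particle(rho, rc, eps=1.0, sigma=1.0):
    """Energy long-range correction per particle for a homogeneous LJ fluid,
    assuming g(r) ~ 1 beyond the cut-off rc."""
    sr3 = (sigma / rc) ** 3
    return (8.0 / 3.0) * math.pi * rho * eps * sigma ** 3 * (sr3 ** 3 / 3.0 - sr3)

def p_tail(rho, rc, eps=1.0, sigma=1.0):
    """Corresponding pressure long-range correction."""
    sr3 = (sigma / rc) ** 3
    return (16.0 / 3.0) * math.pi * rho ** 2 * eps * sigma ** 3 * (2.0 * sr3 ** 3 / 3.0 - sr3)

# e.g. a liquid-like reduced density rho* = 0.8 with rc = 2.5 sigma:
print(u_tail_per_particle(0.8, 2.5))  # ~ -0.43, a non-negligible share of U
```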
Furthermore, the quality of the long-range correction scheme depends on the cut-off radius. The assumptions made with the correction schemes are usually not justified at (very) short cut-off radii. This is illustrated in the example shown in Figure on the right. The long-range correction scheme is said to be converged, if the remaining error of the correction scheme is sufficiently small at a given cut-off distance, cf. Figure.
The Lennard-Jones potential – as an archetype for intermolecular potentials – has been used numerous times as a starting point for the development of more elaborate or more generalized intermolecular potentials. Various extensions and modifications of the Lennard-Jones potential have been proposed in the literature; a more extensive list is given in the ' interatomic potential ' article. The following list refers only to several example potentials that are directly related to the Lennard-Jones potential, are of historic importance, and are still relevant for present research.
The Lennard-Jones truncated & shifted (LJTS) potential is an often used alternative to the 'full' Lennard-Jones potential (see Eq. (1)). The 'full' and the 'truncated & shifted' Lennard-Jones potential have to be kept strictly separate. They are simply two different intermolecular potentials yielding different thermophysical properties. The Lennard-Jones truncated & shifted potential is defined as V LJTS ( r ) = { V LJ ( r ) − V LJ ( r end ) r ≤ r end 0 r > r end , {\displaystyle V_{\text{LJTS}}(r)={\begin{cases}V_{\text{LJ}}(r)-V_{\text{LJ}}(r_{\text{end}})&~~~~r\leq r_{\text{end}}\\0&~~~~r>r_{\text{end}},\end{cases}}} with V LJ ( r ) = 4 ε [ ( σ r ) 12 − ( σ r ) 6 ] . {\displaystyle V_{\text{LJ}}(r)=4\varepsilon \left[\left({\frac {\sigma }{r}}\right)^{12}-\left({\frac {\sigma }{r}}\right)^{6}\right].}
Hence, the LJTS potential is truncated at r e n d {\displaystyle r_{\mathrm {end} }} and shifted by the corresponding energy value V L J ( r e n d ) {\displaystyle V_{\mathrm {LJ} }(r_{\mathrm {end} })} . The latter is applied to avoid a discontinuity jump of the potential at r e n d {\displaystyle r_{\mathrm {end} }} . For the LJTS potential, no long-range interactions beyond r e n d {\displaystyle r_{\mathrm {end} }} are required – neither explicitly nor implicitly. The most frequently used version of the Lennard-Jones truncated & shifted potential is the one with r e n d = 2.5 σ {\displaystyle r_{\mathrm {end} }=2.5\,\sigma } . [ citation needed ] Nevertheless, different r e n d {\displaystyle r_{\mathrm {end} }} values have been used in the literature. [ 24 ] [ 25 ] [ 26 ] [ 27 ] Each LJTS potential with a given truncation radius r e n d {\displaystyle r_{\mathrm {end} }} has to be considered as a potential and accordingly a substance of its own.
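A minimal Python sketch of the LJTS potential (reduced units, ε = σ = 1; illustrative only). The printed shift value can be compared with the number quoted further below.

```python
def v_lj(r, eps=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def v_ljts(r, r_end=2.5, eps=1.0, sigma=1.0):
    """Lennard-Jones truncated & shifted (LJTS) potential, as defined above."""
    return 0.0 if r > r_end else v_lj(r, eps, sigma) - v_lj(r_end, eps, sigma)

# The shift for r_end = 2.5 sigma:
print(v_lj(2.5))                   # ~ -0.0163 = V_LJ(r_end), cf. the text below
print(v_ljts(2.0 ** (1.0 / 6.0)))  # ~ -0.9837: well depth reduced by the shift
```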
The LJTS potential is computationally significantly cheaper than the 'full' Lennard-Jones potential, but still covers the essential physical features of matter (the presence of a critical and a triple point, soft repulsive and attractive interactions, phase equilibria etc.). Therefore, the LJTS potential is used for the testing of new algorithms, simulation methods, and new physical theories. [ 28 ] [ 29 ] [ 30 ]
For homogeneous systems, the intermolecular forces that are calculated from the LJ and the LJTS potential at a given distance are the same (since d V / d r {\displaystyle {\text{d}}V/{\text{d}}r} is the same), whereas the potential energy and the pressure are affected by the shifting. The properties of the LJTS substance may furthermore be affected by the chosen simulation algorithm, i.e. MD or MC sampling (this is in general not the case for the 'full' Lennard-Jones potential).
For the LJTS potential with r e n d = 2.5 σ {\displaystyle r_{\mathrm {end} }=2.5\,\sigma } , the potential energy shift is approximately 1/60 of the dispersion energy at the potential well: V L J ( r e n d = 2.5 σ ) = − 0.0163 ε {\displaystyle V_{\mathrm {LJ} }(r_{\mathrm {end} }=2.5\,\sigma )=-0.0163\,\varepsilon } . The Figure on the right shows the comparison of the vapor–liquid equilibrium of the 'full' Lennard-Jones potential and the 'Lennard-Jones truncated & shifted' potential. The 'full' Lennard-Jones potential results yield a significantly higher critical temperature and pressure compared to the LJTS potential results, but the critical density is very similar. [ 31 ] [ 32 ] [ 26 ] The vapor pressure and the enthalpy of vaporization are influenced more strongly by the long-range interactions than the saturated densities. This is due to the fact that the potential is manipulated mainly energetically by the truncation and shifting.
The Lennard-Jones potential is not only of fundamental importance in computational chemistry and soft-matter physics , but also for the modeling of real substances. The Lennard-Jones potential is used for fundamental studies on the behavior of matter and for elucidating atomistic phenomena. It is also often used for somewhat special use cases, e.g. for studying thermophysical properties of two- or four-dimensional substances [ 33 ] [ 34 ] [ 35 ] (instead of the classical three spatial directions of our universe).
There are two main applications of the Lennard-Jones potentials: (i) for studying the hypothetical Lennard-Jones substance [ 13 ] and (ii) for modeling interactions in real substance models. [ 3 ] [ 2 ] These two applications are discussed in the following.
A Lennard-Jones substance or "Lennard-Jonesium" is the name given to an idealized substance which would result from atoms or molecules interacting exclusively through the Lennard-Jones potential. [ 13 ] Statistical mechanics [ 36 ] and computer simulations [ 15 ] [ 16 ] can be used to study the Lennard-Jones potential and to obtain thermophysical properties of the 'Lennard-Jones substance'. The name 'Lennard-Jonesium' [ 13 ] suggests that it is viewed as a (fictive) chemical element . [ 21 ] Moreover, its energy and length parameters can be adjusted to fit many different real substances. Both the Lennard-Jones potential and, accordingly, the Lennard-Jones substance are simplified yet realistic models, in that they accurately capture essential physical principles like the presence of a critical and a triple point , condensation and freezing . Due in part to its mathematical simplicity, the Lennard-Jones potential has been extensively used in studies on matter since the early days of computer simulation. [ 37 ] [ 38 ] [ 39 ] [ 40 ]
Thermophysical properties of the Lennard-Jones substance, [ 13 ] i.e. of particles interacting with the Lennard-Jones potential, can be obtained using statistical mechanics. Some properties can be computed analytically, i.e. with machine precision, whereas most properties can only be obtained by performing molecular simulations. [ 15 ] The latter will in general be superimposed by both statistical and systematic uncertainties. [ 43 ] [ 21 ] [ 44 ] [ 45 ] The virial coefficients can for example be computed directly from the Lennard-Jones potential using algebraic expressions, [ 36 ] and the reported data therefore has no statistical uncertainty. Molecular simulation results, e.g. the pressure at a given temperature and density, have both statistical and systematic uncertainties. [ 43 ] [ 45 ] Molecular simulations of the Lennard-Jones potential can in general be performed using either molecular dynamics (MD) simulations or Monte Carlo (MC) simulation. For MC simulations, the Lennard-Jones potential V L J ( r ) {\displaystyle V_{\mathrm {LJ} }(r)} is directly used, whereas MD simulations are always based on the derivative of the potential, i.e. the force F = − d V / d r {\displaystyle F=-\mathrm {d} V/\mathrm {d} r} . These differences in combination with differences in the treatment of the long-range interactions (see below) can influence computed thermophysical properties. [ 46 ] [ 32 ]
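The relation between the potential (used directly in MC) and the force (used in MD) can be illustrated with a short Python sketch that checks the analytic force F = −dV/dr against a central finite difference (reduced units; illustrative only):

```python
def v_lj(r, eps=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def f_lj(r, eps=1.0, sigma=1.0):
    """Analytic force magnitude F = -dV/dr = 24*eps*(2*(sigma/r)**12 - (sigma/r)**6)/r."""
    sr6 = (sigma / r) ** 6
    return 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r

r, h = 1.3, 1e-6
print(f_lj(r))                                 # analytic force
print(-(v_lj(r + h) - v_lj(r - h)) / (2 * h))  # finite difference, should agree
print(f_lj(2.0 ** (1.0 / 6.0)))                # ~0 at the potential minimum
```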
Since the Lennard-Jonesium is the archetype for the modeling of simple yet realistic intermolecular interactions, a large number of thermophysical properties were studied and reported in the literature. [ 21 ] Computer experiment data of the Lennard-Jones potential is presently considered the most accurately known data in classical mechanics computational chemistry. Hence, such data is also mostly used as a benchmark for validating and testing new algorithms and theories. The Lennard-Jones potential has been constantly used since the early days of molecular simulations. The first results from computer experiments for the Lennard-Jones potential were reported by Rosenbluth and Rosenbluth [ 38 ] and Wood and Parker [ 37 ] after molecular simulations on " fast computing machines " became available in 1953. [ 47 ] Since then many studies reported data of the Lennard-Jones substance; [ 21 ] approximately 50,000 data points are publicly available. The current state of research on the thermophysical properties of the Lennard-Jones substance is summarized by Stephan et al. [ 21 ] (which did not cover transport and mixture properties). The US National Institute of Standards and Technology (NIST) provides examples of molecular dynamics and Monte Carlo codes along with results obtained from them. [ 48 ] Transport property data of Lennard-Jones fluids have been compiled by Bell et al. [ 49 ] and Lautenschaeger and Hasse. [ 50 ]
Figure on the right shows the phase diagram of the Lennard-Jones fluid. Phase equilibria of the Lennard-Jones potential have been studied numerous times and are accordingly known today with good precision. [ 41 ] [ 21 ] [ 51 ] The figure shows correlations derived from computer experiment results (hence, lines instead of data points are shown).
The mean intermolecular interaction of a Lennard-Jones particle strongly depends on the thermodynamic state, i.e. temperature and pressure (or density). For solid states, the attractive Lennard-Jones interaction plays a dominant role – especially at low temperatures. For liquid states, no ordered structure is present, in contrast to solid states. The mean potential energy per particle is negative. For gaseous states, attractive interactions of the Lennard-Jones potential play a minor role, since the particles are far apart. The main part of the internal energy is stored as kinetic energy for gaseous states. At supercritical states, the attractive Lennard-Jones interaction plays a minor role. With increasing temperature, the mean kinetic energy of the particles increases and exceeds the energy well of the Lennard-Jones potential. Hence, the particles mainly interact by the potential's soft repulsive interactions and the mean potential energy per particle is accordingly positive.
Overall, because the Lennard-Jones potential has been studied over a long timespan, during much of which computational resources were insufficient for accurate simulations (by modern standards), a noticeable amount of the data reported in the literature is known to be dubious. [ 21 ] Nevertheless, in many studies such data is used as reference. The lack of data repositories and data assessment is a crucial element for future work in the long-going field of Lennard-Jones potential research.
The most important characteristic points of the Lennard-Jones potential are the critical point and the vapor–liquid–solid triple point . They were studied numerous times in the literature and compiled in Ref. [ 21 ] The critical point was thereby assessed to be located at
The given uncertainties were calculated from the standard deviation of the critical parameters derived from the most reliable available vapor–liquid equilibrium data sets. [ 21 ] These uncertainties can be assumed to be a lower limit on the accuracy with which the critical point of a fluid can be obtained from molecular simulation results.
The triple point is presently assumed to be located at
The uncertainties represent the scattering of data from different authors. [ 41 ] The critical point of the Lennard-Jones substance has been studied far more often than the triple point. For both the critical point and the vapor–liquid–solid triple point, several studies reported results outside the above stated ranges. The data stated above are the presently assumed correct and reliable values. Nevertheless, the precision with which the critical temperature and the triple point temperature are known is still unsatisfactory.
Evidently, the phase coexistence curves (cf. figures) are of fundamental importance to characterize the Lennard-Jones potential. Furthermore, Brown's characteristic curves [ 55 ] yield an illustrative description of essential features of the Lennard-Jones potential. Brown's characteristic curves are defined as curves on which a certain thermodynamic property of the substance matches that of an ideal gas . For a real fluid, Z {\displaystyle Z} and its derivatives can match the values of the ideal gas for special T {\displaystyle T} , ρ {\displaystyle \rho } combinations only, as a result of Gibbs' phase rule. The resulting points collectively constitute a characteristic curve. Four main characteristic curves are defined: one 0th-order (named Zeno curve ) and three 1st-order curves (named Amagat , Boyle , and Charles curve ). The characteristic curves are required to have a negative or zero curvature throughout and a single maximum in a double-logarithmic pressure-temperature diagram. Furthermore, Brown's characteristic curves and the virial coefficients are directly linked in the limit of the ideal gas and are therefore known exactly at ρ → 0 {\displaystyle \rho \rightarrow 0} . Both computer simulation results and equation of state results have been reported in the literature for the Lennard-Jones potential. [ 53 ] [ 21 ] [ 52 ] [ 56 ] [ 57 ]
Points on the Zeno curve Z have a compressibility factor of unity Z = p / ( ρ T ) = 1 {\displaystyle Z=p/(\rho T)=1} . The Zeno curve originates at the Boyle temperature T B = 3.417927982 ε k B − 1 {\displaystyle T_{\mathrm {B} }=3.417927982\,\varepsilon k_{\mathrm {B} }^{-1}} , surrounds the critical point, and has a slope of unity in the low temperature limit. [ 52 ] Points on the Boyle curve B have d Z d ( 1 / ρ ) | T = 0 {\displaystyle \left.{\frac {\mathrm {d} Z}{\mathrm {d} (1/\rho )}}\right|_{T}=0} . The Boyle curve originates with the Zeno curve at the Boyle temperature, faintly surrounds the critical point, and ends on the vapor pressure curve. Points on the Charles curve (a.k.a. Joule-Thomson inversion curve ) have d Z d T | p = 0 {\displaystyle \left.{\frac {\mathrm {d} Z}{\mathrm {d} T}}\right|_{p}=0} and more importantly d T d p | h = 0 {\displaystyle \left.{\frac {\mathrm {d} T}{\mathrm {d} p}}\right|_{h}=0} , i.e. no temperature change upon isenthalpic throttling. It originates at T = 6.430798418 ε k B − 1 {\displaystyle T=6.430798418\,\varepsilon k_{\mathrm {B} }^{-1}} in the ideal gas limit, crosses the Zeno curve, and terminates on the vapor pressure curve. Points on the Amagat curve A have d Z d T | ρ = 0 {\displaystyle \left.{\frac {\mathrm {d} Z}{\mathrm {d} T}}\right|_{\rho }=0} . It also starts in the ideal gas limit at T = 25.15242837 ε k B − 1 {\displaystyle T=25.15242837\,\varepsilon k_{\mathrm {B} }^{-1}} , surrounds the critical point and the other three characteristic curves and passes into the solid phase region. A comprehensive discussion of the characteristic curves of the Lennard-Jones potential is given by Stephan and Deiters. [ 52 ]
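Since the characteristic curves are linked to the virial coefficients in the ideal-gas limit, the Boyle temperature can be illustrated numerically: the second virial coefficient B2(T) = −2π ∫ (e^(−V(r)/kT) − 1) r² dr changes sign there. A rough Python quadrature sketch in reduced units (grid parameters are arbitrary choices; only the sign change near T_B* ≈ 3.418 is the point):

```python
import math

def v_lj(r):
    sr6 = (1.0 / r) ** 6
    return 4.0 * (sr6 * sr6 - sr6)

def b2_reduced(t_star, r_max=30.0, n=30000):
    """Second virial coefficient B2* = B2/sigma^3 by simple quadrature:
    B2 = -2*pi * int_0^inf (exp(-V(r)/(kB*T)) - 1) * r**2 dr (reduced units)."""
    h = r_max / n
    acc = 0.0
    for k in range(1, n + 1):
        r = k * h
        acc += (math.exp(-v_lj(r) / t_star) - 1.0) * r * r
    return -2.0 * math.pi * acc * h

# B2 changes sign at the Boyle temperature, T_B* ~ 3.418 (cf. the value above):
for t in (3.0, 3.418, 4.0):
    print(t, b2_reduced(t))  # negative, ~zero, positive
```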
Properties of the Lennard-Jones fluid have been studied extensively in the literature due to the outstanding importance of the Lennard-Jones potential in soft-matter physics and related fields. [ 13 ] About 50 datasets of computer experiment data for the vapor–liquid equilibrium have been published to date. [ 21 ] Furthermore, more than 35,000 data points at homogeneous fluid states have been published over the years and recently been compiled and assessed for outliers in an open access database. [ 21 ]
The vapor–liquid equilibrium of the Lennard-Jones substance is presently known with a precision, i.e. mutual agreement of thermodynamically consistent data, of ± 1 % {\displaystyle \pm 1\%} for the vapor pressure, ± 0.2 % {\displaystyle \pm 0.2\%} for the saturated liquid density, ± 1 % {\displaystyle \pm 1\%} for the saturated vapor density, ± 0.75 % {\displaystyle \pm 0.75\%} for the enthalpy of vaporization, and ± 4 % {\displaystyle \pm 4\%} for the surface tension. [ 21 ] This status quo cannot be considered satisfactory, given that statistical uncertainties usually reported for single data sets are significantly below the above stated values (even for far more complex molecular force fields).
Both phase equilibrium properties and homogeneous state properties at arbitrary density can in general only be obtained from molecular simulations, whereas virial coefficients can be computed directly from the Lennard-Jones potential. [ 36 ] Numerical data for the second and third virial coefficients are available in a wide temperature range. [ 58 ] [ 52 ] [ 21 ] For higher virial coefficients (up to the sixteenth), the number of available data points decreases with increasing order of the virial coefficient. [ 59 ] [ 60 ] Transport properties (viscosity, heat conductivity, and self-diffusion coefficient) of the Lennard-Jones fluid have also been studied, [ 61 ] [ 62 ] but the database is significantly less dense than for homogeneous equilibrium properties like p v T {\displaystyle pvT} – or internal energy data. Moreover, a large number of analytical models ( equations of state ) have been developed for the description of the Lennard-Jones fluid (see below for details).
The database and knowledge for the Lennard-Jones solid is significantly poorer than for the fluid phases. It was realized early that the interactions in solid phases should not be approximated to be pair-wise additive – especially for metals. [ 63 ] [ 64 ]
Nevertheless, the Lennard-Jones potential is used in solid-state physics due to its simplicity and computational efficiency. Hence, the basic properties of the solid phases and the solid–fluid phase equilibria have been investigated several times, e.g. Refs. [ 51 ] [ 41 ] [ 42 ] [ 65 ] [ 66 ] [ 54 ]
The Lennard-Jones substance forms fcc (face centered cubic), hcp (hexagonal close-packed) and other close-packed polytype lattices – depending on temperature and pressure, cf. figure above with phase diagram. At low temperature and up to moderate pressure, the hcp lattice is energetically favored and therefore the equilibrium structure. The fcc lattice structure is energetically favored at both high temperature and high pressure and therefore overall the equilibrium structure in a wider state range. The coexistence line between the fcc and hcp phase starts at T = 0 {\displaystyle T=0} at approximately p = 878.5 ε σ − 3 {\displaystyle p=878.5\,\varepsilon \sigma ^{-3}} , passes through a temperature maximum at approximately T = 0.4 ε k B − 1 {\displaystyle T=0.4\,\varepsilon k_{\mathrm {B} }^{-1}} , and then ends on the vapor–solid phase boundary at approximately T = 0.32 ε k B − 1 {\displaystyle T=0.32\,\varepsilon k_{\mathrm {B} }^{-1}} , thereby forming a triple point. [ 65 ] [ 41 ] Hence, only the fcc solid phase exhibits phase equilibria with the liquid and supercritical phase, cf. figure above with phase diagram.
The triple point of the two solid phases (fcc and hcp) and the vapor phase is reported to be located at: [ 65 ] [ 41 ]
Note that other, significantly differing values have also been reported in the literature. Hence, the database for the fcc–hcp–vapor triple point should be further solidified in the future.
Mixtures of Lennard-Jones particles are mostly used as a prototype for the development of theories and methods of solutions, but also to study properties of solutions in general. This dates back to the fundamental work of conformal solution theory of Longuet-Higgins [ 67 ] and Leland and Rowlinson and co-workers. [ 68 ] [ 69 ] Those are today the basis of most theories for mixtures. [ 70 ] [ 71 ]
Mixtures of two or more Lennard-Jones components are set up by changing at least one potential interaction parameter ( ε {\displaystyle \varepsilon } or σ {\displaystyle \sigma } ) of one of the components with respect to the other. For a binary mixture, this yields three types of pair interactions that are all modeled by the Lennard-Jones potential: 1-1, 2-2, and 1-2 interactions. For the cross interactions 1–2, additional assumptions are required for the specification of parameters ε 12 {\displaystyle \varepsilon _{\mathrm {12} }} or σ 12 {\displaystyle \sigma _{\mathrm {12} }} from ε 11 {\displaystyle \varepsilon _{\mathrm {11} }} , σ 11 {\displaystyle \sigma _{\mathrm {11} }} and ε 22 {\displaystyle \varepsilon _{\mathrm {22} }} , σ 22 {\displaystyle \sigma _{\mathrm {22} }} . Various choices (all more or less empirical and not rigorously based on physical arguments) can be used for these so-called combination rules. [ 72 ] The most widely used [ 72 ] combination rule is the one of Lorentz and Berthelot [ 73 ]
σ 12 = η 12 σ 11 + σ 22 2 {\displaystyle \sigma _{12}=\eta _{12}{\frac {\sigma _{11}+\sigma _{22}}{2}}}
ε 12 = ξ 12 ε 11 ε 22 {\displaystyle \varepsilon _{12}=\xi _{12}{\sqrt {\varepsilon _{11}\varepsilon _{22}}}}
The parameter ξ 12 {\displaystyle \xi _{12}} is an additional state-independent interaction parameter for the mixture. The parameter η 12 {\displaystyle \eta _{12}} is usually set to unity since the arithmetic mean can be considered physically plausible for the cross-interaction size parameter. The parameter ξ 12 {\displaystyle \xi _{12}} on the other hand is often used to adjust the geometric mean so as to reproduce the phase behavior of the model mixture. For analytical models, e.g. equations of state , the deviation parameter is usually written as k 12 = 1 − ξ 12 {\displaystyle k_{12}=1-\xi _{12}} . For ξ 12 > 1 {\displaystyle \xi _{12}>1} , the cross-interaction dispersion energy and accordingly the attractive force between unlike particles is intensified, and the attractive forces between unlike particles are diminished for ξ 12 < 1 {\displaystyle \xi _{12}<1} .
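A minimal Python sketch of these (modified) combination rules; the component parameters in the example are hypothetical, chosen purely for illustration.

```python
import math

def lorentz_berthelot(sig11, eps11, sig22, eps22, eta12=1.0, xi12=1.0):
    """Modified Lorentz-Berthelot combination rules (cf. the equations above):
    sigma_12 = eta_12 * (sigma_11 + sigma_22) / 2
    eps_12   = xi_12  * sqrt(eps_11 * eps_22)
    """
    sig12 = eta12 * 0.5 * (sig11 + sig22)
    eps12 = xi12 * math.sqrt(eps11 * eps22)
    return sig12, eps12

# A hypothetical binary mixture, component 2 larger and more attractive:
print(lorentz_berthelot(1.0, 1.0, 1.2, 1.5))            # xi_12 = 1
print(lorentz_berthelot(1.0, 1.0, 1.2, 1.5, xi12=0.9))  # weakened cross attraction
```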
For Lennard-Jones mixtures, both fluid and solid phase equilibria can be studied, i.e. vapor–liquid , liquid–liquid , gas–gas, solid–vapor, solid–liquid , and solid–solid. Accordingly, different types of triple points (three-phase equilibria) and critical points can exist, as well as different eutectic and azeotropic points . [ 74 ] [ 71 ] Binary Lennard-Jones mixtures in the fluid region (various types of equilibria of liquid and gas phases) [ 31 ] [ 75 ] [ 76 ] [ 77 ] [ 78 ] have been studied more comprehensively than phase equilibria comprising solid phases. [ 79 ] [ 80 ] [ 81 ] [ 82 ] [ 83 ] A large number of different Lennard-Jones mixtures have been studied in the literature. To date, no standard mixture has been established. Usually, the binary interaction parameters and the two component parameters are chosen such that a mixture with properties convenient for a given task is obtained. Yet, this often makes comparisons tricky.
For the fluid phase behavior, mixtures exhibit practically ideal behavior (in the sense of Raoult's law ) for ξ 12 = 1 {\displaystyle \xi _{12}=1} . For ξ 12 > 1 {\displaystyle \xi _{12}>1} attractive interactions prevail and the mixtures tend to form high-boiling azeotropes, i.e. a lower pressure than pure components' vapor pressures is required to stabilize the vapor–liquid equilibrium. For ξ 12 < 1 {\displaystyle \xi _{12}<1} repulsive interactions prevail and mixtures tend to form low-boiling azeotropes, i.e. a higher pressure than pure components' vapor pressures is required to stabilize the vapor–liquid equilibrium since the mean dispersive forces are decreased. Particularly low values of ξ 12 {\displaystyle \xi _{12}} furthermore will result in liquid–liquid miscibility gaps. Also various types of phase equilibria comprising solid phases have been studied in the literature, e.g. by Carol and co-workers. [ 81 ] [ 83 ] [ 80 ] [ 79 ] Also, cases exist where the solid phase boundaries interrupt fluid phase equilibria. However, for phase equilibria that comprise solid phases, the amount of published data is sparse.
A large number of equations of state (EOS) for the Lennard-Jones potential/ substance have been proposed since its characterization and evaluation became available with the first computer simulations. [ 47 ] Due to the fundamental importance of the Lennard-Jones potential, most currently available molecular-based EOS are built around the Lennard-Jones fluid. They have been comprehensively reviewed by Stephan et al. [ 11 ] [ 52 ]
Equations of state for the Lennard-Jones fluid are of particular importance in soft-matter physics and physical chemistry , where they are used as a starting point for the development of EOS for complex fluids, e.g. polymers and associating fluids. The monomer units of these models are usually directly adapted from a Lennard-Jones EOS as a building block, e.g. the PHC EOS, [ 84 ] the BACKONE EOS, [ 85 ] [ 86 ] and SAFT type EOS. [ 17 ] [ 87 ] [ 88 ] [ 89 ]
More than 30 Lennard-Jones EOS have been proposed in the literature. A comprehensive evaluation [ 11 ] [ 52 ] of such EOS showed that several EOS [ 90 ] [ 91 ] [ 92 ] [ 93 ] describe the Lennard-Jones potential with good and similar accuracy, but none of them is outstanding. Three of those EOS show an unacceptable unphysical behavior in some fluid region, e.g. multiple van der Waals loops, while being elsewise reasonably precise. Only the Lennard-Jones EOS of Kolafa and Nezbeda [ 91 ] was found to be robust and precise for most thermodynamic properties of the Lennard-Jones fluid. [ 52 ] [ 11 ] Furthermore, the Lennard-Jones EOS of Johnson et al. [ 94 ] was found to be less precise for practically all available reference data [ 21 ] [ 11 ] than the Kolafa and Nezbeda EOS. [ 91 ]
The Lennard-Jones potential is extensively used for molecular modeling of real substances. There are essentially two ways the Lennard-Jones potential can be used for molecular modeling: (1) A real substance atom or molecule is modeled directly by the Lennard-Jones potential, which yields very good results for noble gases and methane , i.e. dispersively interacting spherical particles. In the case of methane, the molecule is assumed to be spherically symmetric and the hydrogen atoms are fused with the carbon atom to a common unit. This simplification can in general also be applied to more complex molecules, but usually yields poor results. (2) A real substance molecule is built of multiple Lennard-Jones interaction sites, which can be connected either by rigid bonds or flexible additional potentials (and possibly also of other potential types, e.g. partial charges). Molecular models (often referred to as ' force fields ') for practically all molecular and ionic particles can be constructed using this scheme, for example for alkanes .
Upon using the first outlined approach, the molecular model has only the two parameters of the Lennard-Jones potential ε {\displaystyle \varepsilon } and σ {\displaystyle \sigma } that can be used for the fitting, e.g. ε / k B = 120 K {\displaystyle \varepsilon /k_{\mathrm {B} }=120\,\mathrm {K} } and σ = 0.34 n m {\displaystyle \sigma =0.34\,\mathrm {nm} } can be used for argon . Upon adjusting the model parameters ε and σ to real substance properties, the Lennard-Jones potential can be used to describe simple substance (like noble gases ) with good accuracy. Evidently, this approach is only a good approximation for spherical and simply dispersively interacting molecules and atoms. The direct use of the Lennard-Jones potential has the great advantage that simulation results and theories for the Lennard-Jones potential can be used directly. Hence, available results for the Lennard-Jones potential and substance can be directly scaled using the appropriate ε {\displaystyle \varepsilon } and σ {\displaystyle \sigma } (see reduced units). The Lennard-Jones potential parameters ε {\displaystyle \varepsilon } and σ {\displaystyle \sigma } can in general be fitted to any desired real substance property. In soft-matter physics, usually experimental data for the vapor–liquid phase equilibrium or the critical point are used for the parametrization; in solid-state physics, rather the compressibility, heat capacity or lattice constants are employed. [ 63 ] [ 64 ]
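As an illustrative sketch of this scaling, the following Python snippet maps reduced results onto argon using the parameters quoted above; the reduced critical temperature T_c* ≈ 1.32 for the full LJ potential is a commonly quoted literature value and is an assumption here, not taken from this article.

```python
K_B = 1.380649e-23  # J/K (exact SI value)

eps_over_kb = 120.0  # K, argon value quoted in the text
sigma = 0.34e-9      # m, argon value quoted in the text

# Literature value for the full LJ potential, assumed for illustration:
t_c_reduced = 1.32
print(t_c_reduced * eps_over_kb)   # ~158 K; experiment for argon: ~151 K
print(2.0 ** (1.0 / 6.0) * sigma)  # r_min ~ 3.8e-10 m = 0.38 nm
```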
The second outlined approach of using the Lennard-Jones potential as a building block of elongated and complex molecules is far more sophisticated. Molecular models are thereby tailor-made in a sense that simulation results are only applicable for that particular model. This development approach for molecular force fields is today mainly performed in soft-matter physics and associated fields such as chemical engineering , chemistry, and computational biology. A large number of force fields are based on the Lennard-Jones potential, e.g. the TraPPE force field , [ 95 ] the OPLS force field, [ 96 ] and the MolMod force field [ 97 ] (an overview of molecular force fields is out of the scope of the present article). For the state-of-the-art modeling of solid-state materials, more elaborate multi-body potentials (e.g. EAM potentials [ 98 ] ) are used.
The Lennard-Jones potential yields a good approximation of intermolecular interactions for many applications: the macroscopic properties computed using the Lennard-Jones potential are in good agreement with experimental data for simple substances like argon on the one hand, and the potential function V L J ( r ) {\displaystyle V_{\mathrm {LJ} }(r)} is in fair agreement with results from quantum chemistry on the other. The Lennard-Jones potential gives a good description of molecular interactions in fluid phases, whereas molecular interactions in solid phases are only roughly described. This is mainly due to the fact that multi-body interactions, which are not covered by the Lennard-Jones potential, play a significant role in solid phases. Therefore, the Lennard-Jones potential is extensively used in soft-matter physics and associated fields, whereas it is less frequently used in solid-state physics . Due to its simplicity, the Lennard-Jones potential is often used to describe the properties of gases and simple fluids and to model dispersive and repulsive interactions in molecular models . It is especially accurate for noble gas atoms and methane . It is furthermore a good approximation for molecular interactions at long and short distances for neutral atoms and molecules. Therefore, the Lennard-Jones potential is very often used as a building block of molecular models of complex molecules, e.g. alkanes or water . [ 95 ] [ 99 ] [ 97 ] The Lennard-Jones potential can also be used to model the adsorption interactions at solid–fluid interfaces, i.e. physisorption or chemisorption .
It is well accepted that the main limitations of the Lennard-Jones potential lie in the fact that the potential is a pair potential (it does not cover multi-body interactions) and that the 1 / r 12 {\displaystyle 1/r^{12}} exponent term is used for the repulsion. Results from quantum chemistry suggest that a higher exponent than 12 would have to be used, i.e. a steeper potential. Furthermore, the Lennard-Jones potential has limited flexibility, i.e. only the two model parameters ε {\displaystyle \varepsilon } and σ {\displaystyle \sigma } can be used in fitting to describe a real substance. | https://en.wikipedia.org/wiki/Lennard-Jones_potential
The Lenovo ThinkPad X220 is a laptop computer from the ThinkPad series that was manufactured by Lenovo . [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] It uses a 12.5 inch IPS or TN display. [ 8 ] [ 9 ]
A tablet version was also released. [ 10 ]
The keyboard from the X220 has been retrofitted into an X230. [ 11 ]
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Lenovo_ThinkPad_X220 |
A lensmeter or lensometer (sometimes also known as a focimeter or vertometer ), [ 1 ] [ 2 ] is an optical instrument used in ophthalmology . It is mainly used by optometrists and opticians to measure the back or front vertex power of a spectacle lens and verify the correct prescription in a pair of eyeglasses , to properly orient and mark uncut lenses, and to confirm the correct mounting of lenses in spectacle frames. Lensmeters can also verify the power of contact lenses , if a special lens support is used.
The parameters appraised by a lensmeter are the values specified by an ophthalmologist or optometrist on the patient's prescription : sphere, cylinder, axis, add, and in some cases, prism. The lensmeter is also used to check the accuracy of progressive lenses , and is often capable of marking the lens center and various other measurements critical to proper performance of the lens. It may also be used prior to an eye examination to obtain the last prescription the patient was given, in order to expedite the subsequent examination.
In 1848, Antoine Claudet produced the photographometer, an instrument designed to measure the intensity of photogenic rays; and in 1849 he brought out the focimeter, for securing a perfect focus in photographic portraiture. [ 3 ] In 1876, Hermann Snellen introduced a phakometer, a set-up similar to an optical bench which could measure the power and find the optical centre of a convex lens. Troppman went a step further in 1912, introducing the first direct measuring instrument.
In 1922, a patent was filed for the first projection lensmeter, which has a similar system to the standard lensmeter pictured above, but projects the measuring target onto a screen eliminating the need for correction of the observer's refractive error in the instrument itself and reducing the requirement to peer down a small telescope into the instrument. Despite these advantages the above design is still predominant in the optical world. [ 4 ] | https://en.wikipedia.org/wiki/Lensmeter |
Lentisphaera araneosa is a marine bacterium in the bacterial phylum Lentisphaerota . The cells are able to produce viscous transparent exopolymers and grow attached to each other by the polymer in a three-dimensional configuration. They are part of the natural surface bacterial population in the Atlantic and Pacific oceans, making up less than 1% of the total bacterial community. The species is gram negative , non-motile , non-pigmented , aerobic , chemoheterotrophic , facultatively oligotrophic , and sphere-shaped. [ 2 ] [ 3 ] Its genome has been sequenced. [ 4 ]
This bacteria -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Lentisphaera_araneosa |
Lentisphaerota is a phylum of bacteria closely related to Chlamydiota and Verrucomicrobiota . [ 2 ] [ 3 ]
It includes two monotypic orders, Lentisphaerales and Victivallales . Phylum members can be aerobic or anaerobic and fall under two distinct phenotypes. These phenotypes live within bodies of sea water and were particularly hard to isolate in pure culture. [ 4 ] One phenotype, L. marina , is associated with the terrestrial gut microbiota of mammals and birds; it was found in the Sea of Japan . [ 4 ] The other phenotype ( L. araneosa ) includes marine microorganisms : sequences from fish and coral microbiomes and marine sediment .
The phylogeny is based on the work of the All-Species Living Tree Project . [ 5 ]
Oligosphaera ethanolica
Victivallis vadensis
L. araneosa
L. marina
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LSPN) [ 6 ] and the National Center for Biotechnology Information (NCBI). [ 7 ]
Notes: ♠ Strain found at the National Center for Biotechnology Information (NCBI) but not listed in the List of Prokaryotic names with Standing in Nomenclature (LPSN) | https://en.wikipedia.org/wiki/Lentisphaerota |
A lentoid is a geometric shape of a three-dimensional body, best described as a circle viewed from one direction and a convex lens viewed from every orthogonal direction. It has no strict mathematical definition, but may be described as the volume enclosed within overlapping paraboloids .
The term is most often used in describing jewelry and cellular phenomena in microbiology .
Since ancient times, the lentoid shape has been used to fashion jewelry and identification seals made from a variety of gemstones and metals. In Minoan Crete , for example, Minoan seals have been found with complex carving on lentoid stones. [ 1 ] The lentoid was one of the most commonly recovered seal shapes from Bronze Age Minoan Knossos on Crete, as evidenced by the finds at that palace. [ 2 ]
This geometry-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Lentoid |
Lentztrehaloses A, B, and C are trehalose analogues found in an actinomycete , Lentzea sp. ML457-mF8. [ 1 ] [ 2 ] Lentztrehaloses A and B can be synthesized chemically. [ 3 ] The non-reducing disaccharide trehalose is commonly used in foods and various other products as a stabilizer and humectant. Trehalose has been shown to have curative effects for treating various diseases in animal models, including neurodegenerative diseases, hepatic diseases, and arteriosclerosis. Trehalose, however, is readily digested by the hydrolytic enzyme trehalase, which is widely expressed in many organisms from microbes to humans. As a result, hydrolysis by trehalase may cause decomposition of products containing trehalose and may reduce its medicinal effect. Lentztrehaloses are rarely hydrolyzed by microbial and mammalian trehalases [ 4 ] and may be used in various areas as a biologically stable substitute for trehalose. | https://en.wikipedia.org/wiki/Lentztrehalose
Leo Morandi (14 September 1923 – 2 May 2009) was a promoter of the new post- World War II commercial ceramics industry of Sassuolo , Italy. At first he collaborated with local ceramic producers ( Marazzi Group ) and with Industries D'Agostino in Salerno , Italy . In 1954, after selling an innovative patent (a biscuit selection unit) to Ceramiche Marazzi , he was able to start his own equipment supply business. Leo Morandi's innovations, knowledge, and experience aided growth and advancement throughout the Sassuolo ceramic companies, which in the 1960s came to constitute the internationally known Sassuolo tile district .
In particular, he used two electrodes to select the biscuit: if electricity passed through the material, the biscuit was not suitable for the subsequent glazing.
The production chain for ceramic tiles was then mainly handcrafted and labour-intensive, but Leo Morandi's inventions started the successful automation of various units. After the initial years he was able to open a distinct efficient production unit. Morandi was a reserved man and did not like publicity. He involved his workers and customers in the improvement of his inventions and innovations.
Ceramic industry equipment examples include: automated floor tile edge glaze remover, automated packaging (floor tiles were historically bound with thin iron wire before shipment), specialized silk screen machines, the peristaltic pump , the transport line, the hydraulic press , tile overturning mechanism, press reception unit, and glazing applications done with disk booths. These basic tile processing elements are still in use.
Leo Morandi began exporting these proven automated Italian ceramic industry innovations to Spain , allowing advancement of two primary ceramic industry clusters.
1945 - 1 December Licence n.424701
Device to abrade automatically the enamel from the edges of the floor tiles for covering.
1958 - 14 June Licence n.592220
System to fasten with iron threads tiles of geometrical shape.
1958 - 1 August Licence n.598125
Machine to remove automatically from the tiles the enamel stains left on the edges during the humid glazing.
1958 - 1 August Licence n.593124
Device to brush tiles on both sides with a single passage.
1958 - 2 August Licence n.593126
Automatic machine to remove from the tiles the slobbers left by the stamps on the edges.
1958 - 2 August Licence n.593127
Pump for dense liquids, particularly for vitreous enamel in watery suspension for the ceramic industry.
1958 - 25 September Licence n.595789
Machine to retrieve the vitreous enamel of the tiles which result defective after the humid glazing.
1959 - 6 April Licence n.606642
Procedure to manufacture enameled tiles with dull edges.
1960 - 7 April Licence n.629034
Automatic machine in order to collect into a pile humid tiles coming from the press.
1961 - 7 April Licence n.646867
Conveyor for ceramic floor tiles.
1961 - 7 April Licence n.646866
Device to turn over the tiles on the conveyor.
1961 - 15 April Licence n.647228
Machine to divide the tiles by thickness.
1962 - 28 March Licence n.685219
Hydraulic system, consisting in a press actioned by mostly self-feeding cylinders.
1968 - 13 March Licence n.832329
Automatic machine to apply screen printed decorations to tiles.
1973 - 6 December Licence n.1001087
Device to distribute the tiles being part of a row.
1973 - 6 December Licence n.1001086
Device to empty the baking supports of tiles.
1978 - 12 April Licence n.1104063
Perfected device with rotary discs to nebulize and evenly distribute enamels to be applied on tiles.
1978 - 3 July Licence n.28979B
Glazing cabin with rotary discs to nebulize and evenly distribute enamels to be applied on tiles.
Il distretto ceramico di Sassuolo [1] (in Italian) | https://en.wikipedia.org/wiki/Leo_Morandi |
Leo Palatinus ( Latin for Palatine Lion ) was a constellation created by the astronomer Karl-Joseph König in 1785. He created the constellation to honor the patrons, Count Palatine Charles Theodore and Countess Palatine Elizabeth Auguste , of the observatory in Mannheim, Germany, where he worked. However, the constellation failed to attract attention from contemporary and subsequent astronomers, and it was never depicted in a chart aside from the 1785 description. [ 1 ] [ 2 ]
Leo Palatinus was made of two non-contiguous groups of stars: a scattering of fourth-magnitude stars in far northwestern Aquarius made up a crowned lion, and a second group of even fainter stars west of Equuleus formed the monogram CTEA (the combined initials of König's patrons) above the lion. [ 2 ]
This constellation -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Leo_Palatinus |
Leo Yaffe , OC FRSC (July 6, 1916 – May 14, 1997) was a Canadian nuclear chemistry scientist and a proponent of the peaceful uses of nuclear power .
Born in Devils Lake , North Dakota , his family moved to Winnipeg in 1920. He studied at the University of Manitoba receiving a B.Sc.(Hons) in 1940, a M.Sc. in 1941, and was awarded an honorary D.Sc. in 1982. He received a Ph.D. in 1943 from McGill University .
In 1943, he was recruited by Atomic Energy of Canada Limited to work at the Manhattan Project 's Montreal Laboratory , moving to the Chalk River Laboratories , on the banks of the Ottawa River , in Ontario , at the end of the war. He remained with the AECL until 1952. His research group developed intense sources of the radioactive isotope cobalt-60 used for treatment of cancer , and radioactive tracers for medical diagnosis. [ 1 ]
In 1952, he moved to McGill University , where he studied nuclear reactions using the J.S. Foster cyclotron, which had just been built at McGill. In 1958 he became the Macdonald Professor of Chemistry. [ 1 ]
From 1963 to 1965 he was director of research at the International Atomic Energy Agency in Vienna . Returning to McGill he was appointed head of the department of chemistry until 1972. In 1974 he was appointed vice-principal (administration) which he held until he retired in 1981. From 1981 to 1982, he was the president of the Chemical Institute of Canada .
He married Betty Workman, with whom he had two children: Carla Krasnick and Mark Yaffe. Yaffe died in Montreal in 1997. [ 2 ] The McGill University Archives holds a collection of his personal papers and photographs. [ 3 ] | https://en.wikipedia.org/wiki/Leo_Yaffe
Leon N. Cooper ( né Kupchik ; February 28, 1930 – October 23, 2024) was an American theoretical physicist and neuroscientist . He won the Nobel Prize in Physics for his work on superconductivity . Cooper developed the concept of Cooper pairs and collaborated with John Bardeen and John Robert Schrieffer to develop the BCS theory of conventional superconductivity . [ 1 ] [ 2 ] In neuroscience, Cooper co-developed the BCM theory of synaptic plasticity . [ 3 ]
Leon N. Kupchick was born in the Bronx , New York City on February 28, 1930. [ 4 ] His middle initial N. does not stand for anything, though some sources erroneously suggested his middle name was Neil. [ 4 ]
His father Irving Kupchik was from Belarus and moved to the United States after the Russian Revolution in 1917. His mother Anna (née Zola) Kupchik was from Poland; she died when Leon was seven. [ 4 ] His father later changed the family's surname from Kupchick to Cooper when he remarried. [ 4 ]
Leon attended the Bronx High School of Science, graduating in 1947. [ 5 ] [ 6 ] He then studied at Columbia University in nearby Upper Manhattan, receiving a Bachelor of Arts degree in 1951. [ 7 ] He remained at Columbia for graduate school, obtaining a Master of Arts degree in 1953 [ 7 ] and a Doctor of Philosophy (PhD) in 1954. [ 7 ] [ 8 ] His PhD was on the subject of muonic atoms, with Robert Serber as his thesis advisor. [ 9 ] [ 10 ]
Cooper spent one year as a postdoctoral researcher at the Institute for Advanced Study in Princeton, New Jersey. He then taught at the University of Illinois at Urbana–Champaign and Ohio State University before joining Brown University in 1958. [ 8 ] He would remain at Brown for the rest of his career.
Cooper founded Brown's Institute for Brain and Neural Systems in 1973, becoming its first director. [ 7 ] In 1974 he was appointed Professor of Science at Brown, an endowed chair funded by Thomas J. Watson Sr. [ 7 ] Cooper held visiting research positions at various institutions including the Institute for Advanced Study in Princeton, New Jersey , and at CERN (European Organization for Nuclear Research) in Geneva , Switzerland. [ citation needed ]
Along with colleague Charles Elbaum , he founded the tech company Nestor in 1975, which sought commercial applications for artificial neural networks . [ 11 ] [ 12 ] Nestor partnered with Intel to develop the Ni1000 neural network computer chip in 1994. [ 13 ]
Cooper first married Martha Kennedy, with whom he had two daughters. [ 4 ] In 1969, he married for a second time, to Kay Allard. [ 14 ] He died at his home in Providence, Rhode Island , on October 23, 2024, at the age of 94. [ 4 ]
While Cooper was a postdoc in Princeton, he was approached by John Bardeen, a professor at the University of Illinois, and Bardeen's graduate student John Robert Schrieffer. Bardeen and Schrieffer were working on superconductivity, a topic that was new to Cooper, but he agreed to collaborate with them. Superconductivity had been experimentally discovered in 1911, but there was no theoretical explanation for the phenomenon. Cooper moved to Illinois as a postdoc to work with Bardeen.
After a year of theoretical investigation, Cooper developed the idea of a quasiparticle composed of two bound electrons, now known as a Cooper pair. Cooper published his concept of Cooper pairs in Physical Review in September 1956. [ 4 ] [ 15 ] The movement of Cooper pairs through a low-temperature metal would be almost unimpeded, producing a very low electrical resistance. After further development, Bardeen, Cooper and Schrieffer showed how this could produce superconductivity, publishing their theory in Physical Review in two papers during 1957. [ 4 ] [ 16 ] [ 17 ] This theory became known as the BCS theory, after the authors' initials, and is widely accepted as the explanation for conventional superconductivity. Bardeen, Schrieffer and Cooper were awarded the Nobel Prize in Physics in 1972 for their theory. [ 4 ]
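For context, the quantitative core of Cooper's 1956 argument is usually summarized by the binding energy of a single pair of electrons added above a filled Fermi sea. The expression below is the standard weak-coupling textbook result, not a formula quoted in this article; the notation (Debye cutoff, density of states, interaction strength) is assumed here for illustration.

```latex
% Standard weak-coupling form of the Cooper-pair binding energy
% (assumed textbook notation, not taken from this article):
%   \hbar\omega_D : Debye energy, the cutoff of the attractive interaction
%   N(0)          : electronic density of states at the Fermi level
%   V             : strength of the attractive electron-electron interaction
\[
  E_b \;\approx\; 2\,\hbar\omega_D \, e^{-2/\left(N(0)\,V\right)}
\]
% E_b is positive for arbitrarily weak attraction V > 0, so the filled Fermi
% sea is unstable toward pair formation; the full BCS theory develops this
% instability into a description of the superconducting state.
```

The point relevant to the paragraph above is that even a very weak attraction binds a pair, which is why pairing, and hence superconductivity, can appear in ordinary metals at sufficiently low temperature.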
After joining Brown University, Cooper became interested in neuroscience, particularly the process of learning. In 1982, Cooper and two doctoral students, Elie Bienenstock and Paul Munro, published their theory of synaptic plasticity in The Journal of Neuroscience. [ 4 ] The theory describes how synapses can strengthen and weaken without the connections saturating: as a synapse approaches saturation, its connection becomes less effective, pulling it back from the limit. Connections therefore oscillate between strengthening and weakening without reaching their limits. The theory helped explain how the visual cortex works and how people learn to see. It became known as the BCM theory, after the authors' initials. [ 4 ]
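To make the idea of synapses strengthening and weakening without saturating more concrete, here is a minimal sketch of a BCM-style update rule. The linear neuron, the product form of the plasticity term, and the sliding threshold that tracks the squared response are the standard textbook presentation of the BCM rule, assumed here for illustration; the constants and random inputs are made up and are not taken from this article.

```python
import numpy as np

# Minimal sketch of a BCM-style synaptic update (standard textbook form;
# constants and inputs are illustrative assumptions).
# w: synaptic weights, x: presynaptic input, y: postsynaptic response,
# theta: sliding modification threshold tracking the recent average of y**2.

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=4)   # initial synaptic weights
theta = 1.0                         # initial modification threshold
eta, tau = 0.01, 100.0              # learning rate, threshold time constant

for _ in range(10_000):
    x = rng.random(4)               # presynaptic activity pattern
    y = float(w @ x)                # postsynaptic response (linear neuron)

    # BCM rule: responses below theta depress the synapse,
    # responses above theta potentiate it.
    w += eta * y * (y - theta) * x

    # Sliding threshold: rises when activity is high and falls when it is
    # low, pulling the weights back before they can saturate.
    theta += (y**2 - theta) / tau

print("weights:", w, "threshold:", theta)
```

The sliding threshold is the feature the paragraph above alludes to: because theta grows with recent activity, strong responses eventually fall below the raised threshold and are weakened again, so the weights keep adjusting instead of pinning at a maximum.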
Cooper was the author of Science and Human Experience (Cambridge University Press, 2014), a collection of essays, including previously unpublished material, on issues such as consciousness and the structure of space.
Cooper also wrote an unconventional liberal-arts physics textbook, originally published as An Introduction to the Meaning and Structure of Physics (Harper and Row, 1968) [ 19 ] and still in print in a somewhat condensed form as Physics: Structure and Meaning (Lebanon, New Hampshire: University Press of New England, 1992). | https://en.wikipedia.org/wiki/Leon_Cooper