Dataset schema: triplets (list) · passage (string, length 6 to 20.1k characters) · __index_level_0__ (int64, range 0 to 834)
[ "Leonardo's fighting vehicle", "instance of", "invention" ]
Leonardo da Vinci's fighting vehicle is one of the conceptions of the revered Italian polymath and artist Leonardo da Vinci.

Design: The concept was designed while Leonardo da Vinci was under the patronage of Ludovico Sforza in 1487. Sometimes described as a prototype of modern tanks, Leonardo's armored vehicle featured a conical cover inspired by a turtle's shell. The covering was to be made of wood, reinforced with metal plates for added thickness. Slanting angles would deflect enemy fire. The machine was powered by two large cranks operated internally by four strong men. The vehicle was equipped with an array of light cannons placed around the perimeter.

The gears of the design were located in a reversed order, making the vehicle unworkable. Some sources consider this a deliberate mistake by Leonardo, a form of security in case his design was stolen and used irresponsibly. It is almost impossible to fix this problem without taking away power from the vehicle's forward movement. Regardless, the vehicle would have been too heavy to move and would have lacked the battlefield mobility that makes modern tanks so effective.

The armored vehicle was designed to intimidate the enemy rather than to serve as a serious military weapon. Due to its impressive size, it would not have been capable of moving on rugged terrain, and the project could hardly have been realized in the 15th century. Around 2010, a group of engineers recreated Leonardo's vehicle based on the original design and fixed the mistake in the gearing.
3
[ "Engineers of the human soul", "discoverer or inventor", "Joseph Stalin" ]
"Engineers of the human soul" was a term applied to writers and other cultural workers by Joseph Stalin.
0
[ "Engineers of the human soul", "instance of", "term" ]
"Engineers of the human soul" was a term applied to writers and other cultural workers by Joseph Stalin.

In the Soviet Union: The phrase was apparently coined by Yury Olesha. Viktor Shklovsky said that Olesha used it in a meeting with Stalin at the home of Maxim Gorky, and it was subsequently used by Stalin, who said «Как метко выразился товарищ Олеша, писатели — инженеры человеческих душ» ("As comrade Olesha aptly expressed himself, writers are engineers of human souls"). During his meeting with writers in preparation for the first Congress of the Union of Soviet Writers, Stalin said: "The production of souls is more important than the production of tanks.... And therefore I raise my glass to you, writers, the engineers of the human soul" (Joseph Stalin, "Speech at home of Maxim Gorky", 26 October 1932). It was taken up by Andrei Zhdanov and developed into the idea of Socialist realism.
2
[ "Socialism in one country", "instance of", "political theory" ]
Socialism in one country was a Soviet state policy to strengthen socialism within the country rather than socialism globally. Given the defeats of the 1917–1923 European communist revolutions, Joseph Stalin and Nikolai Bukharin encouraged the theory of the possibility of constructing socialism in the Soviet Union. The theory was eventually adopted as Soviet state policy. As a political theory, its exponents argue that it contradicts neither world revolution nor world communism. The theory opposes Leon Trotsky's theory of permanent revolution and the communist left's theory of world revolution.
2
[ "Mathematical induction", "discoverer or inventor", "Blaise Pascal" ]
In India, early implicit proofs by mathematical induction appear in Bhaskara's "cyclic method". None of these ancient mathematicians, however, explicitly stated the induction hypothesis. Another similar case (contrary to what Vacca has written, as Freudenthal carefully showed) was that of Francesco Maurolico in his Arithmeticorum libri duo (1575), who used the technique to prove that the sum of the first n odd integers is n². The earliest rigorous use of induction was by Gersonides (1288–1344). The first explicit formulation of the principle of induction was given by Pascal in his Traité du triangle arithmétique (1665). Another Frenchman, Fermat, made ample use of a related principle: indirect proof by infinite descent. The induction hypothesis was also employed by the Swiss Jakob Bernoulli, and from then on it became well known. The modern formal treatment of the principle came only in the 19th century, with George Boole, Augustus De Morgan, Charles Sanders Peirce, Giuseppe Peano, and Richard Dedekind.
3
[ "Mathematical induction", "has part(s)", "base case" ]
Mathematical induction proves that we can climb as high as we like on a ladder, by proving that we can climb onto the bottom rung (the basis) and that from each rung we can climb up to the next one (the step).

A proof by induction consists of two cases. The first, the base case, proves the statement for n = 0 without assuming any knowledge of other cases. The second case, the induction step, proves that if the statement holds for any given case n = k, then it must also hold for the next case n = k + 1. These two steps establish that the statement holds for every natural number n. The base case does not necessarily begin with n = 0, but often with n = 1, and possibly with any fixed natural number n = N, establishing the truth of the statement for all natural numbers n ≥ N.

The method can be extended to prove statements about more general well-founded structures, such as trees; this generalization, known as structural induction, is used in mathematical logic and computer science. Mathematical induction in this extended sense is closely related to recursion. Mathematical induction is an inference rule used in formal proofs, and is the foundation of most correctness proofs for computer programs.

Although its name may suggest otherwise, mathematical induction should not be confused with inductive reasoning as used in philosophy (see Problem of induction). The mathematical method examines infinitely many cases to prove a general statement, but does so by a finite chain of deductive reasoning involving the variable n, which can take infinitely many values.

k(k+1)/2 + (k+1) = (k(k+1) + 2(k+1))/2 = (k+1)(k+2)/2 = (k+1)((k+1)+1)/2.

Equating the extreme left-hand and right-hand sides, we deduce that 0 + 1 + 2 + ⋯ + k + (k+1) = (k+1)((k+1)+1)/2. That is, the statement P(k + 1) also holds true, establishing the induction step.

Conclusion: Since both the base case and the induction step have been proved true, by mathematical induction the statement P(n) holds for every natural number n. ∎

Example of error in the induction step: The induction step must be proved for all values of n. To illustrate this, Joel E. Cohen proposed the following argument, which purports to prove by mathematical induction that all horses are of the same color:

Base case: in a set of only one horse, there is only one color.

Induction step: assume as induction hypothesis that within any set of n horses there is only one color. Now look at any set of n + 1 horses. Number them 1, 2, 3, …, n, n + 1. Consider the sets {1, 2, 3, …, n} and {2, 3, 4, …, n + 1}. Each is a set of only n horses, therefore within each there is only one color. But the two sets overlap, so there must be only one color among all n + 1 horses.

The base case n = 1 is trivial, and the induction step is correct in all cases n > 1. However, the argument used in the induction step is incorrect for n + 1 = 2, because the statement that "the two sets overlap" is false for {1} and {2}.
11
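The sum-formula proof in the passage above lends itself to a quick numerical sanity check. The Python sketch below verifies the closed form 0 + 1 + ⋯ + n = n(n+1)/2 and the induction-step identity k(k+1)/2 + (k+1) = (k+1)((k+1)+1)/2 over a finite range of values; it is an illustration only, since the induction argument, not the finite check, is what covers all natural numbers.

```python
# Spot-check the closed form 0 + 1 + ... + n = n(n+1)/2 and the
# induction-step identity k(k+1)/2 + (k+1) = (k+1)(k+2)/2.
# This checks finitely many cases; only the induction argument
# covers every natural number.

def closed_form(n: int) -> int:
    return n * (n + 1) // 2

# Base case and many further cases of the closed form.
for n in range(1000):
    assert sum(range(n + 1)) == closed_form(n)

# Induction step: adding (k + 1) to the formula for k
# yields the formula for k + 1.
for k in range(1000):
    assert closed_form(k) + (k + 1) == closed_form(k + 1)

print("all checks passed")
```

The `//` (integer division) is exact here because one of n and n + 1 is always even.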
[ "Mathematical induction", "has part(s)", "inductive step" ]
Mathematical induction is a method for proving that a statement P(n) is true for every natural number n; that is, that the infinitely many cases P(0), P(1), P(2), P(3), … all hold. Informal metaphors help to explain this technique, such as falling dominoes or climbing a ladder:

Mathematical induction proves that we can climb as high as we like on a ladder, by proving that we can climb onto the bottom rung (the basis) and that from each rung we can climb up to the next one (the step).

A proof by induction consists of two cases. The first, the base case, proves the statement for n = 0 without assuming any knowledge of other cases. The second case, the induction step, proves that if the statement holds for any given case n = k, then it must also hold for the next case n = k + 1. These two steps establish that the statement holds for every natural number n. The base case does not necessarily begin with n = 0, but often with n = 1, and possibly with any fixed natural number n = N, establishing the truth of the statement for all natural numbers n ≥ N.

The method can be extended to prove statements about more general well-founded structures, such as trees; this generalization, known as structural induction, is used in mathematical logic and computer science. Mathematical induction in this extended sense is closely related to recursion. Mathematical induction is an inference rule used in formal proofs, and is the foundation of most correctness proofs for computer programs.

Although its name may suggest otherwise, mathematical induction should not be confused with inductive reasoning as used in philosophy (see Problem of induction).
The mathematical method examines infinitely many cases to prove a general statement, but does so by a finite chain of deductive reasoning involving the variable n, which can take infinitely many values.

P(n): 0 + 1 + 2 + ⋯ + n = n(n+1)/2.

This states a general formula for the sum of the natural numbers less than or equal to a given number; in fact an infinite sequence of statements: 0 = (0)(0+1)/2, 0 + 1 = (1)(1+1)/2, 0 + 1 + 2 = (2)(2+1)/2, etc.

Proposition. For every n ∈ ℕ, 0 + 1 + 2 + ⋯ + n = n(n+1)/2.

Proof. Let P(n) be the statement 0 + 1 + 2 + ⋯ + n = n(n+1)/2. We give a proof by induction on n.

Base case: Show that the statement holds for the smallest natural number n = 0. P(0) is clearly true: 0 = 0(0+1)/2.

Induction step: Show that for every k ≥ 0, if P(k) holds, then P(k + 1) also holds.

Infinite descent: The method of infinite descent is a variation of mathematical induction which was used by Pierre de Fermat. It is used to show that some statement Q(n) is false for all natural numbers n. Its traditional form consists of showing that if Q(n) is true for some natural number n, it also holds for some strictly smaller natural number m. Because there are no infinite decreasing sequences of natural numbers, this situation would be impossible, thereby showing (by contradiction) that Q(n) cannot be true for any n. The validity of this method can be verified from the usual principle of mathematical induction.
Using mathematical induction on the statement P(n) defined as "Q(m) is false for all natural numbers m less than or equal to n", it follows that P(n) holds for all n, which means that Q(n) is false for every natural number n.

Limited mathematical induction: If one wishes to prove that a property P holds for all natural numbers less than or equal to n, it suffices to prove that P satisfies the following conditions: P holds for 0, and for any natural number x less than n, if P holds for x, then P holds for x + 1.

Prefix induction: The most common form of proof by mathematical induction requires proving in the induction step that

Example of error in the induction step: The induction step must be proved for all values of n. To illustrate this, Joel E. Cohen proposed the following argument, which purports to prove by mathematical induction that all horses are of the same color:

Base case: in a set of only one horse, there is only one color.

Induction step: assume as induction hypothesis that within any set of n horses there is only one color. Now look at any set of n + 1 horses. Number them 1, 2, 3, …, n, n + 1. Consider the sets {1, 2, 3, …, n} and {2, 3, 4, …, n + 1}. Each is a set of only n horses, therefore within each there is only one color. But the two sets overlap, so there must be only one color among all n + 1 horses.

The base case n = 1 is trivial, and the induction step is correct in all cases n > 1. However, the argument used in the induction step is incorrect for n + 1 = 2, because the statement that "the two sets overlap" is false for {1} and {2}.
12
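The infinite-descent reasoning in the passages above rests on the fact that there is no infinite strictly decreasing sequence of natural numbers. The sketch below illustrates just that well-foundedness; `descend` is a made-up descent step (floor halving), chosen only to show that any such chain must terminate.

```python
# Sketch of the principle behind infinite descent: any strictly
# decreasing sequence of natural numbers is finite. descend() is a
# hypothetical descent step (it maps every positive natural number
# to a strictly smaller one); iterating it always bottoms out.

def descend(n: int) -> int:
    # Strictly decreases its argument: n // 2 < n for every n >= 1.
    return n // 2

def descent_chain(n: int) -> list:
    chain = [n]
    while chain[-1] > 0:
        chain.append(descend(chain[-1]))
    return chain

print(descent_chain(100))  # [100, 50, 25, 12, 6, 3, 1, 0]
```

Any function that strictly decreases a natural-number argument behaves the same way: the chain reaches 0 after finitely many steps, which is exactly why a descent step for a statement Q(n) yields the contradiction the method exploits.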
[ "Mathematical induction", "has part(s)", "inductive hypothesis" ]
Mathematical induction is a method for proving that a statement P(n) is true for every natural number n; that is, that the infinitely many cases P(0), P(1), P(2), P(3), … all hold. Informal metaphors help to explain this technique, such as falling dominoes or climbing a ladder.

Infinite descent: The method of infinite descent is a variation of mathematical induction which was used by Pierre de Fermat. It is used to show that some statement Q(n) is false for all natural numbers n. Its traditional form consists of showing that if Q(n) is true for some natural number n, it also holds for some strictly smaller natural number m. Because there are no infinite decreasing sequences of natural numbers, this situation would be impossible, thereby showing (by contradiction) that Q(n) cannot be true for any n. The validity of this method can be verified from the usual principle of mathematical induction. Using mathematical induction on the statement P(n) defined as "Q(m) is false for all natural numbers m less than or equal to n", it follows that P(n) holds for all n, which means that Q(n) is false for every natural number n.

Example of error in the induction step: The induction step must be proved for all values of n. To illustrate this, Joel E. Cohen proposed the following argument, which purports to prove by mathematical induction that all horses are of the same color:

Base case: in a set of only one horse, there is only one color.

Induction step: assume as induction hypothesis that within any set of n horses there is only one color. Now look at any set of n + 1 horses. Number them 1, 2, 3, …, n, n + 1. Consider the sets {1, 2, 3, …, n} and {2, 3, 4, …, n + 1}. Each is a set of only n horses, therefore within each there is only one color. But the two sets overlap, so there must be only one color among all n + 1 horses.

The base case n = 1 is trivial, and the induction step is correct in all cases n > 1. However, the argument used in the induction step is incorrect for n + 1 = 2, because the statement that "the two sets overlap" is false for {1} and {2}.
13
[ "Mathematical induction", "instance of", "proof technique" ]
Mathematical induction is a method for proving that a statement P(n) is true for every natural number n; that is, that the infinitely many cases P(0), P(1), P(2), P(3), … all hold. Informal metaphors help to explain this technique, such as falling dominoes or climbing a ladder:

Mathematical induction proves that we can climb as high as we like on a ladder, by proving that we can climb onto the bottom rung (the basis) and that from each rung we can climb up to the next one (the step).

A proof by induction consists of two cases. The first, the base case, proves the statement for n = 0 without assuming any knowledge of other cases. The second case, the induction step, proves that if the statement holds for any given case n = k, then it must also hold for the next case n = k + 1. These two steps establish that the statement holds for every natural number n. The base case does not necessarily begin with n = 0, but often with n = 1, and possibly with any fixed natural number n = N, establishing the truth of the statement for all natural numbers n ≥ N.

The method can be extended to prove statements about more general well-founded structures, such as trees; this generalization, known as structural induction, is used in mathematical logic and computer science. Mathematical induction in this extended sense is closely related to recursion. Mathematical induction is an inference rule used in formal proofs, and is the foundation of most correctness proofs for computer programs.

Although its name may suggest otherwise, mathematical induction should not be confused with inductive reasoning as used in philosophy (see Problem of induction). The mathematical method examines infinitely many cases to prove a general statement, but does so by a finite chain of deductive reasoning involving the variable n, which can take infinitely many values.

Infinite descent: The method of infinite descent is a variation of mathematical induction which was used by Pierre de Fermat. It is used to show that some statement Q(n) is false for all natural numbers n. Its traditional form consists of showing that if Q(n) is true for some natural number n, it also holds for some strictly smaller natural number m. Because there are no infinite decreasing sequences of natural numbers, this situation would be impossible, thereby showing (by contradiction) that Q(n) cannot be true for any n. The validity of this method can be verified from the usual principle of mathematical induction. Using mathematical induction on the statement P(n) defined as "Q(m) is false for all natural numbers m less than or equal to n", it follows that P(n) holds for all n, which means that Q(n) is false for every natural number n.

Limited mathematical induction: If one wishes to prove that a property P holds for all natural numbers less than or equal to n, it suffices to prove that P satisfies the following conditions: P holds for 0, and for any natural number x less than n, if P holds for x, then P holds for x + 1.

Prefix induction: The most common form of proof by mathematical induction requires proving in the induction step that
16
[ "Anamnesis (philosophy)", "discoverer or inventor", "Plato" ]
In Plato's theory of epistemology, anamnesis (Ancient Greek: ἀνάμνησις) is the recollection of innate knowledge acquired before birth; the claim that learning consists of rediscovering knowledge from within. Plato develops the theory of anamnesis in his dialogues Meno, Phaedo, and Phaedrus.

Meno: In Meno, Plato's character (and old teacher) Socrates is challenged by Meno to explain how someone could find out what the nature of virtue is if they did not already know anything about it. In other words, one who knows none of the attributes, properties, or other descriptive markers of any kind that help signify what something is (physical or otherwise) will not recognize it even after coming across it. Therefore, if the converse is true, and one knows the attributes, properties, or other descriptive markers of this thing, one should not need to seek it out at all. The conclusion is that in either instance there is no point trying to gain that "something"; in the case of Plato's aforementioned work, there is no point in seeking knowledge. Socrates' response is to develop his theory of anamnesis and to suggest that the soul is immortal and repeatedly incarnated; knowledge is in the soul from eternity (86b), but each time the soul is incarnated its knowledge is forgotten in the trauma of birth. What one perceives to be learning, then, is the recovery of what one has forgotten. (Once it has been brought back, it is true belief, to be turned into genuine knowledge by understanding.) Socrates (and Plato) thus sees himself not as a teacher but as a midwife, aiding with the birth of knowledge that was already there in the student. The theory is illustrated by Socrates asking a slave boy questions about geometry. At first, the boy gives the wrong answer; when that is pointed out to him, he is puzzled, but by asking questions, Socrates helps him to reach the correct answer.
That is intended to show that since the boy was not told the answer, he reached the truth by only recollecting what he had once known but later forgotten.
0
[ "Anamnesis (philosophy)", "facet of", "epistemology" ]
In Plato's theory of epistemology, anamnesis (Ancient Greek: ἀνάμνησις) is the recollection of innate knowledge acquired before birth; the claim that learning consists of rediscovering knowledge from within. Plato develops the theory of anamnesis in his dialogues Meno, Phaedo, and Phaedrus.

Meno: In Meno, Plato's character (and old teacher) Socrates is challenged by Meno to explain how someone could find out what the nature of virtue is if they did not already know anything about it. In other words, one who knows none of the attributes, properties, or other descriptive markers of any kind that help signify what something is (physical or otherwise) will not recognize it even after coming across it. Therefore, if the converse is true, and one knows the attributes, properties, or other descriptive markers of this thing, one should not need to seek it out at all. The conclusion is that in either instance there is no point trying to gain that "something"; in the case of Plato's aforementioned work, there is no point in seeking knowledge. Socrates' response is to develop his theory of anamnesis and to suggest that the soul is immortal and repeatedly incarnated; knowledge is in the soul from eternity (86b), but each time the soul is incarnated its knowledge is forgotten in the trauma of birth. What one perceives to be learning, then, is the recovery of what one has forgotten. (Once it has been brought back, it is true belief, to be turned into genuine knowledge by understanding.) Socrates (and Plato) thus sees himself not as a teacher but as a midwife, aiding with the birth of knowledge that was already there in the student. The theory is illustrated by Socrates asking a slave boy questions about geometry. At first, the boy gives the wrong answer; when that is pointed out to him, he is puzzled, but by asking questions, Socrates helps him to reach the correct answer.
That is intended to show that since the boy was not told the answer, he reached the truth by only recollecting what he had once known but later forgotten.
1
[ "Anamnesis (philosophy)", "instance of", "theory" ]
In Plato's theory of epistemology, anamnesis (Ancient Greek: ἀνάμνησις) is the recollection of innate knowledge acquired before birth; the claim that learning consists of rediscovering knowledge from within. Plato develops the theory of anamnesis in his dialogues Meno, Phaedo, and Phaedrus.

Meno: In Meno, Plato's character (and old teacher) Socrates is challenged by Meno to explain how someone could find out what the nature of virtue is if they did not already know anything about it. In other words, one who knows none of the attributes, properties, or other descriptive markers of any kind that help signify what something is (physical or otherwise) will not recognize it even after coming across it. Therefore, if the converse is true, and one knows the attributes, properties, or other descriptive markers of this thing, one should not need to seek it out at all. The conclusion is that in either instance there is no point trying to gain that "something"; in the case of Plato's aforementioned work, there is no point in seeking knowledge. Socrates' response is to develop his theory of anamnesis and to suggest that the soul is immortal and repeatedly incarnated; knowledge is in the soul from eternity (86b), but each time the soul is incarnated its knowledge is forgotten in the trauma of birth. What one perceives to be learning, then, is the recovery of what one has forgotten. (Once it has been brought back, it is true belief, to be turned into genuine knowledge by understanding.) Socrates (and Plato) thus sees himself not as a teacher but as a midwife, aiding with the birth of knowledge that was already there in the student. The theory is illustrated by Socrates asking a slave boy questions about geometry. At first, the boy gives the wrong answer; when that is pointed out to him, he is puzzled, but by asking questions, Socrates helps him to reach the correct answer.
That is intended to show that since the boy was not told the answer, he reached the truth by only recollecting what he had once known but later forgotten.
3
[ "Methexis", "instance of", "philosophical concept" ]
In theatre, methexis (Ancient Greek: μέθεξις; also methectics) is "group sharing". Originating from Greek theatre, the audience participates in, creates, and improvises the action of the ritual. In philosophy, methexis is the relation between a particular and a form (in Plato's sense); e.g., a beautiful object is said to partake of the form of beauty. Methexis is sometimes contrasted with mimesis. The latter "connotes emphasis on the solo performer (the hero) separate from the audience," in direct contrast to the communal methectic theatrical experience, which has "little or no 'fourth wall'".
1
[ "Noocracy", "instance of", "form of government" ]
Noocracy (from Ancient Greek nous, 'mind' or 'intellect', and kratos, 'rule'; hence 'rule of the wise') is a form of government in which decision-making is done by wise people. The idea has been proposed by various philosophers, such as Plato, Gautama Buddha, and Confucius.
1
[ "Geocentric model", "opposite of", "heliocentrism" ]
First, from anywhere on Earth, the Sun appears to revolve around Earth once per day. While the Moon and the planets have their own motions, they also appear to revolve around Earth about once per day. The stars appeared to be fixed on a celestial sphere rotating once each day about an axis through the geographic poles of Earth. Second, Earth seems to be unmoving from the perspective of an earthbound observer; it feels solid, stable, and stationary.

Ancient Greek, ancient Roman, and medieval philosophers usually combined the geocentric model with a spherical Earth, in contrast to the older flat-Earth model implied in some mythology. The ancient Jewish Babylonian uranography pictured a flat Earth with a dome-shaped, rigid canopy called the firmament (רקיע, rāqîa') placed over it. However, the Greek astronomer and mathematician Aristarchus of Samos (c. 310 – c. 230 BC) developed a heliocentric model placing all of the then-known planets in their correct order around the Sun. The ancient Greeks believed that the motions of the planets were circular, a view that was not challenged in Western culture until the 17th century, when Johannes Kepler postulated that orbits were heliocentric and elliptical (Kepler's first law of planetary motion). In 1687 Newton showed that elliptical orbits could be derived from his laws of gravitation.

The astronomical predictions of Ptolemy's geocentric model, developed in the 2nd century CE, served as the basis for preparing astrological and astronomical charts for over 1,500 years. The geocentric model held sway into the early modern age, but from the late 16th century onward it was gradually superseded by the heliocentric model of Copernicus (1473–1543), Galileo (1564–1642), and Kepler (1571–1630). There was much resistance to the transition between these two theories; some felt that a new, unknown theory could not subvert an accepted consensus for geocentrism. The geocentric model was eventually replaced by the heliocentric model.
Copernican heliocentrism could remove Ptolemy's epicycles because retrograde motion could be seen to result from the combination of the Earth's and the planet's movements and speeds. Copernicus felt strongly that equants were a violation of Aristotelian purity, and proved that replacing the equant with a pair of new epicycles was entirely equivalent. Astronomers often continued using the equants instead of the epicycles because the former were easier to calculate and gave the same result. It has been determined, in fact, that the Copernican, Ptolemaic, and even the Tychonic models provide identical results for identical inputs: they are computationally equivalent. It was not until Kepler demonstrated a physical observation that could show that the physical Sun is directly involved in determining an orbit that a new model was required.

The Ptolemaic order of spheres from Earth outward is: Moon, Mercury, Venus, Sun, Mars, Jupiter, Saturn, Fixed Stars, Primum Mobile ("First Moved"). Ptolemy did not invent or work out this order, which aligns with the ancient Seven Heavens religious cosmology common to the major Eurasian religious traditions. It also follows the decreasing orbital periods of the Moon, Sun, planets, and stars.
1
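The passage above notes that Copernican heliocentrism removes Ptolemy's epicycles because retrograde motion falls out of the combined motion of Earth and the planet. A minimal numerical sketch of that idea, using circular coplanar orbits with rough radii and periods for Earth and Mars (illustrative values, not an ephemeris):

```python
# Retrograde motion from combined Earth/planet motion, as in the
# Copernican explanation. Circular, coplanar orbits with approximate
# radii (AU) and periods (years) for Earth and Mars.
import math

def position(radius, period_years, t):
    angle = 2 * math.pi * t / period_years
    return (radius * math.cos(angle), radius * math.sin(angle))

def apparent_longitude(t):
    ex, ey = position(1.0, 1.0, t)       # Earth: ~1 AU, 1 yr
    mx, my = position(1.52, 1.88, t)     # Mars: ~1.52 AU, ~1.88 yr
    return math.atan2(my - ey, mx - ex)  # geocentric direction to Mars

# At t = 0 the bodies are aligned (opposition). Shortly afterward the
# apparent longitude of Mars decreases: retrograde motion, with no
# epicycles needed.
lons = [apparent_longitude(t) for t in (0.0, 0.05, 0.1)]
print(lons[0] > lons[1] > lons[2])  # True
```

Because Earth moves faster on its inner orbit, it overtakes Mars near opposition, making Mars appear to drift backward against the stars for a while.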
[ "Geocentric model", "subclass of", "superseded scientific theory" ]
In astronomy, the geocentric model (also known as geocentrism, often exemplified specifically by the Ptolemaic system) is a superseded description of the Universe with Earth at the center. Under most geocentric models, the Sun, Moon, stars, and planets all orbit Earth. The geocentric model was the predominant description of the cosmos in many European ancient civilizations, such as those of Aristotle in Classical Greece and Ptolemy in Roman Egypt. Ptolemy's geocentric model was adopted and refined during the Islamic Golden Age, which Muslims believed correlated with the teachings of Islam.

Two observations supported the idea that Earth was the center of the Universe. First, from anywhere on Earth, the Sun appears to revolve around Earth once per day. While the Moon and the planets have their own motions, they also appear to revolve around Earth about once per day. The stars appeared to be fixed on a celestial sphere rotating once each day about an axis through the geographic poles of Earth. Second, Earth seems to be unmoving from the perspective of an earthbound observer; it feels solid, stable, and stationary.

Ancient Greek, ancient Roman, and medieval philosophers usually combined the geocentric model with a spherical Earth, in contrast to the older flat-Earth model implied in some mythology. The ancient Jewish Babylonian uranography pictured a flat Earth with a dome-shaped, rigid canopy called the firmament (רקיע, rāqîa') placed over it. However, the Greek astronomer and mathematician Aristarchus of Samos (c. 310 – c. 230 BC) developed a heliocentric model placing all of the then-known planets in their correct order around the Sun. The ancient Greeks believed that the motions of the planets were circular, a view that was not challenged in Western culture until the 17th century, when Johannes Kepler postulated that orbits were heliocentric and elliptical (Kepler's first law of planetary motion).
In 1687 Newton showed that elliptical orbits could be derived from his laws of gravitation. The astronomical predictions of Ptolemy's geocentric model, developed in the 2nd century CE, served as the basis for preparing astrological and astronomical charts for over 1,500 years. The geocentric model held sway into the early modern age, but from the late 16th century onward, it was gradually superseded by the heliocentric model of Copernicus (1473–1543), Galileo (1564–1642), and Kepler (1571–1630). There was much resistance to the transition between these two theories. Some felt that a new, unknown theory could not subvert an accepted consensus for geocentrism.
5
[ "Aristotelian logic", "has quality", "syllogism" ]
Syllogism in the first figure In the Prior Analytics translated by A. J. Jenkins as it appears in volume 8 of the Great Books of the Western World, Aristotle says of the First Figure: "... If A is predicated of all B, and B of all C, A must be predicated of all C." In the Prior Analytics translated by Robin Smith, Aristotle says of the first figure: "... For if A is predicated of every B and B of every C, it is necessary for A to be predicated of every C." Taking "a" to abbreviate "is predicated of all" (equivalently, "is predicated of every"), and using the symbolic method of the Middle Ages, the first figure simplifies to: If AaB and BaC, then AaC.
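Read set-theoretically, AaB ("A is predicated of every B") says that the extension of B is contained in that of A, and the first-figure inference is just transitivity of set inclusion. A minimal sketch, with hypothetical term extensions chosen purely for illustration:

```python
def a(pred_ext, subj_ext):
    """AaB: 'A is predicated of every B' — every B is an A."""
    return subj_ext <= pred_ext  # set inclusion: ext(B) ⊆ ext(A)

# Hypothetical term extensions, for illustration only.
animal = {"socrates", "plato", "bucephalus"}
human = {"socrates", "plato"}
greek = {"socrates", "plato"}

# First figure (Barbara): AaB and BaC together entail AaC.
assert a(animal, human) and a(human, greek)
assert a(animal, greek)  # follows because set inclusion is transitive
```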
3
[ "Aristotelian logic", "subclass of", "term logic" ]
In philosophy, term logic, also known as traditional logic, syllogistic logic or Aristotelian logic, is a loose name for an approach to formal logic that began with Aristotle and was developed further in ancient history mostly by his followers, the Peripatetics. It was revived after the third century CE by Porphyry's Isagoge. Term logic revived in medieval times, first in Islamic logic by Alpharabius in the tenth century, and later in Christian Europe in the twelfth century with the advent of new logic, remaining dominant until the advent of predicate logic in the late nineteenth century. However, even if eclipsed by newer logical systems, term logic still plays a significant role in the study of logic. Rather than radically breaking with term logic, modern logics typically expand it.
4
[ "Aristotelian physics", "instance of", "superseded scientific theory" ]
Methods "Nature is everywhere the cause of order." While consistent with common human experience, Aristotle's principles were not based on controlled, quantitative experiments, so they do not describe our universe in the precise, quantitative way now expected of science. Contemporaries of Aristotle like Aristarchus rejected these principles in favor of heliocentrism, but their ideas were not widely accepted. Aristotle's principles were difficult to disprove merely through casual everyday observation, but later development of the scientific method challenged his views with experiments and careful measurement, using increasingly advanced technology such as the telescope and vacuum pump.Life and death of Aristotelian physics The reign of Aristotelian physics, the earliest known speculative theory of physics, lasted almost two millennia. After the work of many pioneers such as Copernicus, Tycho Brahe, Galileo, Kepler, Descartes and Newton, it became generally accepted that Aristotelian physics was neither correct nor viable. Despite this, it survived as a scholastic pursuit well into the seventeenth century, until universities amended their curricula. In Europe, Aristotle's theory was first convincingly discredited by Galileo's studies. Using a telescope, Galileo observed that the Moon was not entirely smooth, but had craters and mountains, contradicting the Aristotelian idea of the incorruptibly perfect smooth Moon. Galileo also criticized this notion theoretically; a perfectly smooth Moon would reflect light unevenly like a shiny billiard ball, so that the edges of the moon's disk would have a different brightness than the point where a tangent plane reflects sunlight directly to the eye. A rough moon reflects in all directions equally, leading to a disk of approximately equal brightness which is what is observed. Galileo also observed that Jupiter has moons – i.e. 
objects revolving around a body other than the Earth – and noted the phases of Venus, which demonstrated that Venus (and, by implication, Mercury) traveled around the Sun, not the Earth. According to legend, Galileo dropped balls of various densities from the Tower of Pisa and found that lighter and heavier ones fell at almost the same speed. His experiments actually took place using balls rolling down inclined planes, a form of falling sufficiently slow to be measured without advanced instruments. In a relatively dense medium such as water, a heavier body falls faster than a lighter one. This led Aristotle to speculate that the rate of falling is proportional to the weight and inversely proportional to the density of the medium. From his experience with objects falling in water, he concluded that water is approximately ten times denser than air. By weighing a volume of compressed air, Galileo showed that this overestimates the density of air by a factor of forty. From his experiments with inclined planes, he concluded that if friction is neglected, all bodies fall at the same rate (which is also not exactly true: it holds only when both friction and the density of the medium relative to that of the bodies are negligible. Aristotle had correctly noticed that the medium's density is a factor, but focused on the bodies' weight rather than their density; Galileo neglected the medium's density altogether, which led him to the correct conclusion for a vacuum). Galileo also advanced a theoretical argument to support his conclusion. He asked if two bodies of different weights and different rates of fall are tied by a string, does the combined system fall faster because it is now more massive, or does the lighter body in its slower fall hold back the heavier body? The only convincing answer is neither: all the systems fall at the same rate.Followers of Aristotle were aware that the motion of falling bodies was not uniform, but picked up speed with time. 
Since time is an abstract quantity, the peripatetics postulated that the speed was proportional to the distance. Galileo established experimentally that the speed is proportional to the time, but he also gave a theoretical argument that the speed could not possibly be proportional to the distance. In modern terms, if the rate of fall is proportional to the distance, the differential expression for the distance y travelled after time t is dy/dt = ky, with solution y(t) = y₀e^{kt}; a body starting from rest (y = 0) would therefore never begin to move at all, so the speed cannot be proportional to the distance.
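In modern terms, the peripatetic hypothesis reads dy/dt = ky, while Galileo's law makes speed grow with time. A minimal Euler-integration sketch (constants and step sizes chosen purely for illustration) contrasts the two: under the first hypothesis a body starting at rest never moves, while the second reproduces the familiar y = gt²/2.

```python
# Euler integration contrasting the two hypotheses about free fall.
k, g, dt, steps = 1.0, 9.8, 0.001, 1000  # illustrative constants; total time T = 1 s

# Peripatetic hypothesis: speed proportional to distance, dy/dt = k*y.
y = 0.0  # the body starts at rest, at distance 0
for _ in range(steps):
    y += k * y * dt  # y stays exactly 0: the body never begins to fall

# Galileo: speed proportional to time, dy/dt = g*t, i.e. y = g*t**2/2.
y2, t = 0.0, 0.0
for _ in range(steps):
    y2 += g * t * dt
    t += dt

print(y)   # 0.0
print(y2)  # ≈ 4.895, close to g*T**2/2 = 4.9 for T = 1 s
```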
2
[ "Aristotle's biology", "discoverer or inventor", "Aristotle" ]
20th and 21st century interest Zoologists have frequently mocked Aristotle for errors and unverified secondhand reports. However, modern observation has confirmed one after another of his more surprising claims, including the active camouflage of the octopus and the ability of elephants to snorkel with their trunks while swimming.Aristotle remains largely unknown to modern scientists, though zoologists are perhaps most likely to mention him as "the father of biology"; the MarineBio Conservation Society notes that he identified "crustaceans, echinoderms, mollusks, and fish", that cetaceans are mammals, and that marine vertebrates could be either oviparous or viviparous, so he "is often referred to as the father of marine biology". The evolutionary zoologist Armand Leroi has taken an interest in Aristotle's biology. The concept of homology began with Aristotle, and the evolutionary developmental biologist Lewis I. Held commented that "The deep thinker who would be most amused by ... deep homologies is Aristotle, who was fascinated by the natural world but bewildered by its inner workings."
0
[ "South Pole", "continent", "Antarctica" ]
The South Pole, also known as the Geographic South Pole, Terrestrial South Pole or 90th Parallel South, is the southernmost point on Earth and lies antipodally on the opposite side of Earth from the North Pole, at a distance of 12,430 miles (20,004 km) in all directions. It is one of the two points where Earth's axis of rotation intersects its surface. Situated on the continent of Antarctica, it is the site of the United States Amundsen–Scott South Pole Station, which was established in 1956 and has been permanently staffed since that year. The Geographic South Pole is distinct from the South Magnetic Pole, the position of which is defined based on Earth's magnetic field. The South Pole is the center of the Southern Hemisphere.
0
[ "South Pole", "opposite of", "North Pole" ]
The South Pole, also known as the Geographic South Pole, Terrestrial South Pole or 90th Parallel South, is the southernmost point on Earth and lies antipodally on the opposite side of Earth from the North Pole, at a distance of 12,430 miles (20,004 km) in all directions. It is one of the two points where Earth's axis of rotation intersects its surface. Situated on the continent of Antarctica, it is the site of the United States Amundsen–Scott South Pole Station, which was established in 1956 and has been permanently staffed since that year. The Geographic South Pole is distinct from the South Magnetic Pole, the position of which is defined based on Earth's magnetic field. The South Pole is the center of the Southern Hemisphere.
2
[ "South Pole", "discoverer or inventor", "Roald Amundsen" ]
Historic monuments Amundsen's Tent The tent was erected by the Norwegian expedition led by Roald Amundsen on its arrival on 14 December 1911. It is currently buried beneath the snow and ice in the vicinity of the Pole. It has been designated a Historic Site or Monument (HSM 80), following a proposal by Norway to the Antarctic Treaty Consultative Meeting. The precise location of the tent is unknown, but based on calculations of the rate of movement of the ice and the accumulation of snow, it is believed, as of 2010, to lie between 1.8 and 2.5 km (1.1 and 1.5 miles) from the Pole at a depth of 17 m (56 ft) below the present surface.
3
[ "South Pole", "instance of", "geographic region" ]
The South Pole, also known as the Geographic South Pole, Terrestrial South Pole or 90th Parallel South, is the southernmost point on Earth and lies antipodally on the opposite side of Earth from the North Pole, at a distance of 12,430 miles (20,004 km) in all directions. It is one of the two points where Earth's axis of rotation intersects its surface. Situated on the continent of Antarctica, it is the site of the United States Amundsen–Scott South Pole Station, which was established in 1956 and has been permanently staffed since that year. The Geographic South Pole is distinct from the South Magnetic Pole, the position of which is defined based on Earth's magnetic field. The South Pole is the center of the Southern Hemisphere.Geography For most purposes, the Geographic South Pole is defined as the southern point of the two points where Earth's axis of rotation intersects its surface (the other being the Geographic North Pole). However, Earth's axis of rotation is actually subject to very small "wobbles" (polar motion), so this definition is not adequate for very precise work. The geographic coordinates of the South Pole are usually given simply as 90°S, since its longitude is geometrically undefined and irrelevant. When a longitude is desired, it may be given as 0°. At the South Pole, all directions face north. For this reason, directions at the Pole are given relative to "grid north", which points northward along the prime meridian. Along tight latitude circles, clockwise is east, and anti-clockwise is west, opposite to the North Pole. The Geographic South Pole is presently located on the continent of Antarctica, although this has not been the case for all of Earth's history because of continental drift. It sits atop a featureless, barren, windswept and icy plateau at an altitude of 2,835 m (9,301 ft) above sea level, and is located about 1,300 km (810 mi) from the nearest open sea at the Bay of Whales. 
The ice is estimated to be about 2,700 m (8,900 ft) thick at the Pole, so the land surface under the ice sheet is actually near sea level.The polar ice sheet is moving at a rate of roughly 10 m (33 ft) per year in a direction between 37° and 40° west of grid north, down towards the Weddell Sea. Therefore, the position of the station and other artificial features relative to the geographic pole gradually shift over time.
4
[ "South Pole", "instance of", "geographical pole" ]
The South Pole, also known as the Geographic South Pole, Terrestrial South Pole or 90th Parallel South, is the southernmost point on Earth and lies antipodally on the opposite side of Earth from the North Pole, at a distance of 12,430 miles (20,004 km) in all directions. It is one of the two points where Earth's axis of rotation intersects its surface. Situated on the continent of Antarctica, it is the site of the United States Amundsen–Scott South Pole Station, which was established in 1956 and has been permanently staffed since that year. The Geographic South Pole is distinct from the South Magnetic Pole, the position of which is defined based on Earth's magnetic field. The South Pole is the center of the Southern Hemisphere.Geography For most purposes, the Geographic South Pole is defined as the southern point of the two points where Earth's axis of rotation intersects its surface (the other being the Geographic North Pole). However, Earth's axis of rotation is actually subject to very small "wobbles" (polar motion), so this definition is not adequate for very precise work. The geographic coordinates of the South Pole are usually given simply as 90°S, since its longitude is geometrically undefined and irrelevant. When a longitude is desired, it may be given as 0°. At the South Pole, all directions face north. For this reason, directions at the Pole are given relative to "grid north", which points northward along the prime meridian. Along tight latitude circles, clockwise is east, and anti-clockwise is west, opposite to the North Pole. The Geographic South Pole is presently located on the continent of Antarctica, although this has not been the case for all of Earth's history because of continental drift. It sits atop a featureless, barren, windswept and icy plateau at an altitude of 2,835 m (9,301 ft) above sea level, and is located about 1,300 km (810 mi) from the nearest open sea at the Bay of Whales. 
The ice is estimated to be about 2,700 m (8,900 ft) thick at the Pole, so the land surface under the ice sheet is actually near sea level.The polar ice sheet is moving at a rate of roughly 10 m (33 ft) per year in a direction between 37° and 40° west of grid north, down towards the Weddell Sea. Therefore, the position of the station and other artificial features relative to the geographic pole gradually shift over time.Climate and day and night During winter (May through August), the South Pole receives no sunlight at all, and is completely dark apart from moonlight. In summer (November through February), the sun is continuously above the horizon and appears to move in a counter-clockwise circle. However, it is always low in the sky, reaching a maximum of 23.5° around the December solstice because of the 23.5° tilt of the earth's axis. Much of the sunlight that does reach the surface is reflected by the white snow. This lack of warmth from the sun, combined with the high altitude (about 2,800 metres (9,200 ft)), means that the South Pole has one of the coldest climates on Earth (though it is not quite the coldest; that record goes to the region in the vicinity of the Vostok Station, also in Antarctica, which lies at a higher elevation).The South Pole is at an altitude of 9,200 feet (2,800 m) but feels like 11,000 feet (3,400 m). Centrifugal force from the spin of the planet pulls the atmosphere toward the equator. The South Pole is colder than the North Pole primarily because of the elevation difference and for being in the middle of a continent. The North Pole is a few feet from sea level in the middle of an ocean. In midsummer, as the sun reaches its maximum elevation of about 23.5 degrees, high temperatures at the South Pole in January average at −25.9 °C (−15 °F). As the six-month "day" wears on and the sun gets lower, temperatures drop as well: they reach −55 °C (−67 °F) around sunset (late March) and sunrise (late September). 
In midwinter, the average temperature remains steady at around −60 °C (−76 °F). The highest temperature ever recorded at the Amundsen–Scott South Pole Station was −12.3 °C (9.9 °F) on Christmas Day, 2011, and the lowest was −82.8 °C (−117.0 °F) on 23 June 1982 (for comparison, the lowest temperature directly recorded anywhere on earth was −89.2 °C (−128.6 °F) at Vostok Station on 21 July 1983, though −93.2 °C (−135.8 °F) was measured indirectly by satellite in East Antarctica between Dome A and Dome F in August 2010). Mean annual temperature at the South Pole is –49.5 °C (–57.1 °F).The South Pole has an ice cap climate (Köppen climate classification EF). It resembles a desert, receiving very little precipitation. Air humidity is near zero. However, high winds can cause the blowing of snowfall, and the accumulation of snow amounts to about 7 cm (2.8 in) per year. The former dome seen in pictures of the Amundsen–Scott station is partially buried due to snow storms, and the entrance to the dome had to be regularly bulldozed to uncover it. More recent buildings are raised on stilts so that the snow does not build up against their sides.
5
[ "South Pole", "different from", "south geomagnetic pole" ]
The South Pole, also known as the Geographic South Pole, Terrestrial South Pole or 90th Parallel South, is the southernmost point on Earth and lies antipodally on the opposite side of Earth from the North Pole, at a distance of 12,430 miles (20,004 km) in all directions. It is one of the two points where Earth's axis of rotation intersects its surface. Situated on the continent of Antarctica, it is the site of the United States Amundsen–Scott South Pole Station, which was established in 1956 and has been permanently staffed since that year. The Geographic South Pole is distinct from the South Magnetic Pole, the position of which is defined based on Earth's magnetic field. The South Pole is the center of the Southern Hemisphere.
8
[ "South Pole", "different from", "South Magnetic Pole" ]
The South Pole, also known as the Geographic South Pole, Terrestrial South Pole or 90th Parallel South, is the southernmost point on Earth and lies antipodally on the opposite side of Earth from the North Pole, at a distance of 12,430 miles (20,004 km) in all directions. It is one of the two points where Earth's axis of rotation intersects its surface. Situated on the continent of Antarctica, it is the site of the United States Amundsen–Scott South Pole Station, which was established in 1956 and has been permanently staffed since that year. The Geographic South Pole is distinct from the South Magnetic Pole, the position of which is defined based on Earth's magnetic field. The South Pole is the center of the Southern Hemisphere.
9
[ "South Pole", "located in the administrative territorial entity", "Antarctic Treaty area" ]
The South Pole, also known as the Geographic South Pole, Terrestrial South Pole or 90th Parallel South, is the southernmost point on Earth and lies antipodally on the opposite side of Earth from the North Pole, at a distance of 12,430 miles (20,004 km) in all directions. It is one of the two points where Earth's axis of rotation intersects its surface. Situated on the continent of Antarctica, it is the site of the United States Amundsen–Scott South Pole Station, which was established in 1956 and has been permanently staffed since that year. The Geographic South Pole is distinct from the South Magnetic Pole, the position of which is defined based on Earth's magnetic field. The South Pole is the center of the Southern Hemisphere.
14
[ "Calculus", "discoverer or inventor", "Isaac Newton" ]
Calculus is the mathematical study of continuous change, in the same way that geometry is the study of shape, and algebra is the study of generalizations of arithmetic operations. It has two major branches, differential calculus and integral calculus; the former concerns instantaneous rates of change, and the slopes of curves, while the latter concerns accumulation of quantities, and areas under or between curves. These two branches are related to each other by the fundamental theorem of calculus, and they make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit.Infinitesimal calculus was developed independently in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Later work, including codifying the idea of limits, put these developments on a more solid conceptual footing. Today, calculus has widespread uses in science, engineering, and social science.
0
[ "Calculus", "has part(s)", "integral calculus" ]
Calculus is the mathematical study of continuous change, in the same way that geometry is the study of shape, and algebra is the study of generalizations of arithmetic operations. It has two major branches, differential calculus and integral calculus; the former concerns instantaneous rates of change, and the slopes of curves, while the latter concerns accumulation of quantities, and areas under or between curves. These two branches are related to each other by the fundamental theorem of calculus, and they make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit.Infinitesimal calculus was developed independently in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Later work, including codifying the idea of limits, put these developments on a more solid conceptual footing. Today, calculus has widespread uses in science, engineering, and social science.
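The fundamental theorem's link between the two branches can be checked numerically. A minimal sketch, with f(x) = x² chosen purely for illustration: accumulate the area under f by a midpoint rule, then differentiate that accumulation by a central difference, and verify it recovers f itself.

```python
# Numerically check the fundamental theorem of calculus for f(x) = x**2:
# the rate of change of F(x) = integral of f from 0 to x should equal f(x).
def f(x):
    return x * x

def F(x, n=100_000):
    # accumulate area under f from 0 to x via the midpoint rule
    h = x / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

x, h = 2.0, 1e-4
derivative_of_F = (F(x + h) - F(x - h)) / (2 * h)  # central difference
print(abs(derivative_of_F - f(x)) < 1e-3)  # True: F'(x) ≈ f(x) = 4
```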
10
[ "Calculus", "has part(s)", "differential calculus" ]
Calculus is the mathematical study of continuous change, in the same way that geometry is the study of shape, and algebra is the study of generalizations of arithmetic operations. It has two major branches, differential calculus and integral calculus; the former concerns instantaneous rates of change, and the slopes of curves, while the latter concerns accumulation of quantities, and areas under or between curves. These two branches are related to each other by the fundamental theorem of calculus, and they make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit.Infinitesimal calculus was developed independently in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Later work, including codifying the idea of limits, put these developments on a more solid conceptual footing. Today, calculus has widespread uses in science, engineering, and social science.Principles Limits and infinitesimals Calculus is usually developed by working with very small quantities. Historically, the first method of doing so was by infinitesimals. These are objects which can be treated like real numbers but which are, in some sense, "infinitely small". For example, an infinitesimal number could be greater than 0, but less than any number in the sequence 1, 1/2, 1/3, ... and thus less than any positive real number. From this point of view, calculus is a collection of techniques for manipulating infinitesimals. The symbols dx and dy were taken to be infinitesimal, and the derivative dy/dx was their ratio.The infinitesimal approach fell out of favor in the 19th century because it was difficult to make the notion of an infinitesimal precise. In the late 19th century, infinitesimals were replaced within academia by the epsilon–delta approach to limits. Limits describe the behavior of a function at a certain input in terms of its values at nearby inputs. 
They capture small-scale behavior using the intrinsic structure of the real number system (as a metric space with the least-upper-bound property). In this treatment, calculus is a collection of techniques for manipulating certain limits. Infinitesimals get replaced by sequences of smaller and smaller numbers, and the infinitely small behavior of a function is found by taking the limiting behavior for these sequences. Limits were thought to provide a more rigorous foundation for calculus, and for this reason they became the standard approach during the 20th century. However, the infinitesimal concept was revived in the 20th century with the introduction of non-standard analysis and smooth infinitesimal analysis, which provided solid foundations for the manipulation of infinitesimals.
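The limit treatment can be made concrete: the derivative is the limit of difference quotients as the increment runs through a sequence of smaller and smaller values. A short sketch, with the function f(x) = x² and the point x = 3 chosen purely for illustration:

```python
# The derivative as a limit: difference quotients of f(x) = x**2 at x = 3
# approach f'(3) = 6 as h runs through smaller and smaller values.
def f(x):
    return x * x

quotients = [(f(3 + h) - f(3)) / h for h in (1.0, 0.1, 0.01, 0.001)]
print(quotients)  # ≈ [7.0, 6.1, 6.01, 6.001] — tending to the limit 6
```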
12
[ "Calculus", "discoverer or inventor", "Gottfried Wilhelm Leibniz" ]
Calculus is the mathematical study of continuous change, in the same way that geometry is the study of shape, and algebra is the study of generalizations of arithmetic operations. It has two major branches, differential calculus and integral calculus; the former concerns instantaneous rates of change, and the slopes of curves, while the latter concerns accumulation of quantities, and areas under or between curves. These two branches are related to each other by the fundamental theorem of calculus, and they make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit.Infinitesimal calculus was developed independently in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Later work, including codifying the idea of limits, put these developments on a more solid conceptual footing. Today, calculus has widespread uses in science, engineering, and social science.History Modern calculus was developed in 17th-century Europe by Isaac Newton and Gottfried Wilhelm Leibniz (independently of each other, first publishing around the same time) but elements of it appeared in ancient Greece, then in China and the Middle East, and still later again in medieval Europe and in India.These ideas were arranged into a true calculus of infinitesimals by Gottfried Wilhelm Leibniz, who was originally accused of plagiarism by Newton. He is now regarded as an independent inventor of and contributor to calculus. His contribution was to provide a clear set of rules for working with infinitesimal quantities, allowing the computation of second and higher derivatives, and providing the product rule and chain rule, in their differential and integral forms. Unlike Newton, Leibniz put painstaking effort into his choices of notation.Today, Leibniz and Newton are usually both given credit for independently inventing and developing calculus. 
Newton was the first to apply calculus to general physics and Leibniz developed much of the notation used in calculus today. The basic insights that both Newton and Leibniz provided were the laws of differentiation and integration, emphasizing that differentiation and integration are inverse processes, second and higher derivatives, and the notion of an approximating polynomial series. When Newton and Leibniz first published their results, there was great controversy over which mathematician (and therefore which country) deserved credit. Newton derived his results first (later to be published in his Method of Fluxions), but Leibniz published his "Nova Methodus pro Maximis et Minimis" first. Newton claimed Leibniz stole ideas from his unpublished notes, which Newton had shared with a few members of the Royal Society. This controversy divided English-speaking mathematicians from continental European mathematicians for many years, to the detriment of English mathematics. A careful examination of the papers of Leibniz and Newton shows that they arrived at their results independently, with Leibniz starting first with integration and Newton with differentiation. It is Leibniz, however, who gave the new discipline its name. Newton called his calculus "the science of fluxions", a term that endured in English schools into the 19th century. The first complete treatise on calculus to be written in English and use the Leibniz notation was not published until 1815.Since the time of Leibniz and Newton, many mathematicians have contributed to the continuing development of calculus. 
One of the first and most complete works on both infinitesimal and integral calculus was written in 1748 by Maria Gaetana Agnesi.Significance While many of the ideas of calculus had been developed earlier in Greece, China, India, Iraq, Persia, and Japan, the use of calculus began in Europe, during the 17th century, when Newton and Leibniz built on the work of earlier mathematicians to introduce its basic principles. The Hungarian polymath John von Neumann wrote of this work,
19
[ "Puiseux series", "discoverer or inventor", "Isaac Newton" ]
x^{-2} + 2x^{-1/2} + x^{1/3} + 2x^{11/6} + x^{8/3} + x^{5} + ⋯ = x^{-12/6} + 2x^{-3/6} + x^{2/6} + 2x^{11/6} + x^{16/6} + x^{30/6} + ⋯ is a Puiseux series in the indeterminate x. Puiseux series were first introduced by Isaac Newton in 1676 and rediscovered by Victor Puiseux in 1850.The definition of a Puiseux series includes that the denominators of the exponents must be bounded. So, by reducing exponents to a common denominator n, a Puiseux series becomes a Laurent series in an nth root of the indeterminate. For example, the example above is a Laurent series in x^{1/6}. Because a complex number has n nth roots, a convergent Puiseux series typically defines n functions in a neighborhood of 0. Puiseux's theorem, sometimes also called the Newton–Puiseux theorem, asserts that, given a polynomial equation P(x, y) = 0 with complex coefficients, its solutions in y, viewed as functions of x, may be expanded as Puiseux series in x that are convergent in some neighbourhood of 0. In other words, every branch of an algebraic curve may be locally described by a Puiseux series in x (or in x − x0 when considering branches above a neighborhood of x0 ≠ 0). Using modern terminology, Puiseux's theorem asserts that the set of Puiseux series over an algebraically closed field of characteristic 0 is itself an algebraically closed field, called the field of Puiseux series. It is the algebraic closure of the field of formal Laurent series, which itself is the field of fractions of the ring of formal power series.
0
[ "Puiseux series", "named after", "Victor Puiseux" ]
x^{-2} + 2x^{-1/2} + x^{1/3} + 2x^{11/6} + x^{8/3} + x^{5} + ⋯ = x^{-12/6} + 2x^{-3/6} + x^{2/6} + 2x^{11/6} + x^{16/6} + x^{30/6} + ⋯ is a Puiseux series in the indeterminate x. Puiseux series were first introduced by Isaac Newton in 1676 and rediscovered by Victor Puiseux in 1850.The definition of a Puiseux series includes that the denominators of the exponents must be bounded. So, by reducing exponents to a common denominator n, a Puiseux series becomes a Laurent series in an nth root of the indeterminate. For example, the example above is a Laurent series in x^{1/6}. Because a complex number has n nth roots, a convergent Puiseux series typically defines n functions in a neighborhood of 0. Puiseux's theorem, sometimes also called the Newton–Puiseux theorem, asserts that, given a polynomial equation P(x, y) = 0 with complex coefficients, its solutions in y, viewed as functions of x, may be expanded as Puiseux series in x that are convergent in some neighbourhood of 0. In other words, every branch of an algebraic curve may be locally described by a Puiseux series in x (or in x − x0 when considering branches above a neighborhood of x0 ≠ 0). Using modern terminology, Puiseux's theorem asserts that the set of Puiseux series over an algebraically closed field of characteristic 0 is itself an algebraically closed field, called the field of Puiseux series. 
It is the algebraic closure of the field of formal Laurent series, which itself is the field of fractions of the ring of formal power series.Newton–Puiseux theorem As early as 1671, Isaac Newton implicitly used Puiseux series and proved the following theorem for approximating with series the roots of algebraic equations whose coefficients are functions that are themselves approximated with series or polynomials. For this purpose, he introduced the Newton polygon, which remains a fundamental tool in this context. Newton worked with truncated series, and it is only in 1850 that Victor Puiseux introduced the concept of (non-truncated) Puiseux series and proved the theorem that is now known as Puiseux's theorem or Newton–Puiseux theorem. The theorem asserts that, given an algebraic equation whose coefficients are polynomials or, more generally, Puiseux series over a field of characteristic zero, every solution of the equation can be expressed as a Puiseux series. Moreover, the proof provides an algorithm for computing these Puiseux series, and, when working over the complex numbers, the resulting series are convergent. In modern terminology, the theorem can be restated as: the field of Puiseux series over a field of characteristic zero, and the field of convergent Puiseux series over the complex numbers, are both algebraically closed.
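The exponent reduction described in the passage (rewriting every exponent over a common denominator n, so that the Puiseux series becomes a Laurent series in x^(1/n)) can be sketched numerically; this is an illustrative fragment using the example series from the text, not code from the original articles:

```python
from fractions import Fraction
from math import lcm  # variadic lcm requires Python 3.9+

# Exponents of the example series: x^-2 + 2x^(-1/2) + x^(1/3) + 2x^(11/6) + x^(8/3) + x^5 + ...
exponents = [Fraction(-2), Fraction(-1, 2), Fraction(1, 3),
             Fraction(11, 6), Fraction(8, 3), Fraction(5)]

# Common denominator n: the Puiseux series is a Laurent series in x^(1/n)
n = lcm(*(e.denominator for e in exponents))

# Each exponent rewritten as k/n, matching the rewrite over denominator 6 in the passage
numerators = [int(e * n) for e in exponents]
print(n, numerators)  # 6 [-12, -3, 2, 11, 16, 30]
```

The resulting numerators −12, −3, 2, 11, 16, 30 over the common denominator 6 are exactly those of the rewritten series in the passage.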
1
[ "Newtonian telescope", "subclass of", "reflecting telescope" ]
The Newtonian telescope, also called the Newtonian reflector or just a Newtonian, is a type of reflecting telescope invented by the English scientist Sir Isaac Newton, using a concave primary mirror and a flat diagonal secondary mirror. Newton's first reflecting telescope was completed in 1668 and is the earliest known functional reflecting telescope. The Newtonian telescope's simple design has made it very popular with amateur telescope makers. Description: A Newtonian telescope is composed of a primary mirror or objective, usually parabolic in shape, and a smaller flat secondary mirror. The primary mirror collects light from the region of sky at which the telescope is pointed, while the secondary mirror redirects the light out of the optical axis at a right angle so it can be viewed with an eyepiece.
2
[ "Liquid-mirror telescope", "has quality", "advantage" ]
Moon-based liquid-mirror telescopes: Low-temperature ionic liquids (below 130 kelvins) have been proposed as the fluid base for an extremely large-diameter spinning liquid-mirror telescope to be based on the Moon. Low temperature is advantageous in imaging long-wave infrared light, which is the form of light (extremely red-shifted) that arrives from the most distant parts of the visible universe. Such a liquid base would be covered by a thin metallic film that forms the reflective surface.
3
[ "Liquid-mirror telescope", "has quality", "cost" ]
Advantages and disadvantages: The greatest advantage of a liquid mirror is its low cost, about 1% of that of a conventional telescope mirror. This cuts the cost of the entire telescope by at least 95%. The University of British Columbia's 6-meter Large Zenith Telescope cost about a fiftieth as much as a conventional telescope with a glass mirror. The greatest disadvantage is that the mirror can only be pointed straight up. Research is underway to develop telescopes that can be tilted, but currently, if a liquid mirror were to tilt out of the zenith, it would lose its shape. Therefore, the mirror's view changes as the Earth rotates, and objects cannot be physically tracked. An object can be briefly electronically tracked while in the field of view by shifting electrons across the CCD at the same speed as the image moves; this tactic is called time delay and integration, or drift scanning. Some types of astronomical research are unaffected by these limitations, such as long-term sky surveys and supernova searches. Since the universe is believed to be isotropic and homogeneous (this is called the cosmological principle), the investigation of its structure by cosmologists can also use telescopes with a highly restricted direction of view. Since mercury metal and its vapor are both toxic to humans and animals, its use remains a problem in any telescope where it may affect its users and others in its area. In the Large Zenith Telescope, the mercury mirror and the human operators are housed in separately ventilated rooms. At its location in the Canadian mountains, the ambient temperature is fairly low, which reduces the rate of evaporation of the mercury. The less toxic metal gallium may be used instead of mercury, but it has the disadvantage of high cost. Recently, Canadian researchers have proposed the substitution of magnetically deformable liquid mirrors composed of a suspension of iron and silver nanoparticles in ethylene glycol.
In addition to low toxicity and relatively low cost, such a mirror would have the advantage of being easily and rapidly deformable using variations of magnetic field strength.
5
[ "Liquid-mirror telescope", "discoverer or inventor", "Ernesto Capocci" ]
Liquid-mirror telescopes are telescopes with mirrors made with a reflective liquid. The most common liquid used is mercury, but other liquids will work as well (for example, low-melting point alloys of gallium). The liquid and its container are rotated at a constant speed around a vertical axis, which causes the surface of the liquid to assume a paraboloidal shape. This parabolic reflector can serve as the primary mirror of a reflecting telescope. The rotating liquid assumes the same surface shape regardless of the container's shape; to reduce the amount of liquid metal needed, and thus weight, a rotating mercury mirror uses a container that is as close to the necessary parabolic shape as feasible. Liquid mirrors can be a low-cost alternative to conventional large telescopes. Compared to a solid glass mirror that must be cast, ground, and polished, a rotating liquid-metal mirror is much less expensive to manufacture. Isaac Newton noted that the free surface of a rotating liquid forms a circular paraboloid and can therefore be used as a telescope, but he could not actually build one because he had no way to stabilize the speed of rotation. The concept was further developed by Ernesto Capocci (1798 – 1864) of the Naples Observatory (1850), but it was not until 1872 that Henry Skey of Dunedin, New Zealand constructed the first working laboratory liquid-mirror telescope. Another difficulty is that a liquid-metal mirror can only be used in zenith telescopes, i.e., that look straight up, so it is not suitable for investigations where the telescope must remain pointing at the same location of inertial space (a possible exception to this rule may exist for a liquid-mirror space telescope, where the effect of Earth's gravity is replaced by artificial gravity, perhaps by propelling it gently forward with rockets). 
Only a telescope located at the North Pole or South Pole would offer a relatively static view of the sky, although the freezing point of mercury and the remoteness of the location would need to be considered. A very large radiotelescope already exists at the South Pole, but the same is not the case with the North Pole, as it is located in the Arctic Ocean. The mercury mirror of the Large Zenith Telescope in Canada was the largest liquid-metal mirror ever built. It had a diameter of 6 meters and rotated at a rate of about 8.5 revolutions per minute. It was decommissioned in 2016. This mirror was a test, built for $1 million, but it was not suitable for astronomy because of the test site's weather. As of 2006, plans were being made to build a larger 8-meter liquid-mirror telescope, ALPACA, for astronomical use, and a larger project called LAMA with 66 individual 6.15-meter telescopes, giving a total collecting power equal to that of a 55-meter telescope and the resolving power of a 70-meter scope.
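As a back-of-the-envelope check (my own illustration, not from the source articles), the free surface of a liquid spinning at angular speed ω satisfies z = ω²r²/(2g); comparing with the parabola z = r²/(4f) gives a focal length f = g/(2ω²). The Large Zenith Telescope's quoted rate of about 8.5 revolutions per minute then implies a focal length of roughly 6 m:

```python
import math

def focal_length(rpm, g=9.81):
    """Focal length of the paraboloid formed by a liquid spinning at `rpm`.

    Surface: z = omega^2 r^2 / (2 g); comparing with z = r^2 / (4 f)
    gives f = g / (2 omega^2).
    """
    omega = rpm * 2 * math.pi / 60  # angular speed in rad/s
    return g / (2 * omega ** 2)

print(round(focal_length(8.5), 2))  # ≈ 6.19 m at the Large Zenith Telescope's rate
```

Faster rotation shortens the focal length, which is why the rotation rate must be held very steady.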
8
[ "Newton disc", "discoverer or inventor", "Isaac Newton" ]
The Newton disc, also known as the disappearing colour disc, is a well-known physics experiment with a rotating disc with segments in different colours (usually Newton's primary colours: red, orange, yellow, green, blue, indigo, and violet, or ROYGBIV) appearing as white (or off-white or grey) when spun rapidly about its axis. This type of mixing of light stimuli is called temporal optical mixing, a version of additive-averaging mixing. The concept that human visual perception cannot distinguish details of high-speed movements is popularly known as persistence of vision. The disc is named after Isaac Newton. Although he published a circular diagram with segments for the primary colours that he had discovered, it is uncertain whether he actually ever used a spinning disc to demonstrate the principles of light. Transparent variations for magic lantern projection have been produced.
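Additive-averaging mixing of the seven segments can be imitated with a naive sketch; the sRGB triples below are my own nominal choices (not from the article), and a physically faithful simulation would average intensities in linear light rather than gamma-encoded sRGB:

```python
# Nominal sRGB values for Newton's seven primary colours (illustrative only)
colours = [
    (255, 0, 0),    # red
    (255, 127, 0),  # orange
    (255, 255, 0),  # yellow
    (0, 255, 0),    # green
    (0, 0, 255),    # blue
    (75, 0, 130),   # indigo
    (148, 0, 211),  # violet
]

# Temporal optical mixing approximated as the per-channel average of all segments
mix = tuple(sum(c[i] for c in colours) // len(colours) for i in range(3))
print(mix)  # (141, 91, 85): a desaturated blend rather than any of the inputs
```

Averaging equal-duration segments is the idealization; on a real disc the perceived blend also depends on segment areas and the display or pigment gamut.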
0
[ "Newton disc", "named after", "Isaac Newton" ]
The Newton disc, also known as the disappearing colour disc, is a well-known physics experiment with a rotating disc with segments in different colours (usually Newton's primary colours: red, orange, yellow, green, blue, indigo, and violet, or ROYGBIV) appearing as white (or off-white or grey) when spun rapidly about its axis. This type of mixing of light stimuli is called temporal optical mixing, a version of additive-averaging mixing. The concept that human visual perception cannot distinguish details of high-speed movements is popularly known as persistence of vision. The disc is named after Isaac Newton. Although he published a circular diagram with segments for the primary colours that he had discovered, it is uncertain whether he actually ever used a spinning disc to demonstrate the principles of light. Transparent variations for magic lantern projection have been produced.
1
[ "Newton disc", "color", "Roy G. Biv" ]
The Newton disc, also known as the disappearing colour disc, is a well-known physics experiment with a rotating disc with segments in different colours (usually Newton's primary colours: red, orange, yellow, green, blue, indigo, and violet, or ROYGBIV) appearing as white (or off-white or grey) when spun rapidly about its axis. This type of mixing of light stimuli is called temporal optical mixing, a version of additive-averaging mixing. The concept that human visual perception cannot distinguish details of high-speed movements is popularly known as persistence of vision. The disc is named after Isaac Newton. Although he published a circular diagram with segments for the primary colours that he had discovered, it is uncertain whether he actually ever used a spinning disc to demonstrate the principles of light. Transparent variations for magic lantern projection have been produced.
2
[ "Newton disc", "instance of", "optical toy" ]
The Newton disc, also known as the disappearing colour disc, is a well-known physics experiment with a rotating disc with segments in different colours (usually Newton's primary colours: red, orange, yellow, green, blue, indigo, and violet, or ROYGBIV) appearing as white (or off-white or grey) when spun rapidly about its axis. This type of mixing of light stimuli is called temporal optical mixing, a version of additive-averaging mixing. The concept that human visual perception cannot distinguish details of high-speed movements is popularly known as persistence of vision. The disc is named after Isaac Newton. Although he published a circular diagram with segments for the primary colours that he had discovered, it is uncertain whether he actually ever used a spinning disc to demonstrate the principles of light. Transparent variations for magic lantern projection have been produced.
3
[ "Newton polynomial", "discoverer or inventor", "Isaac Newton" ]
In the mathematical field of numerical analysis, a Newton polynomial, named after its inventor Isaac Newton, is an interpolation polynomial for a given set of data points. The Newton polynomial is sometimes called Newton's divided differences interpolation polynomial because the coefficients of the polynomial are calculated using Newton's divided differences method. Definition: Given a set of k + 1 data points
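Newton's divided differences method, named in the passage, can be sketched as follows (a minimal illustration under my own variable names, not the article's code):

```python
def divided_differences(xs, ys):
    """Coefficients of the Newton form via Newton's divided differences.

    After the loop, coef[j] holds the divided difference f[x_0, ..., x_j].
    """
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        # Update in place from the right so lower-order differences survive
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(coef, xs, x):
    """Evaluate the Newton polynomial at x with Horner-like nesting."""
    result = coef[-1]
    for j in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[j]) + coef[j]
    return result

# Interpolate three samples of f(x) = x^2; the quadratic is recovered exactly
xs, ys = [0.0, 1.0, 3.0], [0.0, 1.0, 9.0]
c = divided_differences(xs, ys)
print(newton_eval(c, xs, 2.0))  # 4.0
```

A useful property of this form is that appending a new data point only adds one new coefficient; the existing ones are unchanged.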
0
[ "Newton polynomial", "named after", "Isaac Newton" ]
In the mathematical field of numerical analysis, a Newton polynomial, named after its inventor Isaac Newton, is an interpolation polynomial for a given set of data points. The Newton polynomial is sometimes called Newton's divided differences interpolation polynomial because the coefficients of the polynomial are calculated using Newton's divided differences method.
1
[ "Newtonianism", "discoverer or inventor", "Isaac Newton" ]
Background: Newton's Principia Mathematica, published by the Royal Society in 1687 but not widely available in English until after his death, is the text generally cited as revolutionary or otherwise radical in the development of science. The three books of Principia, considered a seminal text in mathematics and physics, are notable for their rejection of hypotheses in favor of inductive and deductive reasoning based on a set of definitions and axioms. This method may be contrasted with the Cartesian method of deduction based on sequential logical reasoning, and showed the efficacy of applying mathematical analysis as a means of making discoveries about the natural world. Newton's other seminal work was Opticks, printed in 1704 in Philosophical Transactions of the Royal Society, of which he became president in 1703. The treatise, which features his now famous work on the composition and dispersion of sunlight, is often cited as an example of how to analyze difficult questions via quantitative experimentation. Even so, the work was not considered revolutionary in Newton's time. One hundred years later, however, Thomas Young would describe Newton's observations in Opticks as "yet unrivalled... they only rise in our estimation as we compare them with later attempts to improve on them."
0
[ "Special relativity", "instance of", "scientific theory" ]
In physics, the special theory of relativity, or special relativity for short, is a scientific theory of the relationship between space and time. In Albert Einstein's original treatment, the theory is based on two postulates: The laws of physics are invariant (identical) in all inertial frames of reference (that is, frames of reference with no acceleration). The speed of light in vacuum is the same for all observers, regardless of the motion of light source or observer. Lack of an absolute reference frame: The principle of relativity, which states that physical laws have the same form in each inertial reference frame, dates back to Galileo, and was incorporated into Newtonian physics. However, in the late 19th century, the existence of electromagnetic waves led some physicists to suggest that the universe was filled with a substance they called "aether", which, they postulated, would act as the medium through which these waves, or vibrations, propagated (in many respects similar to the way sound propagates through air). The aether was thought to be an absolute reference frame against which all speeds could be measured, and could be considered fixed and motionless relative to Earth or some other fixed reference point. The aether was supposed to be sufficiently elastic to support electromagnetic waves, while those waves could interact with matter, yet offer no resistance to bodies passing through it (its one property was that it allowed electromagnetic waves to propagate). The results of various experiments, including the Michelson–Morley experiment in 1887 (subsequently verified with more accurate and innovative experiments), led to the theory of special relativity, by showing that the aether did not exist. Einstein's solution was to discard the notion of an aether and the absolute state of rest. In relativity, any reference frame moving with uniform motion will observe the same laws of physics.
In particular, the speed of light in vacuum is always measured to be c, even when measured by multiple systems that are moving at different (but constant) velocities.
1
[ "Special relativity", "instance of", "physical law" ]
In physics, the special theory of relativity, or special relativity for short, is a scientific theory of the relationship between space and time. In Albert Einstein's original treatment, the theory is based on two postulates: The laws of physics are invariant (identical) in all inertial frames of reference (that is, frames of reference with no acceleration). The speed of light in vacuum is the same for all observers, regardless of the motion of light source or observer. Special principle of relativity: If a system of coordinates K is chosen so that, in relation to it, physical laws hold good in their simplest form, the same laws hold good in relation to any other system of coordinates K′ moving in uniform translation relatively to K. Henri Poincaré provided the mathematical framework for relativity theory by proving that Lorentz transformations are a subset of his Poincaré group of symmetry transformations. Einstein later derived these transformations from his axioms. Many of Einstein's papers present derivations of the Lorentz transformation based upon these two principles.
2
[ "Special relativity", "described by source", "On the Electrodynamics of Moving Bodies" ]
In physics, the special theory of relativity, or special relativity for short, is a scientific theory of the relationship between space and time. In Albert Einstein's original treatment, the theory is based on two postulates: The laws of physics are invariant (identical) in all inertial frames of reference (that is, frames of reference with no acceleration). The speed of light in vacuum is the same for all observers, regardless of the motion of light source or observer. Origins and significance: Special relativity was originally proposed by Albert Einstein in a paper published on 26 September 1905 titled "On the Electrodynamics of Moving Bodies". The incompatibility of Newtonian mechanics with Maxwell's equations of electromagnetism and, experimentally, the Michelson–Morley null result (and subsequent similar experiments) demonstrated that the historically hypothesized luminiferous aether did not exist. This led to Einstein's development of special relativity, which corrects mechanics to handle situations involving all motions and especially those at a speed close to that of light (known as relativistic velocities). Today, special relativity is proven to be the most accurate model of motion at any speed when gravitational and quantum effects are negligible. Even so, the Newtonian model is still valid as a simple and accurate approximation at low velocities (relative to the speed of light), for example, everyday motions on Earth. Special relativity has a wide range of consequences that have been experimentally verified. They include the relativity of simultaneity, length contraction, time dilation, the relativistic velocity addition formula, the relativistic Doppler effect, relativistic mass, a universal speed limit, mass–energy equivalence, the speed of causality and the Thomas precession. It has, for example, replaced the conventional notion of an absolute universal time with the notion of a time that is dependent on reference frame and spatial position.
Rather than an invariant time interval between two events, there is an invariant spacetime interval. Combined with other laws of physics, the two postulates of special relativity predict the equivalence of mass and energy, as expressed in the mass–energy equivalence formula {\displaystyle E=mc^{2}}, where {\displaystyle c} is the speed of light in vacuum. It also explains how the phenomena of electricity and magnetism are related. A defining feature of special relativity is the replacement of the Galilean transformations of Newtonian mechanics with the Lorentz transformations. Time and space cannot be defined separately from each other (as was previously thought to be the case). Rather, space and time are interwoven into a single continuum known as "spacetime". Events that occur at the same time for one observer can occur at different times for another. Until several years later when Einstein developed general relativity, which introduced a curved spacetime to incorporate gravity, the phrase "special relativity" was not used. A translation sometimes used is "restricted relativity"; "special" really means "special case". Some of the work of Albert Einstein in special relativity is built on the earlier work by Hendrik Lorentz and Henri Poincaré. The theory became essentially complete in 1907. The theory is "special" in that it only applies in the special case where the spacetime is "flat", that is, where the curvature of spacetime (a consequence of the energy–momentum tensor and representing gravity) is negligible. In order to correctly accommodate gravity, Einstein formulated general relativity in 1915.
Special relativity, contrary to some historical descriptions, does accommodate accelerations as well as accelerating frames of reference. Just as Galilean relativity is now accepted to be an approximation of special relativity that is valid for low speeds, special relativity is considered an approximation of general relativity that is valid for weak gravitational fields, that is, at a sufficiently small scale (e.g., when tidal forces are negligible) and in conditions of free fall. General relativity, however, incorporates non-Euclidean geometry in order to represent gravitational effects as the geometric curvature of spacetime. Special relativity is restricted to the flat spacetime known as Minkowski space. As long as the universe can be modeled as a pseudo-Riemannian manifold, a Lorentz-invariant frame that abides by special relativity can be defined for a sufficiently small neighborhood of each point in this curved spacetime. Galileo Galilei had already postulated that there is no absolute and well-defined state of rest (no privileged reference frames), a principle now called Galileo's principle of relativity. Einstein extended this principle so that it accounted for the constant speed of light, a phenomenon that had been observed in the Michelson–Morley experiment. He also postulated that it holds for all the laws of physics, including both the laws of mechanics and of electrodynamics. Lack of an absolute reference frame: The principle of relativity, which states that physical laws have the same form in each inertial reference frame, dates back to Galileo, and was incorporated into Newtonian physics. However, in the late 19th century, the existence of electromagnetic waves led some physicists to suggest that the universe was filled with a substance they called "aether", which, they postulated, would act as the medium through which these waves, or vibrations, propagated (in many respects similar to the way sound propagates through air).
The aether was thought to be an absolute reference frame against which all speeds could be measured, and could be considered fixed and motionless relative to Earth or some other fixed reference point. The aether was supposed to be sufficiently elastic to support electromagnetic waves, while those waves could interact with matter, yet offering no resistance to bodies passing through it (its one property was that it allowed electromagnetic waves to propagate). The results of various experiments, including the Michelson–Morley experiment in 1887 (subsequently verified with more accurate and innovative experiments), led to the theory of special relativity, by showing that the aether did not exist. Einstein's solution was to discard the notion of an aether and the absolute state of rest. In relativity, any reference frame moving with uniform motion will observe the same laws of physics. In particular, the speed of light in vacuum is always measured to be c, even when measured by multiple systems that are moving at different (but constant) velocities.
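The constancy of c described above forces collinear velocities to combine by the relativistic addition law w = (u + v)/(1 + uv/c²) rather than by simple addition; a short numerical sketch (my own illustration, not from the source):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def add_velocities(u, v):
    """Relativistic addition of collinear velocities: w = (u + v) / (1 + u*v/c^2)."""
    return (u + v) / (1 + u * v / C ** 2)

# Two collinear boosts of 0.8c combine to about 0.9756c, never exceeding c
w = add_velocities(0.8 * C, 0.8 * C)
print(w / C)  # ≈ 0.9756
```

At everyday speeds the correction term uv/c² is negligible, which is why Galilean addition works so well in daily life.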
4
[ "Mass–energy equivalence", "discoverer or inventor", "Albert Einstein" ]
In physics, mass–energy equivalence is the relationship between mass and energy in a system's rest frame, where the two quantities differ only by a multiplicative constant and the units of measurement. The principle is described by the physicist Albert Einstein's formula: {\displaystyle E=mc^{2}}. In a reference frame where the system is moving, its relativistic energy and relativistic mass (instead of rest mass) obey the same formula. The formula defines the energy E of a particle in its rest frame as the product of mass (m) with the speed of light squared (c2). Because the speed of light is a large number in everyday units (approximately 300000 km/s or 186000 mi/s), the formula implies that a small amount of "rest mass", measured when the system is at rest, corresponds to an enormous amount of energy, which is independent of the composition of the matter. Rest mass, also called invariant mass, is a fundamental physical property that is independent of momentum, even at extreme speeds approaching the speed of light. Its value is the same in all inertial frames of reference. Massless particles such as photons have zero invariant mass, but massless free particles have both momentum and energy. The equivalence principle implies that when energy is lost in chemical reactions, nuclear reactions, and other energy transformations, the system will also lose a corresponding amount of mass. The energy, and mass, can be released to the environment as radiant energy, such as light, or as thermal energy. The principle is fundamental to many fields of physics, including nuclear and particle physics. Mass–energy equivalence arose from special relativity as a paradox described by the French polymath Henri Poincaré (1854–1912). Einstein was the first to propose the equivalence of mass and energy as a general principle and a consequence of the symmetries of space and time.
The principle first appeared in "Does the inertia of a body depend upon its energy-content?", one of his annus mirabilis papers, published on 21 November 1905. The formula and its relationship to momentum, as described by the energy–momentum relation, were later developed by other physicists. Einstein: mass–energy equivalence. Einstein did not write the exact formula E = mc2 in his 1905 Annus Mirabilis paper "Does the Inertia of an object Depend Upon Its Energy Content?"; rather, the paper states that if a body gives off the energy L in the form of radiation, its mass diminishes by L/c2. This formulation relates only a change Δm in mass to a change L in energy without requiring the absolute relationship. The relationship convinced him that mass and energy can be seen as two names for the same underlying, conserved physical quantity. He stated that the laws of conservation of energy and conservation of mass are "one and the same". Einstein elaborated in a 1946 essay that "the principle of the conservation of mass… proved inadequate in the face of the special theory of relativity. It was therefore merged with the energy conservation principle—just as, about 60 years before, the principle of the conservation of mechanical energy had been combined with the principle of the conservation of heat [thermal energy]. We might say that the principle of the conservation of energy, having previously swallowed up that of the conservation of heat, now proceeded to swallow that of the conservation of mass—and holds the field alone."
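The 1905 statement that a body radiating energy L loses mass L/c² is easy to put numbers on; the 100-watt example below is my own illustration, not Einstein's:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def mass_change(L):
    """Mass lost by a body that radiates energy L (joules): delta_m = L / c^2."""
    return L / C ** 2

# Energy radiated by a 100 W source over one Julian year
L = 100 * 365.25 * 24 * 3600  # about 3.16e9 joules
print(mass_change(L))  # ≈ 3.5e-8 kg, i.e. about 35 micrograms
```

The tiny result for an everyday power level shows why the mass deficit went unnoticed before nuclear-scale energies were available.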
0
[ "Mass–energy equivalence", "instance of", "concept in physics" ]
In physics, mass–energy equivalence is the relationship between mass and energy in a system's rest frame, where the two quantities differ only by a multiplicative constant and the units of measurement. The principle is described by the physicist Albert Einstein's formula: {\displaystyle E=mc^{2}}. In a reference frame where the system is moving, its relativistic energy and relativistic mass (instead of rest mass) obey the same formula. The formula defines the energy E of a particle in its rest frame as the product of mass (m) with the speed of light squared (c2). Because the speed of light is a large number in everyday units (approximately 300000 km/s or 186000 mi/s), the formula implies that a small amount of "rest mass", measured when the system is at rest, corresponds to an enormous amount of energy, which is independent of the composition of the matter. Rest mass, also called invariant mass, is a fundamental physical property that is independent of momentum, even at extreme speeds approaching the speed of light. Its value is the same in all inertial frames of reference. Massless particles such as photons have zero invariant mass, but massless free particles have both momentum and energy. The equivalence principle implies that when energy is lost in chemical reactions, nuclear reactions, and other energy transformations, the system will also lose a corresponding amount of mass. The energy, and mass, can be released to the environment as radiant energy, such as light, or as thermal energy. The principle is fundamental to many fields of physics, including nuclear and particle physics. Mass–energy equivalence arose from special relativity as a paradox described by the French polymath Henri Poincaré (1854–1912). Einstein was the first to propose the equivalence of mass and energy as a general principle and a consequence of the symmetries of space and time.
The principle first appeared in "Does the inertia of a body depend upon its energy-content?", one of his annus mirabilis papers, published on 21 November 1905. The formula and its relationship to momentum, as described by the energy–momentum relation, were later developed by other physicists.
4
[ "Theory of relativity", "discoverer or inventor", "Albert Einstein" ]
Development and acceptance: Albert Einstein published the theory of special relativity in 1905, building on many theoretical results and empirical findings obtained by Albert A. Michelson, Hendrik Lorentz, Henri Poincaré and others. Max Planck, Hermann Minkowski and others did subsequent work. Einstein developed general relativity between 1907 and 1915, with contributions by many others after 1915. The final form of general relativity was published in 1916. The term "theory of relativity" was based on the expression "relative theory" (German: Relativtheorie) used in 1906 by Planck, who emphasized how the theory uses the principle of relativity. In the discussion section of the same paper, Alfred Bucherer used for the first time the expression "theory of relativity" (German: Relativitätstheorie). By the 1920s, the physics community understood and accepted special relativity. It rapidly became a significant and necessary tool for theorists and experimentalists in the new fields of atomic physics, nuclear physics, and quantum mechanics. By comparison, general relativity did not appear to be as useful, beyond making minor corrections to predictions of Newtonian gravitation theory. It seemed to offer little potential for experimental test, as most of its assertions were on an astronomical scale. Its mathematics seemed difficult and fully understandable only by a small number of people. Around 1960, general relativity became central to physics and astronomy. New mathematical techniques to apply to general relativity streamlined calculations and made its concepts more easily visualized. As astronomical phenomena were discovered, such as quasars (1963), the 3-kelvin microwave background radiation (1965), pulsars (1967), and the first black hole candidates (1981), the theory explained their attributes, and measurement of them further confirmed the theory. Special relativity: Special relativity is a theory of the structure of spacetime.
It was introduced in Einstein's 1905 paper "On the Electrodynamics of Moving Bodies" (for the contributions of many other physicists and mathematicians, see History of special relativity). Special relativity is based on two postulates which are contradictory in classical mechanics: The laws of physics are the same for all observers in any inertial frame of reference relative to one another (principle of relativity). The speed of light in a vacuum is the same for all observers, regardless of their relative motion or of the motion of the light source. The resultant theory copes with experiment better than classical mechanics. For instance, postulate 2 explains the results of the Michelson–Morley experiment. Moreover, the theory has many surprising and counterintuitive consequences. Some of these are:
0
[ "Theory of relativity", "has quality", "spacetime" ]
Special relativity Special relativity is a theory of the structure of spacetime. It was introduced in Einstein's 1905 paper "On the Electrodynamics of Moving Bodies" (for the contributions of many other physicists and mathematicians, see History of special relativity). Special relativity is based on two postulates which are contradictory in classical mechanics:The laws of physics are the same for all observers in any inertial frame of reference relative to one another (principle of relativity). The speed of light in a vacuum is the same for all observers, regardless of their relative motion or of the motion of the light source.The resultant theory copes with experiment better than classical mechanics. For instance, postulate 2 explains the results of the Michelson–Morley experiment. Moreover, the theory has many surprising and counterintuitive consequences. Some of these are:
1
[ "Theory of relativity", "has quality", "mass–energy equivalence" ]
Relativity of simultaneity: Two events, simultaneous for one observer, may not be simultaneous for another observer if the observers are in relative motion. Time dilation: Moving clocks are measured to tick more slowly than an observer's "stationary" clock. Length contraction: Objects are measured to be shortened in the direction that they are moving with respect to the observer. Maximum speed is finite: No physical object, message or field line can travel faster than the speed of light in a vacuum. The effect of gravity can only travel through space at the speed of light, not faster or instantaneously. Mass–energy equivalence: E = mc2, energy and mass are equivalent and transmutable. Relativistic mass, idea used by some researchers.The defining feature of special relativity is the replacement of the Galilean transformations of classical mechanics by the Lorentz transformations. (See Maxwell's equations of electromagnetism.)General relativity General relativity is a theory of gravitation developed by Einstein in the years 1907–1915. The development of general relativity began with the equivalence principle, under which the states of accelerated motion and being at rest in a gravitational field (for example, when standing on the surface of the Earth) are physically identical. The upshot of this is that free fall is inertial motion: an object in free fall is falling because that is how objects move when there is no force being exerted on them, instead of this being due to the force of gravity as is the case in classical mechanics. This is incompatible with classical mechanics and special relativity because in those theories inertially moving objects cannot accelerate with respect to each other, but objects in free fall do so. To resolve this difficulty Einstein first proposed that spacetime is curved. 
Einstein discussed his idea with mathematician Marcel Grossmann and they concluded that general relativity could be formulated in the context of Riemannian geometry which had been developed in the 1800s. In 1915, he devised the Einstein field equations which relate the curvature of spacetime with the mass, energy, and any momentum within it. Some of the consequences of general relativity are:
3
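The E = mc2 relation listed in the passage above is easy to check numerically. A minimal sketch (the one-gram example is illustrative, not from the source):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s (exact by definition)

def rest_energy(mass_kg):
    """E = m c^2: rest energy, in joules, equivalent to a mass in kilograms."""
    return mass_kg * C ** 2

# One gram of mass corresponds to roughly 9.0e13 J, about 21 kilotons of TNT,
# which is why mass-energy equivalence underlies nuclear energy release.
energy_j = rest_energy(0.001)
```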
[ "Theory of relativity", "has quality", "time dilation" ]
Special relativity Special relativity is a theory of the structure of spacetime. It was introduced in Einstein's 1905 paper "On the Electrodynamics of Moving Bodies" (for the contributions of many other physicists and mathematicians, see History of special relativity). Special relativity is based on two postulates which are contradictory in classical mechanics:The laws of physics are the same for all observers in any inertial frame of reference relative to one another (principle of relativity). The speed of light in a vacuum is the same for all observers, regardless of their relative motion or of the motion of the light source.The resultant theory copes with experiment better than classical mechanics. For instance, postulate 2 explains the results of the Michelson–Morley experiment. Moreover, the theory has many surprising and counterintuitive consequences. Some of these are:Relativity of simultaneity: Two events, simultaneous for one observer, may not be simultaneous for another observer if the observers are in relative motion. Time dilation: Moving clocks are measured to tick more slowly than an observer's "stationary" clock. Length contraction: Objects are measured to be shortened in the direction that they are moving with respect to the observer. Maximum speed is finite: No physical object, message or field line can travel faster than the speed of light in a vacuum. The effect of gravity can only travel through space at the speed of light, not faster or instantaneously. Mass–energy equivalence: E = mc2, energy and mass are equivalent and transmutable. Relativistic mass, idea used by some researchers.The defining feature of special relativity is the replacement of the Galilean transformations of classical mechanics by the Lorentz transformations. (See Maxwell's equations of electromagnetism.)General relativity General relativity is a theory of gravitation developed by Einstein in the years 1907–1915. 
The development of general relativity began with the equivalence principle, under which the states of accelerated motion and being at rest in a gravitational field (for example, when standing on the surface of the Earth) are physically identical. The upshot of this is that free fall is inertial motion: an object in free fall is falling because that is how objects move when there is no force being exerted on them, instead of this being due to the force of gravity as is the case in classical mechanics. This is incompatible with classical mechanics and special relativity because in those theories inertially moving objects cannot accelerate with respect to each other, but objects in free fall do so. To resolve this difficulty Einstein first proposed that spacetime is curved. Einstein discussed his idea with mathematician Marcel Grossmann and they concluded that general relativity could be formulated in the context of Riemannian geometry which had been developed in the 1800s. In 1915, he devised the Einstein field equations which relate the curvature of spacetime with the mass, energy, and any momentum within it. Some of the consequences of general relativity are:Gravitational time dilation: Clocks run slower in deeper gravitational wells. Precession: Orbits precess in a way unexpected in Newton's theory of gravity. (This has been observed in the orbit of Mercury and in binary pulsars). Light deflection: Rays of light bend in the presence of a gravitational field. Frame-dragging: Rotating masses "drag along" the spacetime around them. Metric expansion of space: The universe is expanding, and the far parts of it are moving away from us faster than the speed of light.Technically, general relativity is a theory of gravitation whose defining feature is its use of the Einstein field equations. The solutions of the field equations are metric tensors which define the topology of the spacetime and how objects move inertially.
4
[ "Theory of relativity", "has part(s)", "general relativity" ]
The laws of physics are the same for all observers in any inertial frame of reference relative to one another (principle of relativity). The speed of light in a vacuum is the same for all observers, regardless of their relative motion or of the motion of the light source.The resultant theory copes with experiment better than classical mechanics. For instance, postulate 2 explains the results of the Michelson–Morley experiment. Moreover, the theory has many surprising and counterintuitive consequences. Some of these are:General relativity General relativity is a theory of gravitation developed by Einstein in the years 1907–1915. The development of general relativity began with the equivalence principle, under which the states of accelerated motion and being at rest in a gravitational field (for example, when standing on the surface of the Earth) are physically identical. The upshot of this is that free fall is inertial motion: an object in free fall is falling because that is how objects move when there is no force being exerted on them, instead of this being due to the force of gravity as is the case in classical mechanics. This is incompatible with classical mechanics and special relativity because in those theories inertially moving objects cannot accelerate with respect to each other, but objects in free fall do so. To resolve this difficulty Einstein first proposed that spacetime is curved. Einstein discussed his idea with mathematician Marcel Grossmann and they concluded that general relativity could be formulated in the context of Riemannian geometry which had been developed in the 1800s. In 1915, he devised the Einstein field equations which relate the curvature of spacetime with the mass, energy, and any momentum within it. Some of the consequences of general relativity are:
5
[ "Theory of relativity", "has quality", "relativity" ]
The laws of physics are the same for all observers in any inertial frame of reference relative to one another (principle of relativity). The speed of light in a vacuum is the same for all observers, regardless of their relative motion or of the motion of the light source.The resultant theory copes with experiment better than classical mechanics. For instance, postulate 2 explains the results of the Michelson–Morley experiment. Moreover, the theory has many surprising and counterintuitive consequences. Some of these are:
6
[ "Theory of relativity", "instance of", "scientific theory" ]
The theory of relativity usually encompasses two interrelated physics theories by Albert Einstein; special relativity and general relativity, proposed and published in 1905 and 1915, respectively. Special relativity applies to all physical phenomena in the absence of gravity. General relativity explains the law of gravitation and its relation to the forces of nature. It applies to the cosmological and astrophysical realm, including astronomy.The theory transformed theoretical physics and astronomy during the 20th century, superseding a 200-year-old theory of mechanics created primarily by Isaac Newton. It introduced concepts including 4-dimensional spacetime as a unified entity of space and time, relativity of simultaneity, kinematic and gravitational time dilation, and length contraction. In the field of physics, relativity improved the science of elementary particles and their fundamental interactions, along with ushering in the nuclear age. With relativity, cosmology and astrophysics predicted extraordinary astronomical phenomena such as neutron stars, black holes, and gravitational waves.Special relativity Special relativity is a theory of the structure of spacetime. It was introduced in Einstein's 1905 paper "On the Electrodynamics of Moving Bodies" (for the contributions of many other physicists and mathematicians, see History of special relativity). Special relativity is based on two postulates which are contradictory in classical mechanics:
10
[ "Theory of relativity", "has part(s)", "special relativity" ]
Special relativity Special relativity is a theory of the structure of spacetime. It was introduced in Einstein's 1905 paper "On the Electrodynamics of Moving Bodies" (for the contributions of many other physicists and mathematicians, see History of special relativity). Special relativity is based on two postulates which are contradictory in classical mechanics:
12
[ "Theory of relativity", "has quality", "length contraction" ]
Relativity of simultaneity: Two events, simultaneous for one observer, may not be simultaneous for another observer if the observers are in relative motion. Time dilation: Moving clocks are measured to tick more slowly than an observer's "stationary" clock. Length contraction: Objects are measured to be shortened in the direction that they are moving with respect to the observer. Maximum speed is finite: No physical object, message or field line can travel faster than the speed of light in a vacuum. The effect of gravity can only travel through space at the speed of light, not faster or instantaneously. Mass–energy equivalence: E = mc2, energy and mass are equivalent and transmutable. Relativistic mass, an idea used by some researchers.The defining feature of special relativity is the replacement of the Galilean transformations of classical mechanics by the Lorentz transformations. (See Maxwell's equations of electromagnetism.)
14
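The passage above notes that special relativity replaces the Galilean transformations with the Lorentz transformations; time dilation falls directly out of them. A minimal sketch (the 0.6c example is illustrative, not from the source):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def lorentz_boost(x, t, v):
    """Map an event (x, t) into a frame moving at velocity v along the x-axis."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / C ** 2)

# Time dilation: a clock at rest at x = 0 ticking off 1 s is measured to
# take gamma seconds in a frame moving at 0.6c (gamma = 1.25 at that speed).
_, t_moving = lorentz_boost(0.0, 1.0, 0.6 * C)
```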
[ "Theory of relativity", "instance of", "branch of physics" ]
The theory of relativity usually encompasses two interrelated physics theories by Albert Einstein; special relativity and general relativity, proposed and published in 1905 and 1915, respectively. Special relativity applies to all physical phenomena in the absence of gravity. General relativity explains the law of gravitation and its relation to the forces of nature. It applies to the cosmological and astrophysical realm, including astronomy.The theory transformed theoretical physics and astronomy during the 20th century, superseding a 200-year-old theory of mechanics created primarily by Isaac Newton. It introduced concepts including 4-dimensional spacetime as a unified entity of space and time, relativity of simultaneity, kinematic and gravitational time dilation, and length contraction. In the field of physics, relativity improved the science of elementary particles and their fundamental interactions, along with ushering in the nuclear age. With relativity, cosmology and astrophysics predicted extraordinary astronomical phenomena such as neutron stars, black holes, and gravitational waves.Special relativity Special relativity is a theory of the structure of spacetime. It was introduced in Einstein's 1905 paper "On the Electrodynamics of Moving Bodies" (for the contributions of many other physicists and mathematicians, see History of special relativity). Special relativity is based on two postulates which are contradictory in classical mechanics:
18
[ "Equivalence principle", "depicts", "inertial mass" ]
In the theory of general relativity, the equivalence principle is the equivalence of gravitational and inertial mass, and Albert Einstein's observation that the gravitational "force" as experienced locally while standing on a massive body (such as the Earth) is the same as the pseudo-force experienced by an observer in a non-inertial (accelerated) frame of reference.In other words, passive gravitational mass must be proportional to inertial mass for all objects. Furthermore, by Newton's third law of motion:Tests of the weak equivalence principle Tests of the weak equivalence principle are those that verify the equivalence of gravitational mass and inertial mass. An obvious test is dropping different objects, ideally in a vacuum environment, e.g., inside the Fallturm Bremen drop tower.
0
[ "Equivalence principle", "depicts", "gravitational mass" ]
In the theory of general relativity, the equivalence principle is the equivalence of gravitational and inertial mass, and Albert Einstein's observation that the gravitational "force" as experienced locally while standing on a massive body (such as the Earth) is the same as the pseudo-force experienced by an observer in a non-inertial (accelerated) frame of reference.
1
[ "Equivalence principle", "discoverer or inventor", "Albert Einstein" ]
In the theory of general relativity, the equivalence principle is the equivalence of gravitational and inertial mass, and Albert Einstein's observation that the gravitational "force" as experienced locally while standing on a massive body (such as the Earth) is the same as the pseudo-force experienced by an observer in a non-inertial (accelerated) frame of reference.Einstein's statement of the equality of inertial and gravitational mass A little reflection will show that the law of the equality of the inertial and gravitational mass is equivalent to the assertion that the acceleration imparted to a body by a gravitational field is independent of the nature of the body. For Newton's equation of motion in a gravitational field, written out in full, it is: (inertial mass) · (acceleration) = (intensity of the gravitational field) · (gravitational mass). If two stones were placed in any part of the world near each other, and beyond the sphere of influence of a third cognate body, these stones, like two magnetic needles, would come together in the intermediate point, each approaching the other by a space proportional to the comparative mass of the other. If the moon and earth were not retained in their orbits by their animal force or some other equivalent, the earth would mount to the moon by a fifty-fourth part of their distance, and the moon fall towards the earth through the other fifty-three parts, and they would there meet, assuming, however, that the substance of both is of the same density. The 1/54 ratio is Kepler's estimate of the Moon–Earth mass ratio, based on their diameters. The accuracy of his statement can be deduced by using Newton's inertia law F = ma and Galileo's gravitational observation that distance D = (1/2)at². Setting these accelerations equal for a mass is the equivalence principle. Noting the time to collision for each mass is the same gives Kepler's statement that Dmoon/DEarth = MEarth/Mmoon, without knowing the time to collision or how or if the acceleration force from gravity is a function of distance.
Newton's gravitational theory simplified and formalized Galileo's and Kepler's ideas by recognizing Kepler's "animal force or some other equivalent" beyond gravity and inertia were not needed, deducing from Kepler's planetary laws how gravity reduces with distance. The equivalence principle was properly introduced by Albert Einstein in 1907, when he observed that the acceleration of bodies towards the center of the Earth at a rate of 1g (g = 9.81 m/s2 being a standard reference of gravitational acceleration at the Earth's surface) is equivalent to the acceleration of an inertially moving body that would be observed on a rocket in free space being accelerated at a rate of 1g. Einstein stated it thus:We arrive at a very satisfactory interpretation of this law of experience, if we assume that the systems K and K' are physically exactly equivalent, that is, if we assume that we may just as well regard the system K as being in a space free from gravitational fields, if we then regard K as uniformly accelerated. This assumption of exact physical equivalence makes it impossible for us to speak of the absolute acceleration of the system of reference, just as the usual theory of relativity forbids us to talk of the absolute velocity of a system; and it makes the equal falling of all bodies in a gravitational field seem a matter of course. This observation was the start of a process that culminated in general relativity. Einstein suggested that it should be elevated to the status of a general principle, which he called the "principle of equivalence" when constructing his theory of relativity:As long as we restrict ourselves to purely mechanical processes in the realm where Newton's mechanics holds sway, we are certain of the equivalence of the systems K and K'. 
But this view of ours will not have any deeper significance unless the systems K and K' are equivalent with respect to all physical processes, that is, unless the laws of nature with respect to K are in entire agreement with those with respect to K'. By assuming this to be so, we arrive at a principle which, if it is really true, has great heuristic importance. For by theoretical consideration of processes which take place relatively to a system of reference with uniform acceleration, we obtain information as to the career of processes in a homogeneous gravitational field. Einstein combined (postulated) the equivalence principle with special relativity to predict that clocks run at different rates in a gravitational potential, and light rays bend in a gravitational field, even before he developed the concept of curved spacetime. So the original equivalence principle, as described by Einstein, concluded that free-fall and inertial motion were physically equivalent. This form of the equivalence principle can be stated as follows. An observer in a windowless room cannot distinguish between being on the surface of the Earth, and being in a spaceship in deep space accelerating at 1g. This is not strictly true, because massive bodies give rise to tidal effects (caused by variations in the strength and direction of the gravitational field) which are absent from an accelerating spaceship in deep space. The room, therefore, should be small enough that tidal effects can be neglected. Although the equivalence principle guided the development of general relativity, it is not a founding principle of relativity but rather a simple consequence of the geometrical nature of the theory. In general relativity, objects in free-fall follow geodesics of spacetime, and what we perceive as the force of gravity is instead a result of our being unable to follow those geodesics of spacetime, because the mechanical resistance of Earth's matter or surface prevents us from doing so. 
Since Einstein developed general relativity, there was a need to develop a framework to test the theory against other possible theories of gravity compatible with special relativity. This was developed by Robert Dicke as part of his program to test general relativity. Two new principles were suggested, the so-called Einstein equivalence principle and the strong equivalence principle, each of which assumes the weak equivalence principle as a starting point. They only differ in whether or not they apply to gravitational experiments. Another clarification needed is that the equivalence principle assumes a constant acceleration of 1g without considering the mechanics of generating 1g. If we do consider the mechanics of it, then we must assume the aforementioned windowless room has a fixed mass. Accelerating it at 1g means there is a constant force being applied, which equals mg, where m is the mass of the windowless room along with its contents (including the observer). Now, if the observer jumps inside the room, an object lying freely on the floor will decrease in weight momentarily because the acceleration is going to decrease momentarily due to the observer pushing back against the floor in order to jump. The object will then gain weight while the observer is in the air and the resulting decreased mass of the windowless room allows greater acceleration; it will lose weight again when the observer lands and pushes once more against the floor; and it will finally return to its initial weight afterwards. To make all these effects equal those we would measure on a planet producing 1g, the windowless room must be assumed to have the same mass as that planet. Additionally, the windowless room must not cause its own gravity, otherwise the scenario changes even further. These are technicalities, clearly, but practical ones if we wish the experiment to demonstrate more or less precisely the equivalence of 1g gravity and 1g acceleration.
2
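The Kepler arithmetic in the passage above (equal fall times plus equal and opposite forces give Dmoon/DEarth = MEarth/Mmoon) can be sketched numerically. The helper below and the 53:1 figures are illustrative, not from the source:

```python
def fall_distances(m_a, m_b, separation):
    """Distances two mutually attracting bodies fall before meeting.

    By Newton's third law, m_a * a_a = m_b * a_b, and with D = (1/2) a t^2
    over a common time to collision, D_a / D_b = m_b / m_a regardless of
    how the attractive force varies with distance.
    """
    d_a = separation * m_b / (m_a + m_b)
    d_b = separation * m_a / (m_a + m_b)
    return d_a, d_b

# Kepler's 1/54 estimate: with an Earth/Moon mass ratio of 53:1, the Earth
# covers 1 part in 54 of the separation and the Moon the other 53 parts.
d_earth, d_moon = fall_distances(53.0, 1.0, 54.0)
```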
[ "Equivalence principle", "facet of", "general relativity" ]
In the theory of general relativity, the equivalence principle is the equivalence of gravitational and inertial mass, and Albert Einstein's observation that the gravitational "force" as experienced locally while standing on a massive body (such as the Earth) is the same as the pseudo-force experienced by an observer in a non-inertial (accelerated) frame of reference.We arrive at a very satisfactory interpretation of this law of experience, if we assume that the systems K and K' are physically exactly equivalent, that is, if we assume that we may just as well regard the system K as being in a space free from gravitational fields, if we then regard K as uniformly accelerated. This assumption of exact physical equivalence makes it impossible for us to speak of the absolute acceleration of the system of reference, just as the usual theory of relativity forbids us to talk of the absolute velocity of a system; and it makes the equal falling of all bodies in a gravitational field seem a matter of course. This observation was the start of a process that culminated in general relativity. Einstein suggested that it should be elevated to the status of a general principle, which he called the "principle of equivalence" when constructing his theory of relativity:It follows that:Tests of the weak equivalence principle Tests of the weak equivalence principle are those that verify the equivalence of gravitational mass and inertial mass. An obvious test is dropping different objects, ideally in a vacuum environment, e.g., inside the Fallturm Bremen drop tower.
3
[ "Equivalence principle", "depicts", "mass" ]
In the theory of general relativity, the equivalence principle is the equivalence of gravitational and inertial mass, and Albert Einstein's observation that the gravitational "force" as experienced locally while standing on a massive body (such as the Earth) is the same as the pseudo-force experienced by an observer in a non-inertial (accelerated) frame of reference.
5
[ "Equivalence principle", "instance of", "physical law" ]
Tests of the weak equivalence principle Tests of the weak equivalence principle are those that verify the equivalence of gravitational mass and inertial mass. An obvious test is dropping different objects, ideally in a vacuum environment, e.g., inside the Fallturm Bremen drop tower.
6
[ "Unified field theory", "discoverer or inventor", "Albert Einstein" ]
In physics, a unified field theory (UFT) is a type of field theory that allows all that is usually thought of as fundamental forces and elementary particles to be written in terms of a pair of physical and virtual fields. According to the modern discoveries in physics, forces are not transmitted directly between interacting objects but instead are described and interpreted by intermediary entities called fields. Classically, however, a duality of the fields is combined into a single physical field. For over a century, unified field theory has remained an open line of research and the term was coined by Albert Einstein, who attempted to unify his general theory of relativity with electromagnetism. The "Theory of Everything" and Grand Unified Theory are closely related to unified field theory, but differ by not requiring the basis of nature to be fields, and often by attempting to explain physical constants of nature. Earlier attempts based on classical physics are described in the article on classical unified field theories. The goal of a unified field theory has led to a great deal of progress for future theoretical physics, and progress continues.
0
[ "Unified field theory", "discoverer or inventor", "James Clerk Maxwell" ]
History Classic theory The first successful classical unified field theory was developed by James Clerk Maxwell. In 1820, Hans Christian Ørsted discovered that electric currents exerted forces on magnets, while in 1831, Michael Faraday made the observation that time-varying magnetic fields could induce electric currents. Until then, electricity and magnetism had been thought of as unrelated phenomena. In 1864, Maxwell published his famous paper on a dynamical theory of the electromagnetic field. This was the first example of a theory that was able to encompass previously separate field theories (namely electricity and magnetism) to provide a unifying theory of electromagnetism. By 1905, Albert Einstein had used the constancy of the speed of light in Maxwell's theory to unify our notions of space and time into an entity we now call spacetime and in 1915 he expanded this theory of special relativity to a description of gravity, general relativity, using a field to describe the curving geometry of four-dimensional spacetime. In the years following the creation of the general theory, a large number of physicists and mathematicians enthusiastically participated in the attempt to unify the then-known fundamental interactions. In view of later developments in this domain, of particular interest are the theories of Hermann Weyl of 1919, who introduced the concept of an (electromagnetic) gauge field in a classical field theory and, two years later, that of Theodor Kaluza, who extended General Relativity to five dimensions. Continuing in this latter direction, Oscar Klein proposed in 1926 that the fourth spatial dimension be curled up into a small, unobserved circle. In Kaluza–Klein theory, the gravitational curvature of the extra spatial direction behaves as an additional force similar to electromagnetism. These and other models of electromagnetism and gravity were pursued by Albert Einstein in his attempts at a classical unified field theory. 
By 1930 Einstein had already considered the Einstein-Maxwell–Dirac System [Dongen]. This system is (heuristically) the super-classical [Varadarajan] limit of (the not mathematically well-defined) quantum electrodynamics. One can extend this system to include the weak and strong nuclear forces to get the Einstein–Yang-Mills–Dirac System. The French physicist Marie-Antoinette Tonnelat published a paper in the early 1940s on the standard commutation relations for the quantized spin-2 field. She continued this work in collaboration with Erwin Schrödinger after World War II. In the 1960s Mendel Sachs proposed a generally covariant field theory that did not require recourse to renormalization or perturbation theory. In 1965, Tonnelat published a book on the state of research on unified field theories.
1
[ "Tokamak", "discoverer or inventor", "Andrei Sakharov" ]
A tokamak (Russian: токамáк) is a device which uses a powerful magnetic field to confine plasma in the shape of a torus. The tokamak is one of several types of magnetic confinement devices being developed to produce controlled thermonuclear fusion power. As of 2016, it was the leading candidate for a practical fusion reactor.Tokamaks were initially conceptualized in the 1950s by Soviet physicists Igor Tamm and Andrei Sakharov, inspired by a letter by Oleg Lavrentiev. The first working tokamak was attributed to the work of Natan Yavlinsky on the T-1 in 1958. It had been demonstrated that a stable plasma equilibrium requires magnetic field lines that wind around the torus in a helix. Devices like the z-pinch and stellarator had attempted this, but demonstrated serious instabilities. It was the development of the concept now known as the safety factor (labelled q in mathematical notation) that guided tokamak development; by arranging the reactor so this critical factor q was always greater than 1, the tokamaks strongly suppressed the instabilities which plagued earlier designs. By the mid-1960s, the tokamak designs began to show greatly improved performance. The initial results were released in 1965, but were ignored; Lyman Spitzer dismissed them out of hand after noting potential problems in their system for measuring temperatures. A second set of results was published in 1968, this time claiming performance far in advance of any other machine. When these were also met skeptically, the Soviets invited a delegation from the United Kingdom to make their own measurements. These confirmed the Soviet results, and their 1969 publication resulted in a stampede of tokamak construction. By the mid-1970s, dozens of tokamaks were in use around the world. By the late 1970s, these machines had reached all of the conditions needed for practical fusion, although not at the same time nor in a single reactor.
With the goal of breakeven (a fusion energy gain factor equal to 1) now in sight, a new series of machines were designed that would run on a fusion fuel of deuterium and tritium. These machines, notably the Joint European Torus (JET) and Tokamak Fusion Test Reactor (TFTR), had the explicit goal of reaching breakeven. Instead, these machines demonstrated new problems that limited their performance. Solving these would require a much larger and more expensive machine, beyond the abilities of any one country. After an initial agreement between Ronald Reagan and Mikhail Gorbachev in November 1985, the International Thermonuclear Experimental Reactor (ITER) effort emerged and remains the primary international effort to develop practical fusion power. Many smaller designs, and offshoots like the spherical tokamak, continue to be used to investigate performance parameters and other issues. As of 2022, JET remains the record holder for fusion output, with 59 MJ of energy output sustained over a 5-second period.
0
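The safety factor described above has a simple closed form for a large-aspect-ratio tokamak with a circular cross-section: q ≈ r·B_t/(R·B_p), where r and R are the minor and major radii and B_t, B_p the toroidal and poloidal fields. A minimal sketch of the q > 1 stability check; the field and radius values are illustrative assumptions, not figures from the text:

```python
def safety_factor(r, R, B_toroidal, B_poloidal):
    """Approximate safety factor q for a circular, large-aspect-ratio tokamak.

    r: minor radius (m), R: major radius (m),
    B_toroidal / B_poloidal: field strengths (T).
    """
    return (r * B_toroidal) / (R * B_poloidal)

# Illustrative, assumed values (roughly ITER-like in scale):
q = safety_factor(r=2.0, R=6.2, B_toroidal=5.3, B_poloidal=1.0)
print(f"q = {q:.2f}, kink-stable (q > 1): {q > 1}")
```

Designs that keep q above 1 everywhere suppress the kink instabilities that plagued the z-pinch, which is the design principle the passage credits for the tokamak's improved performance.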
[ "Tokamak", "uses", "magnetic confinement fusion" ]
A tokamak (; Russian: токамáк) is a device which uses a powerful magnetic field to confine plasma in the shape of a torus. The tokamak is one of several types of magnetic confinement devices being developed to produce controlled thermonuclear fusion power. As of 2016, it was the leading candidate for a practical fusion reactor.Tokamaks were initially conceptualized in the 1950s by Soviet physicists Igor Tamm and Andrei Sakharov, inspired by a letter by Oleg Lavrentiev. The first working tokamak was attributed to the work of Natan Yavlinsky on the T-1 in 1958. It had been demonstrated that a stable plasma equilibrium requires magnetic field lines that wind around the torus in a helix. Devices like the z-pinch and stellarator had attempted this, but demonstrated serious instabilities. It was the development of the concept now known as the safety factor (labelled q in mathematical notation) that guided tokamak development; by arranging the reactor so this critical factor q was always greater than 1, the tokamaks strongly suppressed the instabilities which plagued earlier designs. By the mid-1960s, the tokamak designs began to show greatly improved performance. The initial results were released in 1965, but were ignored; Lyman Spitzer dismissed them out of hand after noting potential problems in their system for measuring temperatures. A second set of results was published in 1968, this time claiming performance far in advance of any other machine. When these were also met skeptically, the Soviets invited a delegation from the United Kingdom to make their own measurements. These confirmed the Soviet results, and their 1969 publication resulted in a stampede of tokamak construction. By the mid-1970s, dozens of tokamaks were in use around the world. By the late 1970s, these machines had reached all of the conditions needed for practical fusion, although not at the same time nor in a single reactor. 
With the goal of breakeven (a fusion energy gain factor equal to 1) now in sight, a new series of machines was designed that would run on a fusion fuel of deuterium and tritium. These machines, notably the Joint European Torus (JET) and Tokamak Fusion Test Reactor (TFTR), had the explicit goal of reaching breakeven. Instead, these machines demonstrated new problems that limited their performance. Solving these would require a much larger and more expensive machine, beyond the abilities of any one country. After an initial agreement between Ronald Reagan and Mikhail Gorbachev in November 1985, the International Thermonuclear Experimental Reactor (ITER) effort emerged and remains the primary international effort to develop practical fusion power. Many smaller designs, and offshoots like the spherical tokamak, continue to be used to investigate performance parameters and other issues. As of 2022, JET remains the record holder for fusion output, having produced 59 MJ of fusion energy sustained over a roughly five-second period, corresponding to an average fusion power of about 11 MW. Magnetic confinement When heated to fusion temperatures, the electrons in atoms dissociate, resulting in a fluid of nuclei and electrons known as plasma. Unlike electrically neutral atoms, a plasma is electrically conductive, and can, therefore, be manipulated by electrical or magnetic fields.Sakharov's concern about the electrodes led him to consider using magnetic confinement instead of electrostatic. In the case of a magnetic field, the particles will circle around the lines of force. As the particles are moving at high speed, their resulting paths look like a helix. If one arranges a magnetic field so lines of force are parallel and close together, the particles orbiting adjacent lines may collide, and fuse.Such a field can be created in a solenoid, a cylinder with magnets wrapped around the outside. 
The combined fields of the magnets create a set of parallel magnetic lines running down the length of the cylinder. This arrangement prevents the particles from moving sideways to the wall of the cylinder, but it does not prevent them from running out the end. The obvious solution to this problem is to bend the cylinder around into a donut shape, or torus, so that the lines form a series of continual rings. In this arrangement, the particles circle endlessly.Sakharov discussed the concept with Igor Tamm, and by the end of October 1950 the two had written a proposal and sent it to Igor Kurchatov, the director of the atomic bomb project within the USSR, and his deputy, Igor Golovin. However, this initial proposal ignored a fundamental problem; when arranged along a straight solenoid, the external magnets are evenly spaced, but when bent around into a torus, they are closer together on the inside of the ring than the outside. This leads to uneven forces that cause the particles to drift away from their magnetic lines.During visits to the Laboratory of Measuring Instruments of the USSR Academy of Sciences (LIPAN), the Soviet nuclear research centre, Sakharov suggested two possible solutions to this problem. One was to suspend a current-carrying ring in the centre of the torus. The current in the ring would produce a magnetic field that would mix with the one from the magnets on the outside. The resulting field would be twisted into a helix, so that any given particle would find itself repeatedly on the outside, then inside, of the torus. The drifts caused by the uneven fields are in opposite directions on the inside and outside, so over the course of multiple orbits around the long axis of the torus, the opposite drifts would cancel out. 
Alternately, he suggested using an external magnet to induce a current in the plasma itself, instead of a separate metal ring, which would have the same effect.In January 1951, Kurchatov arranged a meeting at LIPAN to consider Sakharov's concepts. They found widespread interest and support, and in February a report on the topic was forwarded to Lavrentiy Beria, who oversaw the atomic efforts in the USSR. For a time, nothing was heard back.
3
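The helical orbit described in the passage has a radius set by the particle's perpendicular momentum and the field strength, the Larmor (gyro) radius r_L = m·v⊥/(|q|·B). A rough sketch for a deuteron; the particle energy and field strength are assumed for illustration only:

```python
import math

M_DEUTERON = 3.344e-27   # deuteron mass, kg
Q_E = 1.602e-19          # elementary charge, C

def gyroradius(mass_kg, v_perp, charge_c, b_tesla):
    """Radius of the circular part of a charged particle's helical orbit
    around a magnetic field line: r_L = m * v_perp / (|q| * B)."""
    return mass_kg * v_perp / (abs(charge_c) * b_tesla)

# Assumed example: a 10 keV deuteron in a 5 T field.
energy_j = 10e3 * Q_E                     # 10 keV in joules
v = math.sqrt(2 * energy_j / M_DEUTERON)  # speed, taking all energy as perpendicular
r_l = gyroradius(M_DEUTERON, v, Q_E, 5.0)
print(f"speed ~ {v:.2e} m/s, gyroradius ~ {r_l * 1000:.1f} mm")
```

The millimetre-scale result shows why a strong field confines ions to tight helices around the field lines while their motion along the lines remains free, which is the leak that bending the solenoid into a torus is meant to close.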
[ "Tokamak", "subclass of", "fusion reactor" ]
A tokamak (; Russian: токамáк) is a device which uses a powerful magnetic field to confine plasma in the shape of a torus. The tokamak is one of several types of magnetic confinement devices being developed to produce controlled thermonuclear fusion power. As of 2016, it was the leading candidate for a practical fusion reactor.Tokamaks were initially conceptualized in the 1950s by Soviet physicists Igor Tamm and Andrei Sakharov, inspired by a letter by Oleg Lavrentiev. The first working tokamak was attributed to the work of Natan Yavlinsky on the T-1 in 1958. It had been demonstrated that a stable plasma equilibrium requires magnetic field lines that wind around the torus in a helix. Devices like the z-pinch and stellarator had attempted this, but demonstrated serious instabilities. It was the development of the concept now known as the safety factor (labelled q in mathematical notation) that guided tokamak development; by arranging the reactor so this critical factor q was always greater than 1, the tokamaks strongly suppressed the instabilities which plagued earlier designs. By the mid-1960s, the tokamak designs began to show greatly improved performance. The initial results were released in 1965, but were ignored; Lyman Spitzer dismissed them out of hand after noting potential problems in their system for measuring temperatures. A second set of results was published in 1968, this time claiming performance far in advance of any other machine. When these were also met skeptically, the Soviets invited a delegation from the United Kingdom to make their own measurements. These confirmed the Soviet results, and their 1969 publication resulted in a stampede of tokamak construction. By the mid-1970s, dozens of tokamaks were in use around the world. By the late 1970s, these machines had reached all of the conditions needed for practical fusion, although not at the same time nor in a single reactor. 
With the goal of breakeven (a fusion energy gain factor equal to 1) now in sight, a new series of machines was designed that would run on a fusion fuel of deuterium and tritium. These machines, notably the Joint European Torus (JET) and Tokamak Fusion Test Reactor (TFTR), had the explicit goal of reaching breakeven. Instead, these machines demonstrated new problems that limited their performance. Solving these would require a much larger and more expensive machine, beyond the abilities of any one country. After an initial agreement between Ronald Reagan and Mikhail Gorbachev in November 1985, the International Thermonuclear Experimental Reactor (ITER) effort emerged and remains the primary international effort to develop practical fusion power. Many smaller designs, and offshoots like the spherical tokamak, continue to be used to investigate performance parameters and other issues. As of 2022, JET remains the record holder for fusion output, having produced 59 MJ of fusion energy sustained over a roughly five-second period, corresponding to an average fusion power of about 11 MW.
9
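Breakeven, defined above as a fusion energy gain factor of 1, is simply fusion power out divided by heating power in, and a sustained-energy record converts to average power by dividing by the pulse length. A small sketch using the 59 MJ over ~5 s record from the text; the heating-power figure is an assumed value for illustration, not a number from the passage:

```python
def gain_factor(fusion_power_w, heating_power_w):
    """Fusion energy gain factor Q; Q = 1 is breakeven."""
    return fusion_power_w / heating_power_w

# JET's 2022 record from the text: 59 MJ sustained over ~5 s.
avg_fusion_power = 59e6 / 5.0                   # ~11.8 MW average
q_gain = gain_factor(avg_fusion_power, 33e6)    # assumed ~33 MW of heating power
print(f"average fusion power ~ {avg_fusion_power / 1e6:.1f} MW, Q ~ {q_gain:.2f}")
```

Under these assumptions Q comes out well below 1, consistent with the passage's point that breakeven was a goal these machines set but did not reach.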
[ "Evolution", "discoverer or inventor", "Charles Darwin" ]
Darwinian revolution The crucial break from the concept of constant typological classes or types in biology came with the theory of evolution through natural selection, which was formulated by Charles Darwin and Alfred Wallace in terms of variable populations. Darwin used the expression "descent with modification" rather than "evolution". Partly influenced by An Essay on the Principle of Population (1798) by Thomas Robert Malthus, Darwin noted that population growth would lead to a "struggle for existence" in which favourable variations prevailed as others perished. In each generation, many offspring fail to survive to an age of reproduction because of limited resources. This could explain the diversity of plants and animals from a common ancestry through the working of natural laws in the same way for all types of organism. Darwin developed his theory of "natural selection" from 1838 onwards and was writing up his "big book" on the subject when Alfred Russel Wallace sent him a version of virtually the same theory in 1858. Their separate papers were presented together at an 1858 meeting of the Linnean Society of London. At the end of 1859, Darwin's publication of his "abstract" as On the Origin of Species explained natural selection in detail and in a way that led to an increasingly wide acceptance of Darwin's concepts of evolution at the expense of alternative theories. Thomas Henry Huxley applied Darwin's ideas to humans, using paleontology and comparative anatomy to provide strong evidence that humans and apes shared a common ancestry. Some were disturbed by this since it implied that humans did not have a special place in the universe.
0
[ "Evolution", "has immediate cause", "natural selection" ]
In biology, evolution is the change in heritable characteristics of biological populations over successive generations. These characteristics are the expressions of genes, which are passed on from parent to offspring during reproduction. Variation tends to exist within any given population as a result of genetic mutation and recombination. Evolution occurs when evolutionary processes such as natural selection (including sexual selection) and genetic drift act on this variation, resulting in certain characteristics becoming more common or more rare within a population. The evolutionary pressures that determine whether a characteristic is common or rare within a population constantly change, resulting in a change in heritable characteristics arising over successive generations. It is this process of evolution that has given rise to biodiversity at every level of biological organisation.The theory of evolution by natural selection was conceived independently by Charles Darwin and Alfred Russel Wallace in the mid-19th century and was set out in detail in Darwin's book On the Origin of Species. Evolution by natural selection is established by observable facts about living organisms: (1) more offspring are often produced than can possibly survive; (2) traits vary among individuals with respect to their morphology, physiology, and behaviour (phenotypic variation); (3) different traits confer different rates of survival and reproduction (differential fitness); and (4) traits can be passed from generation to generation (heritability of fitness). In successive generations, members of a population are therefore more likely to be replaced by the offspring of parents with favourable characteristics. 
In the early 20th century, other competing ideas of evolution such as mutationism and orthogenesis were refuted as the modern synthesis concluded Darwinian evolution acts on Mendelian genetic variation.All life on Earth—including humanity—shares a last universal common ancestor (LUCA), which lived approximately 3.5–3.8 billion years ago. The fossil record includes a progression from early biogenic graphite to microbial mat fossils to fossilised multicellular organisms. Existing patterns of biodiversity have been shaped by repeated formations of new species (speciation), changes within species (anagenesis), and loss of species (extinction) throughout the evolutionary history of life on Earth. Morphological and biochemical traits are more similar among species that share a more recent common ancestor, and these traits can be used to reconstruct phylogenetic trees.Evolutionary biologists have continued to study various aspects of evolution by forming and testing hypotheses as well as constructing theories based on evidence from the field or laboratory and on data generated by the methods of mathematical and theoretical biology. Their discoveries have influenced not just the development of biology but numerous other scientific and industrial fields, including agriculture, medicine, and computer science.
2
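The four observable facts listed in the passage (excess offspring, phenotypic variation, differential fitness, heritability) can be turned directly into a toy simulation: parents are sampled in proportion to fitness, and offspring inherit the parental trait. The moth traits and relative fitness values below are arbitrary assumptions for illustration:

```python
import random

random.seed(1)

def next_generation(population, fitness, size):
    """Sample parents in proportion to fitness (differential reproduction)
    and copy their heritable trait into a fixed-size offspring generation."""
    weights = [fitness[trait] for trait in population]
    return random.choices(population, weights=weights, k=size)

fitness = {"dark": 1.1, "light": 1.0}   # assumed relative fitnesses
pop = ["dark"] * 50 + ["light"] * 50    # start at 50% each

for _ in range(100):
    pop = next_generation(pop, fitness, len(pop))

print("dark-moth frequency after 100 generations:", pop.count("dark") / len(pop))
```

Nothing in the loop "chooses" an outcome; the favourable variant becomes more common simply because its carriers are, on average, sampled more often, which is the passage's point about populations being replaced by offspring of parents with favourable characteristics.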
[ "Evolution", "follows", "Lamarckism" ]
Pre-Darwinian The "New Science" of the 17th century rejected the Aristotelian approach. It sought to explain natural phenomena in terms of physical laws that were the same for all visible things and that did not require the existence of any fixed natural categories or divine cosmic order. However, this new approach was slow to take root in the biological sciences: the last bastion of the concept of fixed natural types. John Ray applied one of the previously more general terms for fixed natural types, "species", to plant and animal types, but he strictly identified each type of living thing as a species and proposed that each species could be defined by the features that perpetuated themselves generation after generation. The biological classification introduced by Carl Linnaeus in 1735 explicitly recognised the hierarchical nature of species relationships, but still viewed species as fixed according to a divine plan.Other naturalists of this time speculated on the evolutionary change of species over time according to natural laws. In 1751, Pierre Louis Maupertuis wrote of natural modifications occurring during reproduction and accumulating over many generations to produce new species. Georges-Louis Leclerc, Comte de Buffon, suggested that species could degenerate into different organisms, and Erasmus Darwin proposed that all warm-blooded animals could have descended from a single microorganism (or "filament"). The first full-fledged evolutionary scheme was Jean-Baptiste Lamarck's "transmutation" theory of 1809, which envisaged spontaneous generation continually producing simple forms of life that developed greater complexity in parallel lineages with an inherent progressive tendency, and postulated that on a local level, these lineages adapted to the environment by inheriting changes caused by their use or disuse in parents. (The latter process was later called Lamarckism.) 
These ideas were condemned by established naturalists as speculation lacking empirical support. In particular, Georges Cuvier insisted that species were unrelated and fixed, their similarities reflecting divine design for functional needs. In the meantime, Ray's ideas of benevolent design had been developed by William Paley into the Natural Theology or Evidences of the Existence and Attributes of the Deity (1802), which proposed complex adaptations as evidence of divine design and which was admired by Charles Darwin.
5
[ "Evolution", "cause", "genetic drift" ]
In biology, evolution is the change in heritable characteristics of biological populations over successive generations. These characteristics are the expressions of genes, which are passed on from parent to offspring during reproduction. Variation tends to exist within any given population as a result of genetic mutation and recombination. Evolution occurs when evolutionary processes such as natural selection (including sexual selection) and genetic drift act on this variation, resulting in certain characteristics becoming more common or more rare within a population. The evolutionary pressures that determine whether a characteristic is common or rare within a population constantly change, resulting in a change in heritable characteristics arising over successive generations. It is this process of evolution that has given rise to biodiversity at every level of biological organisation.The theory of evolution by natural selection was conceived independently by Charles Darwin and Alfred Russel Wallace in the mid-19th century and was set out in detail in Darwin's book On the Origin of Species. Evolution by natural selection is established by observable facts about living organisms: (1) more offspring are often produced than can possibly survive; (2) traits vary among individuals with respect to their morphology, physiology, and behaviour (phenotypic variation); (3) different traits confer different rates of survival and reproduction (differential fitness); and (4) traits can be passed from generation to generation (heritability of fitness). In successive generations, members of a population are therefore more likely to be replaced by the offspring of parents with favourable characteristics. 
In the early 20th century, other competing ideas of evolution such as mutationism and orthogenesis were refuted as the modern synthesis concluded Darwinian evolution acts on Mendelian genetic variation.All life on Earth—including humanity—shares a last universal common ancestor (LUCA), which lived approximately 3.5–3.8 billion years ago. The fossil record includes a progression from early biogenic graphite to microbial mat fossils to fossilised multicellular organisms. Existing patterns of biodiversity have been shaped by repeated formations of new species (speciation), changes within species (anagenesis), and loss of species (extinction) throughout the evolutionary history of life on Earth. Morphological and biochemical traits are more similar among species that share a more recent common ancestor, and these traits can be used to reconstruct phylogenetic trees.Evolutionary biologists have continued to study various aspects of evolution by forming and testing hypotheses as well as constructing theories based on evidence from the field or laboratory and on data generated by the methods of mathematical and theoretical biology. Their discoveries have influenced not just the development of biology but numerous other scientific and industrial fields, including agriculture, medicine, and computer science.Evolutionary processes From a neo-Darwinian perspective, evolution occurs when there are changes in the frequencies of alleles within a population of interbreeding organisms, for example, the allele for black colour in a population of moths becoming more common. Mechanisms that can lead to changes in allele frequencies include natural selection, genetic drift, gene flow and mutation bias.
6
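Genetic drift, which the passage lists alongside natural selection as a mechanism changing allele frequencies, can be sketched as neutral resampling of an allele each generation (a Wright–Fisher step). The population size and starting frequency below are arbitrary assumptions:

```python
import random

random.seed(0)

def drift_generation(freq, pop_size):
    """One neutral Wright–Fisher generation: 2N independent draws from the
    current allele frequency, with no selection acting at all."""
    draws = sum(random.random() < freq for _ in range(2 * pop_size))
    return draws / (2 * pop_size)

freq, generations = 0.5, 200
history = [freq]
for _ in range(generations):
    freq = drift_generation(freq, pop_size=50)
    history.append(freq)

print(f"final frequency after {generations} generations: {freq:.2f}")
```

Because every generation is pure sampling noise, the frequency wanders and, in a small population, eventually fixes at 0 or 1 without any variant being fitter, which is what distinguishes drift from selection.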
[ "Evolution", "instance of", "type of process" ]
In biology, evolution is the change in heritable characteristics of biological populations over successive generations. These characteristics are the expressions of genes, which are passed on from parent to offspring during reproduction. Variation tends to exist within any given population as a result of genetic mutation and recombination. Evolution occurs when evolutionary processes such as natural selection (including sexual selection) and genetic drift act on this variation, resulting in certain characteristics becoming more common or more rare within a population. The evolutionary pressures that determine whether a characteristic is common or rare within a population constantly change, resulting in a change in heritable characteristics arising over successive generations. It is this process of evolution that has given rise to biodiversity at every level of biological organisation.The theory of evolution by natural selection was conceived independently by Charles Darwin and Alfred Russel Wallace in the mid-19th century and was set out in detail in Darwin's book On the Origin of Species. Evolution by natural selection is established by observable facts about living organisms: (1) more offspring are often produced than can possibly survive; (2) traits vary among individuals with respect to their morphology, physiology, and behaviour (phenotypic variation); (3) different traits confer different rates of survival and reproduction (differential fitness); and (4) traits can be passed from generation to generation (heritability of fitness). In successive generations, members of a population are therefore more likely to be replaced by the offspring of parents with favourable characteristics. 
In the early 20th century, other competing ideas of evolution such as mutationism and orthogenesis were refuted as the modern synthesis concluded Darwinian evolution acts on Mendelian genetic variation.All life on Earth—including humanity—shares a last universal common ancestor (LUCA), which lived approximately 3.5–3.8 billion years ago. The fossil record includes a progression from early biogenic graphite to microbial mat fossils to fossilised multicellular organisms. Existing patterns of biodiversity have been shaped by repeated formations of new species (speciation), changes within species (anagenesis), and loss of species (extinction) throughout the evolutionary history of life on Earth. Morphological and biochemical traits are more similar among species that share a more recent common ancestor, and these traits can be used to reconstruct phylogenetic trees.Evolutionary biologists have continued to study various aspects of evolution by forming and testing hypotheses as well as constructing theories based on evidence from the field or laboratory and on data generated by the methods of mathematical and theoretical biology. Their discoveries have influenced not just the development of biology but numerous other scientific and industrial fields, including agriculture, medicine, and computer science.
25
[ "Natural selection", "discoverer or inventor", "Charles Darwin" ]
Natural selection is the differential survival and reproduction of individuals due to differences in phenotype. It is a key mechanism of evolution, the change in the heritable traits characteristic of a population over generations. Charles Darwin popularised the term "natural selection", contrasting it with artificial selection, which is intentional, whereas natural selection is not. Variation exists within all populations of organisms. This occurs partly because random mutations arise in the genome of an individual organism, and their offspring can inherit such mutations. Throughout the lives of the individuals, their genomes interact with their environments to cause variations in traits. The environment of a genome includes the molecular biology in the cell, other cells, other individuals, populations, species, as well as the abiotic environment. Because individuals with certain variants of the trait tend to survive and reproduce more than individuals with other less successful variants, the population evolves. Other factors affecting reproductive success include sexual selection (now often included in natural selection) and fecundity selection. Natural selection acts on the phenotype, the characteristics of the organism which actually interact with the environment, but the genetic (heritable) basis of any phenotype that gives that phenotype a reproductive advantage may become more common in a population. Over time this process can result in populations that specialise for particular ecological niches (microevolution) and may eventually result in speciation (the emergence of new species, macroevolution). In other words, natural selection is a key process in the evolution of a population. Natural selection is a cornerstone of modern biology. 
The concept, published by Darwin and Alfred Russel Wallace in a joint presentation of papers in 1858, was elaborated in Darwin's influential 1859 book On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life. He described natural selection as analogous to artificial selection, a process by which animals and plants with traits considered desirable by human breeders are systematically favoured for reproduction. The concept of natural selection originally developed in the absence of a valid theory of heredity; at the time of Darwin's writing, science had yet to develop modern theories of genetics. The union of traditional Darwinian evolution with subsequent discoveries in classical genetics formed the modern synthesis of the mid-20th century. The addition of molecular genetics has led to evolutionary developmental biology, which explains evolution at the molecular level. While genotypes can slowly change by random genetic drift, natural selection remains the primary explanation for adaptive evolution.Darwin's theory In 1859, Charles Darwin set out his theory of evolution by natural selection as an explanation for adaptation and speciation. He defined natural selection as the "principle by which each slight variation [of a trait], if useful, is preserved". The concept was simple but powerful: individuals best adapted to their environments are more likely to survive and reproduce. As long as there is some variation between them and that variation is heritable, there will be an inevitable selection of individuals with the most advantageous variations. 
If the variations are heritable, then differential reproductive success leads to the evolution of particular populations of a species, and populations that evolve to be sufficiently different eventually become different species.If during the long course of ages and under varying conditions of life, organic beings vary at all in the several parts of their organisation, and I think this cannot be disputed; if there be, owing to the high geometrical powers of increase of each species, at some age, season, or year, a severe struggle for life, and this certainly cannot be disputed; then, considering the infinite complexity of the relations of all organic beings to each other and to their conditions of existence, causing an infinite diversity in structure, constitution, and habits, to be advantageous to them, I think it would be a most extraordinary fact if no variation ever had occurred useful to each being's own welfare, in the same way as so many variations have occurred useful to man. But if variations useful to any organic being do occur, assuredly individuals thus characterised will have the best chance of being preserved in the struggle for life; and from the strong principle of inheritance they will tend to produce offspring similarly characterised. This principle of preservation, I have called, for the sake of brevity, Natural Selection. Once he had his theory, Darwin was meticulous about gathering and refining evidence before making his idea public. He was in the process of writing his "big book" to present his research when the naturalist Alfred Russel Wallace independently conceived of the principle and described it in an essay he sent to Darwin to forward to Charles Lyell. 
Lyell and Joseph Dalton Hooker decided to present his essay together with unpublished writings that Darwin had sent to fellow naturalists, and On the Tendency of Species to form Varieties; and on the Perpetuation of Varieties and Species by Natural Means of Selection was read to the Linnean Society of London announcing co-discovery of the principle in July 1858. Darwin published a detailed account of his evidence and conclusions in On the Origin of Species in 1859. In the 3rd edition of 1861 Darwin acknowledged that others—like William Charles Wells in 1813, and Patrick Matthew in 1831—had proposed similar ideas, but had neither developed them nor presented them in notable scientific publications.
0
[ "Natural selection", "facet of", "evolution" ]
Evolution by means of natural selection A prerequisite for natural selection to result in adaptive evolution, novel traits and speciation is the presence of heritable genetic variation that results in fitness differences. Genetic variation is the result of mutations, genetic recombinations and alterations in the karyotype (the number, shape, size and internal arrangement of the chromosomes). Any of these changes might have an effect that is highly advantageous or highly disadvantageous, but large effects are rare. In the past, most changes in the genetic material were considered neutral or close to neutral because they occurred in noncoding DNA or resulted in a synonymous substitution. However, many mutations in non-coding DNA have deleterious effects. Although both mutation rates and average fitness effects of mutations are dependent on the organism, a majority of mutations in humans are slightly deleterious.Some mutations occur in "toolkit" or regulatory genes. Changes in these often have large effects on the phenotype of the individual because they regulate the function of many other genes. Most, but not all, mutations in regulatory genes result in non-viable embryos. Some nonlethal regulatory mutations occur in HOX genes in humans, which can result in a cervical rib or polydactyly, an increase in the number of fingers or toes. When such mutations result in a higher fitness, natural selection favours these phenotypes and the novel trait spreads in the population. Established traits are not immutable; traits that have high fitness in one environmental context may be much less fit if environmental conditions change. In the absence of natural selection to preserve such a trait, it becomes more variable and deteriorates over time, possibly resulting in a vestigial manifestation of the trait, also called evolutionary baggage. 
In many circumstances, the apparently vestigial structure may retain a limited functionality, or may be co-opted for other advantageous traits in a phenomenon known as preadaptation. A famous example of a vestigial structure, the eye of the blind mole-rat, is believed to retain function in photoperiod perception.
2
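The spread of a higher-fitness variant described in the passage follows, in the deterministic (large-population) limit, the standard haploid recurrence p' = p(1+s)/(1+ps), where p is the variant's frequency and s its selection coefficient. A sketch with assumed, illustrative values for s and the starting frequency:

```python
def select(p, s):
    """One generation of deterministic selection on allele frequency p
    with selection coefficient s (haploid model): p' = p(1+s)/(1+ps)."""
    return p * (1 + s) / (1 + p * s)

p, s = 0.01, 0.05          # assumed: rare variant with a 5% fitness advantage
generations = 0
while p < 0.99:
    p = select(p, s)
    generations += 1

print(f"variant passes 99% frequency after {generations} generations")
```

A useful property of this recurrence is that the odds p/(1-p) multiply by exactly (1+s) each generation, so even a modest advantage carries a rare variant to near-fixation in a few hundred generations.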
[ "Natural selection", "cause", "competition" ]
Cell and molecular biology In 1881, the embryologist Wilhelm Roux published Der Kampf der Theile im Organismus (The Struggle of Parts in the Organism) in which he suggested that the development of an organism results from a Darwinian competition between the parts of the embryo, occurring at all levels, from molecules to organs. In recent years, a modern version of this theory has been proposed by Jean-Jacques Kupiec. According to this cellular Darwinism, random variation at the molecular level generates diversity in cell types whereas cell interactions impose a characteristic order on the developing embryo.
3
[ "Natural selection", "facet of", "Darwinism" ]
By resource being competed for Finally, selection can be classified according to the resource being competed for. Sexual selection results from competition for mates. Sexual selection typically proceeds via fecundity selection, sometimes at the expense of viability. Ecological selection is natural selection via any means other than sexual selection, such as kin selection, competition, and infanticide. Following Darwin, natural selection is sometimes defined as ecological selection, in which case sexual selection is considered a separate mechanism.Sexual selection as first articulated by Darwin (using the example of the peacock's tail) refers specifically to competition for mates, which can be intrasexual, between individuals of the same sex, that is male–male competition, or intersexual, where one gender chooses mates, most often with males displaying and females choosing. However, in some species, mate choice is primarily by males, as in some fishes of the family Syngnathidae.Phenotypic traits can be displayed in one sex and desired in the other sex, causing a positive feedback loop called a Fisherian runaway, for example, the extravagant plumage of some male birds such as the peacock. An alternative theory, proposed by Ronald Fisher in 1930, is the sexy son hypothesis: that mothers want promiscuous sons to give them large numbers of grandchildren and so choose promiscuous fathers for their children. Aggression between members of the same sex is sometimes associated with very distinctive features, such as the antlers of stags, which are used in combat with other stags. More generally, intrasexual selection is often associated with sexual dimorphism, including differences in body size between males and females of a species.
4
[ "Natural selection", "facet of", "modern evolutionary synthesis" ]
The modern synthesis Natural selection relies crucially on the idea of heredity, but developed before the basic concepts of genetics. Although the Moravian monk Gregor Mendel, the father of modern genetics, was a contemporary of Darwin's, his work lay in obscurity, only being rediscovered in 1900. With the early 20th century integration of evolution with Mendel's laws of inheritance, the so-called modern synthesis, scientists generally came to accept natural selection. The synthesis grew from advances in different fields. Ronald Fisher developed the required mathematical language and wrote The Genetical Theory of Natural Selection (1930). J. B. S. Haldane introduced the concept of the "cost" of natural selection. Sewall Wright elucidated the nature of selection and adaptation. In his book Genetics and the Origin of Species (1937), Theodosius Dobzhansky established the idea that mutation, once seen as a rival to selection, actually supplied the raw material for natural selection by creating genetic diversity. Ernst Mayr recognised the key importance of reproductive isolation for speciation in his Systematics and the Origin of Species (1942). W. D. Hamilton conceived of kin selection in 1964. This synthesis cemented natural selection as the foundation of evolutionary theory, where it remains today. A second synthesis was brought about at the end of the 20th century by advances in molecular genetics, creating the field of evolutionary developmental biology ("evo-devo"), which seeks to explain the evolution of form in terms of the genetic regulatory programs which control the development of the embryo at molecular level. Natural selection is here understood to act on embryonic development to change the morphology of the adult body. Fitness The concept of fitness is central to natural selection.
In broad terms, individuals that are more "fit" have better potential for survival, as in the well-known phrase "survival of the fittest", but the precise meaning of the term is much more subtle. Modern evolutionary theory defines fitness not by how long an organism lives, but by how successful it is at reproducing. If an organism lives half as long as others of its species, but has twice as many offspring surviving to adulthood, its genes become more common in the adult population of the next generation. Though natural selection acts on individuals, the effects of chance mean that fitness can only really be defined "on average" for the individuals within a population. The fitness of a particular genotype corresponds to the average effect on all individuals with that genotype. A distinction must be made between the concept of "survival of the fittest" and "improvement in fitness". "Survival of the fittest" does not give an "improvement in fitness", it only represents the removal of the less fit variants from a population. A mathematical example of "survival of the fittest" is given by Haldane in his paper "The Cost of Natural Selection". Haldane called this process "substitution"; more commonly in biology, it is called "fixation". This is correctly described by the differential survival and reproduction of individuals due to differences in phenotype. On the other hand, "improvement in fitness" is not dependent on the differential survival and reproduction of individuals due to differences in phenotype, it is dependent on the absolute survival of the particular variant. The probability of a beneficial mutation occurring on some member of a population depends on the total number of replications of that variant. The mathematics of "improvement in fitness" was described by Kleinman. An empirical example of "improvement in fitness" is given by the Kishony Mega-plate experiment.
In this experiment, "improvement in fitness" depends on the number of replications of the particular variant for a new variant to appear that is capable of growing in the next higher drug concentration region. Fixation or substitution is not required for this "improvement in fitness". On the other hand, "improvement in fitness" can occur in an environment where "survival of the fittest" is also acting. Richard Lenski's classic E. coli long-term evolution experiment is an example of adaptation in a competitive environment, ("improvement in fitness" during "survival of the fittest"). The probability of a beneficial mutation occurring on some member of the lineage to give improved fitness is slowed by the competition. The variant which is a candidate for a beneficial mutation in this limited carrying capacity environment must first out-compete the "less fit" variants in order to accumulate the requisite number of replications for there to be a reasonable probability of that beneficial mutation occurring.Impact Darwin's ideas, along with those of Adam Smith and Karl Marx, had a profound influence on 19th century thought, including his radical claim that "elaborately constructed forms, so different from each other, and dependent on each other in so complex a manner" evolved from the simplest forms of life by a few simple principles. This inspired some of Darwin's most ardent supporters—and provoked the strongest opposition. Natural selection had the power, according to Stephen Jay Gould, to "dethrone some of the deepest and most traditional comforts of Western thought", such as the belief that humans have a special place in the world.In the words of the philosopher Daniel Dennett, "Darwin's dangerous idea" of evolution by natural selection is a "universal acid," which cannot be kept restricted to any vessel or container, as it soon leaks out, working its way into ever-wider surroundings. 
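The passage's point that the chance of a beneficial mutation depends on the total number of replications of a variant can be illustrated with a short sketch. This is my own illustration, not Kleinman's actual mathematics, and the per-replication beneficial mutation rate `mu` is a hypothetical value chosen only for the example:

```python
def p_beneficial(mu: float, replications: int) -> float:
    """Probability of at least one beneficial mutation among n replications:
    P = 1 - (1 - mu) ** n."""
    return 1.0 - (1.0 - mu) ** replications

mu = 1e-8  # hypothetical per-replication beneficial mutation rate
for n in (10**6, 10**8, 10**10):
    print(n, round(p_beneficial(mu, n), 4))
```

Under this model the probability climbs toward 1 as the replication count grows, which is why competition that slows a variant's accumulation of replications also slows "improvement in fitness".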
Thus, in the last decades, the concept of natural selection has spread from evolutionary biology to other disciplines, including evolutionary computation, quantum Darwinism, evolutionary economics, evolutionary epistemology, evolutionary psychology, and cosmological natural selection. This unlimited applicability has been called universal Darwinism.
5
[ "Natural selection", "cause", "genetic variation" ]
Evolution by means of natural selection A prerequisite for natural selection to result in adaptive evolution, novel traits and speciation is the presence of heritable genetic variation that results in fitness differences. Genetic variation is the result of mutations, genetic recombinations and alterations in the karyotype (the number, shape, size and internal arrangement of the chromosomes). Any of these changes might have an effect that is highly advantageous or highly disadvantageous, but large effects are rare. In the past, most changes in the genetic material were considered neutral or close to neutral because they occurred in noncoding DNA or resulted in a synonymous substitution. However, many mutations in non-coding DNA have deleterious effects. Although both mutation rates and average fitness effects of mutations are dependent on the organism, a majority of mutations in humans are slightly deleterious. Some mutations occur in "toolkit" or regulatory genes. Changes in these often have large effects on the phenotype of the individual because they regulate the function of many other genes. Most, but not all, mutations in regulatory genes result in non-viable embryos. Some nonlethal regulatory mutations occur in HOX genes in humans, which can result in a cervical rib or polydactyly, an increase in the number of fingers or toes. When such mutations result in a higher fitness, natural selection favours these phenotypes and the novel trait spreads in the population. Established traits are not immutable; traits that have high fitness in one environmental context may be much less fit if environmental conditions change. In the absence of natural selection to preserve such a trait, it becomes more variable and deteriorates over time, possibly resulting in a vestigial manifestation of the trait, also called evolutionary baggage.
In many circumstances, the apparently vestigial structure may retain a limited functionality, or may be co-opted for other advantageous traits in a phenomenon known as preadaptation. A famous example of a vestigial structure, the eye of the blind mole-rat, is believed to retain function in photoperiod perception.
6
[ "Pangenesis", "discoverer or inventor", "Charles Darwin" ]
Collapse Galton's experiments on rabbits Darwin's half-cousin Francis Galton conducted wide-ranging inquiries into heredity which led him to refute Charles Darwin's hypothetical theory of pangenesis. In consultation with Darwin, he set out to see if gemmules were transported in the blood. In a long series of experiments from 1869 to 1871, he transfused the blood between dissimilar breeds of rabbits, and examined the features of their offspring. He found no evidence of characters transmitted in the transfused blood. Galton was troubled because he began the work in good faith, intending to prove Darwin right, and having praised pangenesis in Hereditary Genius in 1869. Cautiously, he criticized his cousin's theory, although qualifying his remarks by saying that Darwin's gemmules, which he called "pangenes", might be temporary inhabitants of the blood that his experiments had failed to pick up. Darwin challenged the validity of Galton's experiment, giving his reasons in an article published in Nature where he wrote: "Now, in the chapter on Pangenesis in my Variation of Animals and Plants under Domestication, I have not said one word about the blood, or about any fluid proper to any circulating system. It is, indeed, obvious that the presence of gemmules in the blood can form no necessary part of my hypothesis; for I refer in illustration of it to the lowest animals, such as the Protozoa, which do not possess blood or any vessels; and I refer to plants in which the fluid, when present in the vessels, cannot be considered as true blood." He goes on to admit: "Nevertheless, when I first heard of Mr. Galton's experiments, I did not sufficiently reflect on the subject, and saw not the difficulty of believing in the presence of gemmules in the blood." After the circulation of Galton's results, the perception of pangenesis quickly changed to severe skepticism if not outright disbelief.
0
[ "Pangenesis", "instance of", "superseded scientific theory" ]
Pangenesis was Charles Darwin's hypothetical mechanism for heredity, in which he proposed that each part of the body continually emitted its own type of small organic particles called gemmules that aggregated in the gonads, contributing heritable information to the gametes. He presented this 'provisional hypothesis' in his 1868 work The Variation of Animals and Plants Under Domestication, intending it to fill what he perceived as a major gap in evolutionary theory at the time. The etymology of the word comes from the Greek words pan (a prefix meaning "whole", "encompassing") and genesis ("birth") or genos ("origin"). Pangenesis mirrored ideas originally formulated by Hippocrates and other pre-Darwinian scientists, but using new concepts such as cell theory, explaining cell development as beginning with gemmules which were specified to be necessary for the occurrence of new growths in an organism, both in initial development and regeneration. It also accounted for regeneration and the Lamarckian concept of the inheritance of acquired characteristics, as a body part altered by the environment would produce altered gemmules. This made Pangenesis popular among the neo-Lamarckian school of evolutionary thought. This hypothesis was made effectively obsolete after the 1900 rediscovery among biologists of Gregor Mendel's theory of the particulate nature of inheritance.
4
[ "Linnaeus's flower clock", "discoverer or inventor", "Carl Linnaeus" ]
Linnaeus's flower clock was a garden plan hypothesized by Carl Linnaeus that would take advantage of several plants that open or close their flowers at particular times of the day to accurately indicate the time. According to Linnaeus's autobiographical notes, he discovered and developed the floral clock in 1748. It builds on the fact that there are species of plants that open or close their flowers at set times of day. He proposed the concept in his 1751 publication Philosophia Botanica, calling it the horologium florae (lit. 'flower clock'). His observations of how plants changed over time are summarised in several publications. Calendarium florae (the Flower Almanack) describes the seasonal changes in nature and the botanic garden during the year 1755. In Somnus plantarum (the Sleep of Plants), he describes how different plants prepare for sleep during the night, and in Vernatio arborum he gives an account of the timing of leaf-bud burst in different trees and bushes. He may never have planted such a garden, but the idea was attempted by several botanical gardens in the early 19th century, with mixed success. Many plants exhibit a strong circadian rhythm (see also Chronobiology), and a few have been observed to open at quite a regular time, but the accuracy of such a clock is diminished because flowering time is affected by weather and seasonal effects. The flowering times recorded by Linnaeus are also subject to differences in daylight due to latitude: his measurements are based on flowering times in Uppsala, where he taught and had received his university education. The plants suggested for use by Linnaeus are given in the table below, ordered by recorded opening time; "-" signifies that data are missing.
0
[ "Linnaeus's flower clock", "described by source", "Philosophia Botanica" ]
Linnaeus's flower clock was a garden plan hypothesized by Carl Linnaeus that would take advantage of several plants that open or close their flowers at particular times of the day to accurately indicate the time. According to Linnaeus's autobiographical notes, he discovered and developed the floral clock in 1748. It builds on the fact that there are species of plants that open or close their flowers at set times of day. He proposed the concept in his 1751 publication Philosophia Botanica, calling it the horologium florae (lit. 'flower clock'). His observations of how plants changed over time are summarised in several publications. Calendarium florae (the Flower Almanack) describes the seasonal changes in nature and the botanic garden during the year 1755. In Somnus plantarum (the Sleep of Plants), he describes how different plants prepare for sleep during the night, and in Vernatio arborum he gives an account of the timing of leaf-bud burst in different trees and bushes. He may never have planted such a garden, but the idea was attempted by several botanical gardens in the early 19th century, with mixed success. Many plants exhibit a strong circadian rhythm (see also Chronobiology), and a few have been observed to open at quite a regular time, but the accuracy of such a clock is diminished because flowering time is affected by weather and seasonal effects. The flowering times recorded by Linnaeus are also subject to differences in daylight due to latitude: his measurements are based on flowering times in Uppsala, where he taught and had received his university education. The plants suggested for use by Linnaeus are given in the table below, ordered by recorded opening time; "-" signifies that data are missing.
3
[ "Pascal's wager", "discoverer or inventor", "Blaise Pascal" ]
Pascal's wager is a philosophical argument presented by the seventeenth-century French mathematician, philosopher, physicist and theologian Blaise Pascal (1623–1662). It posits that human beings wager with their lives that God either exists or does not. Pascal argues that a rational person should live as though God exists and seek to believe in God. If God does not exist, such a person will have only a finite loss (some pleasures, luxury, etc.), whereas if God does exist, they stand to receive infinite gains (as represented by eternity in Heaven) and avoid infinite losses (an eternity in Hell).The original wager was set out in Pascal's posthumously published Pensées ("Thoughts"), an assembly of previously unpublished notes. Pascal's wager marked the first formal use of decision theory, existentialism, pragmatism, and voluntarism.The wager is commonly criticized with counterarguments such as the failure to prove the existence of God, the argument from inconsistent revelations, and the argument from inauthentic belief.
0
[ "Pascal's wager", "named after", "Blaise Pascal" ]
Pascal's wager is a philosophical argument presented by the seventeenth-century French mathematician, philosopher, physicist and theologian Blaise Pascal (1623–1662). It posits that human beings wager with their lives that God either exists or does not. Pascal argues that a rational person should live as though God exists and seek to believe in God. If God does not exist, such a person will have only a finite loss (some pleasures, luxury, etc.), whereas if God does exist, they stand to receive infinite gains (as represented by eternity in Heaven) and avoid infinite losses (an eternity in Hell).The original wager was set out in Pascal's posthumously published Pensées ("Thoughts"), an assembly of previously unpublished notes. Pascal's wager marked the first formal use of decision theory, existentialism, pragmatism, and voluntarism.The wager is commonly criticized with counterarguments such as the failure to prove the existence of God, the argument from inconsistent revelations, and the argument from inauthentic belief.
1
[ "Pascal's wager", "instance of", "argument" ]
Pascal's wager is a philosophical argument presented by the seventeenth-century French mathematician, philosopher, physicist and theologian Blaise Pascal (1623–1662). It posits that human beings wager with their lives that God either exists or does not. Pascal argues that a rational person should live as though God exists and seek to believe in God. If God does not exist, such a person will have only a finite loss (some pleasures, luxury, etc.), whereas if God does exist, they stand to receive infinite gains (as represented by eternity in Heaven) and avoid infinite losses (an eternity in Hell).The original wager was set out in Pascal's posthumously published Pensées ("Thoughts"), an assembly of previously unpublished notes. Pascal's wager marked the first formal use of decision theory, existentialism, pragmatism, and voluntarism.The wager is commonly criticized with counterarguments such as the failure to prove the existence of God, the argument from inconsistent revelations, and the argument from inauthentic belief.The wager The wager uses the following logic (excerpts from Pensées, part III, §233). Failure to prove the existence of God Voltaire (another prominent French writer of the Enlightenment), a generation after Pascal, regarded the idea of the wager as a "proof of God" as "indecent and childish", adding, "the interest I have to believe a thing is no proof that such a thing exists". Pascal, however, did not advance the wager as a proof of God's existence but rather as a necessary pragmatic decision which is "impossible to avoid" for any living person. He argued that abstaining from making a wager is not an option and that "reason is incapable of divining the truth"; thus, a decision of whether to believe in the existence of God must be made by "considering the consequences of each possibility". 
Voltaire's critique concerns not the nature of the Pascalian wager as proof of God's existence, but the contention that the very belief Pascal tried to promote is not convincing. Voltaire hints at the fact that Pascal, as a Jansenist, believed that only a small, and already predestined, portion of humanity would eventually be saved by God. Voltaire explained that no matter how far someone is tempted with rewards to believe in Christian salvation, the result will be at best a faint belief. Pascal, in his Pensées, agrees with this, not stating that people can choose to believe (and therefore make a safe wager), but rather that some cannot believe. As Étienne Souriau explained, in order to accept Pascal's argument, the bettor needs to be certain that God seriously intends to honour the bet; he says that the wager assumes that God also accepts the bet, which is not proved; Pascal's bettor is here like the fool who seeing a leaf floating on a river's waters and quivering at some point, for a few seconds, between the two sides of a stone, says: "I bet a million with Rothschild that it takes finally the left path." And, effectively, the leaf passed on the left side of the stone, but unfortunately for the fool Rothschild never said "I [will take that] bet".
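The decision-theoretic structure of the wager can be sketched in a few lines. This is an illustrative modern reading, not Pascal's own formulation; the finite cost `c` of belief is a stand-in value:

```python
INF = float("inf")

def expected_value(p_god: float, payoff_if_god: float, payoff_if_not: float) -> float:
    # EV = P(God exists) * payoff in that case + P(God does not) * payoff otherwise
    return p_god * payoff_if_god + (1 - p_god) * payoff_if_not

c = 10.0  # hypothetical finite cost of belief (forgone pleasures, luxury)
for p in (0.5, 0.01, 1e-9):
    print(p, expected_value(p, INF, -c), expected_value(p, -INF, c))
```

For any nonzero probability assigned to God's existence, the infinite payoff dominates every finite cost, so wagering for God has infinite expected value and wagering against has negatively infinite expected value; that dominance is the formal core of the argument.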
5
[ "Pascal's calculator", "discoverer or inventor", "Blaise Pascal" ]
History Blaise Pascal began to work on his calculator in 1642, when he was 18 years old. He had been assisting his father, who worked as a tax commissioner, and sought to produce a device which could reduce some of his workload. Pascal received a Royal Privilege in 1649 that granted him exclusive rights to make and sell calculating machines in France. This was a major influence on the next mechanical calculator design, Gottfried Leibniz's stepped reckoner. By 1654 he had sold about twenty machines (only nine of those twenty machines are known to exist today), but the cost and complexity of the Pascaline was a barrier to further sales and production ceased in that year. By that time Pascal had moved on to the study of religion and philosophy, which gave us both the Lettres provinciales and the Pensées. The tercentenary celebration of Pascal's invention of the mechanical calculator occurred during World War II when France was occupied by Germany and therefore the main celebration was held in London, England. Speeches given during the event highlighted Pascal's practical achievements when he was already known in the field of pure mathematics, and his creative imagination, along with how ahead of their time both the machine and its inventor were.
2
[ "Pascal's barrel", "instance of", "experiment" ]
Pascal's barrel Pascal's barrel is the name of a hydrostatics experiment allegedly performed by Blaise Pascal in 1646. In the experiment, Pascal supposedly inserted a long vertical tube into a barrel filled with water. When water was poured into the vertical tube, the increase in hydrostatic pressure caused the barrel to burst.The experiment is mentioned nowhere in Pascal's preserved works and it may be apocryphal, attributed to him by 19th-century French authors, among whom the experiment is known as crève-tonneau (approx.: "barrel-buster"); nevertheless the experiment remains associated with Pascal in many elementary physics textbooks.
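The reason a narrow tube can burst a barrel is that hydrostatic pressure depends only on the height of the water column, not its volume. A minimal sketch of that calculation (my own numbers, not figures from the source):

```python
RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def hydrostatic_pressure(height_m: float) -> float:
    """Gauge pressure in pascals at the bottom of a water column of the given height."""
    return RHO_WATER * G * height_m

# A hypothetical 10 m tube adds about 98 kPa, roughly one atmosphere,
# regardless of how little water the tube itself holds.
print(hydrostatic_pressure(10.0))
```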
0
[ "Pascal's barrel", "discoverer or inventor", "Blaise Pascal" ]
Pascal's law (also Pascal's principle or the principle of transmission of fluid-pressure) is a principle in fluid mechanics given by Blaise Pascal that states that a pressure change at any point in a confined incompressible fluid is transmitted throughout the fluid such that the same change occurs everywhere. The law was established by French mathematician Blaise Pascal in 1653 and published in 1663.Pascal's barrel Pascal's barrel is the name of a hydrostatics experiment allegedly performed by Blaise Pascal in 1646. In the experiment, Pascal supposedly inserted a long vertical tube into a barrel filled with water. When water was poured into the vertical tube, the increase in hydrostatic pressure caused the barrel to burst.The experiment is mentioned nowhere in Pascal's preserved works and it may be apocryphal, attributed to him by 19th-century French authors, among whom the experiment is known as crève-tonneau (approx.: "barrel-buster"); nevertheless the experiment remains associated with Pascal in many elementary physics textbooks.
1
[ "Leonard Kleinrock", "instance of", "human" ]
Leonard Kleinrock (born June 13, 1934) is an American computer scientist and Internet pioneer. He is a long-tenured professor at UCLA's Henry Samueli School of Engineering and Applied Science. In the early 1960s, Kleinrock pioneered the application of queueing theory to model delays in message switching networks in his Ph.D. thesis, published as a book in 1964. He later published several of the standard works on the subject. In the early 1970s, he applied queueing theory to model and measure the performance of packet switching networks. This work played an influential role in the development of the ARPANET. He supervised many graduate students whose later work on the communication protocols for internetworking led to the Internet protocol suite. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today.
0
[ "Leonard Kleinrock", "country of citizenship", "United States of America" ]
Leonard Kleinrock (born June 13, 1934) is an American computer scientist and Internet pioneer. He is a long-tenured professor at UCLA's Henry Samueli School of Engineering and Applied Science. In the early 1960s, Kleinrock pioneered the application of queueing theory to model delays in message switching networks in his Ph.D. thesis, published as a book in 1964. He later published several of the standard works on the subject. In the early 1970s, he applied queueing theory to model and measure the performance of packet switching networks. This work played an influential role in the development of the ARPANET. He supervised many graduate students whose later work on the communication protocols for internetworking led to the Internet protocol suite. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today.
2
[ "Leonard Kleinrock", "occupation", "computer scientist" ]
Leonard Kleinrock (born June 13, 1934) is an American computer scientist and Internet pioneer. He is a long-tenured professor at UCLA's Henry Samueli School of Engineering and Applied Science. In the early 1960s, Kleinrock pioneered the application of queueing theory to model delays in message switching networks in his Ph.D. thesis, published as a book in 1964. He later published several of the standard works on the subject. In the early 1970s, he applied queueing theory to model and measure the performance of packet switching networks. This work played an influential role in the development of the ARPANET. He supervised many graduate students whose later work on the communication protocols for internetworking led to the Internet protocol suite. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today.Education and career Leonard Kleinrock was born in New York City on June 13, 1934, to a Jewish family, and graduated from the noted Bronx High School of Science in 1951. He received a Bachelor of Electrical Engineering degree in 1957 from the City College of New York, and a master's degree and a doctorate (Ph.D.) in electrical engineering and computer science from the Massachusetts Institute of Technology in 1959 and 1963 respectively. He then joined the faculty at the University of California at Los Angeles (UCLA), where he remains to the present day; during 1991–1995 he served as the chairman of the Computer Science Department there.
3
[ "Leonard Kleinrock", "occupation", "professor" ]
Leonard Kleinrock (born June 13, 1934) is an American computer scientist and Internet pioneer. He is a long-tenured professor at UCLA's Henry Samueli School of Engineering and Applied Science. In the early 1960s, Kleinrock pioneered the application of queueing theory to model delays in message switching networks in his Ph.D. thesis, published as a book in 1964. He later published several of the standard works on the subject. In the early 1970s, he applied queueing theory to model and measure the performance of packet switching networks. This work played an influential role in the development of the ARPANET. He supervised many graduate students whose later work on the communication protocols for internetworking led to the Internet protocol suite. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today.
5
[ "Leonard Kleinrock", "educated at", "Massachusetts Institute of Technology" ]
Education and career Leonard Kleinrock was born in New York City on June 13, 1934, to a Jewish family, and graduated from the noted Bronx High School of Science in 1951. He received a Bachelor of Electrical Engineering degree in 1957 from the City College of New York, and a master's degree and a doctorate (Ph.D.) in electrical engineering and computer science from the Massachusetts Institute of Technology in 1959 and 1963 respectively. He then joined the faculty at the University of California at Los Angeles (UCLA), where he remains to the present day; during 1991–1995 he served as the chairman of the Computer Science Department there.Achievements Queueing theory Kleinrock's best-known and most-significant work is on queueing theory, a branch of operations research that has applications in many fields. His thesis proposal in 1961 led to a doctoral thesis at the Massachusetts Institute of Technology in 1962, later published in book form in 1964. In this work, he analyzed queueing delays in Plan 55-A, a message switching system operated by Western Union for processing telegrams. Kleinrock later published several of the standard works on the subject.
9
[ "Leonard Kleinrock", "employer", "University of California, Los Angeles" ]
Leonard Kleinrock (born June 13, 1934) is an American computer scientist and Internet pioneer. He is a long-tenured professor at UCLA's Henry Samueli School of Engineering and Applied Science. In the early 1960s, Kleinrock pioneered the application of queueing theory to model delays in message switching networks in his Ph.D. thesis, published as a book in 1964. He later published several of the standard works on the subject. In the early 1970s, he applied queueing theory to model and measure the performance of packet switching networks. This work played an influential role in the development of the ARPANET. He supervised many graduate students whose later work on the communication protocols for internetworking led to the Internet protocol suite. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today.
10
[ "Leonard Kleinrock", "field of work", "queueing theory" ]
Leonard Kleinrock (born June 13, 1934) is an American computer scientist and Internet pioneer. He is a long-tenured professor at UCLA's Henry Samueli School of Engineering and Applied Science. In the early 1960s, Kleinrock pioneered the application of queueing theory to model delays in message switching networks in his Ph.D. thesis, published as a book in 1964. He later published several of the standard works on the subject. In the early 1970s, he applied queueing theory to model and measure the performance of packet switching networks. This work played an influential role in the development of the ARPANET. He supervised many graduate students whose later work on the communication protocols for internetworking led to the Internet protocol suite. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today.Achievements Queueing theory Kleinrock's best-known and most-significant work is on queueing theory, a branch of operations research that has applications in many fields. His thesis proposal in 1961 led to a doctoral thesis at the Massachusetts Institute of Technology in 1962, later published in book form in 1964. In this work, he analyzed queueing delays in Plan 55-A, a message switching system operated by Western Union for processing telegrams. Kleinrock later published several of the standard works on the subject.Internet Kleinrock published hundreds of research papers, which ultimately launched a new field of research on the theory and application of queuing theory to computer networks. In this role, he supervised the research of scores of graduate students. He disseminated his research and that of his students to wider audiences for academic and commercial use, and organized hundreds of commercial seminars presented by experts and pioneers in the U.S. and internationally. 
Kleinrock's theoretical work on network design, hierarchical routing, wireless network access, network measurement, network congestion control, and nomadic computing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today.Crocker, Cerf, Postel and others at DARPA, Stanford University and other collaborating groups, developed the conventions – the Request for Comments or RfCs – and the communication protocols for internetworking that led to the Internet protocol suite. In 1988, Kleinrock was the chairman of a group that presented the report Toward a National Research Network to the U.S. Congress, concluding that "There is a clear and urgent need for a national research network". Although the U.S. did not build a nationwide national research and education network, this report influenced Al Gore to pursue the development of the High Performance Computing Act of 1991, which helped facilitate development of the Internet as it is known today. Funding from the bill was used in the development of the 1993 web browser Mosaic at the National Center for Supercomputing Applications (NCSA), which accelerated the adoption of the World Wide Web.
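As a flavour of the kind of result Kleinrock's field produces, the classic M/M/1 queue gives the mean time a message spends in the system as T = 1/(mu - lambda) when the arrival rate lambda is below the service rate mu. This is a standard textbook formula offered as illustration, not a result quoted from Kleinrock's thesis:

```python
def mm1_mean_delay(lam: float, mu: float) -> float:
    """Mean time in system for an M/M/1 queue: T = 1 / (mu - lam)."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (mu - lam)

# Delay grows without bound as utilization lam/mu approaches 1:
for lam in (5.0, 9.0, 9.9):
    print(lam, mm1_mean_delay(lam, mu=10.0))
```

The blow-up of delay near full utilization is exactly the congestion behaviour that made queueing models useful for sizing message- and packet-switching networks.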
13
[ "Leonard Kleinrock", "educated at", "City College of New York" ]
Education and career Leonard Kleinrock was born in New York City on June 13, 1934, to a Jewish family, and graduated from the noted Bronx High School of Science in 1951. He received a Bachelor of Electrical Engineering degree in 1957 from the City College of New York, and a master's degree and a doctorate (Ph.D.) in electrical engineering and computer science from the Massachusetts Institute of Technology in 1959 and 1963 respectively. He then joined the faculty at the University of California at Los Angeles (UCLA), where he remains to the present day; during 1991–1995 he served as the chairman of the Computer Science Department there.
15