source | text |
|---|---|
https://en.wikipedia.org/wiki/Student%27s%20t-distribution | In probability and statistics, Student's t-distribution (or simply the t-distribution) is
a continuous probability distribution that generalizes the standard normal distribution. Like the latter, it is symmetric around zero and bell-shaped.
However, it has heavier tails, and the amount of probability mass in the tails is controlled by the parameter ν. For ν = 1 the Student's t-distribution becomes the standard Cauchy distribution, whereas for ν → ∞ it becomes the standard normal distribution.
The Student's t-distribution plays a role in a number of widely used statistical analyses, including Student's t-test for assessing the statistical significance of the difference between two sample means, the construction of confidence intervals for the difference between two population means, and in linear regression analysis.
In the form of the location-scale t-distribution it generalizes the normal distribution and also arises in the Bayesian analysis of data from a normal family as a compound distribution when marginalizing over the variance parameter.
History and etymology
In statistics, the t-distribution was first derived as a posterior distribution in 1876 by Helmert and Lüroth. The t-distribution also appeared in a more general form as Pearson Type IV distribution in Karl Pearson's 1895 paper.
In the English-language literature, the distribution takes its name from William Sealy Gosset's 1908 paper in Biometrika under the pseudonym "Student". One version of the origin of the pseudonym is that Gosset's employer preferred staff to use pen names when publishing scientific papers instead of their real name, so he used the name "Student" to hide his identity. Another version is that Guinness did not want their competitors to know that they were using the t-test to determine the quality of raw material.
Gosset worked at the Guinness Brewery in Dublin, Ireland, and was interested in the problems of small samples – for example, the chemical properties of barley where sample sizes might be as few as 3. Gosset's paper refers to the distribution as the "frequency distribution of standard deviations of samples drawn from a normal population". It became well known through the work of Ronald Fisher, who called the distribution "Student's distribution" and represented the test value with the letter t.
Definition
Probability density function
Student's t-distribution has the probability density function (PDF) given by
f(t) = Γ((ν+1)/2) / (√(νπ) Γ(ν/2)) · (1 + t²/ν)^(−(ν+1)/2),
where ν is the number of degrees of freedom and Γ is the gamma function. This may also be written as
f(t) = 1 / (√ν B(1/2, ν/2)) · (1 + t²/ν)^(−(ν+1)/2),
where B is the Beta function. In particular, for integer-valued degrees of freedom ν we have:
For ν even,
Γ((ν+1)/2) / (√(νπ) Γ(ν/2)) = (ν−1)(ν−3)⋯5·3 / (2√ν (ν−2)(ν−4)⋯4·2).
For ν odd,
Γ((ν+1)/2) / (√(νπ) Γ(ν/2)) = (ν−1)(ν−3)⋯4·2 / (π√ν (ν−2)(ν−4)⋯5·3).
The probability density function is symmetric, and its overall shape resembles the bell shape of a normally distributed variable with mean 0 and variance 1, except that it is a bit lower and wider. As the number of degrees of freedom grows, the t-distribution approaches the normal distribution with mean 0 and variance 1. For this reaso |
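The convergence to the standard normal as the degrees of freedom grow can be checked numerically. A minimal sketch, assuming NumPy and SciPy are available (not part of the article):

```python
# Compare the Student's t PDF against the standard normal PDF as the
# degrees of freedom (df) increase; the maximum gap shrinks toward 0.
import numpy as np
from scipy.stats import t, norm

x = np.linspace(-4, 4, 9)
for df in (1, 5, 30, 1000):
    gap = np.max(np.abs(t.pdf(x, df) - norm.pdf(x)))
    print(f"df={df:5d}  max |t_pdf - normal_pdf| = {gap:.5f}")
```

For df = 1 the gap against the normal is large (the Cauchy case), while for df = 1000 it is already tiny, matching the limiting behavior described above.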
https://en.wikipedia.org/wiki/Orthogonal%20matrix | In linear algebra, an orthogonal matrix, or orthonormal matrix, is a real square matrix whose columns and rows are orthonormal vectors.
One way to express this is
Q^T Q = Q Q^T = I,
where Q^T is the transpose of Q and I is the identity matrix.
This leads to the equivalent characterization: a matrix Q is orthogonal if its transpose is equal to its inverse:
Q^T = Q^(−1),
where Q^(−1) is the inverse of Q.
An orthogonal matrix Q is necessarily invertible (with inverse Q^(−1) = Q^T), unitary (Q^(−1) = Q^∗), where Q^∗ is the Hermitian adjoint (conjugate transpose) of Q, and therefore normal (Q^∗Q = QQ^∗) over the real numbers. The determinant of any orthogonal matrix is either +1 or −1. As a linear transformation, an orthogonal matrix preserves the inner product of vectors, and therefore acts as an isometry of Euclidean space, such as a rotation, reflection or rotoreflection. In other words, it is a unitary transformation.
The set of n × n orthogonal matrices, under multiplication, forms the group O(n), known as the orthogonal group. The subgroup SO(n) consisting of orthogonal matrices with determinant +1 is called the special orthogonal group, and each of its elements is a special orthogonal matrix. As a linear transformation, every special orthogonal matrix acts as a rotation.
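The defining properties can be verified numerically on a concrete rotation matrix. A minimal sketch, assuming NumPy is available (not part of the article):

```python
# Verify the defining properties of an orthogonal matrix with NumPy,
# using a 2x2 rotation by an angle theta.
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

assert np.allclose(Q.T @ Q, np.eye(2))       # Q^T Q = I
assert np.allclose(Q.T, np.linalg.inv(Q))    # transpose equals inverse
assert np.isclose(abs(np.linalg.det(Q)), 1)  # determinant is +1 or -1
v = np.array([3.0, 4.0])
assert np.isclose(np.linalg.norm(Q @ v), np.linalg.norm(v))  # isometry
```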
Overview
An orthogonal matrix is the real specialization of a unitary matrix, and thus always a normal matrix. Although we consider only real matrices here, the definition can be used for matrices with entries from any field. However, orthogonal matrices arise naturally from dot products, and for matrices of complex numbers that leads instead to the unitary requirement. Orthogonal matrices preserve the dot product, so, for vectors u and v in an n-dimensional real Euclidean space,
u · v = (Qu) · (Qv),
where Q is an orthogonal matrix. To see the inner product connection, consider a vector v in an n-dimensional real Euclidean space. Written with respect to an orthonormal basis, the squared length of v is v^T v. If a linear transformation, in matrix form Qv, preserves vector lengths, then
v^T v = (Qv)^T (Qv) = v^T Q^T Q v.
Thus finite-dimensional linear isometries—rotations, reflections, and their combinations—produce orthogonal matrices. The converse is also true: orthogonal matrices imply orthogonal transformations. However, linear algebra includes orthogonal transformations between spaces which may be neither finite-dimensional nor of the same dimension, and these have no orthogonal matrix equivalent.
Orthogonal matrices are important for a number of reasons, both theoretical and practical. The orthogonal matrices form a group under matrix multiplication, the orthogonal group denoted by O(n), which—with its subgroups—is widely used in mathematics and the physical sciences. For example, the point group of a molecule is a subgroup of O(3). Because floating point versions of orthogonal matrices have advantageous properties, they are key to many algorithms in numerical linear algebra, such as QR decomposition. As another example, with appropriate normalization the discrete cosine transform (used in MP3 compression) is represented by an orthogonal matrix. |
https://en.wikipedia.org/wiki/Euler%27s%20criterion | In number theory, Euler's criterion is a formula for determining whether an integer is a quadratic residue modulo a prime. Precisely,
Let p be an odd prime and a be an integer coprime to p. Then
a^((p−1)/2) ≡ 1 (mod p) if a is a quadratic residue modulo p, and
a^((p−1)/2) ≡ −1 (mod p) if a is a quadratic nonresidue modulo p.
Euler's criterion can be concisely reformulated using the Legendre symbol:
a^((p−1)/2) ≡ (a/p) (mod p).
The criterion dates from a 1748 paper by Leonhard Euler.
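The criterion can be checked by brute force for a small odd prime. A minimal sketch in Python (the prime 23 is chosen here for illustration):

```python
# Check Euler's criterion for p = 23: a^((p-1)/2) mod p is 1 exactly
# when a is a nonzero quadratic residue, and p - 1 (i.e. -1) otherwise.
p = 23
residues = {x * x % p for x in range(1, p)}  # nonzero quadratic residues

for a in range(1, p):
    euler = pow(a, (p - 1) // 2, p)          # built-in modular power
    if a in residues:
        assert euler == 1                    # residues give +1
    else:
        assert euler == p - 1                # nonresidues give -1 mod p
print("Euler's criterion holds for p =", p)
```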
Proof
The proof uses the fact that the residue classes modulo a prime number are a field. See the article prime field for more details.
Because the modulus is prime, Lagrange's theorem applies: a polynomial of degree k can only have at most k roots. In particular, x² ≡ a (mod p) has at most 2 solutions for each a. This immediately implies that besides 0 there are at least (p − 1)/2 distinct quadratic residues modulo p: each of the p − 1 possible values of x can only be accompanied by one other to give the same residue.
In fact, (p − x)² ≡ x² (mod p). This is because
(p − x)² = p² − 2px + x² ≡ x² (mod p).
So, the (p − 1)/2 distinct quadratic residues are:
1², 2², ..., ((p − 1)/2)² (mod p).
As a is coprime to p, Fermat's little theorem says that
a^(p−1) ≡ 1 (mod p),
which can be written as
(a^((p−1)/2) − 1)(a^((p−1)/2) + 1) ≡ 0 (mod p).
Since the integers mod p form a field, for each a, one or the other of these factors must be zero.
Now if a is a quadratic residue, a ≡ x² (mod p), then
a^((p−1)/2) ≡ (x²)^((p−1)/2) = x^(p−1) ≡ 1 (mod p).
So every quadratic residue (mod p) makes the first factor zero.
Applying Lagrange's theorem again, we note that there can be no more than (p − 1)/2 values of a that make the first factor zero. But as we noted at the beginning, there are at least (p − 1)/2 distinct quadratic residues (mod p) (besides 0). Therefore, they are precisely the residue classes that make the first factor zero. The other (p − 1)/2 residue classes, the nonresidues, must make the second factor zero, or they would not satisfy Fermat's little theorem. This is Euler's criterion.
Alternative proof
This proof only uses the fact that any congruence kx ≡ l (mod p) has a unique (modulo p) solution x provided p does not divide k. (This is true because as x runs through all nonzero remainders modulo p without repetitions, so does kx—if we have kx₁ ≡ kx₂ (mod p), then p divides k(x₁ − x₂), hence x₁ ≡ x₂ (mod p), but x₁ and x₂ aren't congruent modulo p.) It follows from this fact that all nonzero remainders modulo p the square of which isn't congruent to a can be grouped into unordered pairs according to the rule that the product of the members of each pair is congruent to a modulo p (since by this fact for every x we can find such a y, uniquely, and vice versa, and they will differ from each other if x² is not congruent to a). If a is a quadratic nonresidue, this is simply a regrouping of all p − 1 nonzero residues into (p − 1)/2 pairs, hence we conclude that (p − 1)! ≡ a^((p−1)/2) (mod p). If a is a quadratic residue, exactly two remainders were not among those paired, x and p − x, such that x² ≡ a (mod p). If we pair those two absent remainders together, their product will be −a rather than a, whence in this case (p − 1)! ≡ −a^((p−1)/2) (mod p). In summary, considering these two cases we have demonstrated that for a coprime to p we have (p − 1)! ≡ −(a/p) · a^((p−1)/2) (mod p). It remains to substitute a = 1 (which is obviously a square) into this formula to obtain at once Wilson's theorem, Euler's criterion, and (by squaring both sides of Euler's criterion) Fermat's little theorem.
Examples
Example 1: Finding primes for which a is a residue
Let a = 17. For whi |
https://en.wikipedia.org/wiki/Cayley%27s%20theorem | In group theory, Cayley's theorem, named in honour of Arthur Cayley, states that every group is isomorphic to a subgroup of a symmetric group.
More specifically, G is isomorphic to a subgroup of the symmetric group Sym(G) whose elements are the permutations of the underlying set of G.
Explicitly,
for each g ∈ G, the left-multiplication-by-g map ℓg : G → G sending each element x to gx is a permutation of G, and
the map G → Sym(G) sending each element g to ℓg is an injective homomorphism, so it defines an isomorphism from G onto a subgroup of Sym(G).
The homomorphism can also be understood as arising from the left translation action of G on the underlying set G.
When G is finite, Sym(G) is finite too. The proof of Cayley's theorem in this case shows that if G is a finite group of order n, then G is isomorphic to a subgroup of the standard symmetric group Sn. But G might also be isomorphic to a subgroup of a smaller symmetric group, Sm for some m < n; for instance, the order-6 group S3 is not only isomorphic to a subgroup of S6, but also (trivially) isomorphic to a subgroup of S3. The problem of finding the minimal-order symmetric group into which a given group G embeds is rather difficult.
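The left-translation construction is easy to carry out by hand for a small group. A minimal sketch in Python, using the cyclic group of integers mod 4 as an example chosen here (not from the article):

```python
# Realize Z_4 (integers mod 4 under addition) inside its symmetric
# group via left translation, as in Cayley's theorem.
n = 4
perms = {}
for g in range(n):
    # The permutation x -> g + x (mod n), stored as a tuple of images.
    perms[g] = tuple((g + x) % n for x in range(n))

# The map g -> perms[g] is injective: distinct elements give distinct
# permutations.
assert len(set(perms.values())) == n

# It is a homomorphism: the permutation of g + h is the composition of
# the permutations of g and h.
for g in range(n):
    for h in range(n):
        composed = tuple(perms[g][perms[h][x]] for x in range(n))
        assert composed == perms[(g + h) % n]
print("Z_4 embeds in Sym(Z_4) via left translation")
```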
Alperin and Bell note that "in general the fact that finite groups are imbedded in symmetric groups has not influenced the methods used to study finite groups".
When G is infinite, Sym(G) is infinite, but Cayley's theorem still applies.
History
While it seems elementary enough, at the time the modern definitions did not exist, and when Cayley introduced what are now called groups it was not immediately clear that this was equivalent to the previously known groups, which are now called permutation groups. Cayley's theorem unifies the two.
Although Burnside attributes the theorem to Jordan, Eric Nummela nonetheless argues that the standard name—"Cayley's Theorem"—is in fact appropriate. Cayley, in his original 1854 paper, showed that the correspondence in the theorem is one-to-one, but he failed to explicitly show it was a homomorphism (and thus an embedding). However, Nummela notes that Cayley made this result known to the mathematical community at the time, thus predating Jordan by 16 years or so.
The theorem was later published by Walther Dyck in 1882 and is attributed to Dyck in the first edition of Burnside's book.
Background
A permutation of a set A is a bijective function from A to A. The set of all permutations of A forms a group under function composition, called the symmetric group on A, and written as Sym(A).
In particular, taking A to be the underlying set of a group G produces a symmetric group denoted Sym(G).
Proof of the theorem
If g is any element of a group G with operation ∗, consider the function fg : G → G, defined by fg(x) = g ∗ x. By the existence of inverses, this function has also an inverse, f(g⁻¹). So multiplication by g acts as a bijective function. Thus, fg is a permutation of G, and so is a member of Sym(G).
The set K = {fg : g ∈ G} is a subgroup of Sym(G) that is isomorphic to G. The fastest way to establish this is to consider the function T : G → Sym(G) with T(g) = fg for e |
https://en.wikipedia.org/wiki/Direct%20sum%20of%20groups | In mathematics, a group G is called the direct sum of two normal subgroups with trivial intersection if it is generated by the subgroups. In abstract algebra, this method of construction of groups can be generalized to direct sums of vector spaces, modules, and other structures; see the article direct sum of modules for more information. A group which can be expressed as a direct sum of non-trivial subgroups is called decomposable, and if a group cannot be expressed as such a direct sum then it is called indecomposable.
Definition
A group G is called the direct sum of two subgroups H1 and H2 if
each H1 and H2 are normal subgroups of G,
the subgroups H1 and H2 have trivial intersection (i.e., having only the identity element of G in common),
G = ⟨H1, H2⟩; in other words, G is generated by the subgroups H1 and H2.
More generally, G is called the direct sum of a finite set of subgroups {Hi} if
each Hi is a normal subgroup of G,
each Hi has trivial intersection with the subgroup generated by the other subgroups Hj, j ≠ i,
G = ⟨{Hi}⟩; in other words, G is generated by the subgroups {Hi}.
If G is the direct sum of subgroups H and K then we write G = H ⊕ K, and if G is the direct sum of a set of subgroups {Hi} then we often write G = ΣHi. Loosely speaking, a direct sum is isomorphic to a weak direct product of subgroups.
Properties
If G = H ⊕ K, then it can be proven that:
for all h in H, k in K, we have that h ∗ k = k ∗ h
for all g in G, there exists unique h in H, k in K such that g = h ∗ k
There is a cancellation of the sum in a quotient; so that (H ⊕ K)/K is isomorphic to H
The above assertions can be generalized to the case of G = ΣHi, where {Hi} is a finite set of subgroups:
if i ≠ j, then for all hi in Hi, hj in Hj, we have that hi ∗ hj = hj ∗ hi
for each g in G, there exists a unique set of elements hi in Hi such that
g = h1 ∗ h2 ∗ ... ∗ hi ∗ ... ∗ hn
There is a cancellation of the sum in a quotient; so that ((ΣHi) ⊕ K)/K is isomorphic to ΣHi.
Note the similarity with the direct product, where each g can be expressed uniquely as
g = (h1,h2, ..., hi, ..., hn).
Since hi ∗ hj = hj ∗ hi for all i ≠ j, it follows that multiplication of elements in a direct sum is isomorphic to multiplication of the corresponding elements in the direct product; thus for finite sets of subgroups, ΣHi is isomorphic to the direct product ×{Hi}.
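The unique-decomposition property can be checked directly on a small abelian group. A minimal sketch in Python, using Z_6 with the subgroups {0, 3} and {0, 2, 4} as an example chosen here (not from the article):

```python
# In the abelian group Z_6, the subgroups H = {0, 3} (isomorphic to Z_2)
# and K = {0, 2, 4} (isomorphic to Z_3) form a direct sum: every element
# of Z_6 is h + k (mod 6) for exactly one pair (h, k).
H, K = [0, 3], [0, 2, 4]
decompositions = {g: [] for g in range(6)}
for h in H:
    for k in K:
        decompositions[(h + k) % 6].append((h, k))

for g, pairs in decompositions.items():
    assert len(pairs) == 1   # existence and uniqueness of (h, k)
print("Z_6 = H (+) K with unique decompositions:", decompositions)
```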
Direct summand
Given a group G, we say that a subgroup H is a direct summand of G if there exists another subgroup K of G such that G = H ⊕ K.
In abelian groups, if H is a divisible subgroup of G, then H is a direct summand of G.
Examples
If we take G = H × K, it is clear that G is the direct product of the subgroups H × {e} and {e} × K.
If D is a divisible subgroup of an abelian group G, then there exists another subgroup K of G such that G = D ⊕ K.
If G also has a vector space structure, then G can be written as a direct sum of D and another subspace K that will be isomorphic to the quotient G/D.
Equivalence of decompositions into direct sums
In the decomposition of a finite group into a direct sum of indecomposable subgroups the embedding of the subgroups is not unique. For example, in the Klein group we have th |
https://en.wikipedia.org/wiki/Algebraic%20structure | In mathematics, an algebraic structure consists of a nonempty set A (called the underlying set, carrier set or domain), a collection of operations on A (typically binary operations such as addition and multiplication), and a finite set of identities, known as axioms, that these operations must satisfy.
An algebraic structure may be based on other algebraic structures with operations and axioms involving several structures. For instance, a vector space involves a second structure called a field, and an operation called scalar multiplication between elements of the field (called scalars), and elements of the vector space (called vectors).
Abstract algebra is the name that is commonly given to the study of algebraic structures. The general theory of algebraic structures has been formalized in universal algebra. Category theory is another formalization that includes also other mathematical structures and functions between structures of the same type (homomorphisms).
In universal algebra, an algebraic structure is called an algebra; this term may be ambiguous, since, in other contexts, an algebra is an algebraic structure that is a vector space over a field or a module over a commutative ring.
The collection of all structures of a given type (same operations and same laws) is called a variety in universal algebra; this term is also used with a completely different meaning in algebraic geometry, as an abbreviation of algebraic variety. In category theory, the collection of all structures of a given type and homomorphisms between them form a concrete category.
Introduction
Addition and multiplication are prototypical examples of operations that combine two elements of a set to produce a third element of the same set. These operations obey several algebraic laws. For example, (x + y) + z = x + (y + z) and (xy)z = x(yz) are associative laws, and x + y = y + x and xy = yx are commutative laws. Many systems studied by mathematicians have operations that obey some, but not necessarily all, of the laws of ordinary arithmetic. For example, the possible moves of an object in three-dimensional space can be combined by performing a first move of the object, and then a second move from its new position. Such moves, formally called rigid motions, obey the associative law, but fail to satisfy the commutative law.
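The rigid-motion example can be made concrete with rotation matrices. A minimal sketch, assuming NumPy is available (the particular rotations are chosen here for illustration):

```python
# Rigid motions obey the associative law but not the commutative law:
# two 90-degree rotations about different axes illustrate this.
import numpy as np

Rx = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]], float)  # about x-axis
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)  # about z-axis

A, B, C = Rx, Rz, Rx @ Rz
assert np.allclose((A @ B) @ C, A @ (B @ C))  # associativity holds
assert not np.allclose(Rx @ Rz, Rz @ Rx)      # commutativity fails
print("rotations compose associatively but not commutatively")
```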
Sets with one or more operations that obey specific laws are called algebraic structures. When a new problem involves the same laws as such an algebraic structure, all the results that have been proved using only the laws of the structure can be directly applied to the new problem.
In full generality, algebraic structures may involve an arbitrary collection of operations, including operations that combine more than two elements (higher arity operations) and operations that take only one argument (unary operations) or even zero arguments (nullary operations). The examples listed below are by no means a complete list, but include the most common structures taught in undergraduate courses.
Common axioms
Eq |
https://en.wikipedia.org/wiki/Stand%20and%20Deliver | Stand and Deliver is a 1988 American drama film directed by Ramón Menéndez, written by Menéndez and Tom Musca, based on the true story of a high school mathematics teacher, Jaime Escalante. For portraying Escalante, Edward James Olmos was nominated for the Academy Award for Best Actor at the 61st Academy Awards. The film won the Independent Spirit Award for Best Feature in 1988. The film's title refers to the 1987 Mr. Mister song of the same name, which is also featured in the film's ending credits.
In 2011, the film was selected for preservation in the United States National Film Registry by the Library of Congress as being "culturally, historically, or aesthetically significant".
Plot
In the early 1980s, Jaime Escalante becomes a mathematics teacher at James A. Garfield High School in East Los Angeles. The school is full of Latino students from working-class families whose academic achievement is far below their grade level. Two students, Angel and another gangster, arrive late and question Escalante's authority. Escalante demonstrates how to multiply numbers using one's fingers and appeals to the students' sense of humor. After class, some gangsters threaten Escalante. After school, he stops the gangsters from fighting. He then introduces himself as a "one-man gang" with the classroom as his domain. Escalante tells the students that he has decided to teach them algebra.
At a meeting, Escalante learns that the school's accreditation is under threat, as test scores are not high enough. Escalante says that students will rise to the level that is expected of them. Escalante gives the students a quiz every morning and a new student joins the class. He instructs his class under the philosophy of ganas, roughly translating to "desire".
Escalante tells other faculty that he wants to teach the students calculus. He seeks to change the school culture to help the students excel in academics, as he has seen the untapped potential of his class. Other teachers ridicule him, as the students have not taken the prerequisites. Escalante states that the students can take the prerequisites over the summer. He sets a goal of having the students take Advanced Placement Calculus by their senior year.
The students sign up for the prerequisites over the summer. There is no air conditioning, but Escalante is able to teach the class, giving them oranges and telling them to focus so they can get good jobs and take vacations. In the fall, he gives the students contracts to be signed by the parents; they must come in on Saturdays, show up an hour early to school, and stay until 5pm in order to prepare for the AP Calculus exam.
Two weeks before the students' calculus exam, Escalante is teaching an ESL class to some adults. He suddenly clutches at his torso in pain, stumbles into the hallway, and falls. A substitute teacher is found for the students while Escalante recovers in the hospital, but the substitute teacher is a music teacher. Soon after, Escalante |
https://en.wikipedia.org/wiki/System%20of%20linear%20equations | In mathematics, a system of linear equations (or linear system) is a collection of one or more linear equations involving the same variables.
For example,
3x + 2y − z = 1
2x − 2y + 4z = −2
−x + (1/2)y − z = 0
is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of values to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by the ordered triple
(x, y, z) = (1, −2, −2),
since it makes all three equations valid. The word "system" indicates that the equations should be considered collectively, rather than individually.
Linear systems are the basis and a fundamental part of linear algebra, a subject used in most modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in engineering, physics, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system.
Very often, and in this article, the coefficients of the equations are real or complex numbers and the solutions are searched in the same set of numbers, but the theory and the algorithms apply for coefficients and solutions in any field. For solutions in an integral domain like the ring of the integers, or in other algebraic structures, other theories have been developed, see Linear equation over a ring. Integer linear programming is a collection of methods for finding the "best" integer solution (when there are many). Gröbner basis theory provides algorithms when coefficients and unknowns are polynomials. Tropical geometry is another example of linear algebra in a more exotic structure.
Elementary examples
Trivial example
The system of one equation in one unknown
2x = 4
has the solution
x = 2.
However, a linear system is commonly considered as having at least two equations.
Simple nontrivial example
The simplest kind of nontrivial linear system involves two equations and two variables:
2x + 3y = 6
4x + 9y = 15.
One method for solving such a system is as follows. First, solve the top equation for x in terms of y:
x = 3 − (3/2)y.
Now substitute this expression for x into the bottom equation:
4(3 − (3/2)y) + 9y = 15.
This results in a single equation involving only the variable y. Solving gives y = 1, and substituting this back into the equation for x yields x = 3/2. This method generalizes to systems with additional variables (see "elimination of variables" below, or the article on elementary algebra.)
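Elimination by hand can be cross-checked with a linear solver. A minimal sketch, assuming NumPy is available, on a representative 2×2 system (the concrete coefficients below are chosen here for illustration):

```python
# Solve the 2x2 linear system
#   2x + 3y = 6
#   4x + 9y = 15
# with NumPy's dense solver and verify the result satisfies both equations.
import numpy as np

A = np.array([[2.0, 3.0], [4.0, 9.0]])   # coefficient matrix
b = np.array([6.0, 15.0])                # constant terms
sol = np.linalg.solve(A, b)

print("solution (x, y) =", sol)
assert np.allclose(A @ sol, b)           # both equations hold
```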
General form
A general system of m linear equations with n unknowns and coefficients can be written as
a11·x1 + a12·x2 + ... + a1n·xn = b1
a21·x1 + a22·x2 + ... + a2n·xn = b2
...
am1·x1 + am2·x2 + ... + amn·xn = bm,
where x1, x2, ..., xn are the unknowns, a11, a12, ..., amn are the coefficients of the system, and b1, b2, ..., bm are the constant terms.
Often the coefficients and unknowns are real or complex numbers, but integers and rational numbers are also seen, as are polynomials and elements of an abstract algebraic structure.
Vector equation
One extremely helpful view is that each unknown is a weight for a column vector in a l |
https://en.wikipedia.org/wiki/Chi-squared%20distribution | In probability theory and statistics, the chi-squared distribution (also chi-square or -distribution) with degrees of freedom is the distribution of a sum of the squares of independent standard normal random variables. The chi-squared distribution is a special case of the gamma distribution and is one of the most widely used probability distributions in inferential statistics, notably in hypothesis testing and in construction of confidence intervals. This distribution is sometimes called the central chi-squared distribution, a special case of the more general noncentral chi-squared distribution.
The chi-squared distribution is used in the common chi-squared tests for goodness of fit of an observed distribution to a theoretical one, the independence of two criteria of classification of qualitative data, and in confidence interval estimation for a population standard deviation of a normal distribution from a sample standard deviation. Many other statistical tests also use this distribution, such as Friedman's analysis of variance by ranks.
Definitions
If Z1, ..., Zk are independent, standard normal random variables, then the sum of their squares,
Q = Z1² + Z2² + ... + Zk²,
is distributed according to the chi-squared distribution with k degrees of freedom. This is usually denoted as
Q ~ χ²(k) or Q ~ χ²k.
The chi-squared distribution has one parameter: a positive integer k that specifies the number of degrees of freedom (the number of random variables being summed, Zi s).
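The defining construction can be checked by simulation: a chi-squared variable with k degrees of freedom has mean k and variance 2k. A minimal sketch, assuming NumPy is available (not part of the article):

```python
# Simulate Q = Z1^2 + ... + Zk^2 for independent standard normals and
# compare the sample mean and variance with k and 2k.
import numpy as np

rng = np.random.default_rng(0)
k, n = 5, 200_000
q = (rng.standard_normal((n, k)) ** 2).sum(axis=1)

print("sample mean:", q.mean(), " (expect", k, ")")
print("sample var: ", q.var(),  " (expect", 2 * k, ")")
```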
Introduction
The chi-squared distribution is used primarily in hypothesis testing, and to a lesser extent for confidence intervals for population variance when the underlying distribution is normal. Unlike more widely known distributions such as the normal distribution and the exponential distribution, the chi-squared distribution is not as often applied in the direct modeling of natural phenomena. It arises in the following hypothesis tests, among others:
Chi-squared test of independence in contingency tables
Chi-squared test of goodness of fit of observed data to hypothetical distributions
Likelihood-ratio test for nested models
Log-rank test in survival analysis
Cochran–Mantel–Haenszel test for stratified contingency tables
Wald test
Score test
It is also a component of the definition of the t-distribution and the F-distribution used in t-tests, analysis of variance, and regression analysis.
The primary reason for which the chi-squared distribution is extensively used in hypothesis testing is its relationship to the normal distribution. Many hypothesis tests use a test statistic, such as the t-statistic in a t-test. For these hypothesis tests, as the sample size, n, increases, the sampling distribution of the test statistic approaches the normal distribution (central limit theorem). Because the test statistic (such as t) is asymptotically normally distributed, provided the sample size is sufficiently large, the distribution used for hypothesis testing may be approximated by a normal distribution. Testing hypotheses using a nor |
https://en.wikipedia.org/wiki/General%20linear%20group | In mathematics, the general linear group of degree n is the set of invertible matrices, together with the operation of ordinary matrix multiplication. This forms a group, because the product of two invertible matrices is again invertible, and the inverse of an invertible matrix is invertible, with the identity matrix as the identity element of the group. The group is so named because the columns (and also the rows) of an invertible matrix are linearly independent, hence the vectors/points they define are in general linear position, and matrices in the general linear group take points in general linear position to points in general linear position.
To be more precise, it is necessary to specify what kind of objects may appear in the entries of the matrix. For example, the general linear group over R (the set of real numbers) is the group of n × n invertible matrices of real numbers, and is denoted by GLn(R) or GL(n, R).
More generally, the general linear group of degree n over any field F (such as the complex numbers), or a ring R (such as the ring of integers), is the set of n × n invertible matrices with entries from F (or R), again with matrix multiplication as the group operation. Typical notation is GLn(F) or GL(n, F), or simply GL(n) if the field is understood.
More generally still, the general linear group of a vector space GL(V) is the automorphism group, not necessarily written as matrices.
The special linear group, written SL(n, F) or SLn(F), is the subgroup of GL(n, F) consisting of matrices with a determinant of 1.
The group GL(n, F) and its subgroups are often called linear groups or matrix groups (the automorphism group GL(V) is a linear group but not a matrix group). These groups are important in the theory of group representations, and also arise in the study of spatial symmetries and symmetries of vector spaces in general, as well as the study of polynomials. The modular group may be realised as a quotient of the special linear group SL(2, Z).
If n ≥ 2, then the group GL(n, F) is not abelian.
General linear group of a vector space
If V is a vector space over the field F, the general linear group of V, written GL(V) or Aut(V), is the group of all automorphisms of V, i.e. the set of all bijective linear transformations V → V, together with functional composition as group operation. If V has finite dimension n, then GL(V) and GL(n, F) are isomorphic. The isomorphism is not canonical; it depends on a choice of basis in V. Given a basis (e1, ..., en) of V and an automorphism T in GL(V), we have then for every basis vector ei that
T(ei) = Σj aji ej
for some constants aij in F; the matrix corresponding to T is then just the matrix with entries given by the aji.
In a similar way, for a commutative ring R the group GL(n, R) may be interpreted as the group of automorphisms of a free R-module M of rank n. One can also define GL(M) for any R-module, but in general this is not isomorphic to GL(n, R) (for any n).
In terms of determinants
Over a field F, a matrix is invertible if and only if its determinant is nonzero. Therefore, an alternative definition of |
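The determinant characterization is easy to verify numerically. A minimal sketch, assuming NumPy is available (the matrices are chosen here for illustration):

```python
# Over the reals, a matrix is invertible exactly when its determinant is
# nonzero; GL(n, R) consists of such matrices and is closed under products.
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])   # det = 1, so A is in GL(2, R)
B = np.array([[1.0, 2.0], [2.0, 4.0]])   # det = 0, so B is singular

assert not np.isclose(np.linalg.det(A), 0)
assert np.allclose(A @ np.linalg.inv(A), np.eye(2))   # A is invertible
assert np.isclose(np.linalg.det(B), 0)                # B is not

# Closure: det(A1 A2) = det(A1) det(A2) is nonzero for invertible factors.
assert not np.isclose(np.linalg.det(A @ A), 0)
```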
https://en.wikipedia.org/wiki/Zeta%20distribution | In probability theory and statistics, the zeta distribution is a discrete probability distribution. If X is a zeta-distributed random variable with parameter s, then the probability that X takes the integer value k is given by the probability mass function
where ζ(s) is the Riemann zeta function (which is undefined for s = 1).
The multiplicities of distinct prime factors of X are independent random variables.
The Riemann zeta function being the sum of all terms k^(−s) for positive integer k, it appears thus as the normalization of the Zipf distribution. The terms "Zipf distribution" and the "zeta distribution" are often used interchangeably. But while the Zeta distribution is a probability distribution by itself, it is not associated to Zipf's law with the same exponent. See also the Yule–Simon distribution.
Definition
The Zeta distribution is defined for positive integers k ≥ 1, and its probability mass function is given by
P(X = k) = k^(−s) / ζ(s),
where s > 1 is the parameter, and ζ(s) is the Riemann zeta function.
The cumulative distribution function is given by
P(X ≤ k) = H(k, s) / ζ(s),
where H(k, s) is the generalized harmonic number
H(k, s) = Σ(i = 1..k) 1/i^s.
Moments
The nth raw moment is defined as the expected value of X^n:
E(X^n) = (1/ζ(s)) Σ(k = 1..∞) 1/k^(s−n).
The series on the right is just a series representation of the Riemann zeta function ζ(s − n), but it only converges for values of s − n that are greater than unity. Thus:
E(X^n) = ζ(s − n)/ζ(s) if n < s − 1, and the moment is infinite if n ≥ s − 1.
The ratio of the zeta functions is well-defined, even for n > s − 1 because the series representation of the zeta function can be analytically continued. This does not change the fact that the moments are specified by the series itself, and are therefore undefined for large n.
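The moment formula can be cross-checked against a truncated sum of the probability mass function. A minimal sketch, assuming SciPy and NumPy are available (the parameter s = 4 is chosen here for illustration):

```python
# For s = 4, the first moment of the zeta distribution is
# E[X] = zeta(3) / zeta(4); compare with a truncated direct sum.
import numpy as np
from scipy.special import zeta   # Riemann/Hurwitz zeta function

s = 4.0
k = np.arange(1, 200_000)
pmf = k ** (-s) / zeta(s)        # P(X = k) = k^(-s) / zeta(s)
mean_trunc = (k * pmf).sum()     # truncated E[X]

print("truncated mean:", mean_trunc)
print("zeta(3)/zeta(4):", zeta(3.0) / zeta(4.0))
```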
Moment generating function
The moment generating function is defined as
M(t; s) = E(e^(tX)) = (1/ζ(s)) Σ(k = 1..∞) e^(tk)/k^s.
The series is just the definition of the polylogarithm, valid for e^t < 1, so that
M(t; s) = Li_s(e^t)/ζ(s) for t < 0.
Since this does not converge on an open interval containing t = 0, the moment generating function does not exist.
The case s = 1
ζ(1) is infinite, being the harmonic series, so the case s = 1 is not meaningful. However, if A is any set of positive integers that has a density, i.e. if
lim_{n→∞} N(A, n) / n
exists, where N(A, n) is the number of members of A less than or equal to n, then
lim_{s→1+} P(X ∈ A)
is equal to that density.
The latter limit can also exist in some cases in which A does not have a density. For example, if A is the set of all positive integers whose first digit is d, then A has no density, but nonetheless the second limit given above exists and is proportional to
log(d + 1) − log(d) = log(1 + 1/d),
which is Benford's law.
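The first-digit densities can be computed directly; as an illustration (the helper name is ours, not the article's), together with the classical empirical check that leading digits of powers of 2 follow Benford's law:

```python
import math

def benford_density(d):
    """Limiting density of positive integers whose first digit is d (Benford's law)."""
    return math.log10(1 + 1 / d)

# The nine digit densities sum to 1, and smaller leading digits are more likely.
densities = [benford_density(d) for d in range(1, 10)]

# Empirical check: leading digits of 2^1, ..., 2^1000 follow Benford closely.
leading_ones = sum(1 for n in range(1, 1001) if str(2 ** n)[0] == "1")
```

The proportion of powers of 2 with leading digit 1 comes out close to log10(2) ≈ 0.301, as Benford's law predicts.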
Infinite divisibility
The zeta distribution can be constructed from a sequence of independent random variables with geometric distributions. Let p be a prime number and X(p) be a random variable with a geometric distribution of parameter p^(−s), namely
P(X(p) = k) = p^(−sk) (1 − p^(−s)), for k = 0, 1, 2, ….
If the random variables (X(p)) are independent, then the random variable X defined by
X = Π_p p^{X(p)}, the product running over all primes p,
has the zeta distribution: P(X = n) = n^(−s) / ζ(s) for every positive integer n.
Stated differently, the random variable log X is infinitely divisible with Lévy measure given by the following sum of Dirac masses:
Π(dx) = Σ_p Σ_{k ≥ 1} (p^(−ks) / k) δ_{k log p}(dx).
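A finite truncation of this prime-by-prime construction can be checked numerically. The following sketch is our own illustration (the prime bound 10,000 is an arbitrary cutoff): the product of geometric probabilities over primes approximates n^(−s)/ζ(s), since the prime multiplicities of 12 = 2²·3 are independent geometrics.

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def v(p, n):
    """Multiplicity of the prime p in n."""
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

def pmf_via_geometrics(n, s, prime_bound=10_000):
    """P(X = n) as a product of independent geometric pmfs:
    P(multiplicity of p equals v_p(n)) = p^(-s*v_p(n)) * (1 - p^(-s))."""
    prod = 1.0
    for p in primes_up_to(prime_bound):
        prod *= p ** (-s * v(p, n)) * (1 - p ** -s)
    return prod

# Should agree with n^(-s)/zeta(s); for s = 2, zeta(2) = pi^2/6.
direct = 12 ** -2 / (math.pi ** 2 / 6)
approx = pmf_via_geometrics(12, 2)
```

The agreement reflects the Euler product: Π_p (1 − p^(−s)) = 1/ζ(s) and Π_p p^(−s·v_p(n)) = n^(−s).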
See also
Other "power-law" distributions
Cauchy distribution
Lévy distribution |
https://en.wikipedia.org/wiki/Euclidean | Euclidean (or, less commonly, Euclidian) is an adjective derived from the name of Euclid, an ancient Greek mathematician. It is the name of:
Geometry
Euclidean space, the two-dimensional plane and three-dimensional space of Euclidean geometry as well as their higher dimensional generalizations
Euclidean geometry, the study of the properties of Euclidean spaces
Non-Euclidean geometry, systems of points, lines, and planes analogous to Euclidean geometry but without uniquely determined parallel lines
Euclidean distance, the distance between pairs of points in Euclidean spaces
Euclidean ball, the set of points within some fixed distance from a center point
Number theory
Euclidean division, the division which produces a quotient and a remainder
Euclidean algorithm, a method for finding greatest common divisors
Extended Euclidean algorithm, a method for solving the Diophantine equation ax + by = d where d is the greatest common divisor of a and b
Euclid's lemma: if a prime number divides a product of two numbers, then it divides at least one of those two numbers
Euclidean domain, a ring in which Euclidean division may be defined, which allows Euclid's lemma to be true and the Euclidean algorithm and the extended Euclidean algorithm to work
Other
Euclidean relation, a property of binary relations related to transitivity
Euclidean distance map, a digital image in which each pixel value represents the Euclidean distance to an obstacle
Euclidean zoning, a system of land use management modeled after the zoning code of Euclid, Ohio
Euclidean division of the Intermediate Math League of Eastern Massachusetts
See also
Euclid (disambiguation)
Euclid's Elements, a 13-book mathematical treatise written by Euclid, that includes both geometry and number theory
Euclideon, an Australian computer graphics company
Mathematics disambiguation pages |
https://en.wikipedia.org/wiki/Spectral%20theorem | In mathematics, particularly linear algebra and functional analysis, a spectral theorem is a result about when a linear operator or matrix can be diagonalized (that is, represented as a diagonal matrix in some basis). This is extremely useful because computations involving a diagonalizable matrix can often be reduced to much simpler computations involving the corresponding diagonal matrix. The concept of diagonalization is relatively straightforward for operators on finite-dimensional vector spaces but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of linear operators that can be modeled by multiplication operators, which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement about commutative C*-algebras. See also spectral theory for a historical perspective.
Examples of operators to which the spectral theorem applies are self-adjoint operators or more generally normal operators on Hilbert spaces.
The spectral theorem also provides a canonical decomposition, called the spectral decomposition, of the underlying vector space on which the operator acts.
Augustin-Louis Cauchy proved the spectral theorem for symmetric matrices, i.e., that every real, symmetric matrix is diagonalizable. In addition, Cauchy was the first to be systematic about determinants. The spectral theorem as generalized by John von Neumann is today perhaps the most important result of operator theory.
This article mainly focuses on the simplest kind of spectral theorem, that for a self-adjoint operator on a Hilbert space. However, as noted above, the spectral theorem also holds for normal operators on a Hilbert space.
Finite-dimensional case
Hermitian maps and Hermitian matrices
We begin by considering a Hermitian matrix on C^n (but the following discussion will be adaptable to the more restrictive case of symmetric matrices on R^n). We consider a Hermitian map A on a finite-dimensional complex inner product space V endowed with a positive definite sesquilinear inner product ⟨·, ·⟩. The Hermitian condition on A means that for all x, y ∈ V,
⟨Ax, y⟩ = ⟨x, Ay⟩.
An equivalent condition is that A* = A, where A* is the Hermitian conjugate of A. In the case that A is identified with a Hermitian matrix, the matrix of A* is equal to its conjugate transpose. (If A is a real matrix, then this is equivalent to A^T = A, that is, A is a symmetric matrix.)
This condition implies that all eigenvalues of a Hermitian map are real: to see this, it is enough to apply it to the case when x = y is an eigenvector. (Recall that an eigenvector of a linear map A is a non-zero vector v such that Av = λv for some scalar λ. The value λ is the corresponding eigenvalue. Moreover, the eigenvalues are roots of the characteristic polynomial.)
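For a concrete 2 × 2 illustration (our own sketch, not part of the article), the eigenvalues of a Hermitian matrix [[a, b], [conj(b), d]] with real a, d can be written in closed form, making their realness visible: the discriminant is a sum of squares and hence nonnegative.

```python
import math

def hermitian_2x2_eigenvalues(a, b, d):
    """Eigenvalues of [[a, b], [conj(b), d]] with a, d real and b complex.
    The discriminant ((a - d)/2)^2 + |b|^2 is nonnegative, so both are real."""
    mean = (a + d) / 2
    radius = math.sqrt(((a - d) / 2) ** 2 + abs(b) ** 2)
    return mean - radius, mean + radius

# [[2, 1+i], [1-i, 3]] has trace 5 and determinant 4, hence eigenvalues 1 and 4.
lo, hi = hermitian_2x2_eigenvalues(2.0, 1 + 1j, 3.0)
```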
We provide a sketch of a proof for the case where the underlying field of scalars is the complex numbers.
By the fundamental theorem of algebra, applied to the characteristic polynomial of , there is at least one eigenva |
https://en.wikipedia.org/wiki/Matrix%20addition | In mathematics, matrix addition is the operation of adding two matrices by adding the corresponding entries together.
For a vector v, adding two matrices would have the geometric effect of applying each matrix transformation separately to v, then adding the transformed vectors.
However, there are other operations that could also be considered addition for matrices, such as the direct sum and the Kronecker sum.
Entrywise sum
Two matrices must have an equal number of rows and columns to be added, in which case the sum of two matrices A and B will be a matrix which has the same number of rows and columns as A and B. The sum of A and B, denoted A + B, is computed by adding corresponding elements of A and B:
(A + B)_{ij} = A_{ij} + B_{ij}.
Or more concisely (assuming that C = A + B): c_{ij} = a_{ij} + b_{ij}.
For example:
Similarly, it is also possible to subtract one matrix from another, as long as they have the same dimensions. The difference of A and B, denoted A − B, is computed by subtracting elements of B from corresponding elements of A, and has the same dimensions as A and B. For example:
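A minimal pure-Python sketch of the entrywise sum and difference (function names are illustrative; matrices are lists of rows):

```python
def mat_add(A, B):
    """Entrywise sum; A and B must have identical dimensions."""
    if len(A) != len(B) or any(len(ra) != len(rb) for ra, rb in zip(A, B)):
        raise ValueError("matrices must have the same dimensions")
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    """Entrywise difference A - B, under the same dimension requirement."""
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 3], [1, 0]]
B = [[0, 0], [7, 5]]
# mat_add(A, B) == [[1, 3], [8, 5]]
```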
Direct sum
Another operation, which is used less often, is the direct sum (denoted by ⊕). The Kronecker sum is also denoted ⊕; the context should make the usage clear. The direct sum of any pair of matrices A of size m × n and B of size p × q is a matrix of size (m + p) × (n + q) defined as:
For instance,
The direct sum of matrices is a special type of block matrix. In particular, the direct sum of square matrices is a block diagonal matrix.
The adjacency matrix of the union of disjoint graphs (or multigraphs) is the direct sum of their adjacency matrices. Any element in the direct sum of two vector spaces of matrices can be represented as a direct sum of two matrices.
In general, the direct sum of n matrices is:
where the zeros are actually blocks of zeros (i.e., zero matrices).
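The block construction can be sketched in a few lines of Python (an illustration, with matrices as lists of rows):

```python
def direct_sum(A, B):
    """Direct sum: A in the top-left block, B in the bottom-right block,
    padded with zero blocks, giving an (m + p) x (n + q) matrix."""
    n = len(A[0])  # columns of A
    q = len(B[0])  # columns of B
    top = [row + [0] * q for row in A]
    bottom = [[0] * n + row for row in B]
    return top + bottom

# direct_sum([[1, 2]], [[3], [4]]) == [[1, 2, 0], [0, 0, 3], [0, 0, 4]]
```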
Kronecker sum
The Kronecker sum is different from the direct sum, but is also denoted by ⊕. It is defined using the Kronecker product ⊗ and normal matrix addition. If A is n-by-n, B is m-by-m and I_k denotes the k-by-k identity matrix then the Kronecker sum is defined by:
A ⊕ B = A ⊗ I_m + I_n ⊗ B.
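A small sketch of the Kronecker sum, building the Kronecker product by hand (illustrative code, not a library API):

```python
def kron(A, B):
    """Kronecker product of matrices given as lists of rows."""
    return [
        [a * b for a in row_a for b in row_b]
        for row_a in A
        for row_b in B
    ]

def identity(k):
    return [[1 if i == j else 0 for j in range(k)] for i in range(k)]

def kronecker_sum(A, B):
    """Kronecker sum A ⊕ B = A ⊗ I_m + I_n ⊗ B for A n-by-n, B m-by-m."""
    n, m = len(A), len(B)
    left = kron(A, identity(m))
    right = kron(identity(n), B)
    return [[x + y for x, y in zip(r1, r2)] for r1, r2 in zip(left, right)]

# Example: A = [[0, 1], [1, 0]], B = [[2]] gives [[2, 1], [1, 2]].
```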
See also
Matrix multiplication
Vector addition
Notes
References
External links
Abstract nonsense: Direct Sum of Linear Transformations and Direct Sum of Matrices
Mathematics Source Library: Arithmetic Matrix Operations
Matrix Algebra and R
Linear algebra
Bilinear maps |
https://en.wikipedia.org/wiki/Hadamard%20product | In mathematics, the Hadamard product may refer to:
Hadamard product of two matrices, the matrix such that each entry is the product of the corresponding entries of the input matrices
Hadamard product of two power series, the power series whose coefficients are the product of the corresponding coefficients of the input series
a way of expressing an entire function of finite order
an infinite product expansion for the Riemann zeta function |
https://en.wikipedia.org/wiki/Matrix%20multiplication | In mathematics, particularly in linear algebra, matrix multiplication is a binary operation that produces a matrix from two matrices. For matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix. The resulting matrix, known as the matrix product, has the number of rows of the first and the number of columns of the second matrix. The product of matrices A and B is denoted as AB.
Matrix multiplication was first described by the French mathematician Jacques Philippe Marie Binet in 1812, to represent the composition of linear maps that are represented by matrices. Matrix multiplication is thus a basic tool of linear algebra, and as such has numerous applications in many areas of mathematics, as well as in applied mathematics, statistics, physics, economics, and engineering.
Computing matrix products is a central operation in all computational applications of linear algebra.
Notation
This article will use the following notational conventions: matrices are represented by capital letters in bold, e.g. A; vectors in lowercase bold, e.g. a; and entries of vectors and matrices are italic (they are numbers from a field), e.g. a and A_{ij}. Index notation is often the clearest way to express definitions, and is used as standard in the literature. The entry in row i, column j of matrix A is indicated by (A)_{ij}, A_{ij}, or a_{ij}. In contrast, a single subscript, e.g. A_1, is used to select a matrix (not a matrix entry) from a collection of matrices.
Definition
If A is an m × n matrix and B is an n × p matrix,
the matrix product C = AB (denoted without multiplication signs or dots) is defined to be the m × p matrix
such that
c_{ij} = a_{i1} b_{1j} + a_{i2} b_{2j} + ⋯ + a_{in} b_{nj} = Σ_{k=1}^{n} a_{ik} b_{kj},
for i = 1, …, m and j = 1, …, p.
That is, the entry c_{ij} of the product is obtained by multiplying term-by-term the entries of the ith row of A and the jth column of B, and summing these n products. In other words, c_{ij} is the dot product of the ith row of A and the jth column of B.
Therefore, AB can also be written out in full as the m × p array of these row-by-column sums.
Thus the product AB is defined if and only if the number of columns in A equals the number of rows in B, in this case n.
In most scenarios, the entries are numbers, but they may be any kind of mathematical objects for which an addition and a multiplication are defined, that are associative, and such that the addition is commutative, and the multiplication is distributive with respect to the addition. In particular, the entries may be matrices themselves (see block matrix).
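The row-by-column rule above can be sketched directly (an illustration; matrices are lists of rows):

```python
def mat_mul(A, B):
    """Product of an m x n matrix A and an n x p matrix B:
    c_ij is the dot product of row i of A and column j of B."""
    n = len(B)
    if len(A[0]) != n:
        raise ValueError("columns of A must equal rows of B")
    m, p = len(A), len(B[0])
    return [
        [sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
        for i in range(m)
    ]

# mat_mul([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```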
Illustration
The figure to the right illustrates diagrammatically the product of two matrices A and B, showing how each intersection in the product matrix corresponds to a row of A and a column of B.
The values at the intersections, marked with circles in the figure, are:
Fundamental applications
Historically, matrix multiplication has been introduced for facilitating and clarifying computations in linear algebra. This strong relationship between matrix multiplication and linear algebra remains fundamental in all mathematics, as well as in physics, chemistry, engineering and |
https://en.wikipedia.org/wiki/Symmetric%20matrix | In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, A is symmetric if and only if A = A^T.
Because equal matrices have equal dimensions, only square matrices can be symmetric.
The entries of a symmetric matrix are symmetric with respect to the main diagonal. So if a_{ij} denotes the entry in the ith row and jth column then
a_{ij} = a_{ji}
for all indices i and j.
Every square diagonal matrix is symmetric, since all off-diagonal elements are zero. Similarly in characteristic different from 2, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative.
In linear algebra, a real symmetric matrix represents a self-adjoint operator represented in an orthonormal basis over a real inner product space. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. Therefore, in linear algebra over the complex numbers, it is often assumed that a symmetric matrix refers to one which has real-valued entries. Symmetric matrices appear naturally in a variety of applications, and typical numerical linear algebra software makes special accommodations for them.
Example
The following matrix is symmetric, since it equals its own transpose (A = A^T):
Properties
Basic properties
The sum and difference of two symmetric matrices are symmetric.
This is not always true for the product: given symmetric matrices A and B, the product AB is symmetric if and only if A and B commute, i.e., if AB = BA.
For any integer n, A^n is symmetric if A is symmetric.
If A^(−1) exists, it is symmetric if and only if A is symmetric.
The rank of a symmetric matrix A is equal to the number of non-zero eigenvalues of A.
Decomposition into symmetric and skew-symmetric
Any square matrix can uniquely be written as the sum of a symmetric and a skew-symmetric matrix. This decomposition is known as the Toeplitz decomposition. Let Mat_n denote the space of n × n matrices. If Sym_n denotes the space of n × n symmetric matrices and Skew_n the space of n × n skew-symmetric matrices then Mat_n = Sym_n + Skew_n and Sym_n ∩ Skew_n = {0}, i.e.
Mat_n = Sym_n ⊕ Skew_n,
where ⊕ denotes the direct sum. Let X ∈ Mat_n; then
X = (1/2)(X + X^T) + (1/2)(X − X^T).
Notice that (1/2)(X + X^T) ∈ Sym_n and (1/2)(X − X^T) ∈ Skew_n. This is true for every square matrix X with entries from any field whose characteristic is different from 2.
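The decomposition can be sketched numerically (illustrative helper names, matrices as lists of rows):

```python
def transpose(X):
    return [list(col) for col in zip(*X)]

def toeplitz_decomposition(X):
    """Split a square matrix into symmetric and skew-symmetric parts:
    X = (X + X^T)/2 + (X - X^T)/2."""
    Xt = transpose(X)
    sym = [[(x + y) / 2 for x, y in zip(r, rt)] for r, rt in zip(X, Xt)]
    skew = [[(x - y) / 2 for x, y in zip(r, rt)] for r, rt in zip(X, Xt)]
    return sym, skew

sym, skew = toeplitz_decomposition([[1, 2], [4, 3]])
# sym == [[1.0, 3.0], [3.0, 3.0]], skew == [[0.0, -1.0], [1.0, 0.0]]
```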
A symmetric matrix is determined by n(n + 1)/2 scalars (the number of entries on or above the main diagonal). Similarly, a skew-symmetric matrix is determined by n(n − 1)/2 scalars (the number of entries above the main diagonal).
Matrix congruent to a symmetric matrix
Any matrix congruent to a symmetric matrix is again symmetric: if X is a symmetric matrix, then so is A X A^T for any matrix A.
Symmetry implies normality
A (real-valued) symmetric matrix is necessarily a normal matrix.
Real symmetric matrices
Denote by ⟨·, ·⟩ the standard inner product on R^n. The real n × n matrix A is symmetric if and only if
⟨Ax, y⟩ = ⟨x, Ay⟩ for all x, y ∈ R^n.
Since this definition is independent of the choice of basis, symmetry is a property that depends only on the linear operator A and a choice of inner product. This characterization of symmetry is useful, for example, in differential geometry, for each tangen |
https://en.wikipedia.org/wiki/Unitary%20matrix | In linear algebra, an invertible complex square matrix U is unitary if its conjugate transpose U* is also its inverse, that is, if
U* U = U U* = I,
where I is the identity matrix.
In physics, especially in quantum mechanics, the conjugate transpose is referred to as the Hermitian adjoint of a matrix and is denoted by a dagger (†), so the equation above is written
U† U = U U† = I.
For real numbers, the analogue of a unitary matrix is an orthogonal matrix. Unitary matrices have significant importance in quantum mechanics because they preserve norms, and thus, probability amplitudes.
Properties
For any unitary matrix U of finite size, the following hold:
Given two complex vectors x and y, multiplication by U preserves their inner product; that is, ⟨Ux, Uy⟩ = ⟨x, y⟩.
U is normal (U* U = U U*).
U is diagonalizable; that is, U is unitarily similar to a diagonal matrix, as a consequence of the spectral theorem. Thus, U has a decomposition of the form U = V D V*, where V is unitary, and D is diagonal and unitary.
|det(U)| = 1. That is, det(U) will be on the unit circle of the complex plane.
Its eigenspaces are orthogonal.
U can be written as U = e^{iH}, where e indicates the matrix exponential, i is the imaginary unit, and H is a Hermitian matrix.
For any nonnegative integer n, the set of all n × n unitary matrices with matrix multiplication forms a group, called the unitary group U(n).
Any square matrix with unit Euclidean norm is the average of two unitary matrices.
Equivalent conditions
If U is a square, complex matrix, then the following conditions are equivalent:
U is unitary.
U* is unitary.
U is invertible with U^(−1) = U*.
The columns of U form an orthonormal basis of C^n with respect to the usual inner product. In other words, U* U = I.
The rows of U form an orthonormal basis of C^n with respect to the usual inner product. In other words, U U* = I.
U is an isometry with respect to the usual norm. That is, ‖Ux‖ = ‖x‖ for all x ∈ C^n, where ‖x‖ = √(x* x).
U is a normal matrix (equivalently, there is an orthonormal basis formed by eigenvectors of U) with eigenvalues lying on the unit circle.
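A small sketch checking the condition U* U = I entrywise, i.e. that the columns are orthonormal (our own illustration, not a library API):

```python
def conjugate_transpose(U):
    """U* as a list of rows; entries are coerced to complex."""
    n, m = len(U), len(U[0])
    return [[complex(U[j][i]).conjugate() for j in range(n)] for i in range(m)]

def is_unitary(U, tol=1e-12):
    """Check the condition U* U = I entrywise (columns orthonormal)."""
    n = len(U)
    Ustar = conjugate_transpose(U)
    for i in range(n):
        for j in range(n):
            entry = sum(Ustar[i][k] * U[k][j] for k in range(n))
            if abs(entry - (1 if i == j else 0)) > tol:
                return False
    return True

# Columns (1, i)/sqrt(2) and (1, -i)/sqrt(2) are orthonormal:
U = [[2 ** -0.5, 2 ** -0.5], [1j * 2 ** -0.5, -1j * 2 ** -0.5]]
```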
Elementary constructions
2 × 2 unitary matrix
One general expression of a 2 × 2 unitary matrix is
U = [[a, b], [−e^{iφ} b*, e^{iφ} a*]], with |a|² + |b|² = 1,
which depends on 4 real parameters (the phase of a, the phase of b, the relative magnitude between a and b, and the angle φ). The form is configured so the determinant of such a matrix is
det U = e^{iφ}.
The sub-group of those elements with det U = 1 is called the special unitary group SU(2).
Among several alternative forms, the matrix can be written in this form:
where the angles can take any values.
By introducing φ₁ and φ₂, the matrix has the following factorization:
This expression highlights the relation between unitary matrices and orthogonal matrices of angle .
Another factorization is
Many other factorizations of a unitary matrix in basic matrices are possible.
See also
Hermitian matrix
Skew-Hermitian matrix
Matrix decomposition
Orthogonal group O(n)
Special orthogonal group SO(n)
Orthogonal matrix
Semi-orthogonal matrix
Quantum logic gate
Special Unitary group SU(n)
Symplectic matrix
Unitary group U(n)
Unitary operator
References |
https://en.wikipedia.org/wiki/Short%20five%20lemma | In mathematics, especially homological algebra and other applications of abelian category theory, the short five lemma is a special case of the five lemma.
It states that for the following commutative diagram (in any abelian category, or in the category of groups), if the rows are short exact sequences, and if g and h are isomorphisms, then f is an isomorphism as well.
It follows immediately from the five lemma.
The essence of the lemma can be summarized as follows: if you have a homomorphism f from an object B to an object B′, and this homomorphism induces an isomorphism from a subobject A of B to a subobject A′ of B′ and also an isomorphism from the factor object B/A to B′/A′, then f itself is an isomorphism. Note however that the existence of f (such that the diagram commutes) has to be assumed from the start; two objects B and B′ that simply have isomorphic sub- and factor objects need not themselves be isomorphic (for example, in the category of abelian groups, B could be the cyclic group of order four and B′ the Klein four-group).
References
Homological algebra
Lemmas in category theory |
https://en.wikipedia.org/wiki/Riemann%20curvature%20tensor | In the mathematical field of differential geometry, the Riemann curvature tensor or Riemann–Christoffel tensor (after Bernhard Riemann and Elwin Bruno Christoffel) is the most common way used to express the curvature of Riemannian manifolds. It assigns a tensor to each point of a Riemannian manifold (i.e., it is a tensor field). It is a local invariant of Riemannian metrics which measures the failure of the second covariant derivatives to commute. A Riemannian manifold has zero curvature if and only if it is flat, i.e. locally isometric to the Euclidean space. The curvature tensor can also be defined for any pseudo-Riemannian manifold, or indeed any manifold equipped with an affine connection.
It is a central mathematical tool in the theory of general relativity, the modern theory of gravity, and the curvature of spacetime is in principle observable via the geodesic deviation equation. The curvature tensor represents the tidal force experienced by a rigid body moving along a geodesic in a sense made precise by the Jacobi equation.
Definition
Let (M, g) be a Riemannian or pseudo-Riemannian manifold, and 𝔛(M) be the space of all vector fields on M. We define the Riemann curvature tensor as a map 𝔛(M) × 𝔛(M) × 𝔛(M) → 𝔛(M) by the following formula, where ∇ is the Levi-Civita connection:
R(X, Y)Z = ∇_X ∇_Y Z − ∇_Y ∇_X Z − ∇_[X,Y] Z,
or equivalently
R(X, Y) = [∇_X, ∇_Y] − ∇_[X,Y],
where [X, Y] is the Lie bracket of vector fields and [∇_X, ∇_Y] is a commutator of differential operators. It turns out that the right-hand side actually only depends on the value of the vector fields X, Y, Z at a given point, which is notable since the covariant derivative of a vector field also depends on the field values in a neighborhood of the point. Hence, R is a (1, 3)-tensor field. For fixed X, Y, the linear transformation Z ↦ R(X, Y)Z is also called the curvature transformation or endomorphism. Occasionally, the curvature tensor is defined with the opposite sign.
The curvature tensor measures noncommutativity of the covariant derivative, and as such is the integrability obstruction for the existence of an isometry with Euclidean space (called, in this context, flat space).
Since the Levi-Civita connection is torsion-free, the curvature can also be expressed in terms of the second covariant derivative
∇²_{X,Y} Z = ∇_X ∇_Y Z − ∇_{∇_X Y} Z
as
R(X, Y)Z = ∇²_{X,Y} Z − ∇²_{Y,X} Z.
Thus, the curvature tensor measures the noncommutativity of the second covariant derivative. In abstract index notation, the Riemann curvature tensor is also the commutator of the covariant derivative of an arbitrary covector with itself:
This formula is often called the Ricci identity. This is the classical method used by Ricci and Levi-Civita to obtain an expression for the Riemann curvature tensor. This identity can be generalized to get the commutators for two covariant derivatives of arbitrary tensors as follows
This formula also applies to tensor densities without alteration, because for the Levi-Civita (not generic) connection one gets:
where
It is sometimes convenient to also define the purely covariant version of the curvature tensor by lowering the index:
R_{abcd} = g_{ae} R^e_{bcd}.
Geometric meaning
Informally
One can |
https://en.wikipedia.org/wiki/Skurup%20Municipality | Skurup Municipality (Skurups kommun) is a municipality in Skåne County in southern Sweden. Its seat is located in the town Skurup. It is considered part of Greater Malmö by Statistics Sweden.
The present municipality was formed in 1971 when the market town (köping) of Skurup was merged with Rydsgård and Vemmenhög.
Localities
There are four urban areas (each also called a tätort, or locality) in Skurup Municipality. In the table they are listed according to the size of the population as of December 31, 2005. The municipal seat is in bold characters.
Places of interest
Svaneholm Castle
Twin cities
Maszlow, Poland
Franzburg-Richtenberg, Germany
References
Statistics Sweden
External links
Skurup - Official site
Coat of arms
Municipalities of Skåne County |
https://en.wikipedia.org/wiki/Helix%2C%20Oregon | Helix is a city in Umatilla County, Oregon, United States. The population was 184 at the 2010 census. It is part of the Pendleton–Hermiston Micropolitan Statistical Area.
History
The community was originally to be named Oxford, but postal authorities declined that option when its post office was to be named in 1880. The citizens then decided on "Helix", a geometry term and a part of the ear, because a resident had recently had ear surgery. The author of Oregon Geographic Names, Lewis A. McArthur, had his doubts about the story.
Geography
According to the United States Census Bureau, the city has a total area of , all of it land.
Demographics
2010 census
As of the census of 2010, there were 184 people, 55 households, and 46 families living in the city. The population density was . There were 68 housing units at an average density of . The racial makeup of the city was 81.0% White, 8.7% Native American, 3.8% from other races, and 6.5% from two or more races. Hispanic or Latino of any race were 6.0% of the population.
There were 55 households, of which 43.6% had children under the age of 18 living with them, 63.6% were married couples living together, 10.9% had a female householder with no husband present, 9.1% had a male householder with no wife present, and 16.4% were non-families. 9.1% of all households were made up of individuals, and 1.8% had someone living alone who was 65 years of age or older. The average household size was 3.35 and the average family size was 3.54.
The median age in the city was 35.7 years. 34.8% of residents were under the age of 18; 6.5% were between the ages of 18 and 24; 26.1% were from 25 to 44; 21.3% were from 45 to 64; and 11.4% were 65 years of age or older. The gender makeup of the city was 51.6% male and 48.4% female.
2000 census
As of the census of 2000, there were 183 people, 62 households, and 46 families living in the city. The population density was . There were 68 housing units at an average density of . The racial makeup of the city was 94.54% White, 2.19% Native American, 0.55% from other races, and 2.73% from two or more races. Hispanic or Latino of any race were 2.73% of the population.
There were 62 households, out of which 48.4% had children under the age of 18 living with them, 62.9% were married couples living together, 8.1% had a female householder with no husband present, and 24.2% were non-families. 19.4% of all households were made up of individuals, and 8.1% had someone living alone who was 65 years of age or older. The average household size was 2.95 and the average family size was 3.47.
In the city, the population was spread out, with 38.8% under the age of 18, 3.8% from 18 to 24, 31.1% from 25 to 44, 14.8% from 45 to 64, and 11.5% who were 65 years of age or older. The median age was 30 years. For every 100 females, there were 105.6 males. For every 100 females age 18 and over, there were 93.1 males.
The median income for a household in the city was $32,292, and the median income for a fa |
https://en.wikipedia.org/wiki/Tuple | In mathematics, a tuple is a finite sequence or ordered list of numbers or, more generally, mathematical objects, which are called the elements of the tuple. An n-tuple is a tuple of n elements, where n is a non-negative integer. There is only one 0-tuple, called the empty tuple. A 1-tuple and a 2-tuple are commonly called respectively a singleton and an ordered pair.
Tuples may be formally defined from ordered pairs by recurrence, starting from ordered pairs; indeed, an n-tuple can be identified with the ordered pair of its first n − 1 elements and its nth element.
Tuples are usually written by listing the elements within parentheses "( )", separated by a comma and a space; a 5-tuple, for example, is written by listing its five elements in this way. Sometimes other symbols are used to surround the elements, such as square brackets "[ ]" or angle brackets "⟨ ⟩". Braces "{ }" are used to specify arrays in some programming languages but not in mathematical expressions, as they are the standard notation for sets. The term tuple can often occur when discussing other mathematical objects, such as vectors.
In computer science, tuples come in many forms. Most typed functional programming languages implement tuples directly as product types, tightly associated with algebraic data types, pattern matching, and destructuring assignment. Many programming languages offer an alternative to tuples, known as record types, featuring unordered elements accessed by label. A few programming languages combine ordered tuple product types and unordered record types into a single construct, as in C structs and Haskell records. Relational databases may formally identify their rows (records) as tuples.
Tuples also occur in relational algebra; when programming the semantic web with the Resource Description Framework (RDF); in linguistics; and in philosophy.
Etymology
The term originated as an abstraction of the sequence: single, couple/double, triple, quadruple, quintuple, sextuple, septuple, octuple, ..., ‑tuple, ..., where the prefixes are taken from the Latin names of the numerals. The unique 0-tuple is called the null tuple or empty tuple. A 1‑tuple is called a single (or singleton), a 2‑tuple is called an ordered pair or couple, and a 3‑tuple is called a triple (or triplet). The number can be any nonnegative integer. For example, a complex number can be represented as a 2‑tuple of reals, a quaternion can be represented as a 4‑tuple, an octonion can be represented as an 8‑tuple, and a sedenion can be represented as a 16‑tuple.
Although these uses treat ‑uple as the suffix, the original suffix was ‑ple as in "triple" (three-fold) or "decuple" (ten‑fold). This originates from medieval Latin plus (meaning "more") related to Greek ‑πλοῦς, which replaced the classical and late antique ‑plex (meaning "folded"), as in "duplex".
Properties
The general rule for the identity of two n-tuples is
(a₁, a₂, …, aₙ) = (b₁, b₂, …, bₙ) if and only if a₁ = b₁, a₂ = b₂, …, aₙ = bₙ.
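Python's built-in tuples follow exactly this componentwise identity rule, which makes the contrast with sets easy to demonstrate:

```python
# Equality is componentwise, so order and multiplicity both matter:
assert (1, 2, 3) == (1, 2, 3)
assert (1, 2) != (2, 1)        # order matters, unlike for sets
assert (1, 1, 2) != (1, 2)     # repetition matters
assert {1, 1, 2} == {1, 2}     # ...whereas as sets these are equal
```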
Thus a tuple has properties that distinguish it from a set:
A tuple may contain multiple instances of the same |
https://en.wikipedia.org/wiki/Parallelogram | In Euclidean geometry, a parallelogram is a simple (non-self-intersecting) quadrilateral with two pairs of parallel sides. The opposite or facing sides of a parallelogram are of equal length and the opposite angles of a parallelogram are of equal measure. The congruence of opposite sides and opposite angles is a direct consequence of the Euclidean parallel postulate and neither condition can be proven without appealing to the Euclidean parallel postulate or one of its equivalent formulations.
By comparison, a quadrilateral with at least one pair of parallel sides is a trapezoid in American English or a trapezium in British English.
The three-dimensional counterpart of a parallelogram is a parallelepiped.
The etymology (in Greek παραλληλ-όγραμμον, parallēl-ógrammon, a shape "of parallel lines") reflects the definition.
Special cases
Rectangle – A parallelogram with four angles of equal size (right angles).
Rhombus – A parallelogram with four sides of equal length. Any parallelogram that is neither a rectangle nor a rhombus was traditionally called a rhomboid but this term is not used in modern mathematics.
Square – A parallelogram with four sides of equal length and angles of equal size (right angles).
Characterizations
A simple (non-self-intersecting) quadrilateral is a parallelogram if and only if any one of the following statements is true:
Two pairs of opposite sides are parallel (by definition).
Two pairs of opposite sides are equal in length.
Two pairs of opposite angles are equal in measure.
The diagonals bisect each other.
One pair of opposite sides is parallel and equal in length.
Adjacent angles are supplementary.
Each diagonal divides the quadrilateral into two congruent triangles.
The sum of the squares of the sides equals the sum of the squares of the diagonals. (This is the parallelogram law.)
It has rotational symmetry of order 2.
The sum of the distances from any interior point to the sides is independent of the location of the point. (This is an extension of Viviani's theorem.)
There is a point X in the plane of the quadrilateral with the property that every straight line through X divides the quadrilateral into two regions of equal area.
Thus all parallelograms have all the properties listed above, and conversely, if just one of these statements is true in a simple quadrilateral, then it is a parallelogram.
Other properties
Opposite sides of a parallelogram are parallel (by definition) and so will never intersect.
The area of a parallelogram is twice the area of a triangle created by one of its diagonals.
The area of a parallelogram is also equal to the magnitude of the vector cross product of two adjacent sides.
Any line through the midpoint of a parallelogram bisects the area.
Any non-degenerate affine transformation takes a parallelogram to another parallelogram.
A parallelogram has rotational symmetry of order 2 (through 180°) (or order 4 if a square). If it also has exactly two lines of reflectional symmetry then it must be a rhombus or a rectangle.
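The area statements above can be illustrated with vectors; a short Python sketch (the side vectors are hypothetical):

```python
# Parallelogram spanned at A by adjacent side vectors u = AB and v = AD.
A, B, D = (0, 0), (4, 1), (2, 3)
u = (B[0] - A[0], B[1] - A[1])
v = (D[0] - A[0], D[1] - A[1])

# Area equals the magnitude of the 2D cross product of two adjacent sides.
area = abs(u[0] * v[1] - u[1] * v[0])  # |4*3 - 1*2| = 10

# A diagonal splits the parallelogram into two congruent triangles,
# so the parallelogram's area is twice the triangle's area.
tri = abs(u[0] * v[1] - u[1] * v[0]) / 2
assert area == 2 * tri
```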
https://en.wikipedia.org/wiki/Cholesky%20decomposition | In linear algebra, the Cholesky decomposition or Cholesky factorization is a decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose, which is useful for efficient numerical solutions, e.g., Monte Carlo simulations. It was discovered by André-Louis Cholesky for real matrices, and posthumously published in 1924.
When it is applicable, the Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations.
Statement
The Cholesky decomposition of a Hermitian positive-definite matrix A is a decomposition of the form A = LL*,
where L is a lower triangular matrix with real and positive diagonal entries, and L* denotes the conjugate transpose of L. Every Hermitian positive-definite matrix (and thus also every real-valued symmetric positive-definite matrix) has a unique Cholesky decomposition.
The converse holds trivially: if A can be written as LL* for some invertible L, lower triangular or otherwise, then A is Hermitian and positive definite.
When A is a real matrix (hence symmetric positive-definite), the factorization may be written A = LLᵀ,
where L is a real lower triangular matrix with positive diagonal entries.
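For real matrices the factor L can be built row by row with the standard Cholesky–Banachiewicz recurrence; a pure-Python sketch (the 3×3 matrix is a common textbook-style example, not taken from this text):

```python
import math

def cholesky(A):
    """Return lower triangular L with A = L L^T, for A symmetric positive-definite."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                # Diagonal entries are real and positive for positive-definite A.
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

A = [[4, 12, -16], [12, 37, -43], [-16, -43, 98]]
L = cholesky(A)
# L = [[2, 0, 0], [6, 1, 0], [-8, 5, 3]]
```

Multiplying L by its transpose recovers A, and the positive diagonal makes the factor unique, as stated above.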
Positive semidefinite matrices
If a Hermitian matrix A is only positive semidefinite, instead of positive definite, then it still has a decomposition of the form A = LL*, where the diagonal entries of L are allowed to be zero.
The decomposition need not be unique, for example:
However, if the rank of A is r, then there is a unique lower triangular L with exactly r positive diagonal elements and n−r columns containing all zeroes.
Alternatively, the decomposition can be made unique when a pivoting choice is fixed. Formally, if A is an n × n positive semidefinite matrix of rank r, then there is at least one permutation matrix P such that P A Pᵀ has a unique decomposition of the form P A Pᵀ = L L* with
L = [ L1 0 ; L2 0 ],
where L1 is an r × r lower triangular matrix with positive diagonal.
LDL decomposition
A closely related variant of the classical Cholesky decomposition is the LDL decomposition, A = LDL*,
where L is a lower unit triangular (unitriangular) matrix, and D is a diagonal matrix.
That is, the diagonal elements of L are required to be 1 at the cost of introducing an additional diagonal matrix D in the decomposition.
The main advantage is that the LDL decomposition can be computed and used with essentially the same algorithms, but avoids extracting square roots.
For this reason, the LDL decomposition is often called the square-root-free Cholesky decomposition. For real matrices, the factorization has the form A = LDLᵀ and is often referred to as the LDLᵀ decomposition (or LDL′ decomposition). It is reminiscent of the eigendecomposition of real symmetric matrices, A = QΛQᵀ, but is quite different in practice because Λ and D are not similar matrices.
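The square-root-free variant can be computed with essentially the same loop; a minimal pure-Python sketch (same illustrative input as is commonly used for Cholesky examples):

```python
def ldl(A):
    """Return (L, D) with A = L D L^T, L unit lower triangular, D diagonal."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    D = [0.0] * n
    for i in range(n):
        for j in range(i):
            s = sum(L[i][k] * L[j][k] * D[k] for k in range(j))
            L[i][j] = (A[i][j] - s) / D[j]
        L[i][i] = 1.0  # unit diagonal; no square roots are taken anywhere
        D[i] = A[i][i] - sum(L[i][k] ** 2 * D[k] for k in range(i))
    return L, D

A = [[4, 12, -16], [12, 37, -43], [-16, -43, 98]]
L, D = ldl(A)
# D = [4, 1, 9]; L = [[1, 0, 0], [3, 1, 0], [-4, 5, 1]]
```

Scaling the columns of this L by the square roots of D recovers the classical Cholesky factor, which is the relation between the two decompositions.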
The LDL decomposition is related to the classical Cholesky decomposition of the form LL* as follows:
Conversely, g |
https://en.wikipedia.org/wiki/Fort%20Bliss | Fort Bliss is a United States Army post in New Mexico and Texas, with its headquarters in El Paso, Texas. Named in honor of LTC William Bliss (1815–1853), a mathematics professor who was the son-in-law of President Zachary Taylor, Ft. Bliss has an area of about ; it is the largest installation in FORSCOM (United States Army Forces Command) and second-largest in the Army overall (the largest being the adjacent White Sands Missile Range). The portion of the post located in El Paso County, Texas, is a census-designated place with a population of 8,591 as of the time of the 2010 census. Fort Bliss provides the largest contiguous tract () of restricted airspace in the Continental United States, used for missile and artillery training and testing, and at 992,000 acres boasts the largest maneuver area (ahead of the National Training Center, which has 642,000 acres). The garrison's land area is accounted at 1.12 million acres, ranging to the boundaries of the Lincoln National Forest and White Sands Missile Range in New Mexico. Fort Bliss also includes the Castner Range National Monument.
Units
Fort Bliss is home to the 1st Armored Division, which returned to US soil in 2011 after 40 years in Germany. The division is supported by the 1st Armored Division Sustainment Brigade. The installation is also home to Joint Task Force North (JTF North), a joint service command. JTF North supports federal law enforcement agencies in the conduct of counterdrug/counter-transnational organized crime operations; it facilitates DoD training in the United States Northern Command (USNORTHCOM) area of responsibility, to disrupt transnational criminal organizations and deter their freedom of action in order to protect the homeland and increase DoD unit readiness. The 32nd Army Air and Missile Defense Command (AAMDC) is a theater-level Army air and missile defense multi-component organization with a worldwide, 72-hour deployment mission. It is the Army Forces Command and Joint Force Land Component Commanders' (ARFOR / JFLCC) organization that performs critical theater air and missile defense planning, integration, coordination, and execution functions. The Joint Modernization Command (JMC) plans, prepares, and executes Joint Warfighting Assessments and other concept and capability assessments, provides objective analysis and feasible recommendations to enhance Multi-Domain Command and Control and inform Army Modernization decisions. On order, JMC conducts directed assessments in support of the Cross-Functional Teams of Army Futures Command.
1st Armored Division
1st Armored Division units include: 1st Brigade Combat Team, 1st Armored Division ("Ready First") is prepared to deploy, conduct decisive and sustainable land operations in support of a division, Joint Task Force, or Multinational Force. The Brigade will be trained and ready to conduct decisive action as part of Combined Arms Maneuver or Wide Area Security operations IOT disrupt or destroy enemy military forces, control |
https://en.wikipedia.org/wiki/Thorp%2C%20Washington | Thorp ( ) is an unincorporated community and census-designated place (CDP) in Kittitas County, Washington, United States. In 2015, the population was 317 according to statistics compiled by Data USA.
The town of Thorp is east of Seattle, northwest of Ellensburg, and southeast of Cle Elum. It is located at the narrow west end of the Kittitas Valley, where high elevation forests of the Cascade Range give way to cattle ranches surrounded by farmlands noted for timothy hay, alfalfa, vegetables, and fruit production.
Thorp is named for Fielden Mortimer Thorp, recognized as the first permanent white settler in the Kittitas Valley. He established a homestead at the approach to Taneum Canyon near the present-day town in 1868. Klála, an ancient Native American village and the largest indigenous settlement in the Kittitas Valley at the arrival of the first white settlers, was located about one mile above the current town site.
Geography
Thorp is located in central Kittitas County at (47.068006, -120.672687). According to the United States Census Bureau, the CDP has a total area of , all of it land.
The town site of Thorp is above the flood plain of the upper Yakima River at an elevation of . It is situated near the river's west bank directly opposite the Hayward Hill slide area and Clark Flats, near the southeastern approach to the Yakima River canyon at the foot of Thorp Prairie. To the west of the town is Taneum Canyon, and to the northwest are Elk Heights, Morrison Canyon and the Sunlight Waters private residential subdivision. Ellensburg, the county seat, is southeast of Thorp.
Northwest of Thorp at the junction of SR 10 and Thorp Highway, the Yakima River emerges from a canyon parallel to a basalt flow, the uppermost layers of which have been dated to 10.5 million years. The Thorp Prairie sits atop the basalt flows and ends at a deep canyon of Miocene columnar basalt structures carved by Swauk Creek whose headwaters are at Blewett Pass along US 97 to the north. The Thorp Prairie deposits were also delivered by the Thorp Glacial episode.
Topography
North and northeast of the town of Thorp along the Yakima River channel is the gradual upward lift of the Thorp Drift, marked by an elevation change due to the incline onto the terminal moraine that marks the furthest advance of the Thorp Glacial stage. Here the Thorp Gravels, which are named for the town of Thorp and the Thorp Glacial episode, are exposed along the ancient river channel in what is known as the "Slide Area". The gravels were formed at the terminus of the Thorp Glacial advance approximately 600,000 years ago.
The Thorp Gravels themselves are believed to be between 3 and 4 million years old. The whole structure is composed of individually layered belts of gravel and sand which are not well consolidated, continually weather, and are prone to continuing erosion and landslides averaging 30 degrees. The area is rich with wildlife, including bald eagles and osprey who hunt for prey |
https://en.wikipedia.org/wiki/Five%20lemma | In mathematics, especially homological algebra and other applications of abelian category theory, the five lemma is an important and widely used lemma about commutative diagrams.
The five lemma is not only valid for abelian categories but also works in the category of groups, for example.
The five lemma can be thought of as a combination of two other theorems, the four lemmas, which are dual to each other.
Statements
Consider the following commutative diagram in any abelian category (such as the category of abelian groups or the category of vector spaces over a given field) or in the category of groups.
The five lemma states that, if the rows are exact, m and p are isomorphisms, l is an epimorphism, and q is a monomorphism, then n is also an isomorphism.
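The diagram in question can be sketched in LaTeX (the tikz-cd package is assumed; the letters follow the proof given below):

```latex
% Five-lemma diagram: exact rows, vertical maps l, m, n, p, q.
% Requires \usepackage{tikz-cd}.
\begin{tikzcd}
A \arrow[r, "f"] \arrow[d, "l"] & B \arrow[r, "g"] \arrow[d, "m"] &
C \arrow[r, "h"] \arrow[d, "n"] & D \arrow[r, "j"] \arrow[d, "p"] &
E \arrow[d, "q"] \\
A' \arrow[r, "r"] & B' \arrow[r, "s"] & C' \arrow[r, "t"] & D' \arrow[r, "u"] & E'
\end{tikzcd}
```

Here l is required to be an epimorphism, q a monomorphism, and m and p isomorphisms; the conclusion is that the middle map n is an isomorphism.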
The two four-lemmas state:
Proof
The method of proof we shall use is commonly referred to as diagram chasing. We shall prove the five lemma by individually proving each of the two four lemmas.
To perform diagram chasing, we assume that we are in a category of modules over some ring, so that we may speak of elements of the objects in the diagram and think of the morphisms of the diagram as functions (in fact, homomorphisms) acting on those elements.
Then a morphism is a monomorphism if and only if it is injective, and it is an epimorphism if and only if it is surjective.
Similarly, to deal with exactness, we can think of kernels and images in a function-theoretic sense.
The proof will still apply to any (small) abelian category because of Mitchell's embedding theorem, which states that any small abelian category can be represented as a category of modules over some ring.
For the category of groups, just turn all additive notation below into multiplicative notation, and note that commutativity of abelian groups is never used.
So, to prove (1), assume that m and p are surjective and q is injective.
Let c′ be an element of C′.
Since p is surjective, there exists an element d in D with p(d) = t(c′).
By commutativity of the diagram, u(p(d)) = q(j(d)).
Since im t = ker u by exactness, 0 = u(t(c′)) = u(p(d)) = q(j(d)).
Since q is injective, j(d) = 0, so d is in ker j = im h.
Therefore, there exists c in C with h(c) = d.
Then t(n(c)) = p(h(c)) = t(c′). Since t is a homomorphism, it follows that t(c′ − n(c)) = 0.
By exactness, c′ − n(c) is in the image of s, so there exists b′ in B′ with s(b′) = c′ − n(c).
Since m is surjective, we can find b in B such that b′ = m(b).
By commutativity, n(g(b)) = s(m(b)) = c′ − n(c).
Since n is a homomorphism, n(g(b) + c) = n(g(b)) + n(c) = c′ − n(c) + n(c) = c′.
Therefore, n is surjective.
Then, to prove (2), assume that m and p are injective and l is surjective.
Let c in C be such that n(c) = 0.
t(n(c)) is then 0.
By commutativity, p(h(c)) = 0.
Since p is injective, h(c) = 0.
By exactness, there is an element b of B such that g(b) = c.
By commutativity, s(m(b)) = n(g(b)) = n(c) = 0.
By exactness, there is then an element a′ of A′ such |
https://en.wikipedia.org/wiki/Splitting%20lemma | In mathematics, and more specifically in homological algebra, the splitting lemma states that in any abelian category, the following statements are equivalent for a short exact sequence
0 → A → B → C → 0
with maps q : A → B and r : B → C:
1. Left split: there exists a morphism t : B → A such that t ∘ q is the identity on A.
2. Right split: there exists a morphism u : C → B such that r ∘ u is the identity on C.
3. Direct sum: B is isomorphic to the direct sum of A and C, with q corresponding to the natural injection of A and r corresponding to the natural projection onto C.
If any of these statements holds, the sequence is called a split exact sequence, and the sequence is said to split.
In the above short exact sequence, where the sequence splits, it allows one to refine the first isomorphism theorem, which states that:
C ≅ B/ker r = B/q(A) (i.e., C is isomorphic to the coimage of r or the cokernel of q)
to:
B = q(A) ⊕ u(C) ≅ A ⊕ C
where the first isomorphism theorem is then just the projection onto C.
It is a categorical generalization of the rank–nullity theorem (in the form V ≅ ker T ⊕ im T) in linear algebra.
Proof for the category of abelian groups
First, to show that 3. implies both 1. and 2., we assume 3. and take t as the natural projection of the direct sum onto A, and take u as the natural injection of C into the direct sum.
To prove that 1. implies 3., first note that any member of B is in the set ker t + im q. This follows since for all b in B, b = (b − q(t(b))) + q(t(b)); q(t(b)) is in im q, and b − q(t(b)) is in ker t, since
t(b − q(t(b))) = t(b) − t(q(t(b))) = t(b) − t(b) = 0.
Next, the intersection of im q and ker t is 0, since if there exists a in A such that q(a) = b and t(b) = 0, then 0 = t(q(a)) = a; and therefore, b = q(a) = q(0) = 0.
This proves that B is the direct sum of im q and ker t. So, for all b in B, b can be uniquely identified by some a in A, k in ker t, such that b = q(a) + k.
By exactness, im q = ker r. The subsequence B → C → 0 implies that r is onto; therefore for any c in C there exists some b = q(a) + k such that c = r(b) = r(q(a) + k) = r(k). Therefore, for any c in C, there exists k in ker t such that c = r(k), and r(ker t) = C.
If r(k) = 0, then k is in im q; since the intersection of im q and ker t is 0, then k = 0. Therefore, the restriction of r to ker t is an isomorphism; and ker t is isomorphic to C.
Finally, im q is isomorphic to A due to the exactness of 0 → A → B; so B is isomorphic to the direct sum of A and C, which proves (3).
To show that 2. implies 3., we follow a similar argument. Any member of B is in the set ker r + im u; since for all b in B, b = (b − u(r(b))) + u(r(b)), where b − u(r(b)) is in ker r, since
r(b − u(r(b))) = r(b) − r(u(r(b))) = r(b) − r(b) = 0,
and u(r(b)) is in im u. The intersection of ker r and im u is 0, since if r(b) = 0 and b = u(c), then 0 = r(u(c)) = c, and so b = u(0) = 0.
By exactness, im q = ker r, and since q is an injection, im q is isomorphic to A, so A is isomorphic to ker r. Since r ∘ u is a bijection, u is an injection, and thus im u is isomorphic to C. So B is again the direct sum of ker r and im u.
An alternative "abstract nonsense" proof of the splitting lemma may be formulated entirely in category theoretic terms.
Non-abelian groups
In the form stated here, the splitting lemma does not hold in the full category of groups, which is not an abelian category.
Partially true
It is partially true: if a short exact sequence of groups is left split or a direct sum (1. or 3.), then all of the conditions hold. For a direct sum this is clear, as one can inject from or project to the summands. For a left split sequence, the map t × r : B → A × C gives an isomorphism, so B is a direct sum (3.), and thus inverting the isomorphism and composing with the natural injection C → A × C gives an injection C → B splitting r (2.).
However, if a short exact sequence of groups is right split (2.), then it need not be left split or a direct sum (neither 1. nor 3. follows): the problem is that the image of the right splitting need not be normal. What is true in this case is that B is a semidirect product, though not in general a direct product.
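A standard counterexample, sketched here in LaTeX (the choice of groups is the usual illustration, not taken from the text above), is the sign sequence on the symmetric group S3:

```latex
% Right split but not left split:
1 \longrightarrow A_3 \longrightarrow S_3
  \xrightarrow{\operatorname{sgn}} \mathbb{Z}/2\mathbb{Z} \longrightarrow 1
```

Sending the nontrivial element of Z/2Z to any transposition defines a section of sgn, so the sequence is right split; but no left splitting exists, since S3 is non-abelian while A3 × Z/2Z is abelian, so S3 is a semidirect product of A3 and Z/2Z rather than their direct product.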
https://en.wikipedia.org/wiki/Chain%20complex | In mathematics, a chain complex is an algebraic structure that consists of a sequence of abelian groups (or modules) and a sequence of homomorphisms between consecutive groups such that the image of each homomorphism is included in the kernel of the next. Associated to a chain complex is its homology, which describes how the images are included in the kernels.
A cochain complex is similar to a chain complex, except that its homomorphisms are in the opposite direction. The homology of a cochain complex is called its cohomology.
In algebraic topology, the singular chain complex of a topological space X is constructed using continuous maps from a simplex to X, and the homomorphisms of the chain complex capture how these maps restrict to the boundary of the simplex. The homology of this chain complex is called the singular homology of X, and is a commonly used invariant of a topological space.
Chain complexes are studied in homological algebra, but are used in several areas of mathematics, including abstract algebra, Galois theory, differential geometry and algebraic geometry. They can be defined more generally in abelian categories.
Definitions
A chain complex is a sequence of abelian groups or modules ..., A_0, A_1, A_2, A_3, A_4, ... connected by homomorphisms (called boundary operators or differentials) d_n : A_n → A_(n−1), such that the composition of any two consecutive maps is the zero map. Explicitly, the differentials satisfy d_n ∘ d_(n+1) = 0, or with indices suppressed, d² = 0. The complex may be written out as follows.
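The condition that consecutive boundary maps compose to zero can be verified concretely for the simplicial chain complex of a solid triangle; a small Python sketch (the matrix conventions are illustrative, not from this text):

```python
# Chain complex of the full triangle [v0, v1, v2], simplices oriented by vertex order:
# C2 (one 2-simplex) --d2--> C1 (edges [01], [02], [12]) --d1--> C0 (vertices)
d2 = [[1], [-1], [1]]          # boundary of [012] = [12] - [02] + [01]
d1 = [[-1, -1,  0],
      [ 1,  0, -1],
      [ 0,  1,  1]]            # boundary of [01] = v1 - v0, etc.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# The defining chain-complex condition: d1 composed with d2 is the zero map.
assert matmul(d1, d2) == [[0], [0], [0]]
```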
The cochain complex is the dual notion to a chain complex. It consists of a sequence of abelian groups or modules ..., A^0, A^1, A^2, A^3, A^4, ... connected by homomorphisms d^n : A^n → A^(n+1) satisfying d^(n+1) ∘ d^n = 0. The cochain complex may be written out in a similar fashion to the chain complex.
The index n in either A_n or A^n is referred to as the degree (or dimension). The difference between chain and cochain complexes is that, in chain complexes, the differentials decrease dimension, whereas in cochain complexes they increase dimension. All the concepts and definitions for chain complexes apply to cochain complexes, except that they will follow this different convention for dimension, and often terms will be given the prefix co-. In this article, definitions will be given for chain complexes when the distinction is not required.
A bounded chain complex is one in which almost all the An are 0; that is, a finite complex extended to the left and right by 0. An example is the chain complex defining the simplicial homology of a finite simplicial complex. A chain complex is bounded above if all modules above some fixed degree N are 0, and is bounded below if all modules below some fixed degree are 0. Clearly, a complex is bounded both above and below if and only if the complex is bounded.
The elements of the individual groups of a (co)chain complex are called (co)chains. The elements in the kernel of d are called (co)cycles (or closed elements), and the elements in the image of d are called (co)boundaries (or exact elements).
https://en.wikipedia.org/wiki/Commutative%20diagram | In mathematics, and especially in category theory, a commutative diagram is a diagram such that all directed paths in the diagram with the same start and endpoints lead to the same result. It is said that commutative diagrams play the role in category theory that equations play in algebra.
Description
A commutative diagram often consists of three parts:
objects (also known as vertices)
morphisms (also known as arrows or edges)
paths or composites
Arrow symbols
In algebra texts, the type of morphism can be denoted with different arrow usages:
A monomorphism may be labeled with a ↪ or a ↣.
An epimorphism may be labeled with a ↠.
An isomorphism may be labeled with a ⥲ (an arrow topped with a tilde).
The dashed arrow typically represents the claim that the indicated morphism exists whenever the rest of the diagram holds; the arrow may be optionally labeled ∃.
If the morphism is in addition unique, then the dashed arrow may be labeled ∃!.
The meanings of different arrows are not entirely standardized: the arrows used for monomorphisms, epimorphisms, and isomorphisms are also used for injections, surjections, and bijections, as well as the cofibrations, fibrations, and weak equivalences in a model category.
Verifying commutativity
Commutativity makes sense for a polygon of any finite number of sides (including just 1 or 2), and a diagram is commutative if every polygonal subdiagram is commutative.
Note that a diagram may be non-commutative, i.e., the composition of different paths in the diagram may not give the same result.
Examples
Example 1
In the left diagram, which expresses the first isomorphism theorem, commutativity of the triangle means that the map factors through the quotient: f = f̃ ∘ π, where π is the quotient map and f̃ the induced map. In the right diagram, commutativity of the square means h ∘ f = k ∘ g.
Example 2
In order for the diagram below to commute, three equalities must be satisfied:
Here, since the first equality follows from the last two, it suffices to show that (2) and (3) are true in order for the diagram to commute. However, since equality (3) does not in general follow from the other two, it is not enough to have only equalities (1) and (2) to show that the diagram commutes.
Diagram chasing
Diagram chasing (also called diagrammatic search) is a method of mathematical proof used especially in homological algebra, where one establishes a property of some morphism by tracing the elements of a commutative diagram. A proof by diagram chasing typically involves the formal use of the properties of the diagram, such as injective or surjective maps, or exact sequences. A syllogism is constructed, for which the graphical display of the diagram is just a visual aid. It follows that one ends up "chasing" elements around the diagram, until the desired element or result is constructed or verified.
Examples of proofs by diagram chasing include those typically given for the five lemma, the snake lemma, the zig-zag lemma, and the nine lemma.
In higher category theory
In higher category theory, one considers not only objects a |
https://en.wikipedia.org/wiki/Nullity | Nullity may refer to:
Legal nullity, something without legal significance
Nullity (conflict), a legal declaration that no marriage had ever come into being
Mathematics
Nullity (linear algebra), the dimension of the kernel of a mathematical operator or null space of a matrix
Nullity (graph theory), the nullity of the adjacency matrix of a graph
Nullity, the difference between the size and rank of a subset in a matroid
Nullity, a concept in transreal arithmetic denoted by Φ, or similarly in wheel theory denoted by ⊥. |
https://en.wikipedia.org/wiki/Landau%27s%20function | In mathematics, Landau's function g(n), named after Edmund Landau, is defined for every natural number n to be the largest order of an element of the symmetric group Sn. Equivalently, g(n) is the largest least common multiple (lcm) of any partition of n, or the maximum number of times a permutation of n elements can be recursively applied to itself before it returns to its starting sequence.
For instance, 5 = 2 + 3 and lcm(2,3) = 6. No other partition of 5 yields a bigger lcm, so g(5) = 6. An element of order 6 in the group S5 can be written in cycle notation as (1 2) (3 4 5). Note that the same argument applies to the number 6, that is, g(6) = 6. There are arbitrarily long sequences of consecutive numbers n, n + 1, …, n + m on which the function g is constant.
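The partition characterization gives a direct, if slow, way to compute g(n); a brute-force Python sketch:

```python
from math import gcd
from functools import lru_cache

def landau(n):
    """Largest lcm over all partitions of n (with g(0) = 1 by convention)."""
    @lru_cache(maxsize=None)
    def best(remaining, smallest):
        # Largest lcm achievable by partitioning `remaining` into parts >= smallest.
        result = 1
        for part in range(smallest, remaining + 1):
            sub = best(remaining - part, part)
            result = max(result, part * sub // gcd(part, sub))  # lcm(part, sub)
        return result
    return best(n, 1)

assert [landau(n) for n in range(9)] == [1, 1, 2, 3, 4, 6, 6, 12, 15]
```

The recursion enumerates partitions with non-decreasing parts, so each partition of n is considered exactly once.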
The integer sequence g(0) = 1, g(1) = 1, g(2) = 2, g(3) = 3, g(4) = 4, g(5) = 6, g(6) = 6, g(7) = 12, g(8) = 15, ... is named after Edmund Landau, who proved in 1902 that
lim (n→∞) ln(g(n)) / √(n ln(n)) = 1
(where ln denotes the natural logarithm). Equivalently (using little-o notation), g(n) = e^((1 + o(1)) √(n ln n)).
The statement that
ln g(n) < √(Li⁻¹(n))
for all sufficiently large n, where Li⁻¹ denotes the inverse of the logarithmic integral function, is equivalent to the Riemann hypothesis.
It can be shown that
with the only equality between the functions at n = 0, and indeed
References
E. Landau, "Über die Maximalordnung der Permutationen gegebenen Grades [On the maximal order of permutations of given degree]", Arch. Math. Phys. Ser. 3, vol. 5, 1903.
W. Miller, "The maximum order of an element of a finite symmetric group", American Mathematical Monthly, vol. 94, 1987, pp. 497–506.
J.-L. Nicolas, "On Landau's function g(n)", in The Mathematics of Paul Erdős, vol. 1, Springer-Verlag, 1997, pp. 228–240.
https://en.wikipedia.org/wiki/Gauss%E2%80%93Bonnet%20theorem | In the mathematical field of differential geometry, the Gauss–Bonnet theorem (or Gauss–Bonnet formula) is a fundamental formula which links the curvature of a surface to its underlying topology.
In the simplest application, the case of a triangle on a plane, the sum of its angles is 180 degrees. The Gauss–Bonnet theorem extends this to more complicated shapes and curved surfaces, connecting the local and global geometries.
The theorem is named after Carl Friedrich Gauss, who developed a version but never published it, and Pierre Ossian Bonnet, who published a special case in 1848.
Statement
Suppose M is a compact two-dimensional Riemannian manifold with boundary ∂M. Let K be the Gaussian curvature of M, and let k_g be the geodesic curvature of ∂M. Then
∫_M K dA + ∮_∂M k_g ds = 2π χ(M),
where dA is the element of area of the surface, and ds is the line element along the boundary of M. Here, χ(M) is the Euler characteristic of M.
If the boundary is piecewise smooth, then we interpret the integral as the sum of the corresponding integrals along the smooth portions of the boundary, plus the sum of the angles by which the smooth portions turn at the corners of the boundary.
Many standard proofs use the theorem of turning tangents, which states roughly that the winding number of a Jordan curve is exactly ±1.
A simple example
Suppose M is the northern hemisphere cut out from a sphere of radius R. Its Euler characteristic is 1. On the left hand side of the theorem, we have ∫ K dA = (1/R²)(2πR²) = 2π and ∮ k_g ds = 0, because the boundary is the equator and the equator is a geodesic of the sphere. Then 2π + 0 = 2πχ(M) = 2π.
On the other hand, suppose we flatten the hemisphere to make it into a disk. This transformation is a homeomorphism, so the Euler characteristic is still 1. However, on the left hand side of the theorem we now have ∫ K dA = 0 and ∮ k_g ds = (1/R)(2πR) = 2π, because a circumference is not a geodesic of the plane. Then 0 + 2π = 2π.
Finally, take a sphere octant, also homeomorphic to the previous cases. Then ∫ K dA = (1/R²)(4πR²/8) = π/2. Now k_g = 0 almost everywhere along the border, which is a geodesic triangle. But we have three right-angle corners, so π/2 + 3(π/2) = 2π.
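The three cases can be tallied numerically; a small Python sketch (unit radius assumed, and each total is the curvature integral plus the boundary contribution):

```python
import math

R = 1.0
two_pi_chi = 2 * math.pi * 1  # Euler characteristic is 1 in all three cases

# Hemisphere: K = 1/R^2 over area 2*pi*R^2; the boundary (equator) is a geodesic.
hemisphere = (1 / R**2) * (2 * math.pi * R**2) + 0.0

# Flattened disk of circumference 2*pi*R: K = 0; k_g = 1/R along the boundary.
disk = 0.0 + (1 / R) * (2 * math.pi * R)

# Octant: K = 1/R^2 over area (4*pi*R^2)/8; geodesic sides, three right-angle corners.
octant = (1 / R**2) * (4 * math.pi * R**2 / 8) + 3 * (math.pi / 2)

for total in (hemisphere, disk, octant):
    assert math.isclose(total, two_pi_chi)
```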
Interpretation and significance
The theorem applies in particular to compact surfaces without boundary, in which case the integral
can be omitted. It states that the total Gaussian curvature of such a closed surface is equal to 2π times the Euler characteristic of the surface. Note that for orientable compact surfaces without boundary, the Euler characteristic equals 2 − 2g, where g is the genus of the surface: Any orientable compact surface without boundary is topologically equivalent to a sphere with some handles attached, and g counts the number of handles.
If one bends and deforms the surface M, its Euler characteristic, being a topological invariant, will not change, while the curvatures at some points will. The theorem states, somewhat surprisingly, that the total integral of all curvatures will remain the same, no matter how the deforming is done. So for instance if you have a sphere with a "dent", then its total curvature is 4π (the Euler characteristic of a sphere being 2), no matter how big or deep the dent.
https://en.wikipedia.org/wiki/Homological%20algebra | Homological algebra is the branch of mathematics that studies homology in a general algebraic setting. It is a relatively young discipline, whose origins can be traced to investigations in combinatorial topology (a precursor to algebraic topology) and abstract algebra (theory of modules and syzygies) at the end of the 19th century, chiefly by Henri Poincaré and David Hilbert.
Homological algebra is the study of homological functors and the intricate algebraic structures that they entail; its development was closely intertwined with the emergence of category theory. A central concept is that of chain complexes, which can be studied through both their homology and cohomology.
Homological algebra affords the means to extract information contained in these complexes and present it in the form of homological invariants of rings, modules, topological spaces, and other 'tangible' mathematical objects. A powerful tool for doing this is provided by spectral sequences.
It has played an enormous role in algebraic topology. Its influence has gradually expanded and presently includes commutative algebra, algebraic geometry, algebraic number theory, representation theory, mathematical physics, operator algebras, complex analysis, and the theory of partial differential equations. K-theory is an independent discipline which draws upon methods of homological algebra, as does the noncommutative geometry of Alain Connes.
History
Homological algebra began to be studied in its most basic form in the 1800s as a branch of topology, but it wasn't until the 1940s that it became an independent subject with the study of objects such as the ext functor and the tor functor, among others.
Chain complexes and homology
The notion of chain complex is central in homological algebra. An abstract chain complex is a sequence of abelian groups and group homomorphisms,
with the property that the composition of any two consecutive maps is zero: dn ∘ dn+1 = 0 for all n.
The elements of Cn are called n-chains and the homomorphisms dn are called the boundary maps or differentials. The chain groups Cn may be endowed with extra structure; for example, they may be vector spaces or modules over a fixed ring R. The differentials must preserve the extra structure if it exists; for example, they must be linear maps or homomorphisms of R-modules. For notational convenience, restrict attention to abelian groups (more correctly, to the category Ab of abelian groups); a celebrated theorem by Barry Mitchell implies the results will generalize to any abelian category. Every chain complex defines two further sequences of abelian groups, the cycles Zn = Ker dn and the boundaries Bn = Im dn+1, where Ker d and Im d denote the kernel and the image of d. Since the composition of two consecutive boundary maps is zero, these groups are embedded into each other as Bn ⊆ Zn ⊆ Cn.
Subgroups of abelian groups are automatically normal; therefore we can define the nth homology group Hn(C) as the factor group of the n-cycles by the n-boundaries, Hn(C) = Zn/Bn.
https://en.wikipedia.org/wiki/Maximum%20likelihood%20estimation | In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.
If the likelihood function is differentiable, the derivative test for finding maxima can be applied. In some cases, the first-order conditions of the likelihood function can be solved analytically; for instance, the ordinary least squares estimator for a linear regression model maximizes the likelihood when the random errors are assumed to have normal distributions with the same variance.
From the perspective of Bayesian inference, MLE is generally equivalent to maximum a posteriori (MAP) estimation with uniform prior distributions (or a normal prior distribution with a standard deviation of infinity). In frequentist inference, MLE is a special case of an extremum estimator, with the objective function being the likelihood.
Principles
We model a set of observations as a random sample from an unknown joint probability distribution which is expressed in terms of a set of parameters. The goal of maximum likelihood estimation is to determine the parameters for which the observed data have the highest joint probability. We write the parameters governing the joint distribution as a vector θ so that this distribution falls within a parametric family {f(·; θ) : θ ∈ Θ}, where Θ is called the parameter space, a finite-dimensional subset of Euclidean space. Evaluating the joint density at the observed data sample gives a real-valued function,
which is called the likelihood function. For independent and identically distributed random variables, will be the product of univariate density functions:
The goal of maximum likelihood estimation is to find the values of the model parameters that maximize the likelihood function over the parameter space, that is
Intuitively, this selects the parameter values that make the observed data most probable. The specific value that maximizes the likelihood function is called the maximum likelihood estimate. Further, if the function so defined is measurable, then it is called the maximum likelihood estimator. It is generally a function defined over the sample space, i.e. taking a given sample as its argument. A sufficient but not necessary condition for its existence is for the likelihood function to be continuous over a parameter space that is compact. For an open the likelihood function may increase without ever reaching a supremum value.
In practice, it is often convenient to work with the natural logarithm of the likelihood function, called the log-likelihood: ℓ(θ; y) = ln L(θ; y).
Since the logarithm is a |
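The maximization described above can be spot-checked numerically. The data below are an invented sample (an illustrative assumption, not from the article); for the normal family the MLEs have the well-known closed form of the sample mean and the biased sample variance:

```python
import numpy as np

# Hypothetical sample, assumed drawn from a normal distribution.
data = np.array([2.1, 1.9, 2.4, 2.2, 1.8, 2.0])

def log_likelihood(mu, sigma2, x):
    """Log-likelihood of i.i.d. normal data: the sum of log densities."""
    n = len(x)
    return -0.5 * n * np.log(2 * np.pi * sigma2) - np.sum((x - mu) ** 2) / (2 * sigma2)

# Closed-form MLEs for the normal family: sample mean and the
# (biased) sample variance with denominator n.
mu_hat = data.mean()
sigma2_hat = ((data - mu_hat) ** 2).mean()

# Sanity check: perturbing either parameter lowers the log-likelihood.
best = log_likelihood(mu_hat, sigma2_hat, data)
assert best >= log_likelihood(mu_hat + 0.1, sigma2_hat, data)
assert best >= log_likelihood(mu_hat, sigma2_hat + 0.1, data)
```

Working with the log-likelihood, as in the sketch, leaves the maximizer unchanged because the logarithm is monotonic, while turning the product over observations into a sum.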
https://en.wikipedia.org/wiki/Ronald%20Fisher | Sir Ronald Aylmer Fisher (17 February 1890 – 29 July 1962) was a British polymath who was active as a mathematician, statistician, biologist, geneticist, and academic. For his work in statistics, he has been described as "a genius who almost single-handedly created the foundations for modern statistical science" and "the single most important figure in 20th century statistics". In genetics, his work used mathematics to combine Mendelian genetics and natural selection; this contributed to the revival of Darwinism in the early 20th-century revision of the theory of evolution known as the modern synthesis, being the one to most comprehensively combine the ideas of Gregor Mendel and Charles Darwin. For his contributions to biology, Richard Dawkins proclaimed Fisher as "the greatest of Darwin’s successors". He is considered one of the founding fathers of Neo-Darwinism.
From 1919, he worked at the Rothamsted Experimental Station for 14 years; there, he analysed its immense body of data from crop experiments since the 1840s, and developed the analysis of variance (ANOVA). He established his reputation there in the following years as a biostatistician.
Together with J. B. S. Haldane and Sewall Wright, Fisher is known as one of the three principal founders of population genetics. He outlined Fisher's principle, the Fisherian runaway and sexy son hypothesis theories of sexual selection. His contributions to statistics include promoting the method of maximum likelihood and deriving the properties of maximum likelihood estimators, fiducial inference, the derivation of various sampling distributions, founding principles of the design of experiments, and much more.
Fisher held strong views on race and eugenics, insisting on racial differences. Although he was clearly a eugenicist, there is some debate as to whether Fisher supported scientific racism. He was the Galton Professor of Eugenics at University College London and editor of the Annals of Eugenics.
Early life and education
Fisher was born in East Finchley in London, England, into a middle-class household; his father, George, was a successful partner in Robinson & Fisher, auctioneers and fine art dealers. He was one of twins, the other being stillborn, and he grew up the youngest, with three sisters and one brother. From 1896 until 1904 they lived at Inverforth House in London, where English Heritage installed a blue plaque in 2002, before moving to Streatham. His mother, Kate, died from acute peritonitis when he was 14, and his father lost his business 18 months later.
Lifelong poor eyesight caused his rejection by the British Army for World War I, but it also developed his ability to visualize problems in geometrical terms rather than through written mathematical solutions or proofs. He entered Harrow School at age 14 and won the school's Neeld Medal in mathematics. In 1909, he won a scholarship to study Mathematics at Gonville and Caius College, Cambridge. In 1912, he gained a First in Mathematic |
https://en.wikipedia.org/wiki/List%20of%20statisticians | This list of statisticians lists people who have made notable contributions to the theories or application of statistics, or to the related fields of probability or machine learning. Also included are actuaries and demographers.
A
Aalen, Odd Olai (1947–1987)
Abbey, Helen (1915–2001)
Abbott, Edith (1876–1957)
Abelson, Robert P. (1928–2005)
Abramovitz, Moses (1912–2000)
Achenwall, Gottfried (1719–1772)
Adelstein, Abraham Manie (1916–1992)
Adkins, Dorothy (1912–1975)
Ahsan, Riaz (1951–2008)
Ahtime, Laura
Aitchison, Beatrice (1908–1997)
Aitchison, John (1926–2016)
Aitken, Alexander (1895–1967)
Akaike, Hirotsugu (1927–2009)
Aliaga, Martha (1937–2011)
Allan, Betty (1905–1952)
Allen, R. G. D. (1906–1983)
Allison, David B.
Altman, Doug (1948–2018)
Altman, Naomi
Amemiya, Takeshi (1938–)
Anderson, Oskar (1887–1960)
Anderson, Theodore Wilbur
Anderson-Cook, Christine (1966–)
de Andrade, Mariza
Anscombe, Francis (1918–2001)
Anselin, Luc
Antonovska, Svetlana (1952–2016)
Armitage, Peter (1924–)
Armstrong, Margaret
Arrow, Kenneth
Ash, Arlene
Ashby, Deborah (1959–)
Asher, Jana
Ashley-Cooper, Anthony
Austin, Oscar Phelps
Ayres, Leonard Porter
B
Backer, Julie E. (1890–1977)
Bahadur, Raghu Raj (1924–1997)
Bahn, Anita K. (1920–1980)
Bailar, Barbara A.
Bailey, Rosemary A. (1947–)
Bailey-Wilson, Joan (1953–)
Baker, Rose
Balding, David
Bandeen-Roche, Karen
Barber, Rina Foygel
Barnard, George Alfred (1915–2002)
Barnard, Mildred (1908–2000)
Barnett, William A.
Bartels, Julius
Bartlett, M. S. (1910–2002)
Bascand, Geoff
Basford, Kaye
Basu, Debabrata (1924–2001)
Bates, Nancy
Batcher, Mary
Baxter, Laurence (1954–1996)
Bayarri, M. J. (1956–2014)
Bayes, Thomas (1702–1761)
Beale, Calvin
Becker, Betsy
Bediako, Grace
Behm, Ernst
Benjamin, Bernard
Benzécri, Jean-Paul (1932–2019)
Berger, James
Berkson, Joseph (1899–1982)
Bernardo, José-Miguel
Berry, Don
Best, Alfred M. (1876–1958)
Best, Nicky
Betensky, Rebecca
Beveridge, William
Bhat, B. R.
Bhat, P. N. Mari
Bhat, U. Narayan
Bienaymé, Irénée-Jules
Bienias, Julia
Billard, Lynne (1943–)
Bingham, Christopher
Bird, Sheila (1952–)
Birnbaum, Allan (1923–1976)
Bishop, Yvonne (–2015)
Bisika, Thomas John
Bixby, Lenore E. (1914–1994)
Blackwell, David (1919–2010)
Blankenship, Erin
Bliss, Chester Ittner (1899–1979)
Block, Maurice
Bloom, David E.
Blumberg, Carol Joyce
Bock, Mary Ellen
Boente, Graciela
Bodio, Luigi
Bodmer, Walter
Bonferroni, Carlo Emilio (1892–1960)
Booth, Charles
Boreham, John
Borror, Connie M. (1966–2016)
Bortkiewicz, Ladislaus (1868–1931)
Bose, R. C. (1901–1987)
Botha, Roelof
Bottou, Léon
Bowley, Arthur Lyon (1869–1957)
Bowman, Kimiko O. (1927–2019)
Box, George E. P. (1919–2010)
Boyle, Phelim
Brad, Ion Ionescu de la (1818–1891)
Brady, Dorothy (1903–1977)
Brassey, Thomas
Braverman, Amy
Breiman, Leo
Breslow, Norman (1941–2015)
Brogan, Donna (1939–)
Brooks, Steve
Brown, Jennifer
Brown, Lawrence D. (1940–2018)
Broze, Laurence (1960–)
Buck, Caitlin E. |
https://en.wikipedia.org/wiki/Sufficient%20statistic | In statistics, a statistic is sufficient with respect to a statistical model and its associated unknown parameter if "no other statistic that can be calculated from the same sample provides any additional information as to the value of the parameter". In particular, a statistic is sufficient for a family of probability distributions if the sample from which it is calculated gives no additional information than the statistic, as to which of those probability distributions is the sampling distribution.
A related concept is that of linear sufficiency, which is weaker than sufficiency but can be applied in some cases where there is no sufficient statistic, although it is restricted to linear estimators. The Kolmogorov structure function deals with individual finite data; the related notion there is the algorithmic sufficient statistic.
The concept is due to Sir Ronald Fisher in 1920. Stephen Stigler noted in 1973 that the concept of sufficiency had fallen out of favor in descriptive statistics because of the strong dependence on an assumption of the distributional form (see Pitman–Koopman–Darmois theorem below), but remained very important in theoretical work.
Background
Roughly, given a set X of independent identically distributed data conditioned on an unknown parameter θ, a sufficient statistic is a function T(X) whose value contains all the information needed to compute any estimate of the parameter (e.g. a maximum likelihood estimate). Due to the factorization theorem (see below), for a sufficient statistic T(X), the probability density can be written as f_X(x) = h(x) g(θ, T(x)). From this factorization, it can easily be seen that the maximum likelihood estimate of θ will interact with the data X only through T(X). Typically, the sufficient statistic is a simple function of the data, e.g. the sum of all the data points.
More generally, the "unknown parameter" may represent a vector of unknown quantities or may represent everything about the model that is unknown or not fully specified. In such a case, the sufficient statistic may be a set of functions, called a jointly sufficient statistic. Typically, there are as many functions as there are parameters. For example, for a Gaussian distribution with unknown mean and variance, the jointly sufficient statistic, from which maximum likelihood estimates of both parameters can be estimated, consists of two functions, the sum of all data points and the sum of all squared data points (or equivalently, the sample mean and sample variance).
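The Gaussian case just described can be demonstrated concretely. The two samples below are invented for illustration: they are different data sets that share the same sufficient statistic (sum, sum of squares), and so yield identical likelihoods at every parameter value:

```python
import numpy as np

def normal_log_lik(x, mu, sigma2):
    """Log-likelihood of i.i.d. normal observations x at (mu, sigma^2)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    return -0.5 * n * np.log(2 * np.pi * sigma2) - np.sum((x - mu) ** 2) / (2 * sigma2)

# Two different samples with the same jointly sufficient statistic:
# both have sum 6 and sum of squares 18.
a = [0.0, 3.0, 3.0]
b = [1.0, 1.0, 4.0]

# For every (mu, sigma^2) the likelihoods agree, so the data carry no
# information about the parameters beyond (sum, sum of squares).
for mu in (-1.0, 0.5, 2.0):
    for s2 in (0.5, 1.0, 3.0):
        assert abs(normal_log_lik(a, mu, s2) - normal_log_lik(b, mu, s2)) < 1e-12
```

The agreement follows because Σ(xᵢ − μ)² = Σxᵢ² − 2μΣxᵢ + nμ² depends on the data only through n, Σxᵢ, and Σxᵢ².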
In other words, the joint probability distribution of the data is conditionally independent of the parameter given the value of the sufficient statistic for the parameter. Both the statistic and the underlying parameter can be vectors.
Mathematical definition
A statistic t = T(X) is sufficient for underlying parameter θ precisely if the conditional probability distribution of the data X, given the statistic t = T(X), does not depend on the parameter θ.
Alternatively, one can say the statistic T(X) is su |
https://en.wikipedia.org/wiki/Magma%20%28algebra%29 | In abstract algebra, a magma, binar, or, rarely, groupoid is a basic kind of algebraic structure. Specifically, a magma consists of a set equipped with a single binary operation that must be closed by definition. No other properties are imposed.
History and terminology
The term groupoid was introduced in 1927 by Heinrich Brandt describing his Brandt groupoid (translated from the German Gruppoid). The term was then appropriated by B. A. Hausmann and Øystein Ore (1937) in the sense (of a set with a binary operation) used in this article. In a couple of reviews of subsequent papers in Zentralblatt, Brandt strongly disagreed with this overloading of terminology. The Brandt groupoid is a groupoid in the sense used in category theory, but not in the sense used by Hausmann and Ore. Nevertheless, influential books in semigroup theory, including Clifford and Preston (1961) and Howie (1995), use groupoid in the sense of Hausmann and Ore. Hollings (2014) writes that the term groupoid is "perhaps most often used in modern mathematics" in the sense given to it in category theory.
According to Bergman and Hausknecht (1996): "There is no generally accepted word for a set with a not necessarily associative binary operation. The word groupoid is used by many universal algebraists, but workers in category theory and related areas object strongly to this usage because they use the same word to mean 'category in which all morphisms are invertible'. The term magma was used by Serre [Lie Algebras and Lie Groups, 1965]." It also appears in the work of Bourbaki.
Definition
A magma is a set M matched with an operation • that sends any two elements a, b ∈ M to another element, a • b. The symbol • is a general placeholder for a properly defined operation. To qualify as a magma, the set and operation (M, •) must satisfy the following requirement (known as the magma or closure axiom):
For all a, b in M, the result of the operation a • b is also in M.
And in mathematical notation:
∀ a, b ∈ M : a • b ∈ M.
If • is instead a partial operation, then (M, •) is called a partial magma or, more often, a partial groupoid.
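For a finite carrier set, the closure axiom can be checked exhaustively. A minimal sketch (the example operations are illustrative choices, not from the article):

```python
from itertools import product

def is_magma(m, op):
    """A magma: the operation must send every pair of elements of m
    back into m (the closure axiom); nothing else is required."""
    return all(op(a, b) in m for a, b in product(m, repeat=2))

# {0, 1, 2} under subtraction mod 3 is a magma (closed), even though
# the operation is neither associative nor commutative.
Z3 = {0, 1, 2}
assert is_magma(Z3, lambda a, b: (a - b) % 3)

# Counterexample: the same set under ordinary addition is not closed,
# since 1 + 2 = 3 lies outside the set.
assert not is_magma(Z3, lambda a, b: a + b)
```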
Morphism of magmas
A morphism of magmas is a function mapping magma M to magma N that preserves the binary operation:
f (x •M y) = f(x) •N f(y),
where •M and •N denote the binary operation on M and N respectively.
Notation and combinatorics
The magma operation may be applied repeatedly, and in the general, non-associative case, the order matters, which is notated with parentheses. Also, the operation • is often omitted and notated by juxtaposition:
A shorthand is often used to reduce the number of parentheses, in which the innermost operations and pairs of parentheses are omitted, being replaced just with juxtaposition: . For example, the above is abbreviated to the following expression, still containing parentheses:
A way to avoid completely the use of parentheses is prefix notation, in which the same expression would be written . Another way, familiar to programmers, is postfix notation (reverse Polish notation |
https://en.wikipedia.org/wiki/Singular%20value%20decomposition | In linear algebra, the singular value decomposition (SVD) is a factorization of a real or complex matrix. It generalizes the eigendecomposition of a square normal matrix with an orthonormal eigenbasis to any matrix. It is related to the polar decomposition.
Specifically, the singular value decomposition of an m × n complex matrix M is a factorization of the form M = UΣV*, where U is an m × m complex unitary matrix, Σ is an m × n rectangular diagonal matrix with non-negative real numbers on the diagonal, V is an n × n complex unitary matrix, and V* is the conjugate transpose of V. Such a decomposition always exists for any complex matrix. If M is real, then U and V can be guaranteed to be real orthogonal matrices; in such contexts, the SVD is often denoted UΣVᵀ.
The diagonal entries σᵢ = Σᵢᵢ of Σ are uniquely determined by M and are known as the singular values of M. The number of non-zero singular values is equal to the rank of M. The columns of U and the columns of V are called left-singular vectors and right-singular vectors of M, respectively. They form two sets of orthonormal bases u₁, …, u_m and v₁, …, v_n, and if they are sorted so that the singular values σᵢ with value zero are all in the highest-numbered columns (or rows), the singular value decomposition can be written as M = σ₁u₁v₁* + ⋯ + σᵣuᵣvᵣ*, where r is the rank of M.
The SVD is not unique. It is always possible to choose the decomposition so that the singular values σᵢ are in descending order. In this case, Σ (but not U and V) is uniquely determined by M.
The term sometimes refers to the compact SVD, a similar decomposition M = UΣV* in which Σ is square diagonal of size r × r, where r ≤ min{m, n} is the rank of M, and has only the non-zero singular values. In this variant, U is an m × r semi-unitary matrix and V is an n × r semi-unitary matrix, such that U*U = V*V = I_r.
Mathematical applications of the SVD include computing the pseudoinverse, matrix approximation, and determining the rank, range, and null space of a matrix. The SVD is also extremely useful in all areas of science, engineering, and statistics, such as signal processing, least squares fitting of data, and process control.
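As a quick illustration (the matrix below is an arbitrary example chosen for this sketch), NumPy's `numpy.linalg.svd` produces exactly the factors described above:

```python
import numpy as np

# A hypothetical 3x2 real matrix; for real M the factors U and V are
# real orthogonal, and the SVD is written M = U S V^T.
M = np.array([[3.0, 0.0],
              [4.0, 5.0],
              [0.0, 2.0]])

U, s, Vt = np.linalg.svd(M)   # full SVD: U is 3x3, Vt is 2x2
S = np.zeros_like(M)          # rectangular diagonal Sigma
np.fill_diagonal(S, s)

# The factorization reconstructs M; the singular values are
# non-negative and sorted in descending order; and the number of
# non-zero singular values equals the rank of M.
assert np.allclose(U @ S @ Vt, M)
assert np.all(s >= 0) and np.all(np.diff(s) <= 0)
assert np.sum(s > 1e-12) == np.linalg.matrix_rank(M)
```

Note that NumPy returns Vᵀ (here `Vt`) rather than V, and only the vector `s` of singular values rather than the full rectangular Σ, which the sketch rebuilds explicitly.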
Intuitive interpretations
Rotation, coordinate scaling, and reflection
In the special case when M is an n × n real square matrix, the matrices U and V* can be chosen to be real n × n matrices too. In that case, "unitary" is the same as "orthogonal". Then, interpreting both unitary matrices as well as the diagonal matrix, summarized here as A = UΣV*, as a linear transformation x ↦ Ax of the space Rⁿ, the matrices U and V* represent rotations or reflection of the space, while Σ represents the scaling of each coordinate xᵢ by the factor σᵢ. Thus the SVD decomposition breaks down any linear transformation of Rⁿ into a composition of three geometrical transformations: a rotation or reflection (V*), followed by a coordinate-by-coordinate scaling (Σ), followed by another rotation or reflection (U).
In particular, if M has a positive determinant, then U and V* can be chosen to be both rotations with reflections, or both rotations without reflections. If the determinant is negative, exactly one of them w |
https://en.wikipedia.org/wiki/Homology%20%28mathematics%29 | In mathematics, homology is a general way of associating a sequence of algebraic objects, such as abelian groups or modules, with other mathematical objects such as topological spaces. Homology groups were originally defined in algebraic topology. Similar constructions are available in a wide variety of other contexts, such as abstract algebra, groups, Lie algebras, Galois theory, and algebraic geometry.
The original motivation for defining homology groups was the observation that two shapes can be distinguished by examining their holes. For instance, a circle is not a disk because the circle has a hole through it while the disk is solid, and the ordinary sphere is not a circle because the sphere encloses a two-dimensional hole while the circle encloses a one-dimensional hole. However, because a hole is "not there", it is not immediately obvious how to define a hole or how to distinguish different kinds of holes. Homology was originally a rigorous mathematical method for defining and categorizing holes in a manifold. Loosely speaking, a cycle is a closed submanifold, a boundary is a cycle which is also the boundary of a submanifold, and a homology class (which represents a hole) is an equivalence class of cycles modulo boundaries. A homology class is thus represented by a cycle which is not the boundary of any submanifold: the cycle represents a hole, namely a hypothetical manifold whose boundary would be that cycle, but which is "not there".
There are many different homology theories. A particular type of mathematical object, such as a topological space or a group, may have one or more associated homology theories. When the underlying object has a geometric interpretation as topological spaces do, the nth homology group represents behavior in dimension n. Most homology groups or modules may be formulated as derived functors on appropriate abelian categories, measuring the failure of a functor to be exact. From this abstract perspective, homology groups are determined by objects of a derived category.
Background
Origins
Homology theory can be said to start with the Euler polyhedron formula, or Euler characteristic. This was followed by Riemann's definition of genus and n-fold connectedness numerical invariants in 1857 and Betti's proof in 1871 of the independence of "homology numbers" from the choice of basis.
Homology itself was developed as a way to analyse and classify manifolds according to their cycles – closed loops (or more generally submanifolds) that can be drawn on a given n dimensional manifold but not continuously deformed into each other. These cycles are also sometimes thought of as cuts which can be glued back together, or as zippers which can be fastened and unfastened. Cycles are classified by dimension. For example, a line drawn on a surface represents a 1-cycle, a closed loop or (1-manifold), while a surface cut through a three-dimensional manifold is a 2-cycle.
Surfaces
On the ordinary sphere , the cycle b |
https://en.wikipedia.org/wiki/Harmonic%20series%20%28mathematics%29 | In mathematics, the harmonic series is the infinite series formed by summing all positive unit fractions: 1 + 1/2 + 1/3 + 1/4 + 1/5 + ⋯
The first n terms of the series sum to approximately ln n + γ, where ln is the natural logarithm and γ ≈ 0.577 is the Euler–Mascheroni constant. Because the logarithm has arbitrarily large values, the harmonic series does not have a finite limit: it is a divergent series. Its divergence was proven in the 14th century by Nicole Oresme using a precursor to the Cauchy condensation test for the convergence of infinite series. It can also be proven to diverge by comparing the sum to an integral, according to the integral test for convergence.
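The approximation by ln n + γ is easy to check numerically; in this sketch the tolerance 1/(2n) reflects the next term of the asymptotic expansion, and the sample values of n are arbitrary:

```python
import math

def harmonic(n):
    """Partial sum H_n = 1 + 1/2 + ... + 1/n of the harmonic series."""
    return sum(1.0 / k for k in range(1, n + 1))

gamma = 0.5772156649015329  # Euler–Mascheroni constant

# H_n is approximately ln(n) + gamma, with error on the order of 1/(2n).
for n in (10, 1000, 100000):
    assert abs(harmonic(n) - (math.log(n) + gamma)) < 1.0 / (2 * n) + 1e-6

# Growth without bound, but only logarithmically: after 10^5 terms the
# partial sum has barely passed 12.
assert harmonic(100000) > 12
```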
Applications of the harmonic series and its partial sums include Euler's proof that there are infinitely many prime numbers, the analysis of the coupon collector's problem on how many random trials are needed to provide a complete range of responses, the connected components of random graphs, the block-stacking problem on how far over the edge of a table a stack of blocks can be cantilevered, and the average case analysis of the quicksort algorithm.
History
The name of the harmonic series derives from the concept of overtones or harmonics in music: the wavelengths of the overtones of a vibrating string are 1/2, 1/3, 1/4, etc., of the string's fundamental wavelength. Every term of the harmonic series after the first is the harmonic mean of the neighboring terms, so the terms form a harmonic progression; the phrases harmonic mean and harmonic progression likewise derive from music.
Beyond music, harmonic sequences have also had a certain popularity with architects. This was so particularly in the Baroque period, when architects used them to establish the proportions of floor plans, of elevations, and to establish harmonic relationships between both interior and exterior architectural details of churches and palaces.
The divergence of the harmonic series was first proven in 1350 by Nicole Oresme. Oresme's work, and the contemporaneous work of Richard Swineshead on a different series, marked the first appearance of infinite series other than the geometric series in mathematics. However, this achievement fell into obscurity. Additional proofs were published in the 17th century by Pietro Mengoli and by Jacob Bernoulli. Bernoulli credited his brother Johann Bernoulli for finding the proof, and it was later included in Johann Bernoulli's collected works.
The partial sums of the harmonic series were named harmonic numbers, and given their usual notation Hₙ, in 1968 by Donald Knuth.
Definition and divergence
The harmonic series is the infinite series 1 + 1/2 + 1/3 + 1/4 + 1/5 + ⋯ = Σ_{n=1}^∞ 1/n,
in which the terms are all of the positive unit fractions. It is a divergent series: as more terms of the series are included in partial sums of the series, the values of these partial sums grow arbitrarily large, beyond any finite limit. Because it is a divergent series, it should be interpreted as a formal sum, an abstract mathematical expression combining the unit fractions, rather than as som |
https://en.wikipedia.org/wiki/P%20and%20R%20measures | P and R measures are the statistics used to evaluate the efficiency and effectiveness of business processes, particularly automated business processes.
The P measures are the process measures – these are statistics that record the number of times things occur. Examples include:
the number of times an error loop is used
the number of times an approval loop is used
the average time to complete a particular task in the process
and show how efficient the process is.
The R measures are the results measures – these statistics record the 'outcomes' of the process. Examples include:
the number of occasions when the process completed correctly
the number of times rejections occurred
the number of times approval was not given
and show how effective the process is.
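The two kinds of measure amount to tallies over a process log. A minimal sketch, where the event log, field names, and outcome labels are all illustrative assumptions rather than anything prescribed by the article:

```python
from collections import Counter

# Hypothetical event log for an automated approval process.
events = [
    {"case": 1, "step": "error_loop"},
    {"case": 1, "step": "approval_loop"},
    {"case": 2, "step": "approval_loop"},
]
outcomes = {1: "completed", 2: "rejected", 3: "completed"}

# P (process) measures: how often each step occurred -> efficiency.
p_measures = Counter(e["step"] for e in events)

# R (results) measures: how each case ended -> effectiveness.
r_measures = Counter(outcomes.values())

assert p_measures["approval_loop"] == 2
assert r_measures == {"completed": 2, "rejected": 1}
```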
https://en.wikipedia.org/wiki/Kernel%20%28category%20theory%29 | In category theory and its applications to other branches of mathematics, kernels are a generalization of the kernels of group homomorphisms, the kernels of module homomorphisms and certain other kernels from algebra. Intuitively, the kernel of the morphism f : X → Y is the "most general" morphism k : K → X that yields zero when composed with (followed by) f.
Note that kernel pairs and difference kernels (also known as binary equalisers) sometimes go by the name "kernel"; while related, these aren't quite the same thing and are not discussed in this article.
Definition
Let C be a category.
In order to define a kernel in the general category-theoretical sense, C needs to have zero morphisms.
In that case, if f : X → Y is an arbitrary morphism in C, then a kernel of f is an equaliser of f and the zero morphism from X to Y.
In symbols:
ker(f) = eq(f, 0_{XY})
To be more explicit, the following universal property can be used. A kernel of f is an object K together with a morphism k : K → X such that:
f∘k is the zero morphism from K to Y;
Given any morphism k′ : K′ → X such that f∘k′ is the zero morphism, there is a unique morphism u : K′ → K such that k∘u = k′.
As for every universal property, there is a unique isomorphism between two kernels of the same morphism, and the morphism k is always a monomorphism (in the categorical sense). So, it is common to talk of the kernel of a morphism. In concrete categories, one can thus take a subset of X for K, in which case the morphism k is the inclusion map. This allows one to talk of K as the kernel, since k is implicitly defined by K. There are non-concrete categories, where one can similarly define a "natural" kernel, such that K defines k implicitly.
Not every morphism needs to have a kernel, but if it does, then all its kernels are isomorphic in a strong sense: if k : K → X and ℓ : L → X are kernels of f : X → Y, then there exists a unique isomorphism φ : K → L such that ℓ∘φ = k.
Examples
Kernels are familiar in many categories from abstract algebra, such as the category of groups or the category of (left) modules over a fixed ring (including vector spaces over a fixed field). To be explicit, if f : X → Y is a homomorphism in one of these categories, and K is its kernel in the usual algebraic sense, then K is a subalgebra of X and the inclusion homomorphism from K to X is a kernel in the categorical sense.
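The group case can be made concrete with a small finite example (the particular homomorphism below is an illustrative choice): for f : Z/6 → Z/3 given by reduction mod 3, the kernel in the usual algebraic sense is the subgroup {0, 3}, and the inclusion {0, 3} → Z/6 is the kernel in the categorical sense.

```python
# Kernel of a group homomorphism, computed concretely for
# f : Z/6 -> Z/3, f(x) = x mod 3.
Z6 = range(6)

def f(x):
    return x % 3

# Elements sent to the identity of the codomain.
kernel = [x for x in Z6 if f(x) == 0]
assert kernel == [0, 3]

# The kernel is closed under the group operation (addition mod 6),
# as a subalgebra of the domain should be.
assert all((a + b) % 6 in kernel for a in kernel for b in kernel)
```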
Note that in the category of monoids, category-theoretic kernels exist just as for groups, but these kernels don't carry sufficient information for algebraic purposes. Therefore, the notion of kernel studied in monoid theory is slightly different (see #Relationship to algebraic kernels below).
In the category of unital rings, there are no kernels in the category-theoretic sense; indeed, this category does not even have zero morphisms. Nevertheless, there is still a notion of kernel studied in ring theory that corresponds to kernels in the category of non-unital rings.
In the category of pointed topological spaces |
https://en.wikipedia.org/wiki/Enriched%20category | In category theory, a branch of mathematics, an enriched category generalizes the idea of a category by replacing hom-sets with objects from a general monoidal category. It is motivated by the observation that, in many practical applications, the hom-set often has additional structure that should be respected, e.g., that of being a vector space of morphisms, or a topological space of morphisms. In an enriched category, the set of morphisms (the hom-set) associated with every pair of objects is replaced by an object in some fixed monoidal category of "hom-objects". In order to emulate the (associative) composition of morphisms in an ordinary category, the hom-category must have a means of composing hom-objects in an associative manner: that is, there must be a binary operation on objects giving us at least the structure of a monoidal category, though in some contexts the operation may also need to be commutative and perhaps also to have a right adjoint (i.e., making the category symmetric monoidal or even symmetric closed monoidal, respectively).
Enriched category theory thus encompasses within the same framework a wide variety of structures including
ordinary categories where the hom-set carries additional structure beyond being a set. That is, there are operations on, or properties of morphisms that need to be respected by composition (e.g., the existence of 2-cells between morphisms and horizontal composition thereof in a 2-category, or the addition operation on morphisms in an abelian category)
category-like entities that don't themselves have any notion of individual morphism but whose hom-objects have similar compositional aspects (e.g., preorders where the composition rule ensures transitivity, or Lawvere's metric spaces, where the hom-objects are numerical distances and the composition rule provides the triangle inequality).
In the case where the hom-object category happens to be the category of sets with the usual cartesian product, the definitions of enriched category, enriched functor, etc... reduce to the original definitions from ordinary category theory.
An enriched category with hom-objects from monoidal category M is said to be an enriched category over M or an enriched category in M, or simply an M-category. Due to Mac Lane's preference for the letter V in referring to the monoidal category, enriched categories are also sometimes referred to generally as V-categories.
Definition
Let (M, ⊗, I) be a monoidal category. Then an enriched category C (alternatively, in situations where the choice of monoidal category needs to be explicit, a category enriched over M, or M-category), consists of
a class ob(C) of objects of C,
an object C(a, b) of M for every pair of objects a, b in C, used to define an arrow a → b in C as an arrow I → C(a, b) in M,
an arrow id_a : I → C(a, a) in M designating an identity for every object a in C, and
an arrow ∘_{abc} : C(b, c) ⊗ C(a, b) → C(a, c) in M designating a composition for each triple of objects a, b, c in C, used to define the composition of f : a → b and g : b → c in C as g ∘ f, together with th |
https://en.wikipedia.org/wiki/Normal%20morphism | In category theory and its applications to mathematics, a normal monomorphism or conormal epimorphism is a particularly well-behaved type of morphism.
A normal category is a category in which every monomorphism is normal. A conormal category is one in which every epimorphism is conormal.
Definition
A monomorphism is normal if it is the kernel of some morphism, and an epimorphism is conormal if it is the cokernel of some morphism.
A category C is binormal if it is both normal and conormal.
But note that some authors will use the word "normal" only to indicate that C is binormal.
Examples
In the category of groups, a monomorphism f from H to G is normal if and only if its image is a normal subgroup of G. In particular, if H is a subgroup of G, then the inclusion map i from H to G is a monomorphism, and will be normal if and only if H is a normal subgroup of G. In fact, this is the origin of the term "normal" for monomorphisms.
On the other hand, every epimorphism in the category of groups is conormal (since it is the cokernel of its own kernel), so this category is conormal.
In an abelian category, every monomorphism is the kernel of its cokernel, and every epimorphism is the cokernel of its kernel.
Thus, abelian categories are always binormal.
The category of abelian groups is the fundamental example of an abelian category, and accordingly every subgroup of an abelian group is a normal subgroup.
https://en.wikipedia.org/wiki/J.%20Robert%20Janes | Joseph Robert Janes (born May 23, 1935) is a Canadian author born in Toronto.
A mining engineer by profession, he taught geology, geography and high school mathematics and later geology at Brock University until he dedicated himself to writing full-time in 1970.
Janes has published more than 20 adult novels, five mystery novels for young adults, and textbooks on the subject of geology. His character-rich mysteries set in Occupied France during World War II, and featuring Chief Inspector Jean-Louis St-Cyr of the Sûreté and Detektiv Inspektor Hermann Kohler of the Nazi Gestapo, are his most popular works and have been critically acclaimed by The Wall Street Journal, amongst others, for their historical accuracy. The U.S.-based Western Society for French History used his writings as a study of the convergence of fiction with history.
St-Cyr and Kohler series
Mayhem (1992)
Carousel (1992)
Kaleidoscope (1993)
Salamander (1994)
Mannequin (1994)
Dollmaker (1995)
Stonekiller (1995)
Sandman (1996)
Gypsy (1997)
Madrigal (1999)
Beekeeper (2001)
Flykiller (2002)
Bellringer (2012)
Tapestry (2013)
Carnival (2014)
Clandestine (2015)
Other works
The Odd-Lot Boys and the Tree-Fort War (1976)
The Toy Shop (1981)
Danger on the River (1982)
The Watcher (1982)
The Third Story (1983)
The Hiding Place (1984)
Murder in the Market (1985)
Spies for Dinner (1985)
The Alice Factor (1991)
The Hunting Ground (2013)
Betrayal (2014)
The Sleeper (2015)
The Little Parachute (2016)
External links
J. Robert Janes Archive at McMaster University
Pierce, J. Kingston. “Solving Crimes in the Shadow of War” The Rap Sheet, May 30, 2012.
1935 births
Brock University alumni
Academic staff of Brock University
Canadian mystery writers
Living people |
https://en.wikipedia.org/wiki/Infinite%20set | In set theory, an infinite set is a set that is not a finite set. Infinite sets may be countable or uncountable.
Properties
The set of natural numbers (whose existence is postulated by the axiom of infinity) is infinite. It is the only set that is directly required by the axioms to be infinite. The existence of any other infinite set can be proved in Zermelo–Fraenkel set theory (ZF), but only by showing that it follows from the existence of the natural numbers.
A set is infinite if and only if for every natural number, the set has a subset whose cardinality is that natural number.
If the axiom of choice holds, then a set is infinite if and only if it includes a countably infinite subset.
If a set of sets is infinite or contains an infinite element, then its union is infinite. The power set of an infinite set is infinite. Any superset of an infinite set is infinite. If an infinite set is partitioned into finitely many subsets, then at least one of them must be infinite. Any set which can be mapped onto an infinite set is infinite. The Cartesian product of an infinite set and a nonempty set is infinite. The Cartesian product of an infinite number of sets, each containing at least two elements, is either empty or infinite; if the axiom of choice holds, then it is infinite.
If an infinite set is a well-ordered set, then it must have a nonempty, nontrivial subset that has no greatest element.
In ZF, a set is infinite if and only if the power set of its power set is a Dedekind-infinite set, having a proper subset equinumerous to itself. If the axiom of choice is also true, then infinite sets are precisely the Dedekind-infinite sets.
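The Dedekind characterization can be made concrete for the natural numbers: the successor map n ↦ n + 1 is an injection of the naturals onto a proper subset of themselves. A small Python sketch (illustrative only, checking the defining properties on a finite window):

```python
# A set is Dedekind-infinite if it has a proper subset equinumerous to
# itself. For the natural numbers, n -> n + 1 is a bijection onto the
# proper subset {1, 2, 3, ...}; we verify the key properties on a window.

def successor(n):
    return n + 1

window = range(0, 1000)
image = {successor(n) for n in window}

# The map is injective on the window...
assert len(image) == len(set(window))
# ...and its image omits 0, so it lands in a proper subset of N.
assert 0 not in image
```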
If an infinite set is a well-orderable set, then it has many well-orderings which are non-isomorphic.
Important ideas discussed by David Burton in his book The History of Mathematics: An Introduction include how to define "elements" or parts of a set, how to define unique elements in the set, and how to prove infinity. Burton also discusses proofs for different types of infinity, including countable and uncountable sets. Topics used when comparing infinite and finite sets include ordered sets, cardinality, equivalency, coordinate planes, universal sets, mapping, subsets, continuity, and transcendence. Cantor's set ideas were influenced by trigonometry and irrational numbers. Other key ideas in infinite set theory mentioned by Burton, Paula, Narli and Rodger include real numbers such as π, integers, and Euler's number.
Both Burton and Rogers use finite sets to start to explain infinite sets using proof concepts such as mapping, proof by induction, or proof by contradiction. Mathematical trees can also be used to understand infinite sets. Burton also discusses proofs of infinite sets including ideas such as unions and subsets.
In Chapter 12 of The History of Mathematics: An Introduction, Burton emphasizes how mathematicians such as Zermelo, Dedekind, Galileo, Kronecker, Cantor, and Bolzano investigated and influen |
https://en.wikipedia.org/wiki/Possibility | Possibility is the condition or fact of being possible. Latin origins of the word hint at ability.
Possibility may refer to:
Probability, the measure of the likelihood that an event will occur
Epistemic possibility, a topic in philosophy and modal logic
Possibility theory, a mathematical theory for dealing with certain types of uncertainty, an alternative to probability theory
Subjunctive possibility (also called alethic possibility), a form of modality studied in modal logic
Logical possibility, a proposition that will depend on the system of logic being considered, rather than on the violation of any single rule
Possible world, a complete and consistent way the world is or could have been
Other
Possible (Italy), a political party in Italy
Possible Peru, a political party in Peru
Possible Peru Alliance, an electoral alliance in Peru
Entertainment
Kim Possible, a US children's TV series
Kim Possible (character), the central character of the TV series
Kim Possible (video game series), games related to the TV series
The Possible, a 2006 film
The Possible (band), a Japanese band
Possibility Pictures, a film production company which made the 2010 film Letters to God
Possibility (album), a 1985 album by Akina Nakamori
"Possibility" (song), a 2009 song by Lykke Li
See also
Impossible (disambiguation)
Possibilities (disambiguation)
Possible Worlds (disambiguation)
:Category:Possibly living people
Absolutely (disambiguation)
Definitely (disambiguation)
Maybe (disambiguation) |
https://en.wikipedia.org/wiki/Parity%20%28mathematics%29 | In mathematics, parity is the property of an integer of whether it is even or odd. An integer is even if it is a multiple of two, and odd if it is not. For example, −4, 0, 82 are even because
By contrast, −3, 5, 7, 21 are odd numbers. The above definition of parity applies only to integer numbers, hence it cannot be applied to numbers like 1/2 or 4.201. See the section "Higher mathematics" below for some extensions of the notion of parity to a larger class of "numbers" or in other more general settings.
Even and odd numbers have opposite parities, e.g., 22 (even number) and 13 (odd number) have opposite parities. In particular, the parity of zero is even. Any two consecutive integers have opposite parity. A number (i.e., integer) expressed in the decimal numeral system is even or odd according to whether its last digit is even or odd. That is, if the last digit is 1, 3, 5, 7, or 9, then it is odd; otherwise it is even—as the last digit of any even number is 0, 2, 4, 6, or 8. The same idea will work using any even base. In particular, a number expressed in the binary numeral system is odd if its last digit is 1; and it is even if its last digit is 0. In an odd base, the number is even according to the sum of its digits—it is even if and only if the sum of its digits is even.
Definition
An even number is an integer of the form n = 2k,
where k is an integer; an odd number is an integer of the form n = 2k + 1.
An equivalent definition is that an even number is divisible by 2 (that is, 2 | n),
and an odd number is not (2 ∤ n).
The sets of even and odd numbers can be defined as follows: {2k : k ∈ Z} and {2k + 1 : k ∈ Z}.
The set of even numbers is a normal subgroup of Z and defines the factor group Z/2Z. Parity can then be defined as a homomorphism from Z to Z/2Z where odd numbers are 1 and even numbers are 0. The consequences of this homomorphism are covered below.
Properties
The following laws can be verified using the properties of divisibility. They are a special case of rules in modular arithmetic, and are commonly used to check if an equality is likely to be correct by testing the parity of each side. As with ordinary arithmetic, multiplication and addition are commutative and associative in modulo 2 arithmetic, and multiplication is distributive over addition. However, subtraction in modulo 2 is identical to addition, so subtraction also possesses these properties, which is not true for normal integer arithmetic.
Addition and subtraction
even ± even = even;
even ± odd = odd;
odd ± odd = even;
Multiplication
even × even = even;
even × odd = even;
odd × odd = odd;
The structure ({even, odd}, +, ×) is in fact a field with two elements.
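These parity laws amount to computing in Z/2Z, and can be checked exhaustively over a finite range. A short Python sketch (the `parity` helper simply reduces mod 2):

```python
# Parity as the homomorphism from Z to Z/2Z: even -> 0, odd -> 1.
def parity(n):
    return n % 2

# Exhaustively check the addition/subtraction and multiplication rules
# on a range of integers.
for a in range(-50, 50):
    for b in range(-50, 50):
        assert parity(a + b) == (parity(a) + parity(b)) % 2
        # In modulo 2 arithmetic, subtraction coincides with addition.
        assert parity(a - b) == (parity(a) + parity(b)) % 2
        assert parity(a * b) == (parity(a) * parity(b)) % 2
```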
Division
The division of two whole numbers does not necessarily result in a whole number. For example, 1 divided by 4 equals 1/4, which is neither even nor odd, since the concepts of even and odd apply only to integers. But when the quotient is an integer, it will be even if and only if the dividend has more factors of two than the divisor.
History
The ancient Greeks considered 1, |
https://en.wikipedia.org/wiki/Counterexample | A counterexample is any exception to a generalization. In logic a counterexample disproves the generalization, and does so rigorously in the fields of mathematics and philosophy. For example, the fact that "student John Smith is not lazy" is a counterexample to the generalization "students are lazy", and both a counterexample to, and disproof of, the universal quantification "all students are lazy."
In mathematics, the term "counterexample" is also used (by a slight abuse) to refer to examples which illustrate the necessity of the full hypothesis of a theorem. This is most often done by considering a case where a part of the hypothesis is not satisfied and the conclusion of the theorem does not hold.
In mathematics
In mathematics, counterexamples are often used to prove the boundaries of possible theorems. By using counterexamples to show that certain conjectures are false, mathematical researchers can then avoid going down blind alleys and learn to modify conjectures to produce provable theorems. It is sometimes said that mathematical development consists primarily in finding (and proving) theorems and counterexamples.
Rectangle example
Suppose that a mathematician is studying geometry and shapes, and she wishes to prove certain theorems about them. She conjectures that "All rectangles are squares", and she is interested in knowing whether this statement is true or false.
In this case, she can either attempt to prove the truth of the statement using deductive reasoning, or she can attempt to find a counterexample of the statement if she suspects it to be false. In the latter case, a counterexample would be a rectangle that is not a square, such as a rectangle with two sides of length 5 and two sides of length 7. However, despite having found rectangles that were not squares, all the rectangles she did find had four sides. She then makes the new conjecture "All rectangles have four sides". This is logically weaker than her original conjecture, since every square has four sides, but not every four-sided shape is a square.
The above example explained, in a simplified way, how a mathematician might weaken her conjecture in the face of counterexamples, but counterexamples can also be used to demonstrate the necessity of certain assumptions and hypotheses. For example, suppose that after a while, the mathematician above settled on the new conjecture "All shapes that are rectangles and have four sides of equal length are squares". This conjecture has two parts to the hypothesis: the shape must be 'a rectangle' and must have 'four sides of equal length'. The mathematician then would like to know if she can remove either assumption, and still maintain the truth of her conjecture. This means that she needs to check the truth of the following two statements:
"All shapes that are rectangles are squares."
"All shapes that have four sides of equal length are squares".
A counterexample to (1) was already given above, and a counterexample to (2) i |
https://en.wikipedia.org/wiki/Stereographic%20projection | In mathematics, a stereographic projection is a perspective projection of the sphere, through a specific point on the sphere (the pole or center of projection), onto a plane (the projection plane) perpendicular to the diameter through the point. It is a smooth, bijective function from the entire sphere except the center of projection to the entire plane. It maps circles on the sphere to circles or lines on the plane, and is conformal, meaning that it preserves angles at which curves meet and thus locally approximately preserves shapes. It is neither isometric (distance preserving) nor equiareal (area preserving).
The stereographic projection gives a way to represent a sphere by a plane. The metric induced by the inverse stereographic projection from the plane to the sphere defines a geodesic distance between points in the plane equal to the spherical distance between the spherical points they represent. A two-dimensional coordinate system on the stereographic plane is an alternative setting for spherical analytic geometry instead of spherical polar coordinates or three-dimensional cartesian coordinates. This is the spherical analog of the Poincaré disk model of the hyperbolic plane.
Intuitively, the stereographic projection is a way of picturing the sphere as the plane, with some inevitable compromises. Because the sphere and the plane appear in many areas of mathematics and its applications, so does the stereographic projection; it finds use in diverse fields including complex analysis, cartography, geology, and photography. Sometimes stereographic computations are done graphically using a special kind of graph paper called a stereographic net, shortened to stereonet, or Wulff net.
History
The origin of the stereographic projection is not known, but it is believed to have been discovered by Greek astronomers sometime around the 3rd or 2nd century BC, and used for projecting the celestial sphere to the plane so that the motions of stars and planets could be analyzed using plane geometry. Its earliest extant description is found in Ptolemy's Planisphere (2nd century AD), but the methods described by Ptolemy are believed to have been known earlier by Hipparchus (2nd century BC), if not before; Apollonius's Conics (c. 200 BC) contains a theorem which is crucial in proving the property that the stereographic projection maps circles to circles. Ptolemy refers to the use of the stereographic projection in a "horological instrument", perhaps the anaphoric clock described by Vitruvius (1st century BC), of which some physical remnants dating from the 1st century BC still survive.
By the time of Theon of Alexandria (4th century), the planisphere had been combined with a dioptra to form the planispheric astrolabe ("star taker"), a capable portable device which could be used for measuring star positions and performing a wide variety of astronomical calculations. The astrolabe was in continuous use by Byzantine astronomers, and was significantly further developed by |
https://en.wikipedia.org/wiki/Examples%20of%20groups | Some elementary examples of groups in mathematics are given on Group (mathematics).
Further examples are listed here.
Permutations of a set of three elements
Consider three colored blocks (red, green, and blue), initially placed in the order RGB. Let a be the operation "swap the first block and the second block", and b be the operation "swap the second block and the third block".
We can write xy for the operation "first do y, then do x"; so that ab is the operation RGB → RBG → BRG, which could be described as "move the first two blocks one position to the right and put the third block into the first position". If we write e for "leave the blocks as they are" (the identity operation), then we can write the six permutations of the three blocks as follows:
e : RGB → RGB
a : RGB → GRB
b : RGB → RBG
ab : RGB → BRG
ba : RGB → GBR
aba : RGB → BGR
Note that aa has the effect RGB → GRB → RGB; so we can write aa = e. Similarly, bb = e and (aba)(aba) = e; also (ab)(ba) = (ba)(ab) = e; so every element has an inverse.
By inspection, we can determine associativity and closure; note in particular that (ba)b = bab = b(ab).
Since it is built up from the basic operations a and b, we say that the set {a, b} generates this group. The group, called the symmetric group S3, has order 6, and is non-abelian (since, for example, ab ≠ ba).
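The composition rule above ("first do y, then do x") can be checked mechanically. A Python sketch, encoding each permutation as a tuple of position images and generating the group from a and b:

```python
def compose(x, y):
    """First apply y, then x."""
    return tuple(x[y[i]] for i in range(3))

e = (0, 1, 2)          # identity: leave the blocks as they are
a = (1, 0, 2)          # swap the first and second blocks
b = (0, 2, 1)          # swap the second and third blocks

# Close {e, a, b} under composition to generate the whole group.
group = {e, a, b}
while True:
    new = {compose(x, y) for x in group for y in group} - group
    if not new:
        break
    group |= new

assert len(group) == 6                      # S3 has order 6
assert compose(a, a) == e                   # aa = e
assert compose(a, b) != compose(b, a)       # non-abelian: ab != ba
```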
Group of translations of the plane
A translation of the plane is a rigid movement of every point of the plane for a certain distance in a certain direction.
For instance "move in the North-East direction for 2 kilometres" is a translation of the plane.
Two translations such as a and b can be composed to form a new translation a ∘ b as follows: first follow the prescription of b, then that of a.
For instance, if
a = "move North-East for 3 kilometres"
and
b = "move South-East for 4 kilometres"
then
a ∘ b = "move to bearing −8.13° for 5 kilometres" (bearing is measured counterclockwise and from East), that is, 8.13° south of due east
Or, if
a = "move to bearing 53.13° for 3 kilometres" (bearing is measured counterclockwise and from East)
and
b = "move to bearing 323.13° for 4 kilometres" (bearing is measured counterclockwise and from East)
then
a ∘ b = "move East for 5 kilometres"
(see Pythagorean theorem for why this is so, geometrically).
The set of all translations of the plane with composition as the operation forms a group:
If a and b are translations, then a ∘ b is also a translation.
Composition of translations is associative: (a ∘ b) ∘ c = a ∘ (b ∘ c).
The identity element for this group is the translation with prescription "move zero kilometres in any direction".
The inverse of a translation is given by walking in the opposite direction for the same distance.
This is an abelian group and our first (nondiscrete) example of a Lie group: a group which is also a manifold.
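Identifying each translation with its displacement vector makes composition into vector addition, and the group axioms, as well as the 3-4-5 example above, can be checked numerically. A Python sketch (angles measured counterclockwise from East, as in the text):

```python
import math

# A translation of the plane is modelled by its displacement vector;
# composition of translations is then vector addition.

def compose(a, b):
    """First follow b, then a."""
    return (a[0] + b[0], a[1] + b[1])

# a = "move North-East for 3 km", b = "move South-East for 4 km"
a = (3 * math.cos(math.radians(45)), 3 * math.sin(math.radians(45)))
b = (4 * math.cos(math.radians(-45)), 4 * math.sin(math.radians(-45)))

c = compose(a, b)
assert abs(math.hypot(*c) - 5.0) < 1e-9                  # 3-4-5 right triangle
assert abs(math.degrees(math.atan2(c[1], c[0])) + 8.13) < 0.01  # about 8.13° south of east

# Group axioms: identity and inverses.
zero = (0.0, 0.0)
assert compose(a, zero) == a
inv_a = (-a[0], -a[1])
assert all(abs(v) < 1e-9 for v in compose(a, inv_a))
```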
Symmetry group of a square: dihedral group of order 8
Groups are very important to describe the symmetry of objects, be they geometrical (like a tetrahedron) or algebraic (like a |
https://en.wikipedia.org/wiki/Nilpotent%20group | In mathematics, specifically group theory, a nilpotent group G is a group that has an upper central series that terminates with G. Equivalently, its central series is of finite length or its lower central series terminates with {1}.
Intuitively, a nilpotent group is a group that is "almost abelian". This idea is motivated by the fact that nilpotent groups are solvable, and for finite nilpotent groups, two elements having relatively prime orders must commute. It is also true that finite nilpotent groups are supersolvable. The concept is credited to work in the 1930s by Russian mathematician Sergei Chernikov.
Nilpotent groups arise in Galois theory, as well as in the classification of groups. They also appear prominently in the classification of Lie groups.
Analogous terms are used for Lie algebras (using the Lie bracket) including nilpotent, lower central series, and upper central series.
Definition
The definition uses the idea of a central series for a group. The following are equivalent definitions for a nilpotent group G: G has a central series of finite length; the lower central series of G terminates in the trivial subgroup after finitely many steps; the upper central series of G terminates in the whole group after finitely many steps.
For a nilpotent group, the smallest n such that the group has a central series of length n is called the nilpotency class of the group; and the group is said to be nilpotent of class n. (By definition, the length is n if there are n + 1 different subgroups in the series, including the trivial subgroup and the whole group.)
Equivalently, the nilpotency class equals the length of the lower central series or upper central series.
If a group has nilpotency class at most n, then it is sometimes called a nil-n group.
It follows immediately from any of the above forms of the definition of nilpotency that the trivial group is the unique group of nilpotency class 0, and groups of nilpotency class 1 are exactly the non-trivial abelian groups.
Examples
As noted above, every abelian group is nilpotent.
For a small non-abelian example, consider the quaternion group Q8, one of the smallest non-abelian p-groups. It has center {1, −1} of order 2, and its upper central series is {1}, {1, −1}, Q8; so it is nilpotent of class 2.
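The class-2 claim for Q8 can be verified by brute force, realizing Q8 as the eight unit quaternions (an illustrative sketch, not part of the article):

```python
# Quaternion group Q8 = {±1, ±i, ±j, ±k}, with elements stored as
# integer quadruples (w, x, y, z) representing w + xi + yj + zk.

def mul(p, q):
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def inv(g):
    w, x, y, z = g
    return (w, -x, -y, -z)   # a unit quaternion's inverse is its conjugate

one = (1, 0, 0, 0)
units = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
Q8 = [tuple(s * c for c in u) for u in units for s in (1, -1)]

# The center is exactly {1, -1}.
center = [g for g in Q8 if all(mul(g, x) == mul(x, g) for x in Q8)]
assert sorted(center) == sorted([one, (-1, 0, 0, 0)])

# Every commutator lies in the center, so Q8/Z(Q8) is abelian and
# Q8 is nilpotent of class 2.
for g in Q8:
    for h in Q8:
        comm = mul(mul(g, h), mul(inv(g), inv(h)))
        assert comm in center
```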
The direct product of two nilpotent groups is nilpotent.
All finite p-groups are in fact nilpotent. The maximal class of a group of order p^n is n (for example, any group of order 2 is nilpotent of class 1). The 2-groups of maximal class are the generalised quaternion groups, the dihedral groups, and the semidihedral groups.
Furthermore, every finite nilpotent group is the direct product of p-groups.
The multiplicative group of upper unitriangular n × n matrices over any field F is a nilpotent group of nilpotency class n − 1. In particular, taking n = 3 yields the Heisenberg group H, an example of a non-abelian infinite nilpotent group. It has nilpotency class 2 with central series 1, Z(H), H.
The multiplicative group of invertible upper triangular n × n matrices over a field F is not in general nilpotent, but is solvable.
Any nonabelian group G such that G/Z(G) is abelian has nilpotency class 2, with central series {1}, Z(G), G.
Th |
https://en.wikipedia.org/wiki/Riemannian%20manifold | In differential geometry, a Riemannian manifold or Riemannian space , so called after the German mathematician Bernhard Riemann, is a real, smooth manifold M equipped with a positive-definite inner product gp on the tangent space TpM at each point p.
The family gp of inner products is called a Riemannian metric (or Riemannian metric tensor). Riemannian geometry is the study of Riemannian manifolds.
A common convention is to take g to be smooth, which means that for any smooth coordinate chart on M, the n^2 functions
g(∂/∂x^i, ∂/∂x^j)
are smooth functions. These functions are commonly designated as g_ij.
With further restrictions on the g_ij, one could also consider Lipschitz Riemannian metrics or measurable Riemannian metrics, among many other possibilities.
A Riemannian metric (tensor) makes it possible to define several geometric notions on a Riemannian manifold, such as angle at an intersection, length of a curve, area of a surface and higher-dimensional analogues (volume, etc.), extrinsic curvature of submanifolds, and intrinsic curvature of the manifold itself.
Introduction
In 1828, Carl Friedrich Gauss proved his Theorema Egregium ("remarkable theorem" in Latin), establishing an important property of surfaces. Informally, the theorem says that the curvature of a surface can be determined entirely by measuring distances along paths on the surface. That is, curvature does not depend on how the surface might be embedded in 3-dimensional space. See Differential geometry of surfaces. Bernhard Riemann extended Gauss's theory to higher-dimensional spaces called manifolds in a way that also allows distances and angles to be measured and the notion of curvature to be defined, again in a way that is intrinsic to the manifold and not dependent upon its embedding in higher-dimensional spaces. Albert Einstein used the theory of pseudo-Riemannian manifolds (a generalization of Riemannian manifolds) to develop his general theory of relativity. In particular, his equations for gravitation are constraints on the curvature of spacetime.
Definition
The tangent bundle of a smooth manifold M assigns to each point p of M a vector space T_pM called the tangent space of M at p. A Riemannian metric (by its definition) assigns to each p a positive-definite inner product g_p : T_pM × T_pM → R, along with which comes a norm defined by ||v||_p = sqrt(g_p(v, v)). The smooth manifold M endowed with this metric is a Riemannian manifold, denoted (M, g).
When given a system of smooth local coordinates on M given by real-valued functions x^1, ..., x^n, the vectors
∂/∂x^1|_p, ..., ∂/∂x^n|_p
form a basis of the vector space T_pM for any p. Relative to this basis, one can define metric tensor "components" at each point p by
g_ij(p) := g_p(∂/∂x^i|_p, ∂/∂x^j|_p).
One could consider these as n^2 individual functions or as a single matrix-valued function on the coordinate domain; note that the "Riemannian" assumption says that it is valued in the subset consisting of symmetric positive-definite matrices.
In terms of tensor algebra, the metric tensor can be written in terms of the dual basis dx^1, ..., dx^n of the cotangent bundle as
g = Σ_{i,j} g_ij dx^i ⊗ dx^j.
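As a numerical illustration (not from the article), the components g_ij can be computed for the unit 2-sphere in spherical coordinates (theta, phi) by differentiating the standard embedding into R^3; the expected result is the round metric diag(1, sin^2(theta)):

```python
import math

# Metric components g_ij for the unit sphere in spherical coordinates,
# computed from the embedding f into R^3 by central differences:
# g_ij = <df/dx^i, df/dx^j>.

def f(theta, phi):
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

def partial(fun, i, point, h=1e-6):
    p = list(point)
    p[i] += h
    plus = fun(*p)
    p[i] -= 2 * h
    minus = fun(*p)
    return [(a - b) / (2 * h) for a, b in zip(plus, minus)]

def metric(theta, phi):
    basis = [partial(f, i, (theta, phi)) for i in range(2)]
    return [[sum(u * v for u, v in zip(basis[i], basis[j]))
             for j in range(2)] for i in range(2)]

g = metric(1.0, 0.5)
assert abs(g[0][0] - 1.0) < 1e-5                       # g_11 = 1
assert abs(g[1][1] - math.sin(1.0) ** 2) < 1e-5        # g_22 = sin^2(theta)
assert abs(g[0][1]) < 1e-5 and abs(g[1][0]) < 1e-5     # off-diagonal vanishes
```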
Isometries
If and are two Riemannian mani |
https://en.wikipedia.org/wiki/Special%20linear%20group | In mathematics, the special linear group of degree n over a field F is the set of matrices with determinant 1, with the group operations of ordinary matrix multiplication and matrix inversion. This is the normal subgroup of the general linear group given by the kernel of the determinant
where F× is the multiplicative group of F (that is, F excluding 0).
These elements are "special" in that they form an algebraic subvariety of the general linear group – they satisfy a polynomial equation (since the determinant is polynomial in the entries).
When F is a finite field of order q, the notation SL(n, q) is sometimes used.
Geometric interpretation
The special linear group can be characterized as the group of volume and orientation preserving linear transformations of Rn; this corresponds to the interpretation of the determinant as measuring change in volume and orientation.
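This volume-preserving property can be checked directly for a sample element of SL(2, R). An illustrative sketch (the shear matrix and test vectors are arbitrary choices):

```python
# A determinant-1 matrix preserves signed area: the parallelogram spanned
# by the images of two vectors has the same area as the original one.

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def apply(m, v):
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

def cross(u, v):
    """Signed area of the parallelogram spanned by u and v."""
    return u[0] * v[1] - u[1] * v[0]

# A shear, an element of SL(2, R):
m = [[1, 3],
     [0, 1]]
assert det2(m) == 1

u, v = (2, 1), (-1, 4)
assert cross(apply(m, u), apply(m, v)) == cross(u, v)
```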
Lie subgroup
When F is R or C, SL(n, F) is a Lie subgroup of GL(n, F) of dimension n^2 − 1. The Lie algebra of SL(n, F) consists of all n × n matrices over F with vanishing trace. The Lie bracket is given by the commutator.
Topology
Any invertible matrix can be uniquely represented according to the polar decomposition as the product of a unitary matrix and a hermitian matrix with positive eigenvalues. The determinant of the unitary matrix is on the unit circle while that of the hermitian matrix is real and positive; since, for a matrix from the special linear group, the product of these two determinants must be 1, each of them must be 1. Therefore, a special linear matrix can be written as the product of a special unitary matrix (or special orthogonal matrix in the real case) and a positive definite hermitian matrix (or symmetric matrix in the real case) having determinant 1.
Thus the topology of the group SL(n, C) is the product of the topology of SU(n) and the topology of the group of hermitian matrices of unit determinant with positive eigenvalues. A hermitian matrix of unit determinant and having positive eigenvalues can be uniquely expressed as the exponential of a traceless hermitian matrix, and therefore the topology of this is that of (n^2 − 1)-dimensional Euclidean space. Since SU(n) is simply connected, we conclude that SL(n, C) is also simply connected, for all n greater than or equal to 2.
The topology of SL(n, R) is the product of the topology of SO(n) and the topology of the group of symmetric matrices with positive eigenvalues and unit determinant. Since the latter matrices can be uniquely expressed as the exponential of symmetric traceless matrices, then this latter topology is that of (n(n + 1)/2 − 1)-dimensional Euclidean space. Thus, the group SL(n, R) has the same fundamental group as SO(n), that is, Z for n = 2 and Z2 for n > 2. In particular this means that SL(n, R), unlike SL(n, C), is not simply connected, for n greater than 1.
Relations to other subgroups of GL(n, A)
Two related subgroups, which in some cases coincide with SL, and in other cases are accidentally conflated with SL, are the commutator subgroup of GL, and the group generated by tra |
https://en.wikipedia.org/wiki/Ultrafinitism | In the philosophy of mathematics, ultrafinitism (also known as ultraintuitionism, strict formalism, strict finitism, actualism, predicativism, and strong finitism) is a form of finitism and intuitionism. There are various philosophies of mathematics that are called ultrafinitism. A major identifying property common among most of these philosophies is their objections to totality of number theoretic functions like exponentiation over natural numbers.
Main ideas
Like other finitists, ultrafinitists deny the existence of the infinite set of natural numbers, on the basis that it can never be completed (i.e., there is a largest natural number).
In addition, some ultrafinitists are concerned with acceptance of objects in mathematics that no one can construct in practice because of physical restrictions in constructing large finite mathematical objects.
Thus some ultrafinitists will deny or refrain from accepting the existence of large numbers, for example, the floor of the first Skewes's number, which is a huge number defined using the exponential function as exp(exp(exp(79))), or
The reason is that nobody has yet calculated what natural number is the floor of this real number, and it may not even be physically possible to do so. Similarly, (in Knuth's up-arrow notation) would be considered only a formal expression that does not correspond to a natural number. The brand of ultrafinitism concerned with physical realizability of mathematics is often called actualism.
Edward Nelson criticized the classical conception of natural numbers because of the circularity of its definition. In classical mathematics the natural numbers are defined as 0 and numbers obtained by the iterative applications of the successor function to 0. But the concept of natural number is already assumed for the iteration. In other words, to obtain a number like one needs to perform the successor function iteratively (in fact, exactly times) to 0.
Some versions of ultrafinitism are forms of constructivism, but most constructivists view the philosophy as unworkably extreme. The logical foundation of ultrafinitism is unclear; in his comprehensive survey Constructivism in Mathematics (1988), the constructive logician A. S. Troelstra dismissed it by saying "no satisfactory development exists at present." This was not so much a philosophical objection as it was an admission that, in a rigorous work of mathematical logic, there was simply nothing precise enough to include.
People associated with ultrafinitism
Serious work on ultrafinitism was led, from 1959 until his death in 2016, by Alexander Esenin-Volpin, who in 1961 sketched a program for proving the consistency of Zermelo–Fraenkel set theory in ultrafinite mathematics. Other mathematicians who have worked in the topic include Doron Zeilberger, Edward Nelson, Rohit Jivanlal Parikh, and Jean Paul Van Bendegem. The philosophy is also sometimes associated with the beliefs of Ludwig Wittgenstein, Robin Gandy, Petr Vopěnka, |
https://en.wikipedia.org/wiki/Principal%20ideal | In mathematics, specifically ring theory, a principal ideal is an ideal in a ring that is generated by a single element of through multiplication by every element of The term also has another, similar meaning in order theory, where it refers to an (order) ideal in a poset generated by a single element which is to say the set of all elements less than or equal to in
The remainder of this article addresses the ring-theoretic concept.
Definitions
a left principal ideal of R is a subset of R given by Ra = {ra : r ∈ R} for some element a of R;
a right principal ideal of R is a subset of R given by aR = {ar : r ∈ R} for some element a of R;
a two-sided principal ideal of R is a subset of R given by RaR = {r1as1 + ... + rnasn : r1, s1, ..., rn, sn ∈ R} for some element a of R, namely, the set of all finite sums of elements of the form ras.
While this definition for two-sided principal ideal may seem more complicated than the others, it is necessary to ensure that the ideal remains closed under addition.
If R is a commutative ring with identity, then the above three notions are all the same.
In that case, it is common to write the ideal generated by a as (a) or ⟨a⟩.
Examples of non-principal ideal
Not all ideals are principal.
For example, consider the commutative ring C[x, y] of all polynomials in two variables x and y with complex coefficients. The ideal (x, y) generated by x and y, which consists of all the polynomials in C[x, y] that have zero for the constant term, is not principal. To see this, suppose that p were a generator for (x, y). Then x and y would both be divisible by p, which is impossible unless p is a nonzero constant.
But zero is the only constant in (x, y), so we have a contradiction.
In the ring Z[√−3], the numbers a + b√−3 in which a + b is even form a non-principal ideal. This ideal forms a regular hexagonal lattice in the complex plane. Consider 2 and 1 + √−3. These numbers are elements of this ideal with the same norm (four), but because the only units in the ring are 1 and −1, they are not associates.
Related definitions
A ring in which every ideal is principal is called principal, or a principal ideal ring. A principal ideal domain (PID) is an integral domain in which every ideal is principal. Any PID is a unique factorization domain; the normal proof of unique factorization in the integers (the so-called fundamental theorem of arithmetic) holds in any PID.
Examples of principal ideal
The principal ideals in Z are of the form (n) = nZ. In fact, Z is a principal ideal domain, which can be shown as follows. Suppose I = (n1, n2, ...) where n1 ≠ 0, and consider the surjective homomorphisms Z/n1Z → Z/(n1, n2)Z → Z/(n1, n2, n3)Z → ... Since Z/n1Z is finite, for sufficiently large k we have Z/(n1, ..., nk)Z = Z/(n1, ..., nk+1)Z. Thus I = (n1, ..., nk), which implies I is always finitely generated. Since the ideal generated by any integers a and b is exactly (gcd(a, b)), by induction on the number of generators it follows that I is principal.
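The key step, that the ideal generated by two integers a and b is the principal ideal generated by gcd(a, b), is Bézout's identity, which the extended Euclidean algorithm makes effective. A Python sketch (the integers 84 and 30 are an arbitrary example):

```python
import math

# In Z, the ideal generated by a and b equals (gcd(a, b)): every
# Z-combination of a and b is a multiple of the gcd, and the gcd itself
# is such a combination (Bezout's identity).

def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) = a*x + b*y."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

a, b = 84, 30
g, x, y = extended_gcd(a, b)
assert g == math.gcd(a, b) == 6
assert a * x + b * y == g            # the gcd lies in the ideal (a, b)

# Conversely, every element r*a + s*b is divisible by g.
for r in range(-10, 11):
    for s in range(-10, 11):
        assert (r * a + s * b) % g == 0
```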
However, all rings have principal ideals, namely, any ideal generated by exactly one element. For example, the ideal (x) is a principal ideal of Z[x], and (√−3) is a principal ideal of Z[√−3]. In fact, {0} = (0) and R = (1) are principal ideals of any ring R.
Properties
Any Euclidean domain is a PID; the algorithm used to calculate greatest common divisors may be u |
https://en.wikipedia.org/wiki/Dedekind%20domain | In abstract algebra, a Dedekind domain or Dedekind ring, named after Richard Dedekind, is an integral domain in which every nonzero proper ideal factors into a product of prime ideals. It can be shown that such a factorization is then necessarily unique up to the order of the factors. There are at least three other characterizations of Dedekind domains that are sometimes taken as the definition: see below.
A field is a commutative ring in which there are no nontrivial proper ideals, so that any field is a Dedekind domain, albeit in a rather vacuous way. Some authors add the requirement that a Dedekind domain not be a field. Many more authors state theorems for Dedekind domains with the implicit proviso that they may require trivial modifications for the case of fields.
An immediate consequence of the definition is that every principal ideal domain (PID) is a Dedekind domain. In fact a Dedekind domain is a unique factorization domain (UFD) if and only if it is a PID.
The prehistory of Dedekind domains
In the 19th century it became a common technique to gain insight into integer solutions of polynomial equations using rings of algebraic numbers of higher degree. For instance, fix a positive integer m. In the attempt to determine which integers are represented by the quadratic form x² + my², it is natural to factor the quadratic form into (x + √−m·y)(x − √−m·y), the factorization taking place in the ring of integers of the quadratic field Q(√−m). Similarly, for a positive integer n the polynomial zⁿ − yⁿ (which is relevant for solving the Fermat equation xⁿ + yⁿ = zⁿ) can be factored over the ring Z[ζ_n], where ζ_n is a primitive n-th root of unity.
For a few small values of m and n these rings of algebraic integers are PIDs, and this can be seen as an explanation of the classical successes of Fermat (m = 1, n = 4) and Euler (m = 2, 3, n = 3). By this time a procedure for determining whether the ring of all algebraic integers of a given quadratic field is a PID was well known to the quadratic form theorists. Especially, Gauss had looked at the case of imaginary quadratic fields: he found exactly nine values of D < 0 for which the ring of integers is a PID and conjectured that there were no further values. (Gauss' conjecture was proven more than one hundred years later by Kurt Heegner, Alan Baker and Harold Stark.) However, this was understood (only) in the language of equivalence classes of quadratic forms, so that in particular the analogy between quadratic forms and the Fermat equation seems not to have been perceived. In 1847 Gabriel Lamé announced a solution of Fermat's Last Theorem for all n > 2; that is, that the Fermat equation has no solutions in nonzero integers, but it turned out that his solution hinged on the assumption that the cyclotomic ring Z[ζ_n] is a UFD. Ernst Kummer had shown three years before that this was not the case already for n = 23 (the full, finite list of values of n for which Z[ζ_n] is a UFD is now known). At the same time, Kummer developed powerful new methods to prove Fermat's Last Theorem at least for a large class of p |
https://en.wikipedia.org/wiki/Local%20field | In mathematics, a field K is called a (non-Archimedean) local field if it is complete with respect to a topology induced by a discrete valuation v and if its residue field k is finite. Equivalently, a local field is a locally compact topological field with respect to a non-discrete topology. Sometimes, the real numbers R and the complex numbers C (with their standard topologies) are also defined to be local fields; this is the convention we will adopt below. Given a local field, the valuation defined on it can be of either of two types, each corresponding to one of the two basic types of local fields: those in which the valuation is Archimedean and those in which it is not. In the first case, one calls the local field an Archimedean local field; in the second case, one calls it a non-Archimedean local field. Local fields arise naturally in number theory as completions of global fields.
While Archimedean local fields have been quite well known in mathematics for at least 250 years, the first examples of non-Archimedean local fields, the fields of p-adic numbers for positive prime integer p, were introduced by Kurt Hensel at the end of the 19th century.
Every local field is isomorphic (as a topological field) to one of the following:
Archimedean local fields (characteristic zero): the real numbers R, and the complex numbers C.
Non-Archimedean local fields of characteristic zero: finite extensions of the p-adic numbers Qp (where p is any prime number).
Non-Archimedean local fields of characteristic p (for p any given prime number): the field of formal Laurent series Fq((T)) over a finite field Fq, where q is a power of p.
Of particular importance in number theory, classes of local fields show up as the completions of algebraic number fields with respect to the discrete valuations corresponding to their maximal ideals. Research papers in modern number theory often consider a more general notion, requiring only that the residue field be perfect of positive characteristic, not necessarily finite. This article uses the former definition.
Induced absolute value
Given an absolute value |·| on a field K, the following topology can be defined on K: for a positive real number m, define the subset B_m of K by B_m = {a ∈ K : |a| ≤ m}.
Then, the b+Bm make up a neighbourhood basis of b in K.
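As a concrete sketch, the p-adic absolute value on the rationals — whose completion is the non-Archimedean local field Q_p — can be computed directly (the helper name is illustrative, not from the article):

```python
from fractions import Fraction

def padic_abs(x, p):
    """The p-adic absolute value |x|_p = p**(-v), where p**v is the
    exact power of p dividing the rational number x."""
    x = Fraction(x)
    if x == 0:
        return 0.0
    v = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return float(p) ** -v

# 12 = 2^2 * 3, so |12|_2 = 1/4; 1/8 has |1/8|_2 = 8.
assert padic_abs(12, 2) == 0.25
assert padic_abs(Fraction(1, 8), 2) == 8.0
# Ultrametric inequality, the hallmark of a non-Archimedean valuation:
assert padic_abs(12 + 20, 2) <= max(padic_abs(12, 2), padic_abs(20, 2))
```

The balls B_m = {x : |x|_p ≤ m} built from this absolute value are exactly the neighbourhoods that generate the topology described above.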
Conversely, a topological field with a non-discrete locally compact topology has an absolute value defining its topology. It can be constructed using the Haar measure of the additive group of the field.
Basic features of non-Archimedean local fields
For a non-Archimedean local field F (with absolute value denoted by |·|), the following objects are important:
its ring of integers O = {a ∈ F : |a| ≤ 1}, which is a discrete valuation ring, is the closed unit ball of F, and is compact;
the units O× = {a ∈ F : |a| = 1} in its ring of integers, which form a group and constitute the unit sphere of F;
the unique non-zero prime ideal m = {a ∈ F : |a| < 1} in its ring of integers, which is its open unit ball;
a generator ϖ of m, called a uniformizer of F;
its re |
https://en.wikipedia.org/wiki/Quadric | In mathematics, a quadric or quadric surface (quadric hypersurface in higher dimensions), is a generalization of conic sections (ellipses, parabolas, and hyperbolas). It is a hypersurface (of dimension D) in a (D + 1)-dimensional space, and it is defined as the zero set of an irreducible polynomial of degree two in D + 1 variables; for example, D = 1 in the case of conic sections. When the defining polynomial is not absolutely irreducible, the zero set is generally not considered a quadric, although it is often called a degenerate quadric or a reducible quadric.
In coordinates x_1, x_2, …, x_{D+1}, the general quadric is thus defined by the algebraic equation
sum over i, j of x_i Q_{ij} x_j, plus sum over i of P_i x_i, plus R = 0,
which may be compactly written in vector and matrix notation as:
x Q x^T + P x^T + R = 0,
where x = (x_1, x_2, …, x_{D+1}) is a row vector, x^T is the transpose of x (a column vector), Q is a (D + 1) × (D + 1) matrix and P is a (D + 1)-dimensional row vector and R a scalar constant. The values Q, P and R are often taken to be over real numbers or complex numbers, but a quadric may be defined over any field.
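As a small check of the matrix form (assuming the standard convention x Q x^T + P x^T + R = 0), the unit circle x² + y² − 1 = 0 corresponds to Q the 2 × 2 identity matrix, P = 0 and R = −1:

```python
import math

# Unit circle as a quadric: Q = identity, P = 0, R = -1.
Q = [[1.0, 0.0],
     [0.0, 1.0]]
P = [0.0, 0.0]
R = -1.0

def quadric_value(x):
    """Evaluate x Q x^T + P x^T + R for a point x = [x1, x2]."""
    quad = sum(x[i] * Q[i][j] * x[j] for i in range(2) for j in range(2))
    lin = sum(P[i] * x[i] for i in range(2))
    return quad + lin + R

# Every point (cos t, sin t) lies on the quadric:
for k in range(8):
    t = 2 * math.pi * k / 8
    assert abs(quadric_value([math.cos(t), math.sin(t)])) < 1e-12
```

Changing Q, P, R in this template produces any conic; the same evaluation generalizes verbatim to (D + 1) variables.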
A quadric is an affine algebraic variety, or, if it is reducible, an affine algebraic set. Quadrics may also be defined in projective spaces; see below.
Euclidean plane
As the dimension of a Euclidean plane is two, quadrics in a Euclidean plane have dimension one and are thus plane curves. They are called conic sections, or conics.
Euclidean space
In three-dimensional Euclidean space, quadrics have dimension two, and are known as quadric surfaces. Their quadratic equations have the form
A x² + B y² + C z² + D xy + E yz + F xz + G x + H y + I z + J = 0,
where A, B, …, J are real numbers, and at least one of the coefficients A, B, …, F of the degree-two terms is nonzero.
The quadric surfaces are classified and named by their shape, which corresponds to the orbits under affine transformations. That is, if an affine transformation maps a quadric onto another one, they belong to the same class, and share the same name and many properties.
The principal axis theorem shows that for any (possibly reducible) quadric, a suitable change of Cartesian coordinates or, equivalently, a Euclidean transformation allows putting the equation of the quadric into a unique simple form on which the class of the quadric is immediately visible. This form is called the normal form of the equation, since two quadrics have the same normal form if and only if there is a Euclidean transformation that maps one quadric to the other. The normal forms are as follows:
where the are either 1, –1 or 0, except which takes only the value 0 or 1.
Each of these 17 normal forms corresponds to a single orbit under affine transformations. In three cases there are no real points: (imaginary ellipsoid), (imaginary elliptic cylinder), and (pair of complex conjugate parallel planes, a reducible quadric). In one case, the imaginary cone, there is a single point (). If one has a line (in fact two complex conjugate intersecting planes). For one has two intersecting planes (reducible quadric). For one has a double plane. For one has two parallel planes (reducible quadric).
Thus, among the 17 normal forms, there are nine true quadrics: |
https://en.wikipedia.org/wiki/Centralizer%20and%20normalizer | In mathematics, especially group theory, the centralizer (also called commutant) of a subset S in a group G is the set of elements of G that commute with every element of S, or equivalently, such that conjugation by leaves each element of S fixed. The normalizer of S in G is the set of elements of G that satisfy the weaker condition of leaving the set fixed under conjugation. The centralizer and normalizer of S are subgroups of G. Many techniques in group theory are based on studying the centralizers and normalizers of suitable subsets S.
Suitably formulated, the definitions also apply to semigroups.
In ring theory, the centralizer of a subset of a ring is defined with respect to the semigroup (multiplication) operation of the ring. The centralizer of a subset of a ring R is a subring of R. This article also deals with centralizers and normalizers in a Lie algebra.
The idealizer in a semigroup or ring is another construction that is in the same vein as the centralizer and normalizer.
Definitions
Group and semigroup
The centralizer of a subset S of a group (or semigroup) G is defined as
C_G(S) = {g ∈ G : gs = sg for all s ∈ S} = {g ∈ G : gsg⁻¹ = s for all s ∈ S},
where only the first definition applies to semigroups.
If there is no ambiguity about the group in question, the G can be suppressed from the notation. When S = {a} is a singleton set, we write CG(a) instead of CG({a}). Another less common notation for the centralizer is Z(a), which parallels the notation for the center. With this latter notation, one must be careful to avoid confusion between the center of a group G, Z(G), and the centralizer of an element g in G, Z(g).
The normalizer of S in the group (or semigroup) G is defined as
N_G(S) = {g ∈ G : gS = Sg} = {g ∈ G : gSg⁻¹ = S},
where again only the first definition applies to semigroups. The definitions of centralizer and normalizer are similar but not identical. If g is in the centralizer of S and s is in S, then it must be that gs = sg, but if g is in the normalizer, then gs = tg for some t in S, with t possibly different from s. That is, elements of the centralizer of S must commute pointwise with S, but elements of the normalizer of S need only commute with S as a set. The same notational conventions mentioned above for centralizers also apply to normalizers. The normalizer should not be confused with the normal closure.
Clearly C_G(S) ⊆ N_G(S), and both are subgroups of G.
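The pointwise/setwise distinction is easy to see by brute force in a small group. The sketch below (illustrative, not from the article) computes both subgroups for the alternating subgroup A3 inside S3, where they genuinely differ:

```python
from itertools import permutations

def compose(p, q):
    """Composition of permutations given as tuples: (p ∘ q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

G = list(permutations(range(3)))        # the symmetric group S3
S = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}   # the alternating subgroup A3

centralizer = [g for g in G if all(compose(g, s) == compose(s, g) for s in S)]
normalizer = [g for g in G
              if {compose(compose(g, s), inverse(g)) for s in S} == S]

# A3 is abelian but not central in S3, so C_G(A3) = A3 (order 3);
# A3 is a normal subgroup, so N_G(A3) is all of S3 (order 6).
assert len(centralizer) == 3
assert len(normalizer) == 6
```

Here the transpositions normalize A3 (conjugation permutes the 3-cycles) without centralizing it, which is exactly the "as a set" versus "pointwise" distinction.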
Ring, algebra over a field, Lie ring, and Lie algebra
If R is a ring or an algebra over a field, and S is a subset of R, then the centralizer of S is exactly as defined for groups, with R in the place of G.
If 𝔤 is a Lie algebra (or Lie ring) with Lie product [x, y], then the centralizer of a subset S of 𝔤 is defined to be
C_𝔤(S) = {x ∈ 𝔤 : [x, s] = 0 for all s ∈ S}.
The definition of centralizers for Lie rings is linked to the definition for rings in the following way. If R is an associative ring, then R can be given the bracket product [x, y] = xy − yx. Of course then xy = yx if and only if [x, y] = 0. If we denote the set R with the bracket product as L_R, then clearly the ring centralizer of S in R is equal to the Lie ring centralizer of S in L_R.
The normalizer of a subset |
https://en.wikipedia.org/wiki/Hyperboloid | In geometry, a hyperboloid of revolution, sometimes called a circular hyperboloid, is the surface generated by rotating a hyperbola around one of its principal axes. A hyperboloid is the surface obtained from a hyperboloid of revolution by deforming it by means of directional scalings, or more generally, of an affine transformation.
A hyperboloid is a quadric surface, that is, a surface defined as the zero set of a polynomial of degree two in three variables. Among quadric surfaces, a hyperboloid is characterized by not being a cone or a cylinder, having a center of symmetry, and intersecting many planes into hyperbolas. A hyperboloid has three pairwise perpendicular axes of symmetry, and three pairwise perpendicular planes of symmetry.
Given a hyperboloid, one can choose a Cartesian coordinate system such that the hyperboloid is defined by one of the following equations:
x²/a² + y²/b² − z²/c² = 1,
or
x²/a² + y²/b² − z²/c² = −1.
The coordinate axes are axes of symmetry of the hyperboloid and the origin is the center of symmetry of the hyperboloid. In either case, the hyperboloid is asymptotic to the cone of equation:
x²/a² + y²/b² − z²/c² = 0.
One has a hyperboloid of revolution if and only if a = b. Otherwise, the axes are uniquely defined (up to the exchange of the x-axis and the y-axis).
There are two kinds of hyperboloids. In the first case (+1 in the right-hand side of the equation): a one-sheet hyperboloid, also called a hyperbolic hyperboloid. It is a connected surface, which has a negative Gaussian curvature at every point. This implies that near every point the intersection of the hyperboloid and its tangent plane at the point consists of two branches of curve that have distinct tangents at the point. In the case of the one-sheet hyperboloid, these branches of curves are lines and thus the one-sheet hyperboloid is a doubly ruled surface.
In the second case (−1 in the right-hand side of the equation): a two-sheet hyperboloid, also called an elliptic hyperboloid. The surface has two connected components and a positive Gaussian curvature at every point. The surface is convex in the sense that the tangent plane at every point intersects the surface only in this point.
Parametric representations
Cartesian coordinates for the hyperboloids can be defined, similar to spherical coordinates, keeping the azimuth angle θ ∈ [0, 2π), but changing inclination into hyperbolic trigonometric functions:
One-surface hyperboloid: v ∈ (−∞, ∞), with x = a cosh(v) cos(θ), y = b cosh(v) sin(θ), z = c sinh(v)
Two-surface hyperboloid: v ∈ [0, ∞), with x = a sinh(v) cos(θ), y = b sinh(v) sin(θ), z = ±c cosh(v)
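These parametrizations can be verified numerically. The sketch below assumes the standard hyperbolic parametrization (a cosh v cos θ, b cosh v sin θ, c sinh v) for one sheet and (a sinh v cos θ, b sinh v sin θ, ±c cosh v) for two sheets, and checks that each point satisfies x²/a² + y²/b² − z²/c² = ±1:

```python
import math

a, b, c = 2.0, 3.0, 1.5   # illustrative semi-axes

def one_sheet(theta, v):
    """Point on x²/a² + y²/b² − z²/c² = 1."""
    return (a * math.cosh(v) * math.cos(theta),
            b * math.cosh(v) * math.sin(theta),
            c * math.sinh(v))

def two_sheet(theta, v, sign=1):
    """Point on x²/a² + y²/b² − z²/c² = −1 (upper or lower sheet)."""
    return (a * math.sinh(v) * math.cos(theta),
            b * math.sinh(v) * math.sin(theta),
            sign * c * math.cosh(v))

def quadric(p):
    x, y, z = p
    return x * x / (a * a) + y * y / (b * b) - z * z / (c * c)

for theta in (0.0, 0.7, 2.0):
    for v in (0.3, 1.2):
        assert abs(quadric(one_sheet(theta, v)) - 1) < 1e-12
        assert abs(quadric(two_sheet(theta, v)) + 1) < 1e-12
```

The identity cosh²v − sinh²v = 1 is what makes both checks succeed, mirroring the role of cos²θ + sin²θ = 1 in spherical coordinates.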
The following parametric representation includes hyperboloids of one sheet, two sheets, and their common boundary cone, each with the z-axis as the axis of symmetry:
x = a √(s + v²) cos(θ), y = b √(s + v²) sin(θ), z = c v.
For s > 0 one obtains a hyperboloid of one sheet,
for s < 0 a hyperboloid of two sheets, and
for s = 0 a double cone.
One can obtain a parametric representation of a hyperboloid with a different coordinate axis as the axis of symmetry by shuffling the position of the term to the appropriate component in the equation above.
Generalised equations
More generally, an arbitrarily oriented hyperboloid, centered at v, is defined by the equation
(x − v)^T A (x − v) = 1,
wher |
https://en.wikipedia.org/wiki/Paraboloid | In geometry, a paraboloid is a quadric surface that has exactly one axis of symmetry and no center of symmetry. The term "paraboloid" is derived from parabola, which refers to a conic section that has a similar property of symmetry.
Every plane section of a paraboloid by a plane parallel to the axis of symmetry is a parabola. The paraboloid is hyperbolic if every other plane section is either a hyperbola, or two crossing lines (in the case of a section by a tangent plane). The paraboloid is elliptic if every other nonempty plane section is either an ellipse, or a single point (in the case of a section by a tangent plane). A paraboloid is either elliptic or hyperbolic.
Equivalently, a paraboloid may be defined as a quadric surface that is not a cylinder, and has an implicit equation whose part of degree two may be factored over the complex numbers into two different linear factors. The paraboloid is hyperbolic if the factors are real; elliptic if the factors are complex conjugate.
An elliptic paraboloid is shaped like an oval cup and has a maximum or minimum point when its axis is vertical. In a suitable coordinate system with three axes x, y, and z, it can be represented by the equation
z = x²/a² + y²/b²,
where a and b are constants that dictate the level of curvature in the xz- and yz-planes respectively. In this position, the elliptic paraboloid opens upward.
A hyperbolic paraboloid (not to be confused with a hyperboloid) is a doubly ruled surface shaped like a saddle. In a suitable coordinate system, a hyperbolic paraboloid can be represented by the equation
z = y²/b² − x²/a².
In this position, the hyperbolic paraboloid opens downward along the x-axis and upward along the y-axis (that is, the parabola in the plane x = 0 opens upward and the parabola in the plane y = 0 opens downward).
Any paraboloid (elliptic or hyperbolic) is a translation surface, as it can be generated by a moving parabola directed by a second parabola.
Properties and applications
Elliptic paraboloid
In a suitable Cartesian coordinate system, an elliptic paraboloid has the equation
z = x²/a² + y²/b².
If a = b, an elliptic paraboloid is a circular paraboloid or paraboloid of revolution. It is a surface of revolution obtained by revolving a parabola around its axis.
A circular paraboloid contains circles. This is also true in the general case (see Circular section).
From the point of view of projective geometry, an elliptic paraboloid is an ellipsoid that is tangent to the plane at infinity.
Plane sections
The plane sections of an elliptic paraboloid can be:
a parabola, if the plane is parallel to the axis,
a point, if the plane is a tangent plane,
an ellipse or empty, otherwise.
Parabolic reflector
On the axis of a circular paraboloid, there is a point called the focus (or focal point), such that, if the paraboloid is a mirror, light (or other waves) from a point source at the focus is reflected into a parallel beam, parallel to the axis of the paraboloid. This also works the other way around: a parallel beam of light that is parallel t |
https://en.wikipedia.org/wiki/Danica%20McKellar | Danica Mae McKellar (born January 3, 1975) is an American actress, mathematics writer, and education advocate. She played Winnie Cooper in the television series The Wonder Years from 1988 to 1993, and from 2010 to 2022 voiced Miss Martian in the animated superhero series Young Justice.
In 2015, McKellar was cast in the Netflix original series Project Mc2. She has appeared in several television films for Hallmark Channel. Since 2017, following Janet Waldo's death the previous year, she has been the voice of Judy Jetson, beginning with The Jetsons & WWE: Robo-WrestleMania!.
In addition to her acting work, McKellar has written six non-fiction books, all dealing with mathematics: Math Doesn't Suck, Kiss My Math, Hot X: Algebra Exposed, and Girls Get Curves: Geometry Takes Shape, which encourage middle-school and high-school girls to have confidence and succeed in mathematics, as well as Goodnight, Numbers and Do Not Open This Math Book.
Early life and education
McKellar was born in La Jolla, California. She moved with her family to Los Angeles when she was eight. Her mother Mahaila McKellar (née Tello) was a homemaker; her father Christopher McKellar is a real estate developer; her younger sister Crystal (b. 1976) is a lawyer. She is of paternal Scottish, French, German, Spanish, and Dutch descent and her mother is of Portuguese origin via the Azores and Madeira islands.
McKellar studied at the University of California, Los Angeles, where she was a member of the Alpha Delta Pi sorority and earned a Bachelor of Science degree summa cum laude in Mathematics in 1998. As an undergraduate, she coauthored a scientific paper with Professor Lincoln Chayes and fellow student Brandy Winn titled "Percolation and Gibbs states multiplicity for ferromagnetic Ashkin–Teller models on Z²." Their results are termed the "Chayes–McKellar–Winn theorem". Later, when Chayes was asked to comment about the mathematical abilities of his student coauthors, he was quoted in The New York Times, "I thought that the two were really, really first-rate." For her past collaborative work on research papers, McKellar has an Erdős number of four, and her Erdős–Bacon number is six.
Acting career
The Wonder Years and early acting career
At age seven, McKellar enrolled in weekend acting classes for children at the Lee Strasberg Institute in Los Angeles. In her teens, she landed a prominent role in The Wonder Years, an American television comedy-drama that ran for six seasons on ABC, from 1988 to 1993. She played Gwendolyn "Winnie" Cooper, the main love interest of Kevin Arnold (played by Fred Savage) on the show. Her first kiss was with Fred Savage in an episode of The Wonder Years. She later said, "My first kiss was a pretty nerve-wracking experience! But we never kissed off screen, and pretty quickly our feelings turned into brother/sister, and stayed that way."
Later acting career
McKellar has said that she found it "difficult" to move from being a child actress to an adult actress. Sin |
https://en.wikipedia.org/wiki/Nonlinear%20system | In mathematics and science, a nonlinear system (or a non-linear system) is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, biologists, physicists, mathematicians, and many other scientists since most systems are inherently nonlinear in nature. Nonlinear dynamical systems, describing changes in variables over time, may appear chaotic, unpredictable, or counterintuitive, contrasting with much simpler linear systems.
Typically, the behavior of a nonlinear system is described in mathematics by a nonlinear system of equations, which is a set of simultaneous equations in which the unknowns (or the unknown functions in the case of differential equations) appear as variables of a polynomial of degree higher than one or in the argument of a function which is not a polynomial of degree one.
In other words, in a nonlinear system of equations, the equation(s) to be solved cannot be written as a linear combination of the unknown variables or functions that appear in them. Systems can be defined as nonlinear, regardless of whether known linear functions appear in the equations. In particular, a differential equation is linear if it is linear in terms of the unknown function and its derivatives, even if nonlinear in terms of the other variables appearing in it.
As nonlinear dynamical equations are difficult to solve, nonlinear systems are commonly approximated by linear equations (linearization). This works well up to some accuracy and some range for the input values, but some interesting phenomena such as solitons, chaos, and singularities are hidden by linearization. It follows that some aspects of the dynamic behavior of a nonlinear system can appear to be counterintuitive, unpredictable or even chaotic. Although such chaotic behavior may resemble random behavior, it is in fact not random. For example, some aspects of the weather are seen to be chaotic, where simple changes in one part of the system produce complex effects throughout. This nonlinearity is one of the reasons why accurate long-term forecasts are impossible with current technology.
Some authors use the term nonlinear science for the study of nonlinear systems. This term is disputed by others:
Definition
In mathematics, a linear map (or linear function) is one which satisfies both of the following properties:
Additivity or superposition principle: f(x + y) = f(x) + f(y);
Homogeneity: f(αx) = αf(x).
Additivity implies homogeneity for any rational α, and, for continuous functions, for any real α. For a complex α, homogeneity does not follow from additivity. For example, an antilinear map is additive but not homogeneous. The conditions of additivity and homogeneity are often combined in the superposition principle
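A quick illustrative check of both conditions, comparing the nonlinear map x ↦ x² with the linear map x ↦ 3x:

```python
def f(x):           # a simple nonlinear map
    return x * x

def g(x):           # a linear map, for comparison
    return 3 * x

def additive(h, x, y):
    return h(x + y) == h(x) + h(y)

def homogeneous(h, alpha, x):
    return h(alpha * x) == alpha * h(x)

assert additive(g, 2, 5) and homogeneous(g, 4, 5)     # linear: both hold
assert not additive(f, 2, 5)      # (2 + 5)**2 = 49, but 2**2 + 5**2 = 29
assert not homogeneous(f, 4, 5)   # (4 * 5)**2 = 400, but 4 * 5**2 = 100
```

Failing either condition at a single pair of inputs is already enough to make a system nonlinear.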
An equation written as
f(x) = C
is called linear if f is a linear map (as defined above) and nonlinear otherwise. The equation is called homogeneous if C = 0 and f is a homogeneous function.
The definition is very general in that can be an |
https://en.wikipedia.org/wiki/Wreath%20product | In group theory, the wreath product is a special combination of two groups based on the semidirect product. It is formed by the action of one group on many copies of another group, somewhat analogous to exponentiation. Wreath products are used in the classification of permutation groups and also provide a way of constructing interesting examples of groups.
Given two groups A and H (sometimes known as the bottom and top), there exist two variants of the wreath product: the unrestricted wreath product A Wr H and the restricted wreath product A wr H. The general form, denoted by A Wr_Ω H or A wr_Ω H respectively, requires that H acts on some set Ω; when unspecified, usually Ω = H (a regular wreath product), though a different Ω is sometimes implied. The two variants coincide when A, H, and Ω are all finite. Either variant is also denoted as A ≀ H (with \wr for the LaTeX symbol, Unicode U+2240).
The notion generalizes to semigroups and, as such, is a central construction in the Krohn–Rhodes structure theory of finite semigroups.
Definition
Let A be a group and let H be a group acting on a set Ω (on the left). The direct product A^Ω of A with itself indexed by Ω is the set of sequences (a_ω)_{ω ∈ Ω} in A, indexed by Ω, with a group operation given by pointwise multiplication. The action of H on Ω can be extended to an action on A^Ω by reindexing, namely by defining
h · (a_ω)_{ω ∈ Ω} = (a_{h⁻¹ω})_{ω ∈ Ω}
for all h ∈ H and all (a_ω)_{ω ∈ Ω} ∈ A^Ω.
Then the unrestricted wreath product A Wr_Ω H of A by H is the semidirect product A^Ω ⋊ H with the action of H on A^Ω given above. The subgroup A^Ω of A Wr_Ω H is called the base of the wreath product.
The restricted wreath product A wr_Ω H is constructed in the same way as the unrestricted wreath product except that one uses the direct sum as the base of the wreath product. In this case, the base consists of all sequences in A^Ω with finitely many non-identity entries. The two definitions coincide when Ω is finite.
In the most common case, Ω = H, and H acts on itself by left multiplication. In this case, the unrestricted and restricted wreath product may be denoted by A Wr H and A wr H respectively. This is called the regular wreath product.
Notation and conventions
The structure of the wreath product of A by H depends on the H-set Ω and in case Ω is infinite it also depends on whether one uses the restricted or unrestricted wreath product. However, in literature the notation used may be deficient and one needs to pay attention to the circumstances.
In literature A≀ΩH may stand for the unrestricted wreath product A WrΩ H or the restricted wreath product A wrΩ H.
Similarly, A≀H may stand for the unrestricted regular wreath product A Wr H or the restricted regular wreath product A wr H.
In literature the H-set Ω may be omitted from the notation even if Ω ≠ H.
In the special case that H = Sn is the symmetric group of degree n it is common in the literature to assume that Ω = {1,...,n} (with the natural action of Sn) and then omit Ω from the notation. That is, A≀Sn commonly denotes A≀{1,...,n}Sn instead of the regular wreath product A≀SnSn. In the first case the base group is t |
https://en.wikipedia.org/wiki/Recurrence%20relation | In mathematics, a recurrence relation is an equation according to which the nth term of a sequence of numbers is equal to some combination of the previous terms. Often, only k previous terms of the sequence appear in the equation, for a parameter k that is independent of n; this number k is called the order of the relation. If the values of the first k numbers in the sequence have been given, the rest of the sequence can be calculated by repeatedly applying the equation.
In linear recurrences, the nth term is equated to a linear function of the k previous terms. A famous example is the recurrence for the Fibonacci numbers,
F_n = F_{n−1} + F_{n−2},
where the order is two and the linear function merely adds the two previous terms. This example is a linear recurrence with constant coefficients, because the coefficients of the linear function (1 and 1) are constants that do not depend on n. For these recurrences, one can express the general term of the sequence as a closed-form expression of n. As well, linear recurrences with polynomial coefficients depending on n are also important, because many common elementary and special functions have a Taylor series whose coefficients satisfy such a recurrence relation (see holonomic function).
Solving a recurrence relation means obtaining a closed-form solution: a non-recursive function of n.
The concept of a recurrence relation can be extended to multidimensional arrays, that is, indexed families that are indexed by tuples of natural numbers.
Definition
A recurrence relation is an equation that expresses each element of a sequence as a function of the preceding ones. More precisely, in the case where only the immediately preceding element is involved, a recurrence relation has the form
u_n = φ(n, u_{n−1}) for n > 0,
where
φ : N × X → X is a function, where X is a set to which the elements of a sequence must belong. For any u_0 in X, this defines a unique sequence with u_0 as its first element, called the initial value.
It is easy to modify the definition for getting sequences starting from the term of index 1 or higher.
This defines a recurrence relation of first order. A recurrence relation of order k has the form
u_n = φ(n, u_{n−1}, u_{n−2}, …, u_{n−k}) for n ≥ k,
where φ is a function that involves k consecutive elements of the sequence.
In this case, k initial values are needed for defining a sequence.
Examples
Factorial
The factorial is defined by the recurrence relation
n! = n · (n − 1)! for n > 0,
and the initial condition
0! = 1.
This is an example of a linear recurrence with polynomial coefficients of order 1, with the simple polynomial n
as its only coefficient.
Logistic map
An example of a recurrence relation is the logistic map:
x_{n+1} = r x_n (1 − x_n),
with a given constant r; given the initial term x_0, each subsequent term is determined by this relation.
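A brief numerical sketch, assuming the standard form x_{n+1} = r x_n(1 − x_n): for r = 2 the orbit converges to the fixed point 1 − 1/r, while for r = 4 it tracks the exact solution x_n = sin²(2ⁿθ) with x_0 = sin²θ, a standard conjugacy for the chaotic regime.

```python
import math

def logistic_orbit(r, x0, n):
    """First n terms of the recurrence x_{k+1} = r * x_k * (1 - x_k)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# r = 2: the orbit settles onto the fixed point 1 - 1/r = 0.5.
assert abs(logistic_orbit(2.0, 0.2, 50)[-1] - 0.5) < 1e-9

# r = 4: the orbit matches the closed form sin^2(2^n * theta).
theta = math.asin(math.sqrt(0.2))
for n, x in enumerate(logistic_orbit(4.0, 0.2, 10)):
    assert abs(x - math.sin(2 ** n * theta) ** 2) < 1e-6
```

The angle-doubling form makes the sensitivity explicit: each step doubles the angle, so small changes in θ grow geometrically.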
Fibonacci numbers
The recurrence of order two satisfied by the Fibonacci numbers is the canonical example of a homogeneous linear recurrence relation with constant coefficients (see below). The Fibonacci sequence is defined using the recurrence
F_n = F_{n−1} + F_{n−2}
with initial conditions
F_0 = 0, F_1 = 1.
Explicitly, the recurrence yields the equations
F_2 = F_1 + F_0, F_3 = F_2 + F_1, F_4 = F_3 + F_2,
etc.
We obtain t |
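The recurrence and a closed form can be compared directly; the sketch below checks the order-two recurrence against Binet's formula F_n = (φⁿ − ψⁿ)/√5, where φ and ψ are the roots of x² = x + 1:

```python
def fib_seq(n):
    """First n Fibonacci numbers from F_k = F_{k-1} + F_{k-2},
    with initial conditions F_0 = 0, F_1 = 1."""
    fs = [0, 1]
    while len(fs) < n:
        fs.append(fs[-1] + fs[-2])
    return fs[:n]

assert fib_seq(10) == [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]

# Binet's closed form reproduces the recurrence exactly after rounding
# the floating-point value back to the nearest integer.
phi = (1 + 5 ** 0.5) / 2
psi = (1 - 5 ** 0.5) / 2
closed = [round((phi ** k - psi ** k) / 5 ** 0.5) for k in range(20)]
assert fib_seq(20) == closed
```

This is the closed-form solution promised for linear recurrences with constant coefficients: a non-recursive function of n.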
https://en.wikipedia.org/wiki/Goldbach%27s%20weak%20conjecture | In number theory, Goldbach's weak conjecture, also known as the odd Goldbach conjecture, the ternary Goldbach problem, or the 3-primes problem, states that
Every odd number greater than 5 can be expressed as the sum of three primes. (A prime may be used more than once in the same sum.)
This conjecture is called "weak" because if Goldbach's strong conjecture (concerning sums of two primes) is proven, then this would also be true. For if every even number greater than 4 is the sum of two odd primes, adding 3 to each even number greater than 4 will produce the odd numbers greater than 7 (and 7 itself is equal to 2+2+3).
In 2013, Harald Helfgott released a proof of Goldbach's weak conjecture. As of 2018, the proof is widely accepted in the mathematics community, but it has not yet been published in a peer-reviewed journal. The proof was accepted for publication in the Annals of Mathematics Studies series in 2015, and has been undergoing further review and revision since; fully refereed chapters in close to final form are being made public in the process.
Some state the conjecture as
Every odd number greater than 7 can be expressed as the sum of three odd primes.
This version excludes 7 = 2+2+3 because this requires the even prime 2. On odd numbers larger than 7 it is slightly stronger as it also excludes sums like 17 = 2+2+13, which are allowed in the other formulation. Helfgott's proof covers both versions of the conjecture. Like the other formulation, this one also immediately follows from Goldbach's strong conjecture.
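Both formulations are easy to confirm for small numbers by brute force. The sketch below (helper names are illustrative) finds a decomposition into three primes, with p ≤ q ≤ r, for every odd number from 7 up to 1000:

```python
def primes_upto(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(2, n + 1) if sieve[p]]

def three_prime_sum(n, primes, prime_set):
    """A decomposition n = p + q + r into primes p <= q <= r, or None."""
    for p in primes:
        if 3 * p > n:
            break
        for q in primes:
            if q < p:
                continue
            r = n - p - q
            if r < q:
                break
            if r in prime_set:
                return (p, q, r)
    return None

primes = primes_upto(1000)
prime_set = set(primes)
# 7 needs the even prime 2, as noted above:
assert three_prime_sum(7, primes, prime_set) == (2, 2, 3)
# Every odd number 7 <= n < 1000 is a sum of three primes:
assert all(three_prime_sum(n, primes, prime_set) for n in range(7, 1000, 2))
```

Such finite verification is of course no proof, but it is exactly the kind of computation used to cover the "small" cases below the analytic bounds mentioned later in the article.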
Origins
The conjecture originated in correspondence between Christian Goldbach and Leonhard Euler. One formulation of the strong Goldbach conjecture, equivalent to the more common one in terms of sums of two primes, is
Every integer greater than 5 can be written as the sum of three primes.
The weak conjecture is simply this statement restricted to the case where the integer is odd (and possibly with the added requirement that the three primes in the sum be odd).
Timeline of results
In 1923, Hardy and Littlewood showed that, assuming the generalized Riemann hypothesis, the weak Goldbach conjecture is true for all sufficiently large odd numbers. In 1937, Ivan Matveevich Vinogradov eliminated the dependency on the generalized Riemann hypothesis and proved directly (see Vinogradov's theorem) that all sufficiently large odd numbers can be expressed as the sum of three primes. Vinogradov's original proof, as it used the ineffective Siegel–Walfisz theorem, did not give a bound for "sufficiently large"; his student K. Borozdkin (1956) derived that $e^{e^{16.038}}$ is large enough. The integer part of this number has 4,008,660 decimal digits, so checking every number under this figure would be completely infeasible.
In 1997, Deshouillers, Effinger, te Riele and Zinoviev published a result showing that the generalized Riemann hypothesis implies Goldbach's weak conjecture for all numbers. This result combines a general statement valid for numbers gr |
https://en.wikipedia.org/wiki/Integration%20by%20parts | In calculus, and more generally in mathematical analysis, integration by parts or partial integration is a process that finds the integral of a product of functions in terms of the integral of the product of their derivative and antiderivative. It is frequently used to transform the antiderivative of a product of functions into an antiderivative for which a solution can be more easily found. The rule can be thought of as an integral version of the product rule of differentiation.
The integration by parts formula states:
$\int_a^b u(x)\,v'(x)\,dx = \Big[u(x)\,v(x)\Big]_a^b - \int_a^b u'(x)\,v(x)\,dx$
Or, letting $u = u(x)$ and $du = u'(x)\,dx$ while $v = v(x)$ and $dv = v'(x)\,dx$, the formula can be written more compactly:
$\int u\,dv = uv - \int v\,du.$
Mathematician Brook Taylor discovered integration by parts, first publishing the idea in 1715. More general formulations of integration by parts exist for the Riemann–Stieltjes and Lebesgue–Stieltjes integrals. The discrete analogue for sequences is called summation by parts.
Theorem
Product of two functions
The theorem can be derived as follows. For two continuously differentiable functions $u(x)$ and $v(x)$, the product rule states:
$\big(u(x)v(x)\big)' = u'(x)\,v(x) + u(x)\,v'(x).$
Integrating both sides with respect to $x$,
$\int \big(u(x)v(x)\big)'\,dx = \int u'(x)\,v(x)\,dx + \int u(x)\,v'(x)\,dx,$
and noting that an indefinite integral is an antiderivative gives
$u(x)\,v(x) = \int u'(x)\,v(x)\,dx + \int u(x)\,v'(x)\,dx,$
where we neglect writing the constant of integration. This yields the formula for integration by parts:
$\int u(x)\,v'(x)\,dx = u(x)\,v(x) - \int u'(x)\,v(x)\,dx,$
or in terms of the differentials $du = u'(x)\,dx$, $dv = v'(x)\,dx$,
$\int u\,dv = uv - \int v\,du.$
This is to be understood as an equality of functions with an unspecified constant added to each side. Taking the difference of each side between two values $x = a$ and $x = b$ and applying the fundamental theorem of calculus gives the definite integral version:
$\int_a^b u(x)\,v'(x)\,dx = u(b)\,v(b) - u(a)\,v(a) - \int_a^b u'(x)\,v(x)\,dx.$
The original integral contains the derivative $v'$; to apply the theorem, one must find $v$, the antiderivative of $v'$, then evaluate the resulting integral $\int_a^b v\,du$.
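The definite-integral version can be sanity-checked numerically; a minimal sketch using the composite trapezoid rule with $u(x) = x$ and $dv = \cos x\,dx$ (so $du = dx$, $v = \sin x$; all function names are ours):

```python
import math

def trapezoid(f, a, b, n=10_000):
    """Composite trapezoid rule for the definite integral of f over [a, b]."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

a, b = 0.0, math.pi / 2
# left side: integral of u * dv = x * cos(x) dx
lhs = trapezoid(lambda x: x * math.cos(x), a, b)
# right side: [u * v] evaluated at the endpoints, minus integral of v * du
rhs = (b * math.sin(b) - a * math.sin(a)) - trapezoid(math.sin, a, b)
print(lhs, rhs)  # both ≈ pi/2 - 1 ≈ 0.5708
assert abs(lhs - rhs) < 1e-6
```

The exact value of this integral is $\pi/2 - 1$, which both sides reproduce to quadrature accuracy.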
Validity for less smooth functions
It is not necessary for $u$ and $v$ to be continuously differentiable. Integration by parts works if $u$ is absolutely continuous and the function designated $v'$ is Lebesgue integrable (but not necessarily continuous). (If $v'$ has a point of discontinuity then its antiderivative $v$ may not have a derivative at that point.)
If the interval of integration is not compact, then it is not necessary for to be absolutely continuous in the whole interval or for to be Lebesgue integrable in the interval, as a couple of examples (in which and are continuous and continuously differentiable) will show. For instance, if
is not absolutely continuous on the interval , but nevertheless
so long as is taken to mean the limit of as and so long as the two terms on the right-hand side are finite. This is only true if we choose Similarly, if
is not Lebesgue integrable on the interval , but nevertheless
with the same interpretation.
One can also easily come up with similar examples in which and are not continuously differentiable.
Further, if is a function of bounded variation on the segment and is differentiable on then
where denotes the signed measure corresponding to the function of bounded variation , and functions are extensions of to which are respectively of bounded va |
https://en.wikipedia.org/wiki/Equal | Equal(s) may refer to:
Mathematics
Equality (mathematics)
Equals sign (=), a mathematical symbol used to indicate equality
Arts and entertainment
Equals (film), a 2015 American science fiction film
Equals (game), a board game
The Equals, a British pop group formed in 1965
"Equal", a 2016 song by Chrisette Michele from Milestone
"Equal", a 2022 song by Odesza featuring Låpsley from The Last Goodbye
"Equals", a 2009 song by Set Your Goals from This Will Be the Death of Us
Equal (TV series), a 2020 American docuseries on HBO
= (album), a 2021 album by Ed Sheeran
"=", a 2022 song by J-Hope from Jack in the Box
Other uses
Equal (sweetener), a brand of artificial sweetener
EQUAL Community Initiative, an initiative within the European Social Fund of the European Union
See also
Equality (disambiguation)
Equalizer (disambiguation)
Equalization (disambiguation) |
https://en.wikipedia.org/wiki/Free%20variables%20and%20bound%20variables | In mathematics, and in other disciplines involving formal languages, including mathematical logic and computer science, a variable may be said to be either free or bound. The terms are opposites. A free variable is a notation (symbol) that specifies places in an expression where substitution may take place and is not a parameter of this or any container expression. Some older books use the terms real variable and apparent variable for free variable and bound variable, respectively. The idea is related to a placeholder (a symbol that will later be replaced by some value), or a wildcard character that stands for an unspecified symbol.
In computer programming, the term free variable refers to variables used in a function that are neither local variables nor parameters of that function. The term non-local variable is often a synonym in this context.
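In Python, for example, a closure makes this concrete: variables of the enclosing function are free (non-local) in the inner function. A minimal sketch (the `make_counter` example is ours):

```python
def make_counter(step):
    count = 0                  # local to make_counter
    def tick():
        nonlocal count         # 'count' and 'step' are free (non-local) in tick
        count += step
        return count
    return tick

tick = make_counter(2)
print(tick(), tick(), tick())  # → 2 4 6
# CPython records the free variables of the inner function's code object:
print(sorted(make_counter(1).__code__.co_freevars))  # → ['count', 'step']
```

Neither `count` nor `step` is a local variable or a parameter of `tick`, yet `tick` uses both, which is exactly the programming-language sense of "free variable".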
An instance of a variable symbol is bound, in contrast, if the value of that variable symbol has been bound to a specific value or range of values in the domain of discourse or universe. This may be achieved through the use of logical quantifiers, variable-binding operators, or an explicit statement of allowed values for the variable (such as, "...where $n$ is a positive integer"). A variable symbol overall is bound if at least one occurrence of it is bound (pp. 142–143). Since the same variable symbol may appear in multiple places in an expression, some occurrences of the variable symbol may be free while others are bound (p. 78), hence "free" and "bound" are at first defined for occurrences and then generalized over all occurrences of said variable symbol in the expression. However it is done, the variable ceases to be an independent variable on which the value of the expression depends, whether that value be a truth value or the numerical result of a calculation, or, more generally, an element of an image set of a function.
While the domain of discourse in many contexts is understood, when an explicit range of values for the bound variable has not been given, it may be necessary to specify the domain in order to properly evaluate the expression. For example, consider the following expression in which both variables are bound by logical quantifiers:
$\forall y\, \exists x\, \left(x^2 = y\right)$
This expression evaluates to false if the domain of $x$ and $y$ is the real numbers, but true if the domain is the complex numbers.
The term "dummy variable" is also sometimes used for a bound variable (more commonly in general mathematics than in computer science), but this should not be confused with the identically named but unrelated concept of dummy variable as used in statistics, most commonly in regression analysis.
Examples
Before stating a precise definition of free variable and bound variable, the following are some examples that perhaps make these two concepts clearer than the definition would:
In the expression
$\sum_{k=1}^{10} f(k, n),$
n is a free variable and k is a bound variable; consequently the value of this expression depends on the value of n, but there is nothing called $k$ on which it could depend.
https://en.wikipedia.org/wiki/Superposition | Superposition may refer to:
Science and mathematics
Law of superposition in geology and archaeology, which states that sedimentary layers are deposited in a time sequence, with the oldest on the bottom and the youngest on the top
Superposition calculus, used in logic for equational first-order reasoning
Superposition principle in physics and engineering, asserting the linearity of many physical systems, including:
Superposition theorem for electric circuits
Superposition of gravitational potentials
Dalton's law of partial pressures, superposition in fluid mechanics
Quantum superposition, in quantum physics
In chemistry, a property of two structures that have the same chirality
In Euclidean geometry, the principle of superposition is a method of proof
The Kolmogorov–Arnold superposition theorem, representing a multivariate function as a superposition of univariate functions
Music and art
"Superposition", a song by Young the Giant from the 2018 album Mirror Master
"Superposition", a song by Daniel Caesar from the 2019 album Case Study 01
In music theory, "reaching over"
See also
Superimposition (disambiguation)
Overlay (disambiguation)
Overlap (disambiguation) |
https://en.wikipedia.org/wiki/Ernest%20Ansermet | Ernest Alexandre Ansermet (; 11 November 1883 – 20 February 1969) was a Swiss conductor.
Biography
Ansermet was born in Vevey, Switzerland. Originally he was a mathematics professor, teaching at the University of Lausanne. He began conducting at the Casino in Montreux in 1912, and from 1915 to 1923 was the conductor for Diaghilev's Ballets Russes. Travelling in France for this, he met both Claude Debussy and Maurice Ravel, and consulted them on the performance of their works. During World War I, he met Igor Stravinsky, who was exiled in Switzerland, and from this meeting began the conductor's lifelong association with Russian music.
In 1918 Ansermet founded his own orchestra, the Orchestre de la Suisse Romande (OSR). He toured widely in Europe and America and became famous for accurate performances of difficult modern music, making first recordings of works such as Stravinsky's Capriccio with the composer as soloist. Ansermet was one of the first in the field of classical music to take jazz seriously, and in 1919 he wrote an article praising Sidney Bechet.
After World War II, Ansermet and his orchestra rose to international prominence through a long-term contract with Decca Records. From that time until his death, he recorded most of his repertoire, often two or three times. His interpretations were widely regarded as admirably clear and authoritative, though the orchestral playing did not always reach the highest international standards, and they differed notably from those of other famous 20th-century specialists, notably Pierre Monteux and Stravinsky himself. Ansermet disapproved of Stravinsky's practice of revising his works, and always played the original versions. Although famous for performing much modern music by other composers such as Arthur Honegger and Frank Martin, he avoided altogether the music of Arnold Schoenberg and his associates, even criticizing Stravinsky when he began to use twelve-tone techniques in his compositions. In Ansermet's book, Les fondements de la musique dans la conscience humaine (1961), he sought to prove, using Husserlian phenomenology and partly his own mathematical studies, that Schoenberg's idiom was false and irrational. He labeled it a "Jewish idea" and went on to say that "the Jew is a me who speaks as though he were an I," that the Jew "suffers from thoughts doubly misformed", thus making him "suitable for the handling of money", and sums up with the statement that "historic creation of Western music" would have developed just as well "without the Jew".
Ansermet's reputation suffered after the war because of his collaboration with the Nazis and he was boycotted in the new state of Israel.
In May 1954 Decca recorded Ansermet and the orchestra in Europe's first commercial stereophonic recordings. They went on to record the first stereo performance of the complete The Nutcracker by Tchaikovsky on LP (Artur Rodziński had already recorded a stereo performance on magnetic tape, but this had been rel |
https://en.wikipedia.org/wiki/Manbij | Manbij (, , ) is a city in the northeast of Aleppo Governorate in northern Syria, 30 kilometers (19 mi) west of the Euphrates. In the 2004 census by the Central Bureau of Statistics (CBS), Manbij had a population of nearly 100,000. The population of Manbij is largely Arab, with Kurdish, Turkmen, Circassian, and Chechen minorities. Many of its residents practice Naqshbandi Sufism.
In the course of the Syrian Civil War, the city was first captured by rebels in 2012, overrun by the Islamic State of Iraq and the Levant in 2014 and finally captured by the Syrian Democratic Forces (SDF) in 2016, bringing it into the Autonomous Administration of North and East Syria (AANES). Since 2018, after an agreement with the SDF, the Syrian Arab Army has been deployed on the city's periphery as a buffer between the Turkish occupation of Northern Syria and the AANES.
Etymology
Coins struck at the city before Alexander's conquest record the Aramean name of the city as Mnbg (meaning spring site). For the Assyrians it was known as Nappigu. The place appears in Greek as Bambyce, and Pliny (v. 23) recorded its Syriac name as Mabog (ܡܒܘܓ) (also Mabbog). As a center of the worship of the Syrian goddess Atargatis, it became known to the Greeks as Hieropolis, 'city of the sanctuary', and finally as Hierapolis, 'holy city'.
Cult of Atargatis
This worship of Atargatis was immortalized in De Dea Syria which has traditionally been attributed to Lucian of Samosata, who gave a full description of the religious cult of the shrine and the tank of sacred fish of Atargatis, of which Aelian also relates marvels. According to the De Dea Syria, the worship was of a phallic character, votaries offering little male figures of wood and bronze. There were also huge phalli set up like obelisks before the temple, which were ceremoniously climbed once a year and decorated.
The temple contained a holy chamber into which only priests were allowed to enter. A great bronze altar stood in front, set about with statues, and in the forecourt lived numerous sacred animals and birds (but not swine) used for sacrifice.
Some three hundred priests served the shrine and there were numerous minor ministrants. The lake was the centre of sacred festivities and it was customary for votaries to swim out and decorate an altar standing in the middle of the water. Self-mutilation and other orgies went on in the temple precinct, and there was an elaborate ritual on entering the city and first visiting the shrine.
History
Antiquity
The Arameans called the city "Mnbg" (Manbug). Manbij was part of the kingdom of Bit Adini and was annexed by the Assyrians in 856 BC. The Assyrian king Shalmaneser III renamed it Lita-Ashur and built a royal palace. The city was reconquered by the Assyrian king Tiglath-Pileser III in 738 BC. The sanctuary of Atargatis predates the Macedonian conquest, as it seems that the city was the center of a dynasty of Aramean priest-kings ruling at the very end of the Achaemenid Empire; two ki |
https://en.wikipedia.org/wiki/Linearity%20of%20differentiation | In calculus, the derivative of any linear combination of functions equals the same linear combination of the derivatives of the functions; this property is known as linearity of differentiation, the rule of linearity, or the superposition rule for differentiation. It is a fundamental property of the derivative that encapsulates in a single rule two simpler rules of differentiation, the sum rule (the derivative of the sum of two functions is the sum of the derivatives) and the constant factor rule (the derivative of a constant multiple of a function is the same constant multiple of the derivative). Thus it can be said that differentiation is linear, or the differential operator is a linear operator.
Statement and derivation
Let $f$ and $g$ be functions, with $\alpha$ and $\beta$ constants. Now consider
$\frac{d}{dx}\left(\alpha f(x) + \beta g(x)\right).$
By the sum rule in differentiation, this is
$\frac{d}{dx}\left(\alpha f(x)\right) + \frac{d}{dx}\left(\beta g(x)\right),$
and by the constant factor rule in differentiation, this reduces to
$\alpha f'(x) + \beta g'(x).$
Therefore,
$\frac{d}{dx}\left(\alpha f(x) + \beta g(x)\right) = \alpha f'(x) + \beta g'(x).$
Omitting the brackets, this is often written as:
$(\alpha f + \beta g)' = \alpha f' + \beta g'.$
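The rule is easy to check numerically with a symmetric difference quotient; a minimal sketch (the helper names, sample functions, and coefficients are our choices):

```python
import math

def deriv(f, x, h=1e-6):
    """Symmetric difference quotient approximating f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

alpha, beta = 3.0, -2.0
f, g = math.sin, math.exp

def combo(x):
    return alpha * f(x) + beta * g(x)  # the linear combination alpha*f + beta*g

x0 = 0.7
lhs = deriv(combo, x0)                            # (alpha*f + beta*g)'(x0)
rhs = alpha * deriv(f, x0) + beta * deriv(g, x0)  # alpha*f'(x0) + beta*g'(x0)
print(lhs, rhs)
assert abs(lhs - rhs) < 1e-6
```

The two sides agree to the accuracy of the finite-difference approximation, as linearity predicts.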
Detailed proofs/derivations from definition
We can prove the entire linearity principle at once, or, we can prove the individual steps (of constant factor and adding) individually. Here, both will be shown.
Proving linearity directly also proves the constant factor rule, the sum rule, and the difference rule as special cases. The sum rule is obtained by setting both constant coefficients to $1$. The difference rule is obtained by setting the first constant coefficient to $1$ and the second constant coefficient to $-1$. The constant factor rule is obtained by setting either the second constant coefficient or the second function to $0$. (From a technical standpoint, the domain of the second function must also be considered - one way to avoid issues is setting the second function equal to the first function and the second constant coefficient equal to $0$. One could also define both the second constant coefficient and the second function to be $0$, where the domain of the second function is a superset of the first function, among other possibilities.)
On the contrary, if we first prove the constant factor rule and the sum rule, we can prove linearity and the difference rule. Proving linearity is done by defining the first and second functions as being two other functions multiplied by constant coefficients. Then, as shown in the derivation from the previous section, we can first use the sum law while differentiating, and then use the constant factor rule, which will reach our conclusion for linearity. In order to prove the difference rule, the second function can be redefined as another function multiplied by the constant coefficient of $-1$. This would, when simplified, give us the difference rule for differentiation.
In the proofs/derivations below, the coefficients are used; they correspond to the coefficients above.
Linearity (directly)
Let $a, b \in \mathbb{R}$. Let $f$ and $g$ be functions. Let $h$ be a function, where $h$ is defined only where $f$ and $g$ are both defined. (In other words, the domain of $h$ is the intersection of the domains of
https://en.wikipedia.org/wiki/Power%20rule | In calculus, the power rule is used to differentiate functions of the form , whenever is a real number. Since differentiation is a linear operation on the space of differentiable functions, polynomials can also be differentiated using this rule. The power rule underlies the Taylor series as it relates a power series with a function's derivatives.
Statement of the power rule
Let $f$ be a function satisfying $f(x) = x^r$ for all $x$, where $r \in \mathbb{R}$. Then,
$f'(x) = r x^{r-1}.$
The power rule for integration states that
$\int x^r\,dx = \frac{x^{r+1}}{r+1} + C$
for any real number $r \neq -1$. It can be derived by inverting the power rule for differentiation. In this equation C is any constant.
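Both forms of the rule can be spot-checked numerically; a minimal sketch using a symmetric difference quotient at a sample point (the helper name, the exponents, and the evaluation point are our choices):

```python
def deriv(f, x, h=1e-6):
    """Symmetric difference quotient approximating f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = 1.3
for r in (2.0, 0.5, -1.5, 3.7):
    # differentiation form: d/dx x^r = r x^(r-1)
    numeric = deriv(lambda x: x ** r, x0)
    exact = r * x0 ** (r - 1)
    assert abs(numeric - exact) < 1e-5
    # integration form: d/dx [x^(r+1) / (r+1)] recovers x^r (valid since r != -1)
    antideriv = lambda x: x ** (r + 1) / (r + 1)
    assert abs(deriv(antideriv, x0) - x0 ** r) < 1e-5
print("power rule verified for sample exponents")
```

The check deliberately uses $x_0 > 0$ so that non-integer exponents are well defined, matching the discussion of negative bases below.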
Proofs
Proof for real exponents
To start, we should choose a working definition of the value of $x^r$, where $r$ is any real number. Although it is feasible to define the value as the limit of a sequence of rational powers that approach the irrational power whenever we encounter such a power, or as the least upper bound of a set of rational powers less than the given power, this type of definition is not amenable to differentiation. It is therefore preferable to use a functional definition, which is usually taken to be $x^r = e^{r \ln x}$ for all values of $x > 0$, where $e^x$ is the natural exponential function and $e$ is Euler's number. First, we may demonstrate that the derivative of $f(x) = e^x$ is $f'(x) = e^x$.
If $f(x) = e^x$, then $\ln(f(x)) = x$, where $\ln$ is the natural logarithm function, the inverse function of the exponential function, as demonstrated by Euler. Since the latter two functions are equal for all values of $x$, their derivatives are also equal, whenever either derivative exists, so we have, by the chain rule,
$\frac{f'(x)}{f(x)} = 1,$
or $f'(x) = f(x) = e^x$, as was required.
Therefore, applying the chain rule to $f(x) = e^{r \ln x}$, we see that
$f'(x) = \frac{r}{x}\, e^{r \ln x} = \frac{r}{x}\, x^r,$
which simplifies to
$f'(x) = r x^{r-1}.$
When $x < 0$, we may use the same definition with $x^r = \left((-1)(-x)\right)^r = (-1)^r (-x)^r$, where we now have $-x > 0$. This necessarily leads to the same result. Note that because $(-1)^r$ does not have a conventional definition when $r$ is not a rational number, irrational power functions are not well defined for negative bases. In addition, as rational powers of −1 with even denominators (in lowest terms) are not real numbers, these expressions are only real valued for rational powers with odd denominators (in lowest terms).
Finally, whenever the function $f(x) = x^r$ is differentiable at $x = 0$, the defining limit for the derivative is:
$\lim_{h \to 0} \frac{h^r - 0^r}{h},$
which yields 0 only when $r$ is a rational number with odd denominator (in lowest terms) and $r > 1$, and 1 when $r = 1$. For all other values of $r$, the expression $h^r$ is not well-defined for $h < 0$, as was covered above, or is not a real number, so the limit does not exist as a real-valued derivative. For the two cases that do exist, the values agree with the value of the existing power rule at 0, so no exception need be made.
The exclusion of the expression $0^0$ (the case $x = 0$) from our scheme of exponentiation is due to the fact that the function $f(x, y) = x^y$ has no limit at (0,0), since $x^0$ approaches 1 as x approaches 0, while $0^y$ approaches 0 as y approaches 0. Thus, it would be problematic to ascribe any particular value to it, as the value would contradict one of the two cases, dependent on the application. It is traditionally
https://en.wikipedia.org/wiki/Lucasian%20Professor%20of%20Mathematics | The Lucasian Chair of Mathematics () is a mathematics professorship in the University of Cambridge, England; its holder is known as the Lucasian Professor. The post was founded in 1663 by Henry Lucas, who was Cambridge University's Member of Parliament in 1639–1640, and it was officially established by King Charles II on 18 January 1664. It was described by The Daily Telegraph as one of the most prestigious academic posts in the world. Since its establishment, the professorship has been held by, among others, Isaac Newton, Charles Babbage, George Stokes, Joseph Larmor, Paul Dirac, and Stephen Hawking.
History
Henry Lucas, in his will, bequeathed his library of 4,000 volumes to the university and left instructions for the purchase of land whose yield should provide £100 a year for the founding of a professorship.
Babbage applied for the vacancy in 1826, after Turton, but Airy was appointed. William Whewell (who considered applying, but preferred both Herschel and Babbage to himself) remarked that he would be the best professor, but that the heads of the colleges would not see that. Nonetheless, Babbage was appointed when the chair became free again two years later.
The current and 19th Lucasian Professor is Michael Cates, starting from 1 July 2015.
The previous holder of the post was theoretical physicist Michael Green who was a fellow in Clare Hall. He was appointed in October 2009, succeeding Stephen Hawking, who himself retired in September 2009, in the year of his 67th birthday, as required by the university. Green holds the position of Emeritus Lucasian Professor of Mathematics.
List of Lucasian professors
Cultural references
In the final episode of the science-fiction television series Star Trek: The Next Generation, one of the main characters, the android Data, holds the Lucasian Chair in the late 24th century, albeit in an alternate reality.
Further reading
Kevin Knox and Richard Noakes, From Newton to Hawking: A History of Cambridge University's Lucasian Professors of Mathematics
https://en.wikipedia.org/wiki/Constant%20of%20integration | In calculus, the constant of integration, often denoted by (or ), is a constant term added to an antiderivative of a function to indicate that the indefinite integral of (i.e., the set of all antiderivatives of ), on a connected domain, is only defined up to an additive constant. This constant expresses an ambiguity inherent in the construction of antiderivatives.
More specifically, if a function is defined on an interval, and is an antiderivative of then the set of all antiderivatives of is given by the functions where is an arbitrary constant (meaning that any value of would make a valid antiderivative). For that reason, the indefinite integral is often written as although the constant of integration might be sometimes omitted in lists of integrals for simplicity.
Origin
The derivative of any constant function is zero. Once one has found one antiderivative $F(x)$ for a function $f(x)$, adding or subtracting any constant $C$ will give us another antiderivative, because $\left(F(x) + C\right)' = F'(x) = f(x)$. The constant is a way of expressing that every function with at least one antiderivative will have an infinite number of them.
Let $F$ and $G$ be two everywhere differentiable functions. Suppose that $F'(x) = G'(x)$ for every real number x. Then there exists a real number $C$ such that $F(x) - G(x) = C$ for every real number x.
To prove this, notice that $\left[F(x) - G(x)\right]' = 0$. So $F$ can be replaced by $F - G$, and $G$ by the constant function $0$, making the goal to prove that an everywhere differentiable function whose derivative is always zero must be constant:
Choose a real number $a$, and let $C = F(a)$. For any $x$, the fundamental theorem of calculus, together with the assumption that the derivative of $F$ vanishes, implies that
$F(x) - F(a) = \int_a^x F'(t)\,dt = 0,$
thereby showing that $F$ is a constant function.
Two facts are crucial in this proof. First, the real line is connected. If the real line were not connected, we would not always be able to integrate from our fixed a to any given x. For example, if we were to ask for functions defined on the union of intervals [0,1] and [2,3], and if a were 0, then it would not be possible to integrate from 0 to 3, because the function is not defined between 1 and 2. Here, there will be two constants, one for each connected component of the domain. In general, by replacing constants with locally constant functions, we can extend this theorem to disconnected domains. For example, there are two constants of integration for $\int \frac{dx}{x}$, and infinitely many for $\int \tan x\,dx$, so for example, the general form for the integral of 1/x is:
$\int \frac{dx}{x} = \begin{cases} \ln|x| + C_1 & x < 0 \\ \ln|x| + C_2 & x > 0 \end{cases}$
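The two-constants phenomenon can be made concrete numerically: on each connected component of the domain of 1/x, any shift of ln|x| is an antiderivative, and the shifts need not match across components. A minimal sketch (the functions F and G and the shift values 5 and −3 are our illustrations):

```python
import math

def F(x):
    return math.log(abs(x))  # the usual ln|x|

def G(x):
    # a different antiderivative of 1/x: shifted by +5 on x < 0, by -3 on x > 0
    return math.log(-x) + 5.0 if x < 0 else math.log(x) - 3.0

def deriv(f, x, h=1e-7):
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (-2.0, -0.5, 0.5, 2.0):
    assert abs(deriv(F, x) - 1 / x) < 1e-5  # both differentiate to 1/x...
    assert abs(deriv(G, x) - 1 / x) < 1e-5
# ...yet G - F is a *different* constant on each component:
print(G(-1.0) - F(-1.0), G(1.0) - F(1.0))  # → 5.0 -3.0
```

No single constant C makes G equal to F + C on the whole (disconnected) domain, which is exactly why the theorem above requires a connected domain.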
Second, $F$ and $G$ were assumed to be everywhere differentiable. If $F$ and $G$ are not differentiable at even one point, then the theorem might fail. As an example, let $F(x)$ be the Heaviside step function, which is zero for negative values of x and one for non-negative values of x, and let $G(x) = 0$. Then the derivative of $F$ is zero where it is defined, and the derivative of $G$ is always zero. Yet it is clear that $F$ and $G$ do not differ by a constant. Even if it is assumed that $F$ and $G$ are everywhere continuous and almost everywhere differentiable, the theorem still fails. As an example, tak
https://en.wikipedia.org/wiki/Derive | Derive may refer to:
Derive (computer algebra system), a commercial system made by Texas Instruments
Dérive (magazine), an Austrian science magazine on urbanism
Dérive, a psychogeographical concept
Derived trait, or apomorphy
See also
Derivation (disambiguation)
Derivative (disambiguation) |
https://en.wikipedia.org/wiki/Inverse%20function%20rule | In calculus, the inverse function rule is a formula that expresses the derivative of the inverse of a bijective and differentiable function in terms of the derivative of . More precisely, if the inverse of is denoted as , where if and only if , then the inverse function rule is, in Lagrange's notation,
.
This formula holds in general whenever is continuous and injective on an interval , with being differentiable at () and where. The same formula is also equivalent to the expression
where denotes the unary derivative operator (on the space of functions) and denotes function composition.
Geometrically, a function and its inverse function have graphs that are reflections of each other in the line $y = x$. This reflection operation turns the gradient of any line into its reciprocal.
Assuming that has an inverse in a neighbourhood of and that its derivative at that point is non-zero, its inverse is guaranteed to be differentiable at and have a derivative given by the above formula.
The inverse function rule may also be expressed in Leibniz's notation. As that notation suggests,
$\frac{dx}{dy} \cdot \frac{dy}{dx} = 1.$
This relation is obtained by differentiating the equation $f^{-1}(y) = x$ in terms of $x$ and applying the chain rule, yielding that:
$\frac{dx}{dy} \cdot \frac{dy}{dx} = \frac{dx}{dx},$
considering that the derivative of $x$ with respect to $x$ is 1.
Derivation
Let $f$ be an invertible (bijective) function, let $x$ be in the domain of $f$, and let $y = f(x)$ be in the codomain of $f$. Since f is a bijective function, $y$ is in the range of $f$. This also means that $y$ is in the domain of $f^{-1}$, and that $x$ is in the codomain of $f^{-1}$. Since $f$ is an invertible function, we know that $f^{-1}(f(x)) = x$. The inverse function rule can be obtained by taking the derivative of this equation.
The right side is equal to 1 and the chain rule can be applied to the left side:
$\left(f^{-1}\right)'(f(x)) \cdot f'(x) = 1.$
Rearranging then gives
$\left(f^{-1}\right)'(f(x)) = \frac{1}{f'(x)}.$
Rather than using $x$ as the variable, we can rewrite this equation using $a = f(x)$ as the input for $f^{-1}$, and we get the following:
$\left(f^{-1}\right)'(a) = \frac{1}{f'\left(f^{-1}(a)\right)}.$
Examples
$f(x) = x^2$ (for positive $x$) has inverse $f^{-1}(y) = \sqrt{y}$.
At $x = 0$, however, there is a problem: the graph of the square root function becomes vertical, corresponding to a horizontal tangent for the square function.
$f(x) = e^x$ (for real $x$) has inverse $f^{-1}(y) = \ln y$ (for positive $y$).
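The rule is easy to verify numerically for the exponential/logarithm pair; a minimal sketch with finite differences (the helper name and sample point are ours):

```python
import math

def deriv(f, x, h=1e-6):
    """Symmetric difference quotient approximating f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f, f_inv = math.exp, math.log
y0 = 5.0
lhs = deriv(f_inv, y0)           # (f^-1)'(y0), computed directly
rhs = 1 / deriv(f, f_inv(y0))    # 1 / f'(f^-1(y0)), via the inverse function rule
print(lhs, rhs)
assert abs(lhs - rhs) < 1e-6
assert abs(lhs - 1 / y0) < 1e-6  # exact value: d/dy ln(y) = 1/y
```

Both computations agree with the known derivative 1/y of the natural logarithm, as the rule predicts.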
Additional properties
Integrating this relationship gives
$f^{-1}(x) = \int \frac{1}{f'\left(f^{-1}(x)\right)}\,dx + C.$
This is only useful if the integral exists. In particular we need $f'(x)$ to be non-zero across the range of integration.
It follows that a function that has a continuous derivative has an inverse in a neighbourhood of every point where the derivative is non-zero. This need not be true if the derivative is not continuous.
Another very interesting and useful property is the following:
$\int f^{-1}(x)\,dx = x f^{-1}(x) - F\left(f^{-1}(x)\right) + C,$
where $F$ denotes the antiderivative of $f$.
The inverse of the derivative of f(x) is also of interest, as it is used in showing the convexity of the Legendre transform.
Let then we have, assuming :This can be shown using the previous notation . Then we have:
Therefore:
By induction, we can generalize this result for any integer , with , the nth derivative of f(x), and , assuming :
Higher derivatives
The chain rule given above is obtain |
https://en.wikipedia.org/wiki/Euler%20characteristic | In mathematics, and more specifically in algebraic topology and polyhedral combinatorics, the Euler characteristic (or Euler number, or Euler–Poincaré characteristic) is a topological invariant, a number that describes a topological space's shape or structure regardless of the way it is bent. It is commonly denoted by (Greek lower-case letter chi).
The Euler characteristic was originally defined for polyhedra and used to prove various theorems about them, including the classification of the Platonic solids. It was stated for Platonic solids in 1537 in an unpublished manuscript by Francesco Maurolico. Leonhard Euler, for whom the concept is named, introduced it for convex polyhedra more generally but failed to rigorously prove that it is an invariant. In modern mathematics, the Euler characteristic arises from homology and, more abstractly, homological algebra.
Polyhedra
The Euler characteristic was classically defined for the surfaces of polyhedra, according to the formula
$\chi = V - E + F,$
where $V$, $E$, and $F$ are respectively the numbers of vertices (corners), edges and faces in the given polyhedron. Any convex polyhedron's surface has Euler characteristic
$V - E + F = 2.$
This equation, stated by Euler in 1758,
is known as Euler's polyhedron formula. It corresponds to the Euler characteristic of the sphere (i.e. $\chi = 2$), and applies identically to spherical polyhedra. An illustration of the formula on all Platonic polyhedra is given below.
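The formula is easy to verify on the five Platonic solids, whose vertex, edge, and face counts are standard; a minimal sketch:

```python
# (V, E, F) for the five Platonic solids
platonic = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}
for name, (V, E, F) in platonic.items():
    chi = V - E + F
    print(f"{name}: V - E + F = {chi}")
    assert chi == 2  # Euler's polyhedron formula for convex polyhedra
```

Every solid yields $\chi = 2$, the Euler characteristic of the sphere.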
The surfaces of nonconvex polyhedra can have various Euler characteristics:
For regular polyhedra, Arthur Cayley derived a modified form of Euler's formula using the density $D$, vertex figure density $d_v$ and face density $d_f$:
$d_v V - E + d_f F = 2D.$
This version holds both for convex polyhedra (where the densities are all 1) and the non-convex Kepler–Poinsot polyhedra.
Projective polyhedra all have Euler characteristic 1, like the real projective plane, while the surfaces of toroidal polyhedra all have Euler characteristic 0, like the torus.
Plane graphs
The Euler characteristic can be defined for connected plane graphs by the same formula as for polyhedral surfaces, where F is the number of faces in the graph, including the exterior face.
The Euler characteristic of any plane connected graph G is 2. This is easily proved by induction on the number of faces determined by G, starting with a tree as the base case. For trees, E = V − 1 and F = 1. If G has C components (disconnected graphs), the same argument by induction on F shows that V − E + F = C + 1. One of the few graph theory papers of Cauchy also proves this result.
Via stereographic projection the plane maps to the 2-sphere, such that a connected graph maps to a polygonal decomposition of the sphere, which has Euler characteristic 2. This viewpoint is implicit in Cauchy's proof of Euler's formula given below.
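As a quick sanity check of the plane-graph version of the formula, consider an m × n grid graph drawn in the plane (a hypothetical example, not from the article): it has mn vertices, m(n − 1) + n(m − 1) edges, and (m − 1)(n − 1) interior faces plus the exterior face.

```python
# Counts for a planar m x n grid graph; the exterior face is included,
# so V - E + F = 2, as Euler's formula predicts for connected plane graphs.
def plane_grid_counts(m, n):
    v = m * n
    e = m * (n - 1) + n * (m - 1)
    f = (m - 1) * (n - 1) + 1   # interior cells plus the exterior face
    return v, e, f

for m in range(2, 6):
    for n in range(2, 6):
        v, e, f = plane_grid_counts(m, n)
        assert v - e + f == 2
print("V - E + F = 2 for all tested grids")
```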
Proof of Euler's formula
There are many proofs of Euler's formula. One was given by Cauchy in 1811, as follows. It applies to any convex polyhedron, and more generally to any polyhedron whose boundary is topologically equivalent to a sphere and |
https://en.wikipedia.org/wiki/Value%20at%20risk | Value at risk (VaR) is a measure of the risk of loss of an investment or of capital. It estimates how much a set of investments might lose (with a given probability), given normal market conditions, in a set time period such as a day. VaR is typically used by firms and regulators in the financial industry to gauge the amount of assets needed to cover possible losses.
For a given portfolio, time horizon, and probability p, the p VaR can be defined informally as the maximum possible loss during that time after excluding all worse outcomes whose combined probability is at most p. This assumes mark-to-market pricing, and no trading in the portfolio.
For example, if a portfolio of stocks has a one-day 95% VaR of $1 million, that means that there is a 0.05 probability that the portfolio will fall in value by more than $1 million over a one-day period if there is no trading. Informally, a loss of $1 million or more on this portfolio is expected on about 1 day out of 20 (corresponding to the 5% probability).
More formally, p VaR is defined such that the probability of a loss greater than VaR is (at most) (1-p) while the probability of a loss less than VaR is (at least) p. A loss which exceeds the VaR threshold is termed a "VaR breach".
It is important to note that, for a fixed p, the p VaR does not assess the magnitude of loss when a VaR breach occurs and therefore is considered by some to be a questionable metric for risk management. For instance, assume someone makes a bet that flipping a coin seven times will not give seven heads. The terms are that they win $100 if this does not happen (with probability 127/128) and lose $12,700 if it does (with probability 1/128). That is, the possible loss amounts are $0 or $12,700. The 1% VaR is then $0, because the probability of any loss at all is 1/128 which is less than 1%. They are, however, exposed to a possible loss of $12,700 which can be expressed as the p VaR for any p ≤ 0.78125% (1/128).
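The quantile definition above can be sketched as a historical-simulation calculation (the function name and the synthetic P&L data are assumptions for illustration): the p VaR of a profit-and-loss sample is the negated (1 − p) quantile.

```python
import numpy as np

def var(pnl, p=0.95):
    """p VaR of a P&L sample (gains positive, losses negative);
    returned as a positive loss amount."""
    return -np.quantile(pnl, 1 - p)

rng = np.random.default_rng(0)
pnl = rng.normal(0.0, 1_000_000, 100_000)   # synthetic daily P&L, $1M std dev
print(round(var(pnl, 0.95)))  # roughly 1.645e6 for normally distributed P&L
```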
VaR has four main uses in finance: risk management, financial control, financial reporting and computing regulatory capital. VaR is sometimes used in non-financial applications as well. However, it is a controversial risk management tool.
Important related ideas are economic capital, backtesting, stress testing, expected shortfall, and tail conditional expectation.
Details
Common parameters for VaR are 1% and 5% probabilities and one day and two week horizons, although other combinations are in use.
The reason for assuming normal markets and no trading, and for restricting the loss to things measured in daily accounts, is to make the loss observable. In some extreme financial events it can be impossible to determine losses, either because market prices are unavailable or because the loss-bearing institution breaks up. Some longer-term consequences of disasters, such as lawsuits, loss of market confidence and employee morale and impairment of brand names can take a long time to play out, and may be hard to allocate among specific prio
https://en.wikipedia.org/wiki/T1%20space | In topology and related branches of mathematics, a T1 space is a topological space in which, for every pair of distinct points, each has a neighborhood not containing the other point. An R0 space is one in which this holds for every pair of topologically distinguishable points. The properties T1 and R0 are examples of separation axioms.
Definitions
Let X be a topological space and let x and y be points in X. We say that x and y are separated if each lies in a neighbourhood that does not contain the other point.
X is called a T1 space if any two distinct points in X are separated.
X is called an R0 space if any two topologically distinguishable points in X are separated.
A T1 space is also called an accessible space or a space with Fréchet topology and an R0 space is also called a symmetric space. (The term Fréchet space also has an entirely different meaning in functional analysis. For this reason, the term T1 space is preferred. There is also a notion of a Fréchet–Urysohn space as a type of sequential space. The term symmetric space also has another meaning.)
A topological space is a T1 space if and only if it is both an R0 space and a Kolmogorov (or T0) space (i.e., a space in which distinct points are topologically distinguishable). A topological space is an R0 space if and only if its Kolmogorov quotient is a T1 space.
Properties
If X is a topological space then the following conditions are equivalent:
X is a T1 space.
X is a T0 space and an R0 space.
Points are closed in X; that is, for every point x, the singleton set {x} is a closed subset of X.
Every subset of X is the intersection of all the open sets containing it.
Every finite set is closed.
Every cofinite subset of X is open.
For every point x, the fixed ultrafilter at x converges only to x.
For every subset S of X and every point x, x is a limit point of S if and only if every open neighbourhood of x contains infinitely many points of S.
Each map from the Sierpiński space to X is trivial.
The map from the Sierpiński space to the single point has the lifting property with respect to the map from X to the single point.
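On an explicit finite topology, the T1 condition can be tested mechanically: for every ordered pair of distinct points, some open set must contain the first but not the second. A sketch (the helper names are made up for the example):

```python
from itertools import combinations, permutations

def powerset(s):
    # All subsets of s, i.e. the discrete topology on s.
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_t1(points, opens):
    # T1: for every ordered pair of distinct points (x, y),
    # some open set contains x but not y.
    return all(
        any(x in u and y not in u for u in opens)
        for x, y in permutations(points, 2)
    )

pts = {0, 1, 2}
print(is_t1(pts, powerset(pts)))   # discrete topology: True
sierpinski = [frozenset(), frozenset({0}), frozenset({0, 1})]
print(is_t1({0, 1}, sierpinski))   # Sierpinski space is T0 but not T1: False
```

On a finite set the T1 condition forces every singleton to be closed, hence the discrete topology; the Sierpiński space fails it because no open set contains 1 without 0.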
If X is a topological space then the following conditions are equivalent (where cl{x} denotes the closure of {x}):
X is an R0 space.
Given any x in X, the closure of {x} contains only the points that are topologically indistinguishable from x.
The Kolmogorov quotient of X is T1.
For any x, y in X, x is in the closure of {y} if and only if y is in the closure of {x}.
The specialization preorder on X is symmetric (and therefore an equivalence relation).
The sets cl{x} for x in X form a partition of X (that is, any two such sets are either identical or disjoint).
If F is a closed set and x is a point not in F, then F ∩ cl{x} = ∅.
Every neighbourhood of a point x contains cl{x}.
Every open set is a union of closed sets.
For every point x, the fixed ultrafilter at x converges only to the points that are topologically indistinguishable from x.
In any topological space we have, as properties of any two points, the following implications:
separated ⇒ topologically distinguishable ⇒ distinct
If the first arrow can be reversed the space is R0. If |
https://en.wikipedia.org/wiki/Separated%20sets | In topology and related branches of mathematics, separated sets are pairs of subsets of a given topological space that are related to each other in a certain way: roughly speaking, neither overlapping nor touching. The notion of when two sets are separated or not is important both to the notion of connected spaces (and their connected components) as well as to the separation axioms for topological spaces.
Separated sets should not be confused with separated spaces (defined below), which are somewhat related but different. Separable spaces are again a completely different topological concept.
Definitions
There are various ways in which two subsets and of a topological space can be considered to be separated. A most basic way in which two sets can be separated is if they are disjoint, that is, if their intersection is the empty set. This property has nothing to do with topology as such, but only set theory. Each of the properties below is stricter than disjointness, incorporating some topological information. The properties are presented in increasing order of specificity, each being a stronger notion than the preceding one.
A more restrictive property is that A and B are separated in X if each is disjoint from the other's closure:
A ∩ cl(B) = ∅ = cl(A) ∩ B.
This property is known as the Hausdorff–Lennes separation condition. Since every set is contained in its closure, two separated sets automatically must be disjoint. The closures themselves do not have to be disjoint from each other; for example, the intervals [0, 1) and (1, 2] are separated in the real line even though the point 1 belongs to both of their closures. A more general example is that in any metric space, two open balls B_r(p) and B_s(q) are separated whenever d(p, q) ≥ r + s. The property of being separated can also be expressed in terms of derived sets (indicated by the prime symbol): A and B are separated when they are disjoint and each is disjoint from the other's derived set, that is, A ∩ B′ = ∅ = A′ ∩ B. (As in the case of the first version of the definition, the derived sets A′ and B′ are not required to be disjoint from each other.)
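The interval example can be checked symbolically; here the half-open intervals [0, 1) and (1, 2] serve as a concrete instance of separated sets whose closures touch (a sketch using sympy's interval sets):

```python
from sympy import Interval, S

# [0, 1) and (1, 2]: each is disjoint from the other's closure (so they
# are separated), yet the closures themselves share the point 1.
A = Interval.Ropen(0, 1)   # [0, 1)
B = Interval.Lopen(1, 2)   # (1, 2]

assert A.intersect(B.closure) == S.EmptySet   # A ∩ cl(B) = ∅
assert B.intersect(A.closure) == S.EmptySet   # B ∩ cl(A) = ∅
assert 1 in A.closure.intersect(B.closure)    # but cl(A) ∩ cl(B) contains 1
print("A and B are separated, with touching closures")
```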
The sets and are if there are neighbourhoods of and of such that and are disjoint. (Sometimes you will see the requirement that and be open neighbourhoods, but this makes no difference in the end.) For the example of and you could take and Note that if any two sets are separated by neighbourhoods, then certainly they are separated. If and are open and disjoint, then they must be separated by neighbourhoods; just take and For this reason, separatedness is often used with closed sets (as in the normal separation axiom).
The sets and are if there is a closed neighbourhood of and a closed neighbourhood of such that and are disjoint. Our examples, and are separated by closed neighbourhoods. You could make either or closed by including the point 1 in it, but you cannot make them both closed while keeping them disjoint. Note that if any two sets are separated by closed neighbourhoods, then certainly they are separated by neighbourhoods.
The sets |
https://en.wikipedia.org/wiki/Symmetric%20space%20%28disambiguation%29 | A symmetric space is, in differential geometry and representation theory, a smooth manifold whose group of symmetries contains an "inversion symmetry" about every point. Examples include:
Riemannian symmetric space
Hermitian symmetric space
Quaternion-Kähler symmetric space
Weakly symmetric space
In topology, symmetric space may also refer to:
R0 space, a topological space in which any two topologically distinguishable points can be separated
In computational complexity theory, symmetric space may refer to undirected or reversible analogues of nondeterministic space complexity, for instance:
SL (complexity), the class of problems solvable in logarithmic symmetric space |
https://en.wikipedia.org/wiki/Hilbert%27s%20Nullstellensatz | In mathematics, Hilbert's Nullstellensatz (German for "theorem of zeros", or more literally, "zero-locus-theorem") is a theorem that establishes a fundamental relationship between geometry and algebra. This relationship is the basis of algebraic geometry. It relates algebraic sets to ideals in polynomial rings over algebraically closed fields. This relationship was discovered by David Hilbert, who proved the Nullstellensatz in his second major paper on invariant theory in 1893 (following his seminal 1890 paper in which he proved Hilbert's basis theorem).
Formulation
Let k be a field (such as the rational numbers) and K be an algebraically closed field extension of k (such as the complex numbers). Consider the polynomial ring k[X1, ..., Xn] and let I be an ideal in this ring. The algebraic set V(I) defined by this ideal consists of all n-tuples x = (x1, ..., xn) in K^n such that f(x) = 0 for all f in I. Hilbert's Nullstellensatz states that if p is some polynomial in k[X1, ..., Xn] that vanishes on the algebraic set V(I), i.e. p(x) = 0 for all x in V(I), then there exists a natural number r such that p^r is in I.
An immediate corollary is the weak Nullstellensatz: The ideal I contains 1 if and only if the polynomials in I do not have any common zeros in K^n. It may also be formulated as follows: if I is a proper ideal in k[X1, ..., Xn], then V(I) cannot be empty, i.e. there exists a common zero for all the polynomials in the ideal in every algebraically closed extension of k. This is the reason for the name of the theorem, which can be proved easily from the 'weak' form using the Rabinowitsch trick. The assumption of considering common zeros in an algebraically closed field is essential here; for example, the elements of the proper ideal (X^2 + 1) in R[X] do not have a common zero in R.
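The weak Nullstellensatz is effectively checkable with a Gröbner basis: an ideal contains 1 exactly when its reduced Gröbner basis is {1}. A sketch with sympy (computations are over the rationals; the example polynomials are chosen for illustration):

```python
from sympy import symbols, groebner

x, y = symbols("x y")

# x^2 + 1 and x have no common zero even in C, so the ideal is the whole ring.
assert groebner([x**2 + 1, x], x).exprs == [1]

# x^2 + y^2 - 1 and x - y do have common zeros, so 1 is not in the ideal.
g = groebner([x**2 + y**2 - 1, x - y], x, y)
assert 1 not in g.exprs
print("weak Nullstellensatz checks passed")
```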
With the notation common in algebraic geometry, the Nullstellensatz can also be formulated as
I(V(J)) = √J
for every ideal J. Here, √J denotes the radical of J and I(U) is the ideal of all polynomials that vanish on the set U.
In this way, taking k = K we obtain an order-reversing bijective correspondence between the algebraic sets in K^n and the radical ideals of K[X1, ..., Xn]. In fact, more generally, one has a Galois connection between subsets of the space and subsets of the algebra, where "Zariski closure" and "radical of the ideal generated" are the closure operators.
As a particular example, consider a point P = (a1, ..., an) in K^n. Then I(P) = (X1 − a1, ..., Xn − an). More generally, the radical of an ideal I is the intersection of the ideals I(P) over all points P in V(I).
Conversely, every maximal ideal of the polynomial ring K[X1, ..., Xn] (note that K is algebraically closed) is of the form (X1 − a1, ..., Xn − an) for some (a1, ..., an) in K^n.
As another example, an algebraic subset W in K^n is irreducible (in the Zariski topology) if and only if I(W) is a prime ideal.
Proofs
There are many known proofs of the theorem. Some are non-constructive, such as the first one. Others are constructive, being based on algorithms for expressing 1 or p^r as a linear combination of the generators of the ideal.
Using Zariski's lemma
Zariski's lemma asserts that if a field is finitely generated as an associative algebra over a field , then it is a finite field e |
https://en.wikipedia.org/wiki/B%C3%A9zout%27s%20theorem | Bézout's theorem is a statement in algebraic geometry concerning the number of common zeros of n polynomials in n indeterminates. In its original form the theorem states that in general the number of common zeros equals the product of the degrees of the polynomials. It is named after Étienne Bézout.
In some elementary texts, Bézout's theorem refers only to the case of two variables, and asserts that, if two plane algebraic curves of degrees m and n have no component in common, they have mn intersection points, counted with their multiplicity, and including points at infinity and points with complex coordinates.
In its modern formulation, the theorem states that, if N is the number of common points over an algebraically closed field of n projective hypersurfaces defined by homogeneous polynomials in n + 1 indeterminates, then N is either infinite, or equals the product of the degrees of the polynomials. Moreover, the finite case occurs almost always.
In the case of two variables and in the case of affine hypersurfaces, if multiplicities and points at infinity are not counted, this theorem provides only an upper bound of the number of points, which is almost always reached. This bound is often referred to as the Bézout bound.
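A small numeric illustration of the Bézout bound (the particular conics are an assumption for the example): a circle and an ellipse, both of degree 2, meet in at most 2 · 2 = 4 points, and this generic pair attains the bound with four real intersections.

```python
from sympy import symbols, solve

x, y = symbols("x y")
# Circle x^2 + y^2 = 1 and ellipse 4x^2 + y^2 = 2, both of degree 2.
sols = solve([x**2 + y**2 - 1, 4*x**2 + y**2 - 2], [x, y])
print(len(sols))  # 4 = 2 * 2, the Bezout bound
```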
Bézout's theorem is fundamental in computer algebra and effective algebraic geometry, by showing that most problems have a computational complexity that is at least exponential in the number of variables. It follows that in these areas, the best complexity that can be hoped for will occur with algorithms that have a complexity which is polynomial in the Bézout bound.
History
In the case of plane curves, Bézout's theorem was essentially stated by Isaac Newton in his proof of lemma 28 of volume 1 of his Principia in 1687, where he claims that two curves have a number of intersection points given by the product of their degrees.
The general theorem was later published in 1779 in Étienne Bézout's Théorie générale des équations algébriques. He supposed the equations to be "complete", which in modern terminology would translate to generic. Since with generic polynomials, there are no points at infinity, and all multiplicities equal one, Bézout's formulation is correct, although his proof does not follow the modern requirements of rigor. This and the fact that the concept of intersection multiplicity was outside the knowledge of his time led to a sentiment expressed by some authors that his proof was neither correct nor the first proof to be given.
The proof of the statement that includes multiplicities requires an accurate definition of the intersection multiplicities, and was therefore not possible before the 20th century. The definitions of multiplicities that were given during the first half of the 20th century involved continuous and infinitesimal deformations. It follows that the proofs of this period apply only over the field of complex numbers. It is only in 1958 that Jean-Pierre Serre gave a purely algebraic definition of multiplic
https://en.wikipedia.org/wiki/Karl%20Menger | Karl Menger (January 13, 1902 – October 5, 1985) was an Austrian–American mathematician, the son of the economist Carl Menger. In mathematics, Menger studied the theory of algebras and the dimension theory of low-regularity ("rough") curves and regions; in graph theory, he is credited with Menger's theorem. Outside of mathematics, Menger made substantial contributions to game theory and social sciences.
Biography
Karl Menger was a student of Hans Hahn and received his PhD from the University of Vienna in 1924. L. E. J. Brouwer invited Menger in 1925 to teach at the University of Amsterdam. In 1927, he returned to Vienna to accept a professorship there. In 1930 and 1931 he was visiting lecturer at Harvard University and the Rice Institute. From 1937 to 1946 he was a professor at the University of Notre Dame. From 1946 to 1971, he was a professor at Illinois Institute of Technology (IIT) in Chicago. In 1983, IIT awarded Menger a Doctor of Humane Letters and Sciences degree.
Contributions to mathematics
His most famous popular contribution was the Menger sponge (mistakenly known as Sierpinski's sponge), a three-dimensional version of the Sierpiński carpet. It is also related to the Cantor set.
With Arthur Cayley, Menger is considered one of the founders of distance geometry; especially by having formalized definitions of the notions of angle and of curvature in terms of directly measurable physical quantities, namely ratios of distance values. The characteristic mathematical expressions appearing in those definitions are Cayley–Menger determinants.
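For three points, the Cayley–Menger determinant recovers a triangle's area from squared distances alone, via the standard identity 16 · area² = −det M for the bordered matrix M of squared distances. A sketch (the function name is made up for the example):

```python
import numpy as np

def triangle_area(d12, d13, d23):
    # Cayley-Menger determinant for three points: d_ij is the SQUARED
    # distance between points i and j; area^2 = -det(M) / 16.
    m = np.array([
        [0, 1,   1,   1  ],
        [1, 0,   d12, d13],
        [1, d12, 0,   d23],
        [1, d13, d23, 0  ],
    ], dtype=float)
    return np.sqrt(-np.linalg.det(m) / 16.0)

# 3-4-5 right triangle: squared distances 9, 16, 25 -> area 6
print(triangle_area(9.0, 16.0, 25.0))
```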
He was an active participant of the Vienna Circle, which had discussions in the 1920s on social science and philosophy. During that time, he published an influential result on the St. Petersburg paradox with applications to utility theory in economics; this result has since been criticised as fundamentally misleading. Later he contributed to the development of game theory with Oskar Morgenstern.
Menger was a founding member of the Econometric Society.
Legacy
Menger's longest and last academic post was at the Illinois Institute of Technology, which hosts an annual IIT Karl Menger Lecture and offers the IIT Karl Menger Student Award to an exceptional student for scholarship each year.
See also
Distance geometry
Kuratowski's theorem
Selection principle
Travelling salesman problem
Notes
Further reading
Crilly, Tony, 2005, "Paul Urysohn and Karl Menger: papers on dimension theory" in Grattan-Guinness, I., ed., Landmark Writings in Western Mathematics. Elsevier: 844–55.
Golland, Louise and Sigmund, Karl "Exact Thought in a Demented Time: Karl Menger and his Viennese Mathematical Colloquium" The Mathematical Intelligencer 2000, Vol 22,1, 34-45
https://en.wikipedia.org/wiki/Pafnuty%20Chebyshev | Pafnuty Lvovich Chebyshev (16 May 1821 – 8 December 1894) was a Russian mathematician, considered to be the founding father of Russian mathematics.
Chebyshev is known for his fundamental contributions to the fields of probability, statistics, mechanics, and number theory. A number of important mathematical concepts are named after him, including the Chebyshev inequality (which can be used to prove the weak law of large numbers), the Bertrand–Chebyshev theorem, Chebyshev polynomials, Chebyshev linkage, and Chebyshev bias.
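The Chebyshev inequality mentioned above states that P(|X − μ| ≥ kσ) ≤ 1/k² for any distribution with finite variance. A quick empirical check on exponential samples (the sample size and values of k are arbitrary choices for the illustration):

```python
import numpy as np

# Exponential(1) has mean 1 and standard deviation 1, so the bound reads
# P(|X - 1| >= k) <= 1/k^2; the empirical tail fractions must respect it.
rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=200_000)
for k in (2, 3, 4):
    frac = np.mean(np.abs(x - 1.0) >= k * 1.0)
    assert frac <= 1 / k**2
print("Chebyshev bound holds for k = 2, 3, 4")
```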
Transcription
The surname Chebyshev has been transliterated in several different ways, like Tchebichef, Tchebychev, Tchebycheff, Tschebyschev, Tschebyschef, Tschebyscheff, Čebyčev, Čebyšev, Chebysheff, Chebychov, Chebyshov (according to native Russian speakers, this one provides the closest pronunciation in English to the correct pronunciation in old Russian), and Chebychev, a mixture between English and French transliterations considered erroneous. It is one of the most well known data-retrieval nightmares in mathematical literature. Currently, the English transliteration Chebyshev has gained widespread acceptance, except by the French, who prefer Tchebychev. The correct transliteration according to ISO 9 is Čebyšëv. The American Mathematical Society adopted the transcription Chebyshev in its Mathematical Reviews.
His first name comes from the Greek Paphnutius (Παφνούτιος), which in turn takes its origin in the Coptic Paphnuty (Ⲡⲁⲫⲛⲟⲩϯ), meaning "he who belongs to God" or simply "the man of God".
Biography
Early years
One of nine children, Chebyshev was born in the village of Okatovo in the district of Borovsk, province of Kaluga. His father, Lev Pavlovich, was a Russian nobleman and wealthy landowner. Pafnuty Lvovich was first educated at home by his mother Agrafena Ivanovna Pozniakova (in reading and writing) and by his cousin Avdotya Kvintillianovna Sukhareva (in French and arithmetic). Chebyshev mentioned that his music teacher also played an important role in his education, for she "raised his mind to exactness and analysis."
Trendelenburg's gait affected Chebyshev's adolescence and development. From childhood, he limped and walked with a stick and so his parents abandoned the idea of his becoming an officer in the family tradition. His disability prevented his playing many children's games and he devoted himself instead to mathematics.
In 1832, the family moved to Moscow, mainly to attend to the education of their eldest sons (Pafnuty and Pavel, who would become lawyers). Education continued at home and his parents engaged teachers of excellent reputation, including (for mathematics and physics) P.N. Pogorelski, held to be one of the best teachers in Moscow and who had taught (for example) the writer Ivan Sergeevich Turgenev.
University studies
In summer 1837, Chebyshev passed the registration examinations and, in September of that year, began his mathematical studies at the second philosophical department of Moscow Univer |
https://en.wikipedia.org/wiki/Emmy%20Noether | Amalie Emmy Noether (, ; ; 23 March 1882 – 14 April 1935) was a German mathematician who made many important contributions to abstract algebra. She discovered Noether's First and Second Theorems, which are fundamental in mathematical physics. She was described by Pavel Alexandrov, Albert Einstein, Jean Dieudonné, Hermann Weyl and Norbert Wiener as the most important woman in the history of mathematics. As one of the leading mathematicians of her time, she developed theories of rings, fields, and algebras. In physics, Noether's theorem explains the connection between symmetry and conservation laws.
Noether was born to a Jewish family in the Franconian town of Erlangen; her father was the mathematician Max Noether. She originally planned to teach French and English after passing the required examinations, but instead studied mathematics at the University of Erlangen, where her father lectured. After completing her doctorate in 1907 under the supervision of Paul Gordan, she worked at the Mathematical Institute of Erlangen without pay for seven years. At the time, women were largely excluded from academic positions. In 1915, she was invited by David Hilbert and Felix Klein to join the mathematics department at the University of Göttingen, a world-renowned center of mathematical research. The philosophical faculty objected, however, and she spent four years lecturing under Hilbert's name. Her habilitation was approved in 1919, allowing her to obtain the rank of Privatdozent.
Noether remained a leading member of the Göttingen mathematics department until 1933; her students were sometimes called the "Noether boys". In 1924, Dutch mathematician B. L. van der Waerden joined her circle and soon became the leading expositor of Noether's ideas; her work was the foundation for the second volume of his influential 1931 textbook, Moderne Algebra. By the time of her plenary address at the 1932 International Congress of Mathematicians in Zürich, her algebraic acumen was recognized around the world. The following year, Germany's Nazi government dismissed Jews from university positions, and Noether moved to the United States to take up a position at Bryn Mawr College in Pennsylvania, where she taught doctoral and post-graduate women including Marie Johanna Weiss, Ruth Stauffer, Grace Shover Quinn and Olga Taussky-Todd. At the same time, she lectured and performed research at the Institute for Advanced Study in Princeton, New Jersey.
Noether's mathematical work has been divided into three "epochs". In the first (1908–1919), she made contributions to the theories of algebraic invariants and number fields. Her work on differential invariants in the calculus of variations, Noether's theorem, has been called "one of the most important mathematical theorems ever proved in guiding the development of modern physics". In the second epoch (1920–1926), she began work that "changed the face of [abstract] algebra". In her classic 1921 paper Idealtheorie in Ringbereichen (Th |
https://en.wikipedia.org/wiki/Face%20%28geometry%29 | In solid geometry, a face is a flat surface (a planar region) that forms part of the boundary of a solid object; a three-dimensional solid bounded exclusively by faces is a polyhedron.
In more technical treatments of the geometry of polyhedra and higher-dimensional polytopes, the term is also used to mean an element of any dimension of a more general polytope (in any number of dimensions).
Polygonal face
In elementary geometry, a face is a polygon on the boundary of a polyhedron. Other names for a polygonal face include polyhedron side and Euclidean plane tile.
For example, any of the six squares that bound a cube is a face of the cube. Sometimes "face" is also used to refer to the 2-dimensional features of a 4-polytope. With this meaning, the 4-dimensional tesseract has 24 square faces, each shared by two of its 8 cubic cells.
Number of polygonal faces of a polyhedron
Any convex polyhedron's surface has Euler characteristic
V − E + F = 2,
where V is the number of vertices, E is the number of edges, and F is the number of faces. This equation is known as Euler's polyhedron formula. Thus the number of faces is 2 more than the excess of the number of edges over the number of vertices. For example, a cube has 12 edges and 8 vertices, and hence 6 faces.
k-face
In higher-dimensional geometry, the faces of a polytope are features of all dimensions. A face of dimension k is called a k-face. For example, the polygonal faces of an ordinary polyhedron are 2-faces. In set theory, the set of faces of a polytope includes the polytope itself and the empty set, where the empty set is for consistency given a "dimension" of −1. For any n-polytope (n-dimensional polytope), −1 ≤ k ≤ n.
For example, with this meaning, the faces of a cube comprise the cube itself (3-face), its (square) facets (2-faces), (linear) edges (1-faces), (point) vertices (0-faces), and the empty set. The following are the faces of a 4-dimensional polytope:
4-face – the 4-dimensional 4-polytope itself
3-faces – 3-dimensional cells (polyhedral faces)
2-faces – 2-dimensional ridges (polygonal faces)
1-faces – 1-dimensional edges
0-faces – 0-dimensional vertices
the empty set, which has dimension −1
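The k-face counts of the n-cube follow the standard formula C(n, k) · 2^(n−k); for the tesseract (n = 4) this reproduces the 24 square faces cited earlier. A short sketch (the function name is made up):

```python
from math import comb

def cube_faces(n, k):
    # Number of k-dimensional faces of the n-dimensional hypercube:
    # choose the k free coordinates, fix each of the rest to 0 or 1.
    return comb(n, k) * 2 ** (n - k)

# Tesseract (n = 4): 16 vertices, 32 edges, 24 square faces, 8 cubic cells
print([cube_faces(4, k) for k in range(5)])  # [16, 32, 24, 8, 1]
```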
In some areas of mathematics, such as polyhedral combinatorics, a polytope is by definition convex. Formally, a face of a polytope P is the intersection of P with any closed halfspace whose boundary is disjoint from the interior of P. From this definition it follows that the set of faces of a polytope includes the polytope itself and the empty set.
In other areas of mathematics, such as the theories of abstract polytopes and star polytopes, the requirement for convexity is relaxed. Abstract theory still requires that the set of faces include the polytope itself and the empty set.
Cell or 3-face
A cell is a polyhedral element (3-face) of a 4-dimensional polytope or 3-dimensional tessellation, or higher. Cells are facets for 4-polytopes and 3-honeycombs.
Examples:
Facet or (n − 1)-face
In higher-dimensional |
https://en.wikipedia.org/wiki/Row%20and%20column%20vectors | In linear algebra, a column vector with m elements is an m × 1 matrix consisting of a single column of m entries, for example,
Similarly, a row vector is a 1 × n matrix for some n, consisting of a single row of n entries,
(Throughout this article, boldface is used for both row and column vectors.)
The transpose (indicated by T) of any row vector is a column vector, and the transpose of any column vector is a row vector:
and
The set of all row vectors with n entries in a given field (such as the real numbers) forms an n-dimensional vector space; similarly, the set of all column vectors with m entries forms an m-dimensional vector space.
The space of row vectors with n entries can be regarded as the dual space of the space of column vectors with n entries, since any linear functional on the space of column vectors can be represented as the left-multiplication of a unique row vector.
Notation
To simplify writing column vectors in-line with other text, sometimes they are written as row vectors with the transpose operation applied to them.
or
Some authors also use the convention of writing both column vectors and row vectors as rows, but separating row vector elements with commas and column vector elements with semicolons (see alternative notation 2 in the table below).
Operations
Matrix multiplication involves the action of multiplying each row vector of one matrix by each column vector of another matrix.
The dot product of two column vectors a and b, considered as elements of a coordinate space, is equal to the matrix product of the transpose of a with b,
By the symmetry of the dot product, the dot product of two column vectors a and b is also equal to the matrix product of the transpose of b with a,
The matrix product of a column and a row vector gives the outer product of two vectors a and b, an example of the more general tensor product. The matrix product of the column vector representation of a and the row vector representation of b gives the components of their dyadic product,
which is the transpose of the matrix product of the column vector representation of b and the row vector representation of a,
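These identities are easy to see with explicit n × 1 arrays (the example vectors are arbitrary):

```python
import numpy as np

# Column vectors as n x 1 arrays: the dot product a . b equals the 1 x 1
# matrix a^T b, and a b^T is the outer (dyadic) product.
a = np.array([[1.0], [2.0], [3.0]])   # 3 x 1 column vector
b = np.array([[4.0], [5.0], [6.0]])

dot = a.T @ b          # 1 x 1 matrix, equal to the dot product 32
outer = a @ b.T        # 3 x 3 outer product
assert dot[0, 0] == 32.0
assert outer.shape == (3, 3)
assert np.allclose(outer.T, b @ a.T)   # transpose of a b^T is b a^T
print("dot and outer product identities hold")
```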
Matrix transformations
An n × n matrix M can represent a linear map and act on row and column vectors as the linear map's transformation matrix. For a row vector v, the product vM is another row vector p:
Another n × n matrix Q can act on p,
Then one can write t = pQ = vMQ, so the matrix product transformation MQ maps v directly to t. Continuing with row vectors, matrix transformations further reconfiguring n-space can be applied to the right of previous outputs.
When a column vector is transformed to another column vector under an n × n matrix action, the operation occurs to the left,
leading to the algebraic expression QM v^T for the composed output from v^T input. The matrix transformations mount up to the left in this use of a column vector for input to matrix transformation.
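A numeric sketch of the two conventions (random vectors and matrices for illustration): with row vectors the transformations compose on the right, with column vectors on the left.

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.random((1, 3))                    # row vector
M, Q = rng.random((3, 3)), rng.random((3, 3))

# Row-vector convention: (v M) Q == v (M Q), transformations stack rightward.
assert np.allclose((v @ M) @ Q, v @ (M @ Q))

# Column-vector convention: the same data as a column, transformations
# stack leftward: Q (M c) == (Q M) c.
c = v.T
assert np.allclose(Q @ (M @ c), (Q @ M) @ c)
print("both composition conventions agree with associativity")
```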
See also
Covariance and contravariance of vectors
Index notation
Vector of ones
Single-entry vector
Standard unit vector
Unit vector |
https://en.wikipedia.org/wiki/Wiener%20process | In mathematics, the Wiener process is a real-valued continuous-time stochastic process named in honor of American mathematician Norbert Wiener for his investigations on the mathematical properties of the one-dimensional Brownian motion. It is often also called Brownian motion due to its historical connection with the physical process of the same name originally observed by Scottish botanist Robert Brown. It is one of the best known Lévy processes (càdlàg stochastic processes with stationary independent increments) and occurs frequently in pure and applied mathematics, economics, quantitative finance, evolutionary biology, and physics.
The Wiener process plays an important role in both pure and applied mathematics. In pure mathematics, the Wiener process gave rise to the study of continuous time martingales. It is a key process in terms of which more complicated stochastic processes can be described. As such, it plays a vital role in stochastic calculus, diffusion processes and even potential theory. It is the driving process of Schramm–Loewner evolution. In applied mathematics, the Wiener process is used to represent the integral of a white noise Gaussian process, and so is useful as a model of noise in electronics engineering (see Brownian noise), instrument errors in filtering theory and disturbances in control theory.
The Wiener process has applications throughout the mathematical sciences. In physics it is used to study Brownian motion, the diffusion of minute particles suspended in fluid, and other types of diffusion via the Fokker–Planck and Langevin equations. It also forms the basis for the rigorous path integral formulation of quantum mechanics (by the Feynman–Kac formula, a solution to the Schrödinger equation can be represented in terms of the Wiener process) and the study of eternal inflation in physical cosmology. It is also prominent in the mathematical theory of finance, in particular the Black–Scholes option pricing model.
Characterisations of the Wiener process
The Wiener process is characterised by the following properties:
W_0 = 0 almost surely
W has independent increments: for every t > 0, the future increments W_{t+u} − W_t, u ≥ 0, are independent of the past values W_s, s ≤ t
W has Gaussian increments: W_{t+u} − W_t is normally distributed with mean 0 and variance u
W has almost surely continuous paths: W_t is almost surely continuous in t
That the process has independent increments means that if 0 ≤ s_1 < t_1 ≤ s_2 < t_2 then W_{t_1} − W_{s_1} and W_{t_2} − W_{s_2} are independent random variables, and the similar condition holds for n increments.
An alternative characterisation of the Wiener process is the so-called Lévy characterisation, which says that the Wiener process is an almost surely continuous martingale with W_0 = 0 and quadratic variation [W, W]_t = t (which means that W_t^2 − t is also a martingale).
A third characterisation is that the Wiener process has a spectral representation as a sine series whose coefficients are independent N(0, 1) random variables. This representation can be obtained using the Karhunen–Loève theorem.
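A minimal NumPy sketch of the Gaussian-increment characterisation: paths are built by summing independent N(0, dt) increments, and the terminal variance is checked against t (the step counts and path counts are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build Wiener paths on [0, T] from independent Gaussian increments:
# W_{t+dt} - W_t ~ N(0, dt), with W_0 = 0.
T, n_steps, n_paths = 1.0, 1000, 5000
dt = T / n_steps
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(increments, axis=1)           # W at t = dt, 2dt, ..., T
W = np.hstack([np.zeros((n_paths, 1)), W])  # prepend W_0 = 0

# The Gaussian-increment property implies Var(W_T) = T
terminal_var = W[:, -1].var()
```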
Another characteri |
https://en.wikipedia.org/wiki/Church%E2%80%93Rosser%20theorem | In lambda calculus, the Church–Rosser theorem states that, when applying reduction rules to terms, the ordering in which the reductions are chosen does not make a difference to the eventual result.
More precisely, if there are two distinct reductions or sequences of reductions that can be applied to the same term, then there exists a term that is reachable from both results, by applying (possibly empty) sequences of additional reductions. The theorem was proved in 1936 by Alonzo Church and J. Barkley Rosser, after whom it is named.
The theorem is symbolized by the adjacent diagram: If term a can be reduced to both b and c, then there must be a further term d (possibly equal to either b or c) to which both b and c can be reduced.
Viewing the lambda calculus as an abstract rewriting system, the Church–Rosser theorem states that the reduction rules of the lambda calculus are confluent. As a consequence of the theorem, a term in the lambda calculus has at most one normal form, justifying reference to "the normal form" of a given normalizable term.
History
In 1936, Alonzo Church and J. Barkley Rosser proved that the theorem holds for β-reduction in the λI-calculus (in which every abstracted variable must appear in the term's body). The proof method is known as "finiteness of developments", and it has additional consequences such as the Standardization Theorem, which relates to a method in which reductions can be performed from left to right to reach a normal form (if one exists). The result for the pure untyped lambda calculus was proved by D. E. Shroer in 1965.
Pure untyped lambda calculus
One type of reduction in the pure untyped lambda calculus for which the Church–Rosser theorem applies is β-reduction, in which a subterm of the form (λx.t)s is contracted by the substitution t[x := s]. If β-reduction is denoted by →_β and its reflexive, transitive closure by ↠_β, then the Church–Rosser theorem is that:
if M ↠_β N_1 and M ↠_β N_2, then there exists a term N such that N_1 ↠_β N and N_2 ↠_β N.
A consequence of this property is that two terms equal in λβ must reduce to a common term:
if M =_β N, then there exists a term L such that M ↠_β L and N ↠_β L.
The theorem also applies to η-reduction, in which a subterm λx.(t x), where x is not free in t, is replaced by t. It also applies to βη-reduction, the union of the two reduction rules.
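The confluence property can be illustrated with a toy β-reducer: a sketch, not a faithful implementation of the proofs below. Terms are nested tuples, and substitution does not rename bound variables, so the example term uses distinct names to avoid capture.

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', f, a)

def subst(t, x, s):
    """Capture-naive substitution t[x := s] (assumes distinct names)."""
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def step_leftmost(t):
    """One leftmost-outermost beta step, or None if t is in normal form."""
    if t[0] == 'app':
        if t[1][0] == 'lam':
            return subst(t[1][2], t[1][1], t[2])
        s = step_leftmost(t[1])
        if s is not None:
            return ('app', s, t[2])
        s = step_leftmost(t[2])
        if s is not None:
            return ('app', t[1], s)
        return None
    if t[0] == 'lam':
        s = step_leftmost(t[2])
        return None if s is None else ('lam', t[1], s)
    return None

def step_innermost(t):
    """One rightmost-innermost beta step, or None if t is in normal form."""
    if t[0] == 'app':
        s = step_innermost(t[2])
        if s is not None:
            return ('app', t[1], s)
        s = step_innermost(t[1])
        if s is not None:
            return ('app', s, t[2])
        if t[1][0] == 'lam':
            return subst(t[1][2], t[1][1], t[2])
        return None
    if t[0] == 'lam':
        s = step_innermost(t[2])
        return None if s is None else ('lam', t[1], s)
    return None

def normalize(t, step):
    while True:
        s = step(t)
        if s is None:
            return t
        t = s

# (λx. x x) ((λy. y) z): whichever redex is contracted first,
# both strategies converge on the normal form z z.
term = ('app',
        ('lam', 'x', ('app', ('var', 'x'), ('var', 'x'))),
        ('app', ('lam', 'y', ('var', 'y')), ('var', 'z')))
nf1 = normalize(term, step_leftmost)
nf2 = normalize(term, step_innermost)
assert nf1 == nf2 == ('app', ('var', 'z'), ('var', 'z'))
```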
Proof
For β-reduction, one proof method originates from William W. Tait and Per Martin-Löf. Say that a binary relation → satisfies the diamond property if:
whenever a → b and a → c, there exists d such that b → d and c → d.
Then the Church–Rosser property is the statement that ↠_β satisfies the diamond property. We introduce a new reduction ⇉ whose reflexive transitive closure is ↠_β and which satisfies the diamond property. By induction on the number of steps in the reduction, it thus follows that ↠_β satisfies the diamond property.
The relation ⇉ has the formation rules:
x ⇉ x
If M ⇉ M′ then λx.M ⇉ λx.M′
If M ⇉ M′ and N ⇉ N′ then MN ⇉ M′N′ and (λx.M)N ⇉ M′[x := N′]
The η-reduction rule can be proved to be Church–Rosser directly. Then, it can be proved that β-reduction and η-reduction commute in the sense that:
If M ↠_β N and M ↠_η P, then there exists a term Q such that N ↠_η Q and P ↠_β Q.
Hence we can conclude that βη-reduction is Church–Rosser.
Normalisation
A reduction rule that satisfies the Chu |
https://en.wikipedia.org/wiki/List%20of%20letters%20used%20in%20mathematics%2C%20science%2C%20and%20engineering | Latin and Greek letters are used in mathematics, science, engineering, and other areas where mathematical notation is used as symbols for constants, special functions, and also conventionally for variables representing certain quantities.
Some common conventions:
Intensive quantities in physics are usually denoted with minuscules (lower-case letters), while extensive quantities are denoted with capital letters.
Most symbols are written in italics.
Vectors can be denoted in boldface.
Sets of numbers are typically bold or blackboard bold.
Latin
Greek
Other scripts
Hebrew
Cyrillic
Japanese
Modified Latin
Modified Greek
References
Mathematics-related lists
Physics-related lists |
https://en.wikipedia.org/wiki/Noetherian | In mathematics, the adjective Noetherian is used to describe objects that satisfy an ascending or descending chain condition on certain kinds of subobjects, meaning that certain ascending or descending sequences of subobjects must have finite length. Noetherian objects are named after Emmy Noether, who was the first to study the ascending and descending chain conditions for rings. Specifically:
Noetherian group, a group that satisfies the ascending chain condition on subgroups.
Noetherian ring, a ring that satisfies the ascending chain condition on ideals.
Noetherian module, a module that satisfies the ascending chain condition on submodules.
More generally, an object in a category is said to be Noetherian if there is no infinitely increasing filtration of it by subobjects. A category is Noetherian if every object in it is Noetherian.
Noetherian relation, a binary relation that satisfies the ascending chain condition on its elements.
Noetherian topological space, a topological space that satisfies the descending chain condition on closed sets.
Noetherian induction, also called well-founded induction, a proof method for binary relations that satisfy the descending chain condition.
Noetherian rewriting system, an abstract rewriting system that has no infinite chains.
Noetherian scheme, a scheme in algebraic geometry that admits a finite covering by open spectra of Noetherian rings.
See also
Artinian ring, a ring that satisfies the descending chain condition on ideals.
Mathematical analysis |
https://en.wikipedia.org/wiki/Trigonometric%20tables | In mathematics, tables of trigonometric functions are useful in a number of areas. Before the existence of pocket calculators, trigonometric tables were essential for navigation, science and engineering. The calculation of mathematical tables was an important area of study, which led to the development of the first mechanical computing devices.
Modern computers and pocket calculators now generate trigonometric function values on demand, using special libraries of mathematical code. Often, these libraries use pre-calculated tables internally, and compute the required value by using an appropriate interpolation method. Interpolation of simple look-up tables of trigonometric functions is still used in computer graphics, where only modest accuracy may be required and speed is often paramount.
Another important application of trigonometric tables and generation schemes is for fast Fourier transform (FFT) algorithms, where the same trigonometric function values (called twiddle factors) must be evaluated many times in a given transform, especially in the common case where many transforms of the same size are computed. In this case, calling generic library routines every time is unacceptably slow. One option is to call the library routines once, to build up a table of those trigonometric values that will be needed, but this requires significant memory to store the table. The other possibility, since a regular sequence of values is required, is to use a recurrence formula to compute the trigonometric values on the fly. Significant research has been devoted to finding accurate, stable recurrence schemes in order to preserve the accuracy of the FFT (which is very sensitive to trigonometric errors).
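A minimal sketch of such an on-the-fly recurrence for twiddle factors (the function name and the choice of N are illustrative): rather than repeated multiplication by w_1, the rotation is applied as an additive update whose real part is computed without cancellation.

```python
import math

def twiddle_factors(N):
    """Twiddle factors e^(-2*pi*i*k/N) for k = 0..N-1 via a recurrence.

    The naive recurrence w_{k+1} = w_k * w_1 lets rounding error compound.
    Writing the update as w_{k+1} = w_k + w_k * d, where
    d = (cos t - 1) + i*sin t with t = -2*pi/N, and evaluating
    cos t - 1 as -2*sin(t/2)**2, avoids the cancellation in cos t - 1
    for small angles.
    """
    t = -2.0 * math.pi / N
    d = complex(-2.0 * math.sin(t / 2.0) ** 2, math.sin(t))
    w = complex(1.0, 0.0)
    out = []
    for _ in range(N):
        out.append(w)
        w = w + w * d  # exact rotation by t in real arithmetic
    return out

factors = twiddle_factors(8)
assert abs(factors[2] - complex(0.0, -1.0)) < 1e-12  # e^(-i*pi/2) = -i
```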
On-demand computation
Modern computers and calculators use a variety of techniques to provide trigonometric function values on demand for arbitrary angles (Kantabutra, 1996). One common method, especially on higher-end processors with floating-point units, is to combine a polynomial or rational approximation (such as Chebyshev approximation, best uniform approximation, Padé approximation, and typically for higher or variable precisions, Taylor and Laurent series) with range reduction and a table lookup — they first look up the closest angle in a small table, and then use the polynomial to compute the correction. Maintaining precision while performing such interpolation is nontrivial, but methods like Gal's accurate tables, Cody and Waite range reduction, and Payne and Hanek radian reduction algorithms can be used for this purpose. On simpler devices that lack a hardware multiplier, there is an algorithm called CORDIC (as well as related techniques) that is more efficient, since it uses only shifts and additions. All of these methods are commonly implemented in hardware for performance reasons.
The particular polynomial used to approximate a trigonometric function is generated ahead of time using some approximation of a minimax approximation algorithm.
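The table-plus-correction scheme can be sketched in a few lines (table size, range reduction, and the order of the Taylor correction are all illustrative choices, not a production design):

```python
import math

# A small lookup table of sine and cosine at 256 evenly spaced angles,
# combined with a short Taylor correction for the residual angle.
N = 256
STEP = 2.0 * math.pi / N
SIN_TAB = [math.sin(k * STEP) for k in range(N)]
COS_TAB = [math.cos(k * STEP) for k in range(N)]

def fast_sin(x):
    x = x % (2.0 * math.pi)        # range reduction to [0, 2*pi)
    k0 = int(round(x / STEP))      # nearest table entry
    r = x - k0 * STEP              # residual angle, |r| <= STEP / 2
    k = k0 % N
    # Angle addition: sin(k*STEP + r) = sin(k*STEP)cos(r) + cos(k*STEP)sin(r),
    # with sin(r) and cos(r) from short Taylor series (r is small).
    sin_r = r - r**3 / 6.0
    cos_r = 1.0 - r * r / 2.0
    return SIN_TAB[k] * cos_r + COS_TAB[k] * sin_r

assert abs(fast_sin(1.0) - math.sin(1.0)) < 1e-8
```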
For |
https://en.wikipedia.org/wiki/Application | Application may refer to:
Mathematics and computing
Application software, computer software designed to help the user to perform specific tasks
Application layer, an abstraction layer that specifies protocols and interface methods used in a communications network
Function application, in mathematics and computer science
Processes and documents
Application for employment, a form or forms that an individual seeking employment must fill out
College application, the process by which prospective students apply for entry into a college or university
Patent application, a document filed at a patent office to support the grant of a patent
Other uses
Application (virtue), a characteristic encapsulated in diligence
Topical application, the spreading or putting of medication to body surfaces
See also
Apply |