| source | text |
|---|---|
https://en.wikipedia.org/wiki/Likelihood%20function | The likelihood function (often simply called the likelihood) is the joint probability (or probability density) of observed data viewed as a function of the parameters of a statistical model.
In maximum likelihood estimation, the arg max (over the parameter $\theta$) of the likelihood function serves as a point estimate for $\theta$, while the Fisher information (often approximated by the likelihood's Hessian matrix) indicates the estimate's precision. Meanwhile in Bayesian statistics, parameter estimates are derived from the converse of the likelihood, the so-called posterior probability, which is calculated via Bayes' rule.
Definition
The likelihood function, parameterized by a (possibly multivariate) parameter $\theta$, is usually defined differently for discrete and continuous probability distributions (a more general definition is discussed below). Given a probability density or mass function
$$x \mapsto f(x \mid \theta),$$
where $x$ is a realization of the random variable $X$, the likelihood function is
$$\theta \mapsto f(x \mid \theta),$$
often written
$$\mathcal{L}(\theta \mid x).$$
In other words, when $f(x \mid \theta)$ is viewed as a function of $x$ with $\theta$ fixed, it is a probability density function, and when viewed as a function of $\theta$ with $x$ fixed, it is a likelihood function. In the frequentist paradigm, the notation $f(x \mid \theta)$ is often avoided and instead $f(x; \theta)$ or $f(x, \theta)$ are used to indicate that $\theta$ is regarded as a fixed unknown quantity rather than as a random variable being conditioned on.
The likelihood function does not specify the probability that $\theta$ is the truth, given the observed sample $X = x$. Such an interpretation is a common error, with potentially disastrous consequences (see prosecutor's fallacy).
Discrete probability distribution
Let $X$ be a discrete random variable with probability mass function $p$ depending on a parameter $\theta$. Then the function
$$\mathcal{L}(\theta \mid x) = p_\theta(x) = P_\theta(X = x),$$
considered as a function of $\theta$, is the likelihood function, given the outcome $x$ of the random variable $X$. Sometimes the probability of "the value $x$ of $X$ for the parameter value $\theta$" is written as $P(X = x \mid \theta)$ or $P(X = x; \theta)$. The likelihood is the probability that a particular outcome $x$ is observed when the true value of the parameter is $\theta$, equivalent to the probability mass on $x$; it is not a probability density over the parameter $\theta$. The likelihood, $\mathcal{L}(\theta \mid x)$, should not be confused with $P(\theta \mid x)$, which is the posterior probability of $\theta$ given the data $x$.
Given no event (no data), the likelihood is 1; any non-trivial event will have a lower likelihood.
Example
Consider a simple statistical model of a coin flip: a single parameter $p_\text{H}$ that expresses the "fairness" of the coin. The parameter is the probability that a coin lands heads up ("H") when tossed. $p_\text{H}$ can take on any value within the range 0.0 to 1.0. For a perfectly fair coin, $p_\text{H} = 0.5$.
Imagine flipping a fair coin twice, and observing two heads in two tosses ("HH"). Assuming that each successive coin flip is i.i.d., then the probability of observing HH is
$$P(\text{HH} \mid p_\text{H} = 0.5) = 0.5^2 = 0.25.$$
Equivalently, the likelihood at $p_\text{H} = 0.5$ given that "HH" was observed is 0.25:
$$\mathcal{L}(p_\text{H} = 0.5 \mid \text{HH}) = 0.25.$$
This is not the same as saying that $P(p_\text{H} = 0.5 \mid \text{HH}) = 0.25$, a conclusion which could only be reached via Bayes' theorem given knowledge about the marginal p |
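A minimal numeric sketch of this example (the code and function names are mine, not the article's), evaluating the likelihood of "HH" over a grid of candidate values of $p_\text{H}$:

```python
# Sketch of the example above: the likelihood of observing "HH" as a
# function of the heads probability p_H, assuming i.i.d. Bernoulli flips.
import numpy as np

def likelihood_hh(p_h: float) -> float:
    """L(p_H | HH) = P(HH | p_H) = p_H**2 for two independent flips."""
    return p_h ** 2

for p in np.linspace(0.0, 1.0, 11):
    print(f"p_H = {p:.1f}  ->  L(p_H | HH) = {likelihood_hh(p):.3f}")

# The likelihood at p_H = 0.5 is 0.25; it is maximized at p_H = 1.0, the
# maximum likelihood estimate after observing two heads in two tosses.
```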
https://en.wikipedia.org/wiki/Borel%E2%80%93Cantelli%20lemma | In probability theory, the Borel–Cantelli lemma is a theorem about sequences of events. In general, it is a result in measure theory. It is named after Émile Borel and Francesco Paolo Cantelli, who gave statement to the lemma in the first decades of the 20th century. A related result, sometimes called the second Borel–Cantelli lemma, is a partial converse of the first Borel–Cantelli lemma. The lemma states that, under certain conditions, an event will have probability of either zero or one. Accordingly, it is the best-known of a class of similar theorems, known as zero-one laws. Other examples include Kolmogorov's zero–one law and the Hewitt–Savage zero–one law.
Statement of lemma for probability spaces
Let E1,E2,... be a sequence of events in some probability space.
The Borel–Cantelli lemma states: if the sum of the probabilities of the events $\{E_n\}$ is finite,
$$\sum_{n=1}^{\infty} \Pr(E_n) < \infty,$$
then the probability that infinitely many of them occur is 0, that is,
$$\Pr\left(\limsup_{n\to\infty} E_n\right) = 0.$$
Here, "lim sup" denotes limit supremum of the sequence of events, and each event is a set of outcomes. That is, lim sup En is the set of outcomes that occur infinitely many times within the infinite sequence of events (En). Explicitly,
The set lim sup En is sometimes denoted {En i.o. }, where "i.o." stands for "infinitely often". The theorem therefore asserts that if the sum of the probabilities of the events En is finite, then the set of all outcomes that are "repeated" infinitely many times must occur with probability zero. Note that no assumption of independence is required.
Example
Suppose (Xn) is a sequence of random variables with Pr(Xn = 0) = 1/n² for each n. The probability that Xn = 0 occurs for infinitely many n is equivalent to the probability of the intersection of infinitely many [Xn = 0] events. The intersection of infinitely many such events is a set of outcomes common to all of them. However, the sum ΣPr(Xn = 0) converges to $\pi^2/6 \approx 1.645 < \infty$, and so the Borel–Cantelli lemma states that the set of outcomes that are common to infinitely many such events occurs with probability zero. Hence, the probability of Xn = 0 occurring for infinitely many n is 0. Almost surely (i.e., with probability 1), Xn is nonzero for all but finitely many n.
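A simulation sketch of this example (my own illustration, truncating the infinite sequence at N = 10,000 terms):

```python
# Each row simulates one sample path X_1, ..., X_N with Pr(X_n = 0) = 1/n^2
# and counts its zeros; the average count approaches pi^2/6 ~ 1.645.
import numpy as np

rng = np.random.default_rng(0)
N, trials = 10_000, 200
n = np.arange(1, N + 1)
zero_counts = (rng.random((trials, N)) < 1.0 / n**2).sum(axis=1)

print("average number of zeros per path:", zero_counts.mean())  # near 1.645
print("maximum zeros in any single path:", zero_counts.max())   # small, finite
```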
Proof
Let (En) be a sequence of events in some probability space.
The sequence of events $\left(\bigcup_{k=n}^{\infty} E_k\right)_{n=1}^{\infty}$ is non-increasing:
$$\bigcup_{k=1}^{\infty} E_k \supseteq \bigcup_{k=2}^{\infty} E_k \supseteq \cdots \supseteq \limsup_{n\to\infty} E_n.$$
By continuity from above,
$$\Pr\left(\limsup_{n\to\infty} E_n\right) = \lim_{n\to\infty} \Pr\left(\bigcup_{k=n}^{\infty} E_k\right).$$
By subadditivity,
$$\Pr\left(\bigcup_{k=n}^{\infty} E_k\right) \le \sum_{k=n}^{\infty} \Pr(E_k).$$
By the original assumption, $\sum_{k=1}^{\infty} \Pr(E_k) < \infty$. As the series converges, its tail must vanish, so
$$\lim_{n\to\infty} \sum_{k=n}^{\infty} \Pr(E_k) = 0,$$
and $\Pr\left(\limsup_{n\to\infty} E_n\right) = 0$, as required.
General measure spaces
For general measure spaces, the Borel–Cantelli lemma takes the following form: let $\mu$ be a (positive) measure on a set $X$, with $\sigma$-algebra $F$, and let $(A_n)$ be a sequence in $F$. If
$$\sum_{n=1}^{\infty} \mu(A_n) < \infty,$$
then
$$\mu\left(\limsup_{n\to\infty} A_n\right) = 0.$$
Converse result
A related result, sometimes called the second Borel–Cantelli lemma, is a partial converse of the first Borel–Cantelli lemma. The lemma states: If the events En are independent and the sum of the probabilities of the En diverges to infinity, then the probability that infinitely many of them occur is 1. That is:
$$\sum_{n=1}^{\infty} \Pr(E_n) = \infty \text{ and the } E_n \text{ are independent} \implies \Pr\left(\limsup_{n\to\infty} E_n\right) = 1.$$
The assumption of independence can be weakened to pairwise independence, but in that case the proof is more difficult.
The infinite monkey theorem follows from the second lemma.
Example
The lemma can be applied to give a covering theorem in $\mathbb{R}^n$. Specifically, if |
https://en.wikipedia.org/wiki/Natural%20transformation | In category theory, a branch of mathematics, a natural transformation provides a way of transforming one functor into another while respecting the internal structure (i.e., the composition of morphisms) of the categories involved. Hence, a natural transformation can be considered to be a "morphism of functors". Informally, the notion of a natural transformation states that a particular map between functors can be done consistently over an entire category.
Indeed, this intuition can be formalized to define so-called functor categories. Natural transformations are, after categories and functors, one of the most fundamental notions of category theory and consequently appear in the majority of its applications.
Definition
If $F$ and $G$ are functors between the categories $C$ and $D$ (both from $C$ to $D$), then a natural transformation $\eta$ from $F$ to $G$ is a family of morphisms that satisfies two requirements.
The natural transformation must associate, to every object $X$ in $C$, a morphism $\eta_X : F(X) \to G(X)$ between objects of $D$. The morphism $\eta_X$ is called the component of $\eta$ at $X$.
Components must be such that for every morphism $f : X \to Y$ in $C$ we have:
$$\eta_Y \circ F(f) = G(f) \circ \eta_X.$$
The last equation can conveniently be expressed by the commutative diagram
If both $F$ and $G$ are contravariant, the vertical arrows in the right diagram are reversed. If $\eta$ is a natural transformation from $F$ to $G$, we also write $\eta : F \to G$ or $\eta : F \Rightarrow G$. This is also expressed by saying the family of morphisms $\eta_X : F(X) \to G(X)$ is natural in $X$.
If, for every object $X$ in $C$, the morphism $\eta_X$ is an isomorphism in $D$, then $\eta$ is said to be a natural isomorphism (or sometimes natural equivalence or isomorphism of functors). Two functors $F$ and $G$ are called naturally isomorphic or simply isomorphic if there exists a natural isomorphism from $F$ to $G$.
An infranatural transformation $\eta$ from $F$ to $G$ is simply a family of morphisms $\eta_X : F(X) \to G(X)$, for all $X$ in $C$. Thus a natural transformation is an infranatural transformation for which $\eta_Y \circ F(f) = G(f) \circ \eta_X$ for every morphism $f : X \to Y$. The naturalizer of $\eta$, nat$(\eta)$, is the largest subcategory of $C$ containing all the objects of $C$ on which $\eta$ restricts to a natural transformation.
Examples
Opposite group
Statements such as
"Every group is naturally isomorphic to its opposite group"
abound in modern mathematics. We will now give the precise meaning of this statement as well as its proof. Consider the category $\mathbf{Grp}$
of all groups with group homomorphisms as morphisms. If $(G, *)$ is a group, we define
its opposite group $(G^{\text{op}}, *^{\text{op}})$ as follows: $G^{\text{op}}$ is the same set as $G$, and the operation $*^{\text{op}}$ is defined
by $a *^{\text{op}} b = b * a$. All multiplications in $G^{\text{op}}$ are thus "turned around". Forming the opposite group becomes
a (covariant) functor from $\mathbf{Grp}$ to $\mathbf{Grp}$ if we define $f^{\text{op}} = f$ for any group homomorphism $f : G \to H$. Note that
$f^{\text{op}}$ is indeed a group homomorphism from $G^{\text{op}}$ to $H^{\text{op}}$:
$$f^{\text{op}}(a *^{\text{op}} b) = f(b * a) = f(b) * f(a) = f^{\text{op}}(a) *^{\text{op}} f^{\text{op}}(b).$$
The content of the above statement is:
"The identity functor is naturally isomorphic to the opposite functor "
To prove this, we need to provide isomorphisms $\eta_G : G \to G^{\text{op}}$ for every group $G$, such that the above diagram commutes.
Set $\eta_G(a) = a^{-1}$.
The formulas
$$\eta_G(a * b) = (a * b)^{-1} = b^{-1} * a^{-1} = a^{-1} *^{\text{op}} b^{-1} = \eta_G(a) *^{\text{op}} \eta_G(b)$$
and $(a^{-1})^{-1} = a$ show that $\eta_G$ is a group homomorphism with inverse $\eta_{G^{\text{op}}}$. To prove the naturality, we start with a group homomorphism $f : G \to H$
and show $\eta_H \circ f = f^{\text{op}} \circ \eta_G$, i.e. $(f(a))^{-1} = f^{\text{op}}(a^{-1})$
for all $a$ in $G$. Th |
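A finite numerical check of this naturality square (the example groups and homomorphism are my own choices, using additive cyclic groups, where the inverse of $a$ is $-a \bmod n$ and, for abelian groups, $G^{\text{op}} = G$):

```python
# Check eta_H(f(a)) == f^op(eta_G(a)) componentwise for f: Z_12 -> Z_6,
# f(x) = x mod 6, with eta_G(a) = a^{-1} the group inverse in Z_n.
def inv_mod(a: int, n: int) -> int:
    """Group inverse in the additive group Z_n."""
    return (-a) % n

def f(x: int) -> int:
    """A homomorphism from Z_12 to Z_6 (reduction mod 6)."""
    return x % 6

# f^op is the same function as f, so the naturality square reads:
assert all(inv_mod(f(a), 6) == f(inv_mod(a, 12)) for a in range(12))
print("naturality square commutes for all 12 components")
```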
https://en.wikipedia.org/wiki/Likelihood-ratio%20test | In statistics, the likelihood-ratio test assesses the goodness of fit of two competing statistical models, specifically one found by maximization over the entire parameter space and another found after imposing some constraint, based on the ratio of their likelihoods. If the constraint (i.e., the null hypothesis) is supported by the observed data, the two likelihoods should not differ by more than sampling error. Thus the likelihood-ratio test tests whether this ratio is significantly different from one, or equivalently whether its natural logarithm is significantly different from zero.
The likelihood-ratio test, also known as Wilks test, is the oldest of the three classical approaches to hypothesis testing, together with the Lagrange multiplier test and the Wald test. In fact, the latter two can be conceptualized as approximations to the likelihood-ratio test, and are asymptotically equivalent. In the case of comparing two models each of which has no unknown parameters, use of the likelihood-ratio test can be justified by the Neyman–Pearson lemma. The lemma demonstrates that the test has the highest power among all competitors.
Definition
General
Suppose that we have a statistical model with parameter space $\Theta$. A null hypothesis is often stated by saying that the parameter $\theta$ is in a specified subset $\Theta_0$ of $\Theta$. The alternative hypothesis is thus that $\theta$ is in the complement of $\Theta_0$, i.e. in $\Theta \setminus \Theta_0$, which is denoted by $\Theta_0^{\text{c}}$. The likelihood ratio test statistic for the null hypothesis $H_0 : \theta \in \Theta_0$ is given by:
$$\lambda_{\text{LR}} = -2 \ln \left[ \frac{\sup_{\theta \in \Theta_0} \mathcal{L}(\theta)}{\sup_{\theta \in \Theta} \mathcal{L}(\theta)} \right]$$
where the quantity inside the brackets is called the likelihood ratio. Here, the notation $\sup$ refers to the supremum. As all likelihoods are positive, and as the constrained maximum cannot exceed the unconstrained maximum, the likelihood ratio is bounded between zero and one.
Often the likelihood-ratio test statistic is expressed as a difference between the log-likelihoods
$$\lambda_{\text{LR}} = -2 \left[ \ell(\theta_0) - \ell(\hat{\theta}) \right]$$
where
$$\ell(\hat{\theta}) \equiv \ln \left[ \sup_{\theta \in \Theta} \mathcal{L}(\theta) \right]$$
is the logarithm of the maximized likelihood function $\mathcal{L}$, and $\ell(\theta_0)$ is the maximal value in the special case that the null hypothesis is true (but not necessarily a value that maximizes $\mathcal{L}$ for the sampled data) and
$$\theta_0 \in \Theta_0 \quad \text{and} \quad \hat{\theta} \in \Theta$$
denote the respective arguments of the maxima and the allowed ranges they're embedded in. Multiplying by −2 ensures mathematically that (by Wilks' theorem) $\lambda_{\text{LR}}$ converges asymptotically to being $\chi^2$-distributed if the null hypothesis happens to be true. The finite-sample distributions of likelihood-ratio tests are generally unknown.
The likelihood-ratio test requires that the models be nested – i.e. the more complex model can be transformed into the simpler model by imposing constraints on the former's parameters. Many common test statistics are tests for nested models and can be phrased as log-likelihood ratios or approximations thereof: e.g. the Z-test, the F-test, the G-test, and Pearson's chi-squared test; for an illustration with the one-sample t-test, see below.
If the models are not nested, then instead of the likelihood-ratio test, there is a generalization of the test that can usually be u |
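A hedged sketch of the test in practice (the Poisson setup, sample size, and seed are my own choices, not from the article), using Wilks' asymptotic $\chi^2$ approximation with one degree of freedom:

```python
# Likelihood-ratio test of H0: lambda = 1.0 for i.i.d. Poisson data.
# The unconstrained MLE of the Poisson rate is the sample mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.poisson(lam=1.4, size=200)       # data actually drawn with lambda = 1.4

def loglik(lam: float) -> float:
    return np.sum(stats.poisson.logpmf(x, lam))

lam0 = 1.0                               # null value (the constrained model)
lam_hat = x.mean()                       # unconstrained MLE
lr = -2 * (loglik(lam0) - loglik(lam_hat))
p_value = stats.chi2.sf(lr, df=1)        # asymptotic null distribution
print(f"lambda_hat = {lam_hat:.3f}, LR statistic = {lr:.2f}, p = {p_value:.4f}")
```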
https://en.wikipedia.org/wiki/Abelian%20category | In mathematics, an abelian category is a category in which morphisms and objects can be added and in which kernels and cokernels exist and have desirable properties. The motivating prototypical example of an abelian category is the category of abelian groups, Ab. The theory originated in an effort to unify several cohomology theories by Alexander Grothendieck and independently in the slightly earlier work of David Buchsbaum. Abelian categories are very stable categories; for example they are regular and they satisfy the snake lemma. The class of abelian categories is closed under several categorical constructions, for example, the category of chain complexes of an abelian category, or the category of functors from a small category to an abelian category are abelian as well. These stability properties make them inevitable in homological algebra and beyond; the theory has major applications in algebraic geometry, cohomology and pure category theory.
Definitions
A category is abelian if it is preadditive and
it has a zero object,
it has all binary biproducts,
it has all kernels and cokernels, and
all monomorphisms and epimorphisms are normal.
This definition is equivalent to the following "piecemeal" definition:
A category is preadditive if it is enriched over the monoidal category Ab of abelian groups. This means that all hom-sets are abelian groups and the composition of morphisms is bilinear.
A preadditive category is additive if every finite set of objects has a biproduct. This means that we can form finite direct sums and direct products. In Def. 1.2.6, it is required that an additive category have a zero object (empty biproduct).
An additive category is preabelian if every morphism has both a kernel and a cokernel.
Finally, a preabelian category is abelian if every monomorphism and every epimorphism is normal. This means that every monomorphism is a kernel of some morphism, and every epimorphism is a cokernel of some morphism.
Note that the enriched structure on hom-sets is a consequence of the first three axioms of the first definition. This highlights the foundational relevance of the category of abelian groups in the theory and its canonical nature.
The concept of exact sequence arises naturally in this setting, and it turns out that exact functors, i.e. the functors preserving exact sequences in various senses, are the relevant functors between abelian categories. This exactness concept has been axiomatized in the theory of exact categories, forming a very special case of regular categories.
Examples
As mentioned above, the category of all abelian groups is an abelian category. The category of all finitely generated abelian groups is also an abelian category, as is the category of all finite abelian groups.
If R is a ring, then the category of all left (or right) modules over R is an abelian category. In fact, it can be shown that any small abelian category is equivalent to a full subcategory of such a category of modules (Mit |
https://en.wikipedia.org/wiki/Negative%20binomial%20distribution | In probability theory and statistics, the negative binomial distribution is a discrete probability distribution that models the number of failures in a sequence of independent and identically distributed Bernoulli trials before a specified (non-random) number of successes (denoted $r$) occurs. For example, we can define rolling a 6 on a die as a success, and rolling any other number as a failure, and ask how many failure rolls will occur before we see the third success ($r = 3$). In such a case, the probability distribution of the number of failures that appear will be a negative binomial distribution.
An alternative formulation is to model the number of total trials (instead of the number of failures). In fact, for a specified (non-random) number of successes (r), the number of failures (n − r) is random because the total number of trials (n) is random. For example, we could use the negative binomial distribution to model the number of days n (random) a certain machine works (specified by r) before it breaks down.
The Pascal distribution (after Blaise Pascal) and Polya distribution (for George Pólya) are special cases of the negative binomial distribution. A convention among engineers, climatologists, and others is to use "negative binomial" or "Pascal" for the case of an integer-valued stopping-time parameter ($r$) and use "Polya" for the real-valued case.
For occurrences of associated discrete events, like tornado outbreaks, the Polya distributions can be used to give more accurate models than the Poisson distribution by allowing the mean and variance to be different, unlike the Poisson. The negative binomial distribution has a variance $\mu/p$, with the distribution becoming identical to Poisson in the limit $p \to 1$ for a given mean $\mu$ (i.e. when the failures are increasingly rare). This can make the distribution a useful overdispersed alternative to the Poisson distribution, for example for a robust modification of Poisson regression. In epidemiology, it has been used to model disease transmission for infectious diseases where the likely number of onward infections may vary considerably from individual to individual and from setting to setting. More generally, it may be appropriate where events have positively correlated occurrences causing a larger variance than if the occurrences were independent, due to a positive covariance term.
The term "negative binomial" is likely due to the fact that a certain binomial coefficient that appears in the formula for the probability mass function of the distribution can be written more simply with negative numbers.
Definitions
Imagine a sequence of independent Bernoulli trials: each trial has two potential outcomes called "success" and "failure." In each trial the probability of success is $p$ and of failure is $1 - p$. We observe this sequence until a predefined number $r$ of successes occurs. Then the random number of observed failures, $X$, follows the negative binomial (or Pascal) distribution:
$$X \sim \mathrm{NB}(r, p).$$
Probability mass function
The probability mass f |
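Although the row is truncated before the probability mass function, the standard form is $\Pr(X = k) = \binom{k+r-1}{k}(1-p)^k p^r$. The sketch below (my own, reusing the die example's $r = 3$, $p = 1/6$) checks this formula against a Monte Carlo simulation of Bernoulli trials:

```python
# Count failures before the r-th success and compare empirical frequencies
# with the closed-form negative binomial PMF.
import numpy as np
from math import comb

def nb_pmf(k: int, r: int, p: float) -> float:
    return comb(k + r - 1, k) * (1 - p) ** k * p ** r

rng = np.random.default_rng(1)

def sample_failures(r: int, p: float) -> int:
    failures = successes = 0
    while successes < r:
        if rng.random() < p:
            successes += 1
        else:
            failures += 1
    return failures

r, p, n = 3, 1 / 6, 100_000          # e.g. waiting for the third 6 on a die
draws = [sample_failures(r, p) for _ in range(n)]
for k in (0, 5, 10, 15):
    emp = sum(d == k for d in draws) / n
    print(f"k={k:2d}: simulated {emp:.4f} vs exact {nb_pmf(k, r, p):.4f}")
```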
https://en.wikipedia.org/wiki/Lp%20space |
In mathematics, the $L^p$ spaces are function spaces defined using a natural generalization of the $p$-norm for finite-dimensional vector spaces. They are sometimes called Lebesgue spaces, named after Henri Lebesgue, although according to the Bourbaki group they were first introduced by Frigyes Riesz.
$L^p$ spaces form an important class of Banach spaces in functional analysis, and of topological vector spaces. Because of their key role in the mathematical analysis of measure and probability spaces, Lebesgue spaces are used also in the theoretical discussion of problems in physics, statistics, economics, finance, engineering, and other disciplines.
Applications
Statistics
In statistics, measures of central tendency and statistical dispersion, such as the mean, median, and standard deviation, are defined in terms of $L^p$ metrics, and measures of central tendency can be characterized as solutions to variational problems.
In penalized regression, "L1 penalty" and "L2 penalty" refer to penalizing either the $L^1$ norm of a solution's vector of parameter values (i.e. the sum of its absolute values), or its $L^2$ norm (its Euclidean length). Techniques which use an L1 penalty, like LASSO, encourage solutions where many parameters are zero. Techniques which use an L2 penalty, like ridge regression, encourage solutions where most parameter values are small. Elastic net regularization uses a penalty term that is a combination of the $L^1$ norm and the $L^2$ norm of the parameter vector.
Hausdorff–Young inequality
The Fourier transform for the real line (or, for periodic functions, see Fourier series), maps $L^p(\mathbb{R})$ to $L^q(\mathbb{R})$ (or $L^p(\mathbf{T})$ to $\ell^q$) respectively, where $1 \le p \le 2$ and $\frac{1}{p} + \frac{1}{q} = 1$. This is a consequence of the Riesz–Thorin interpolation theorem, and is made precise with the Hausdorff–Young inequality.
By contrast, if $p > 2$, the Fourier transform does not map into $L^q$.
Hilbert spaces
Hilbert spaces are central to many applications, from quantum mechanics to stochastic calculus. The spaces $L^2$ and $\ell^2$ are both Hilbert spaces. In fact, by choosing a Hilbert basis $E$, i.e., a maximal orthonormal subset of $L^2$ or any Hilbert space, one sees that every Hilbert space is isometrically isomorphic to $\ell^2(E)$ (same $E$ as above), i.e., a Hilbert space of type $\ell^2$.
The -norm in finite dimensions
The length of a vector $x = (x_1, x_2, \ldots, x_n)$ in the $n$-dimensional real vector space $\mathbb{R}^n$ is usually given by the Euclidean norm:
$$\|x\|_2 = \left(x_1^2 + x_2^2 + \cdots + x_n^2\right)^{1/2}.$$
The Euclidean distance between two points $x$ and $y$ is the length $\|x - y\|_2$ of the straight line between the two points. In many situations, the Euclidean distance is insufficient for capturing the actual distances in a given space. An analogy to this is suggested by taxi drivers in a grid street plan who should measure distance not in terms of the length of the straight line to their destination, but in terms of the rectilinear distance, which takes into account that streets are either orthogonal or parallel to each other. The class of $p$-norms generalizes these two examples and has an abundance of applications in many parts of mathematics, physics, and |
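A short illustration (example values mine) of how the $p$-norm interpolates between the rectilinear, Euclidean, and maximum norms:

```python
# The p-norm of the same vector for several values of p.
import numpy as np

x = np.array([3.0, -4.0])
for p in (1, 2, 3, np.inf):
    print(f"p = {p}: ||x||_p = {np.linalg.norm(x, ord=p):.4f}")
# p = 1 gives 7.0 (rectilinear/taxicab distance), p = 2 gives 5.0 (straight
# line), and p = inf gives 4.0 (largest coordinate in absolute value).
```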
https://en.wikipedia.org/wiki/Injective%20function | In mathematics, an injective function (also known as injection, or one-to-one function) is a function $f$ that maps distinct elements of its domain to distinct elements; that is, $x_1 \ne x_2$ implies $f(x_1) \ne f(x_2)$. (Equivalently, $f(x_1) = f(x_2)$ implies $x_1 = x_2$ in the equivalent contrapositive statement.) In other words, every element of the function's codomain is the image of at most one element of its domain. The term one-to-one function must not be confused with one-to-one correspondence, which refers to bijective functions, which are functions such that each element in the codomain is an image of exactly one element in the domain.
A homomorphism between algebraic structures is a function that is compatible with the operations of the structures. For all common algebraic structures, and, in particular for vector spaces, an injective homomorphism is also called a monomorphism. However, in the more general context of category theory, the definition of a monomorphism differs from that of an injective homomorphism. It is thus a theorem that they are equivalent for algebraic structures; see Homomorphism § Monomorphism for more details.
A function that is not injective is sometimes called many-to-one.
Definition
Let $f$ be a function whose domain is a set $X$. The function $f$ is said to be injective provided that for all $a$ and $b$ in $X$, if $f(a) = f(b)$, then $a = b$; that is, $f(a) = f(b)$ implies $a = b$. Equivalently, if $a \ne b$, then $f(a) \ne f(b)$ in the contrapositive statement.
Symbolically,
$$\forall a, b \in X, \quad f(a) = f(b) \Rightarrow a = b,$$
which is logically equivalent to the contrapositive,
$$\forall a, b \in X, \quad a \ne b \Rightarrow f(a) \ne f(b).$$
Examples
For visual examples, readers are directed to the gallery section.
For any set $X$ and any subset $S \subseteq X$, the inclusion map $S \to X$ (which sends any element to itself) is injective. In particular, the identity function $X \to X$ is always injective (and in fact bijective).
If the domain of a function is the empty set, then the function is the empty function, which is injective.
If the domain of a function has one element (that is, it is a singleton set), then the function is always injective.
The function $f : \mathbb{R} \to \mathbb{R}$ defined by $f(x) = 2x + 1$ is injective.
The function $g : \mathbb{R} \to \mathbb{R}$ defined by $g(x) = x^2$ is not injective, because (for example) $g(1) = 1 = g(-1)$. However, if $g$ is redefined so that its domain is the non-negative real numbers [0,+∞), then $g$ is injective.
The exponential function $\exp : \mathbb{R} \to \mathbb{R}$ defined by $\exp(x) = e^x$ is injective (but not surjective, as no real value maps to a negative number).
The natural logarithm function $\ln : (0, \infty) \to \mathbb{R}$ defined by $x \mapsto \ln x$ is injective.
The function $h : \mathbb{R} \to \mathbb{R}$ defined by $h(x) = x^3 - x$ is not injective, since, for example, $h(0) = h(1) = 0$.
More generally, when $X$ and $Y$ are both the real line $\mathbb{R}$, then an injective function $f : \mathbb{R} \to \mathbb{R}$ is one whose graph is never intersected by any horizontal line more than once. This principle is referred to as the horizontal line test.
Injections can be undone
Functions with left inverses are always injections. That is, given $f : X \to Y$, if there is a function $g : Y \to X$ such that for every $x \in X$, $g(f(x)) = x$, then $f$ is injective. In this case, $g$ is called a retraction of $f$. Conversely, $f$ is called a section of $g$.
Conversely, every injection $f$ with a non-empty domain has a left inverse $g$ (see the sketch below). It can be defined by choosing an element $a$ in the domain of $f$ and setting $g(y)$ to the unique element of the pre-image $f^{-1}[y]$ (if it is non-empty) or to $a$ (otherwise).
The left inverse $g$ is not necessarily an inverse |
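A small sketch of this construction on finite sets (the dict encoding and names are mine):

```python
# Build a left inverse for an injective function given as a dict: invert the
# mapping where possible, and send unmatched codomain values to a default.
def left_inverse(f: dict, default):
    """Return g with g(f(x)) == x for all x in the domain of f."""
    inverse = {v: k for k, v in f.items()}   # well-defined since f is injective
    return lambda y: inverse.get(y, default)

f = {1: "a", 2: "b", 3: "c"}         # injective: all values distinct
g = left_inverse(f, default=1)       # default plays the role of the element a
assert all(g(f[x]) == x for x in f)  # g . f = identity on the domain
print(g("b"), g("z"))                # 2, then the fallback element 1
```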
https://en.wikipedia.org/wiki/Inverse%20element | In mathematics, the concept of an inverse element generalises the concepts of opposite ($-x$) and reciprocal ($1/x$) of numbers.
Given an operation denoted here $*$, and an identity element denoted $e$, if $x * y = e$, one says that $x$ is a left inverse of $y$, and that $y$ is a right inverse of $x$. (An identity element is an element such that $e * x = x$ and $y * e = y$ for all $x$ and $y$ for which the left-hand sides are defined.)
When the operation is associative, if an element has both a left inverse and a right inverse, then these two inverses are equal and unique; they are called the inverse element or simply the inverse. Often an adjective is added for specifying the operation, such as in additive inverse, multiplicative inverse, and functional inverse. In this case (associative operation), an invertible element is an element that has an inverse. In a ring, an invertible element, also called a unit, is an element that is invertible under multiplication (this is not ambiguous, as every element is invertible under addition).
Inverses are commonly used in groups, where every element is invertible, and rings, where invertible elements are also called units. They are also commonly used for operations that are not defined for all possible operands, such as inverse matrices and inverse functions. This has been generalized to category theory, where, by definition, an isomorphism is an invertible morphism.
The word 'inverse' is derived from Latin inversus, which means 'turned upside down', 'overturned'. This may take its origin from the case of fractions, where the (multiplicative) inverse is obtained by exchanging the numerator and the denominator (the inverse of $\tfrac{x}{y}$ is $\tfrac{y}{x}$).
Definitions and basic properties
The concepts of inverse element and invertible element are commonly defined for binary operations that are everywhere defined (that is, the operation is defined for any two elements of its domain). However, these concepts are commonly used with partial operations, that is operations that are not defined everywhere. Common examples are matrix multiplication, function composition and composition of morphisms in a category. It follows that the common definitions of associativity and identity element must be extended to partial operations; this is the object of the first subsections.
In this section, $X$ is a set (possibly a proper class) on which a partial operation (possibly total) is defined, which is denoted with $*$.
Associativity
A partial operation is associative if
$$x * (y * z) = (x * y) * z$$
for every $x, y, z$ in $X$ for which one of the members of the equality is defined; the equality means that the other member of the equality must also be defined.
Examples of non-total associative operations are multiplication of matrices of arbitrary size, and function composition.
Identity elements
Let $*$ be a possibly partial associative operation on a set $X$.
An identity element, or simply an identity, is an element $e$ such that
$$x * e = x \quad \text{and} \quad e * y = y$$
for every $x$ and $y$ for which the left-hand sides of the equalities are defined.
If $e$ and $f$ are two identity elements such that $e * f$ is defined, then $e = f$. (T |
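As a concrete illustration (my own example, not the article's), the multiplicative inverse modulo $n$ is an inverse element for multiplication mod $n$, with identity $e = 1$:

```python
# Python's built-in pow computes modular multiplicative inverses directly
# (for gcd(a, n) == 1); here every nonzero residue mod 7 is invertible.
n = 7
for a in range(1, n):
    inv = pow(a, -1, n)              # the inverse element of a under * mod n
    assert (a * inv) % n == 1        # a * a^{-1} = e, with identity e = 1
    print(f"{a}^-1 mod {n} = {inv}")
```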
https://en.wikipedia.org/wiki/Universal%20algebra | Universal algebra (sometimes called general algebra) is the field of mathematics that studies algebraic structures themselves, not examples ("models") of algebraic structures.
For instance, rather than take particular groups as the object of study, in universal algebra one takes the class of groups as an object of study.
Basic idea
In universal algebra, an algebra (or algebraic structure) is a set A together with a collection of operations on A. An n-ary operation on A is a function that takes n elements of A and returns a single element of A. Thus, a 0-ary operation (or nullary operation) can be represented simply as an element of A, or a constant, often denoted by a letter like a. A 1-ary operation (or unary operation) is simply a function from A to A, often denoted by a symbol placed in front of its argument, like ~x. A 2-ary operation (or binary operation) is often denoted by a symbol placed between its arguments (also called infix notation), like x ∗ y. Operations of higher or unspecified arity are usually denoted by function symbols, with the arguments placed in parentheses and separated by commas, like f(x,y,z) or f(x1,...,xn). One way of talking about an algebra, then, is by referring to it as an algebra of a certain type $\Omega$, where $\Omega$ is an ordered sequence of natural numbers representing the arity of the operations of the algebra. However, some researchers also allow infinitary operations, such as $\bigwedge_{\alpha \in J} x_\alpha$ where $J$ is an infinite index set, which is an operation in the algebraic theory of complete lattices.
Equations
After the operations have been specified, the nature of the algebra is further defined by axioms, which in universal algebra often take the form of identities, or equational laws. An example is the associative axiom for a binary operation, which is given by the equation x ∗ (y ∗ z) = (x ∗ y) ∗ z. The axiom is intended to hold for all elements x, y, and z of the set A.
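A minimal sketch (mine) of checking such an equational law exhaustively on a finite algebra:

```python
# Verify the associativity identity x*(y*z) == (x*y)*z over every triple of
# elements of a finite algebra: here Z_4 with addition mod 4 as its single
# binary operation.
from itertools import product

A = range(4)
op = lambda x, y: (x + y) % 4

assert all(op(x, op(y, z)) == op(op(x, y), z)
           for x, y, z in product(A, repeat=3))
print("the binary operation satisfies the associative law on A")
```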
Varieties
A collection of algebraic structures defined by identities is called a variety or equational class.
Restricting one's study to varieties rules out:
quantification, including universal quantification ($\forall$) except before an equation, and existential quantification ($\exists$)
logical connectives other than conjunction (∧)
relations other than equality, in particular inequalities, both $\ne$ and order relations
The study of equational classes can be seen as a special branch of model theory, typically dealing with structures having operations only (i.e. the type can have symbols for functions but not for relations other than equality), and in which the language used to talk about these structures uses equations only.
Not all algebraic structures in a wider sense fall into this scope. For example, ordered groups involve an ordering relation, so would not fall within this scope.
The class of fields is not an equational class because there is no type (or "signature") in which all field laws can be written as equations (inverses of elements are defined for all no |
https://en.wikipedia.org/wiki/Congruence | Congruence may refer to:
Mathematics
Congruence (geometry), being the same size and shape
Congruence or congruence relation, in abstract algebra, an equivalence relation on an algebraic structure that is compatible with the structure
In modular arithmetic, having the same remainder when divided by a specified integer
Ramanujan's congruences, congruences for the partition function, $p(n)$, first discovered by Ramanujan in 1919
Congruence subgroup, a subgroup defined by congruence conditions on the entries of a matrix group with integer entries
Congruence of squares, in number theory, a congruence commonly used in integer factorization algorithms
Matrix congruence, an equivalence relation between two matrices
Congruence (manifolds), in the theory of smooth manifolds, the set of integral curves defined by a nonvanishing vector field defined on the manifold
Congruence (general relativity), in general relativity, a congruence in a four-dimensional Lorentzian manifold that is interpreted physically as a model of space time, or a bundle of world lines
Zeller's congruence, an algorithm to calculate the day of the week for any date
Scissors congruence, related to Hilbert's third problem
Mineralogy and chemistry
In mineralogy and chemistry, the term congruent (or incongruent) may refer to:
Congruent dissolution: substances dissolve congruently when the composition of the solid and the dissolved solute stoichiometrically match
Congruent melting occurs during melting of a compound when the composition of the liquid that forms is the same as the composition of the solid
Incongruent transition, in chemistry, is a mass transition between two phases which involves a change in chemical composition
Psychology
In Carl Rogers' personality theory, the compliance between ideal self and actual self; see Carl Rogers § Incongruence
Mood congruence between feeling or emotion (in psychiatry and psychology)
Incongruity theory of humor
See also
Congruence bias, a type of cognitive bias, similar to confirmation bias
Congruence principle (disambiguation)
Hatch mark, geometric notation for congruent line segments
≅
≡ (disambiguation)
≃ |
https://en.wikipedia.org/wiki/Subalgebra | In mathematics, a subalgebra is a subset of an algebra, closed under all its operations, and carrying the induced operations.
"Algebra", when referring to a structure, often means a vector space or module equipped with an additional bilinear operation. Algebras in universal algebra are far more general: they are a common generalisation of all algebraic structures. "Subalgebra" can refer to either case.
Subalgebras for algebras over a ring or field
A subalgebra of an algebra over a commutative ring or field is a vector subspace which is closed under the multiplication of vectors. The restriction of the algebra multiplication makes it an algebra over the same ring or field. This notion also applies to most specializations, where the multiplication must satisfy additional properties, e.g. to associative algebras or to Lie algebras. Only for unital algebras is there a stronger notion, of unital subalgebra, for which it is also required that the unit of the subalgebra be the unit of the bigger algebra.
Example
The 2×2-matrices over the reals form a unital algebra in the obvious way. The 2×2-matrices for which all entries are zero, except for the first one on the diagonal, form a subalgebra. It is also unital, but it is not a unital subalgebra.
Subalgebras in universal algebra
In universal algebra, a subalgebra of an algebra A is a subset S of A that also has the structure of an algebra of the same type when the algebraic operations are restricted to S. If the axioms of a kind of algebraic structure are described by equational laws, as is typically the case in universal algebra, then the only thing that needs to be checked is that S is closed under the operations.
Some authors consider algebras with partial functions. There are various ways of defining subalgebras for these. Another generalization of algebras is to allow relations. These more general algebras are usually called structures, and they are studied in model theory and in theoretical computer science. For structures with relations there are notions of weak and of induced substructures.
Example
For example, the standard signature for groups in universal algebra is $(\times, {}^{-1}, e)$. (Inversion and unit are needed to get the right notions of homomorphism and so that the group laws can be expressed as equations.) Therefore, a subgroup of a group G is a subset S of G such that (as checked computationally in the sketch after this list):
the identity e of G belongs to S (so that S is closed under the identity constant operation);
whenever x belongs to S, so does x−1 (so that S is closed under the inverse operation);
whenever x and y belong to S, so does $x \times y$ (so that S is closed under the group's multiplication operation).
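A small computational sketch of these three closure conditions (the encoding of $\mathbb{Z}_6$ as residues under addition mod 6 is my own choice):

```python
# Check the three subgroup closure conditions for a subset of Z_6 under
# addition mod 6, with identity 0 and inverse (-x) mod 6.
def is_subgroup(S: set, n: int) -> bool:
    return (0 in S                                   # contains the identity
            and all((-x) % n in S for x in S)        # closed under inverses
            and all((x + y) % n in S for x in S for y in S))  # closed under op

print(is_subgroup({0, 2, 4}, 6))   # True: the even residues form a subgroup
print(is_subgroup({0, 1}, 6))      # False: 1 + 1 = 2 is not in the subset
```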
References
Universal algebra |
https://en.wikipedia.org/wiki/Kernel%20%28algebra%29 | In algebra, the kernel of a homomorphism (function that preserves the structure) is generally the inverse image of 0 (except for groups whose operation is denoted multiplicatively, where the kernel is the inverse image of 1). An important special case is the kernel of a linear map. The kernel of a matrix, also called the null space, is the kernel of the linear map defined by the matrix.
The kernel of a homomorphism is reduced to 0 (or 1) if and only if the homomorphism is injective, that is if the inverse image of every element consists of a single element. This means that the kernel can be viewed as a measure of the degree to which the homomorphism fails to be injective.
For some types of structure, such as abelian groups and vector spaces, the possible kernels are exactly the substructures of the same type. This is not always the case, and, sometimes, the possible kernels have received a special name, such as normal subgroup for groups and two-sided ideals for rings.
Kernels allow defining quotient objects (also called quotient algebras in universal algebra, and cokernels in category theory). For many types of algebraic structure, the fundamental theorem on homomorphisms (or first isomorphism theorem) states that the image of a homomorphism is isomorphic to the quotient by the kernel.
The concept of a kernel has been extended to structures such that the inverse image of a single element is not sufficient for deciding whether a homomorphism is injective. In these cases, the kernel is a congruence relation.
This article is a survey for some important types of kernels in algebraic structures.
Survey of examples
Linear maps
Let V and W be vector spaces over a field (or more generally, modules over a ring) and let T be a linear map from V to W. If 0W is the zero vector of W, then the kernel of T is the preimage of the zero subspace {0W}; that is, the subset of V consisting of all those elements of V that are mapped by T to the element 0W. The kernel is usually denoted as $\ker T$, or some variation thereof:
$$\ker T = \{\, v \in V : T(v) = 0_W \,\}.$$
Since a linear map preserves zero vectors, the zero vector 0V of V must belong to the kernel. The transformation T is injective if and only if its kernel is reduced to the zero subspace.
The kernel ker T is always a linear subspace of V. Thus, it makes sense to speak of the quotient space $V/\ker T$. The first isomorphism theorem for vector spaces states that this quotient space is naturally isomorphic to the image of T (which is a subspace of W). As a consequence, the dimension of V equals the dimension of the kernel plus the dimension of the image.
If V and W are finite-dimensional and bases have been chosen, then T can be described by a matrix M, and the kernel can be computed by solving the homogeneous system of linear equations $M\mathbf{v} = \mathbf{0}$. In this case, the kernel of T may be identified to the kernel of the matrix M, also called "null space" of M. The dimension of the null space, called the nullity of M, is given by the number of columns of M minus the ran |
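A numerical sketch (matrix and names mine) computing a null space and checking the rank plus nullity count against the number of columns:

```python
# Compute the kernel (null space) of a matrix numerically.
import numpy as np
from scipy.linalg import null_space

M = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])      # rank 1, so the nullity should be 2
K = null_space(M)                    # columns: orthonormal basis of ker M
print("nullity:", K.shape[1])        # 2
print("rank + nullity:", np.linalg.matrix_rank(M) + K.shape[1])  # 3 = # columns
assert np.allclose(M @ K, 0)         # every kernel vector maps to zero
```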
https://en.wikipedia.org/wiki/Isomorphism%20theorems | In mathematics, specifically abstract algebra, the isomorphism theorems (also known as Noether's isomorphism theorems) are theorems that describe the relationship between quotients, homomorphisms, and subobjects. Versions of the theorems exist for groups, rings, vector spaces, modules, Lie algebras, and various other algebraic structures. In universal algebra, the isomorphism theorems can be generalized to the context of algebras and congruences.
History
The isomorphism theorems were formulated in some generality for homomorphisms of modules by Emmy Noether in her paper Abstrakter Aufbau der Idealtheorie in algebraischen Zahl- und Funktionenkörpern, which was published in 1927 in Mathematische Annalen. Less general versions of these theorems can be found in work of Richard Dedekind and previous papers by Noether.
Three years later, B.L. van der Waerden published his influential Moderne Algebra, the first abstract algebra textbook that took the groups-rings-fields approach to the subject. Van der Waerden credited lectures by Noether on group theory and Emil Artin on algebra, as well as a seminar conducted by Artin, Wilhelm Blaschke, Otto Schreier, and van der Waerden himself on ideals as the main references. The three isomorphism theorems, called homomorphism theorem, and two laws of isomorphism when applied to groups, appear explicitly.
Groups
We first present the isomorphism theorems of the groups.
Note on numbers and names
Below we present four theorems, labelled A, B, C and D. They are often numbered as "First isomorphism theorem", "Second..." and so on; however, there is no universal agreement on the numbering. Here we give some examples of the group isomorphism theorems in the literature. Notice that these theorems have analogs for rings and modules.
It is less common to include the Theorem D, usually known as the lattice theorem or the correspondence theorem, as one of isomorphism theorems, but when included, it is the last one.
Statement of the theorems
Theorem A (groups)
Let G and H be groups, and let f : G → H be a homomorphism. Then:
The kernel of f is a normal subgroup of G,
The image of f is a subgroup of H, and
The image of f is isomorphic to the quotient group G / ker(f).
In particular, if f is surjective then H is isomorphic to G / ker(f).
This theorem is usually called the first isomorphism theorem.
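A finite sanity check of Theorem A (my own toy example with cyclic groups, not from the article):

```python
# For f: Z_12 -> Z_4 given by f(x) = x mod 4, the kernel is {0, 4, 8} and the
# quotient Z_12 / ker(f) has 12/3 = 4 cosets, matching the image of f.
G = range(12)
f = lambda x: x % 4

kernel = {x for x in G if f(x) == 0}
image = {f(x) for x in G}
cosets = {frozenset((x + k) % 12 for k in kernel) for x in G}
print(sorted(kernel))                 # [0, 4, 8]
print(len(cosets), len(image))        # 4 4 : |G / ker f| == |im f|
```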
Theorem B (groups)
Let $G$ be a group. Let $S$ be a subgroup of $G$, and let $N$ be a normal subgroup of $G$. Then the following hold:
The product $SN$ is a subgroup of $G$,
The subgroup $N$ is a normal subgroup of $SN$,
The intersection $S \cap N$ is a normal subgroup of $S$, and
The quotient groups $(SN)/N$ and $S/(S \cap N)$ are isomorphic.
Technically, it is not necessary for $N$ to be a normal subgroup, as long as $S$ is a subgroup of the normalizer of $N$ in $G$. In this case, $N$ is not a normal subgroup of $G$, but $N$ is still a normal subgroup of the product $SN$.
This theorem is sometimes called the second isomorphism theorem, diamond theorem or the parallelogram theorem.
An applica |
https://en.wikipedia.org/wiki/Measure%20space | A measure space is a basic object of measure theory, a branch of mathematics that studies generalized notions of volumes. It contains an underlying set, the subsets of this set that are feasible for measuring (the -algebra) and the method that is used for measuring (the measure). One important example of a measure space is a probability space.
A measurable space consists of the first two components without a specific measure.
Definition
A measure space is a triple $(X, \mathcal{A}, \mu)$, where
$X$ is a set
$\mathcal{A}$ is a $\sigma$-algebra on the set $X$
$\mu$ is a measure on $(X, \mathcal{A})$
In other words, a measure space consists of a measurable space together with a measure on it.
Example
Set $X = \{0, 1\}$. The $\sigma$-algebra on finite sets such as the one above is usually the power set, which is the set of all subsets (of a given set) and is denoted by $\wp(\cdot)$. Sticking with this convention, we set $\mathcal{A} = \wp(X)$.
In this simple case, the power set can be written down explicitly:
$$\wp(X) = \{\varnothing, \{0\}, \{1\}, \{0, 1\}\}.$$
As the measure, define $\mu$ by
$$\mu(\{0\}) = \mu(\{1\}) = \tfrac{1}{2},$$
so $\mu(X) = 1$ (by additivity of measures) and $\mu(\varnothing) = 0$ (by definition of measures).
This leads to the measure space $(X, \wp(X), \mu)$. It is a probability space, since $\mu(X) = 1$. The measure $\mu$ corresponds to the Bernoulli distribution with $p = \tfrac{1}{2}$, which is for example used to model a fair coin flip.
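The same example can be spelled out in code (a hedged sketch; the dict encoding of the measure is mine):

```python
# The measure on the power set of X = {0, 1} for a fair coin flip.
from itertools import combinations

X = frozenset({0, 1})
power_set = [frozenset(c) for r in range(len(X) + 1)
             for c in combinations(X, r)]

point_mass = {0: 0.5, 1: 0.5}                 # mu({0}) = mu({1}) = 1/2
mu = lambda A: sum(point_mass[x] for x in A)  # additivity over singletons

for A in power_set:
    print(set(A) or "{}", "->", mu(A))        # {} -> 0.0 ... {0, 1} -> 1.0
assert mu(X) == 1.0                           # hence a probability space
```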
Important classes of measure spaces
Most important classes of measure spaces are defined by the properties of their associated measures. This includes
Probability spaces, a measure space where the measure is a probability measure
Finite measure spaces, where the measure is a finite measure
$\sigma$-finite measure spaces, where the measure is a $\sigma$-finite measure
Another class of measure spaces are the complete measure spaces.
References
Measure theory
Space (mathematics) |
https://en.wikipedia.org/wiki/Clifford%20algebra | In mathematics, a Clifford algebra is an algebra generated by a vector space with a quadratic form, and is a unital associative algebra. As -algebras, they generalize the real numbers, complex numbers, quaternions and several other hypercomplex number systems. The theory of Clifford algebras is intimately connected with the theory of quadratic forms and orthogonal transformations. Clifford algebras have important applications in a variety of fields including geometry, theoretical physics and digital image processing. They are named after the English mathematician William Kingdon Clifford (1845–1879).
The most familiar Clifford algebras, the orthogonal Clifford algebras, are also referred to as (pseudo-)Riemannian Clifford algebras, as distinct from symplectic Clifford algebras.
Introduction and basic properties
A Clifford algebra is a unital associative algebra that contains and is generated by a vector space $V$ over a field $K$, where $V$ is equipped with a quadratic form $Q : V \to K$. The Clifford algebra $\operatorname{Cl}(V, Q)$ is the "freest" unital associative algebra generated by $V$ subject to the condition
$$v^2 = Q(v)1 \quad \text{for all } v \in V,$$
where the product on the left is that of the algebra, and the $1$ is its multiplicative identity. The idea of being the "freest" or "most general" algebra subject to this identity can be formally expressed through the notion of a universal property, as done below.
When $V$ is a finite-dimensional real vector space and $Q$ is nondegenerate, $\operatorname{Cl}(V, Q)$ may be identified by the label $\operatorname{Cl}_{p,q}(\mathbb{R})$, indicating that $V$ has an orthogonal basis with $p$ elements with $e_i^2 = +1$, $q$ with $e_i^2 = -1$, and where $\mathbb{R}$ indicates that this is a Clifford algebra over the reals; i.e. coefficients of elements of the algebra are real numbers. This basis may be found by orthogonal diagonalization.
The free algebra generated by $V$ may be written as the tensor algebra $\bigoplus_{n \ge 0} V^{\otimes n}$, that is, the direct sum of the tensor product of $n$ copies of $V$ over all $n$. Therefore one obtains a Clifford algebra as the quotient of this tensor algebra by the two-sided ideal generated by elements of the form $v \otimes v - Q(v)1$ for all elements $v \in V$. The product induced by the tensor product in the quotient algebra is written using juxtaposition (e.g. $uv$). Its associativity follows from the associativity of the tensor product.
The Clifford algebra has a distinguished subspace $V$, being the image of the embedding map. Such a subspace cannot in general be uniquely determined given only a $K$-algebra isomorphic to the Clifford algebra.
If the characteristic of the ground field $K$ is not $2$, then one can rewrite the fundamental identity above in the form
$$uv + vu = 2\langle u, v\rangle 1 \quad \text{for all } u, v \in V,$$
where
$$\langle u, v\rangle = \frac{1}{2}\left(Q(u + v) - Q(u) - Q(v)\right)$$
is the symmetric bilinear form associated with $Q$, via the polarization identity.
Quadratic forms and Clifford algebras in characteristic $2$ form an exceptional case. In particular, if $\operatorname{char}(K) = 2$ it is not true that a quadratic form uniquely determines a symmetric bilinear form satisfying $Q(v) = \langle v, v\rangle$, nor that every quadratic form admits an orthogonal basis. Many of the statements in this article include the condition that the characteristic is not $2$, and are false if this condition is remove |
https://en.wikipedia.org/wiki/Probability%20measure | In mathematics, a probability measure is a real-valued function defined on a set of events in a probability space that satisfies measure properties such as countable additivity. The difference between a probability measure and the more general notion of measure (which includes concepts like area or volume) is that a probability measure must assign value 1 to the entire probability space.
Intuitively, the additivity property says that the probability assigned to the union of two disjoint (mutually exclusive) events by the measure should be the sum of the probabilities of the events; for example, the value assigned to the outcome "1 or 2" in a throw of a die should be the sum of the values assigned to the outcomes "1" and "2".
Probability measures have applications in diverse fields, from physics to finance and biology.
Definition
The requirements for a set function $\mu$ to be a probability measure on a probability space are that:
$\mu$ must return results in the unit interval $[0, 1]$, returning $0$ for the empty set and $1$ for the entire space.
$\mu$ must satisfy the countable additivity property that for all countable collections $E_1, E_2, \ldots$ of pairwise disjoint sets:
$$\mu\left(\bigcup_{i} E_i\right) = \sum_{i} \mu(E_i).$$
For example, given three elements 1, 2 and 3 with probabilities $\tfrac{1}{4}$, $\tfrac{1}{4}$ and $\tfrac{1}{2}$, the value assigned to $\{1, 3\}$ is $\tfrac{1}{4} + \tfrac{1}{2} = \tfrac{3}{4}$.
The conditional probability based on the intersection of events, defined as:
$$\mu(B \mid A) = \frac{\mu(A \cap B)}{\mu(A)},$$
satisfies the probability measure requirements so long as $\mu(A)$ is not zero.
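A small sketch (encoding mine) of a finite probability measure and its conditional measure, reusing the three-element example above:

```python
# A probability measure on subsets of {1, 2, 3}, plus conditioning.
weights = {1: 0.25, 2: 0.25, 3: 0.5}

def mu(event: set) -> float:
    return sum(weights[x] for x in event)

def mu_given(event: set, cond: set) -> float:
    return mu(event & cond) / mu(cond)    # requires mu(cond) != 0

print(mu({1, 3}))                         # 0.75
print(mu_given({1}, {1, 3}))              # 0.25 / 0.75 = 1/3
print(mu_given({1, 3}, {1, 3}))           # 1.0: the whole space has measure 1
```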
Probability measures are distinct from the more general notion of fuzzy measures, in which there is no requirement that the fuzzy values sum up to $1$, and the additive property is replaced by an order relation based on set inclusion.
Example applications
Market measures which assign probabilities to financial market spaces based on actual market movements are examples of probability measures which are of interest in mathematical finance; for example, in the pricing of financial derivatives. For instance, a risk-neutral measure is a probability measure which assumes that the current value of assets is the expected value of the future payoff taken with respect to that same risk neutral measure (i.e. calculated using the corresponding risk neutral density function), and discounted at the risk-free rate. If there is a unique probability measure that must be used to price assets in a market, then the market is called a complete market.
Not all measures that intuitively represent chance or likelihood are probability measures. For instance, although the fundamental concept of a system in statistical mechanics is a measure space, such measures are not always probability measures. In general, in statistical physics, if we consider sentences of the form "the probability of a system S assuming state A is p" the geometry of the system does not always lead to the definition of a probability measure under congruence, although it may do so in the case of systems with just one degree of freedom.
Probability measures are also used in mathematical bio |
https://en.wikipedia.org/wiki/Dedekind%20cut | In mathematics, Dedekind cuts, named after German mathematician Richard Dedekind but previously considered by Joseph Bertrand, are a method of construction of the real numbers from the rational numbers. A Dedekind cut is a partition of the rational numbers into two sets A and B, such that all elements of A are less than all elements of B, and A contains no greatest element. The set B may or may not have a smallest element among the rationals. If B has a smallest element among the rationals, the cut corresponds to that rational. Otherwise, that cut defines a unique irrational number which, loosely speaking, fills the "gap" between A and B. In other words, A contains every rational number less than the cut, and B contains every rational number greater than or equal to the cut. An irrational cut is equated to an irrational number which is in neither set. Every real number, rational or not, is equated to one and only one cut of rationals.
Dedekind cuts can be generalized from the rational numbers to any totally ordered set by defining a Dedekind cut as a partition of a totally ordered set into two non-empty parts A and B, such that A is closed downwards (meaning that for all a in A, x ≤ a implies that x is in A as well) and B is closed upwards, and A contains no greatest element. See also completeness (order theory).
It is straightforward to show that a Dedekind cut among the real numbers is uniquely defined by the corresponding cut among the rational numbers. Similarly, every cut of reals is identical to the cut produced by a specific real number (which can be identified as the smallest element of the B set). In other words, the number line where every real number is defined as a Dedekind cut of rationals is a complete continuum without any further gaps.
Definition
A Dedekind cut is a partition of the rationals $\mathbb{Q}$ into two subsets $A$ and $B$ such that
$A$ is nonempty.
$A \ne \mathbb{Q}$ (equivalently, $B$ is nonempty).
If $x \in A$, $y \in \mathbb{Q}$, and $y < x$, then $y \in A$. ($A$ is "closed downwards".)
If $x \in A$, then there exists a $y \in A$ such that $y > x$. ($A$ does not contain a greatest element.)
By omitting the first two requirements, we formally obtain the extended real number line.
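A sketch of a concrete cut (my own example): the lower set $A$ for $\sqrt{2}$, checked with exact rational arithmetic:

```python
# A rational q is in the lower set A of the cut for sqrt(2) iff q < 0 or
# q*q < 2; B is the complement. Fractions keep the arithmetic exact.
from fractions import Fraction

def in_A(q: Fraction) -> bool:
    return q < 0 or q * q < 2

q = Fraction(7, 5)                   # 1.4 is in A, since 49/25 < 2
assert in_A(q) and in_A(q - 1)       # A is closed downwards

# A has no greatest element: (2q + 2)/(q + 2) is larger yet still in A.
better = (2 * q + 2) / (q + 2)
assert in_A(better) and better > q
print(q, "<", better, "- both in A")
```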
Representations
It is more symmetrical to use the (A, B) notation for Dedekind cuts, but each of A and B does determine the other. It can be a simplification, in terms of notation if nothing more, to concentrate on one "half" — say, the lower one — and call any downward closed set A without greatest element a "Dedekind cut".
If the ordered set S is complete, then, for every Dedekind cut (A, B) of S, the set B must have a minimal element b,
hence we must have that A is the interval (−∞, b), and B the interval [b, +∞).
In this case, we say that b is represented by the cut (A, B).
The important purpose of the Dedekind cut is to work with number sets that are not complete. The cut itself can represent a number not in the original collection of numbers (most often rational numbers). The cut can represent a number b, even though the numbers |
https://en.wikipedia.org/wiki/Inverse%20transform%20sampling | Inverse transform sampling (also known as inversion sampling, the inverse probability integral transform, the inverse transformation method, Smirnov transform, or the golden rule) is a basic method for pseudo-random number sampling, i.e., for generating sample numbers at random from any probability distribution given its cumulative distribution function.
Inverse transformation sampling takes uniform samples of a number $u$ between 0 and 1, interpreted as a probability, and then returns the smallest number $x$ such that $F(x) \ge u$ for the cumulative distribution function $F$ of a random variable. For example, imagine that $F$ is the standard normal distribution with mean zero and standard deviation one. The table below shows samples taken from the uniform distribution and their representation on the standard normal distribution.
We are randomly choosing a proportion of the area under the curve and returning the number $x$ in the domain such that exactly this proportion of the area occurs to the left of that number. Intuitively, we are unlikely to choose a number in the far ends of the tails because there is very little area in them, which would require choosing a number very close to zero or one.
Computationally, this method involves computing the quantile function of the distribution — in other words, computing the cumulative distribution function (CDF) of the distribution (which maps a number in the domain to a probability between 0 and 1) and then inverting that function. This is the source of the term "inverse" or "inversion" in most of the names for this method. Note that for a discrete distribution, computing the CDF is not in general too difficult: we simply add up the individual probabilities for the various points of the distribution. For a continuous distribution, however, we need to integrate the probability density function (PDF) of the distribution, which is impossible to do analytically for most distributions (including the normal distribution). As a result, this method may be computationally inefficient for many distributions and other methods are preferred; however, it is a useful method for building more generally applicable samplers such as those based on rejection sampling.
For the normal distribution, the lack of an analytical expression for the corresponding quantile function means that other methods (e.g. the Box–Muller transform) may be preferred computationally. It is often the case that, even for simple distributions, the inverse transform sampling method can be improved on: see, for example, the ziggurat algorithm and rejection sampling. On the other hand, it is possible to approximate the quantile function of the normal distribution extremely accurately using moderate-degree polynomials, and in fact the method of doing this is fast enough that inversion sampling is now the default method for sampling from a normal distribution in the statistical package R.
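A hedged sketch (distribution choice, rate, and seed are mine): for the exponential distribution the CDF $F(x) = 1 - e^{-\lambda x}$ inverts in closed form, so inversion sampling is immediate:

```python
# Inverse transform sampling for Exponential(lambda): push uniform draws
# through the inverse CDF F^{-1}(u) = -ln(1 - u) / lambda.
import numpy as np

rng = np.random.default_rng(42)
lam = 2.0
u = rng.uniform(size=100_000)            # uniform samples on (0, 1)
x = -np.log(1.0 - u) / lam               # exponential samples

print(f"sample mean {x.mean():.4f} vs theoretical 1/lambda   = {1/lam:.4f}")
print(f"sample var  {x.var():.4f} vs theoretical 1/lambda^2 = {1/lam**2:.4f}")
```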
Formal statement
For any random variable $X$, the random variable $F_X^{-1}(U)$ has the same law as $X$, wh |
https://en.wikipedia.org/wiki/Topological%20vector%20space | In mathematics, a topological vector space (also called a linear topological space and commonly abbreviated TVS or t.v.s.) is one of the basic structures investigated in functional analysis.
A topological vector space is a vector space that is also a topological space with the property that the vector space operations (vector addition and scalar multiplication) are also continuous functions. Such a topology is called a vector topology and every topological vector space has a uniform topological structure, allowing a notion of uniform convergence and completeness. Some authors also require that the space is a Hausdorff space (although this article does not). One of the most widely studied categories of TVSs are locally convex topological vector spaces. This article focuses on TVSs that are not necessarily locally convex. Banach spaces, Hilbert spaces and Sobolev spaces are other well-known examples of TVSs.
Many topological vector spaces are spaces of functions, or linear operators acting on topological vector spaces, and the topology is often defined so as to capture a particular notion of convergence of sequences of functions.
In this article, the scalar field of a topological vector space will be assumed to be either the complex numbers or the real numbers unless clearly stated otherwise.
Motivation
Normed spaces
Every normed vector space has a natural topological structure: the norm induces a metric and the metric induces a topology.
This is a topological vector space because:
The vector addition map $X \times X \to X$ defined by $(x, y) \mapsto x + y$ is (jointly) continuous with respect to this topology. This follows directly from the triangle inequality obeyed by the norm.
The scalar multiplication map $\mathbb{K} \times X \to X$ defined by $(s, x) \mapsto s \cdot x$, where $\mathbb{K}$ is the underlying scalar field of $X$, is (jointly) continuous. This follows from the triangle inequality and homogeneity of the norm.
Thus all Banach spaces and Hilbert spaces are examples of topological vector spaces.
Non-normed spaces
There are topological vector spaces whose topology is not induced by a norm, but are still of interest in analysis. Examples of such spaces are spaces of holomorphic functions on an open domain, spaces of infinitely differentiable functions, the Schwartz spaces, and spaces of test functions and the spaces of distributions on them. These are all examples of Montel spaces. An infinite-dimensional Montel space is never normable. The existence of a norm for a given topological vector space is characterized by Kolmogorov's normability criterion.
A topological field is a topological vector space over each of its subfields.
Definition
A topological vector space (TVS) is a vector space over a topological field (most often the real or complex numbers with their standard topologies) that is endowed with a topology such that vector addition and scalar multiplication are continuous functions (where the domains of these functions are endowed with product topologies). Such a topology is called a vector topology or a TVS topology on X.
Every topological vector space is al |
https://en.wikipedia.org/wiki/Where%20Mathematics%20Comes%20From | Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being (hereinafter WMCF) is a book by George Lakoff, a cognitive linguist, and Rafael E. Núñez, a psychologist. Published in 2000, WMCF seeks to found a cognitive science of mathematics, a theory of embodied mathematics based on conceptual metaphor.
WMCF definition of mathematics
Mathematics makes up that part of the human conceptual system that is special in the following way:
It is precise, consistent, stable across time and human communities, symbolizable, calculable, generalizable, universally available, consistent within each of its subject matters, and effective as a general tool for description, explanation, and prediction in a vast number of everyday activities, [ranging from] sports, to building, business, technology, and science. - WMCF, pp. 50, 377
Nikolay Lobachevsky said "There is no branch of mathematics, however abstract, which may not some day be applied to phenomena of the real world." A common type of conceptual blending process would seem to apply to the entire mathematical procession.
Human cognition and mathematics
Lakoff and Núñez's avowed purpose is to begin laying the foundations for a truly scientific understanding of mathematics, one grounded in processes common to all human cognition. They find that four distinct but related processes metaphorically structure basic arithmetic: object collection, object construction, using a measuring stick, and moving along a path.
WMCF builds on earlier books by Lakoff (1987) and Lakoff and Johnson (1980, 1999), which analyze concepts such as metaphor and image schemata from second-generation cognitive science. Some of the concepts in these earlier books, such as the interesting technical ideas in Lakoff (1987), are absent from WMCF.
Lakoff and Núñez hold that mathematics results from the human cognitive apparatus and must therefore be understood in cognitive terms. WMCF advocates (and includes some examples of) a cognitive idea analysis of mathematics which analyzes mathematical ideas in terms of the human experiences, metaphors, generalizations, and other cognitive mechanisms giving rise to them. A standard mathematical education does not develop such idea analysis techniques because it does not pursue considerations of A) what structures of the mind allow it to do mathematics or B) the philosophy of mathematics.
Lakoff and Núñez start by reviewing the psychological literature, concluding that human beings appear to have an innate ability, called subitizing, to count, add, and subtract up to about 4 or 5. They document this conclusion by reviewing the literature, published in recent decades, describing experiments with infant subjects. For example, infants quickly become excited or curious when presented with "impossible" situations, such as having three toys appear when only two were initially present.
The authors argue that mathematics goes far beyond this very elementary level due to a large numb |
https://en.wikipedia.org/wiki/Hurwitz%20polynomial | In mathematics, a Hurwitz polynomial, named after Adolf Hurwitz, is a polynomial whose roots (zeros) are located in the left half-plane of the complex plane or on the imaginary axis, that is, the real part of every root is zero or negative. Such a polynomial must have coefficients that are positive real numbers. The term is sometimes restricted to polynomials whose roots have real parts that are strictly negative, excluding the imaginary axis (i.e., a Hurwitz stable polynomial).
A polynomial function P(s) of a complex variable s is said to be Hurwitz if the following conditions are satisfied:
1. P(s) is real when s is real.
2. The roots of P(s) have real parts which are zero or negative.
Hurwitz polynomials are important in control systems theory, because they represent the characteristic equations of stable linear systems. Whether a polynomial is Hurwitz can be determined by solving the equation to find the roots, or from the coefficients without solving the equation by the Routh–Hurwitz stability criterion.
Examples
A simple example of a Hurwitz polynomial is:
x^2 + 2x + 1.
The only real solution is −1, because it factors as
(x + 1)^2.
In general, all quadratic polynomials with positive coefficients are Hurwitz.
This follows directly from the quadratic formula:
x = (−b ± √(b^2 − 4ac)) / (2a),
where, if the discriminant b^2 − 4ac is less than zero, then the polynomial will have two complex-conjugate solutions with real part −b/(2a), which is negative for positive a and b.
If the discriminant is equal to zero, there will be two coinciding real solutions at −b/(2a). Finally, if the discriminant is greater than zero, there will be two real negative solutions,
because √(b^2 − 4ac) < √(b^2) = b for positive a, b and c.
Properties
For a polynomial to be Hurwitz, it is necessary but not sufficient that all of its coefficients be positive (except for quadratic polynomials, which also imply sufficiency). A necessary and sufficient condition that a polynomial is Hurwitz is that it passes the Routh–Hurwitz stability criterion. A given polynomial can be efficiently tested to be Hurwitz or not by using the Routh continued fraction expansion technique.
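A brute-force numerical way to test the definition, sketched in Python with numpy's root finder (for high degrees the Routh–Hurwitz criterion is preferable, since it avoids root-finding entirely; the function name and tolerance are illustrative):

```python
import numpy as np

def is_hurwitz(coeffs, strict=False, tol=1e-9):
    """Numerically test whether a real polynomial (coefficients listed
    from the highest degree down) has all roots in the closed (or open,
    if strict) left half-plane."""
    roots = np.roots(coeffs)
    bound = -tol if strict else tol
    return bool(np.all(roots.real < bound))

print(is_hurwitz([1, 2, 1], strict=True))    # (x + 1)^2: True
print(is_hurwitz([1, 1, 1, 1]))              # roots -1 and +/-i: True (axis allowed)
print(is_hurwitz([1, 1, 1, 1], strict=True)) # False: +/-i lie on the imaginary axis
print(is_hurwitz([1, -3, 2]))                # roots 1 and 2: False
```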
References
Wayne H. Chen (1964) Linear Network Design and Synthesis, page 63, McGraw Hill.
Polynomials |
https://en.wikipedia.org/wiki/Fermat%20pseudoprime | In number theory, the Fermat pseudoprimes make up the most important class of pseudoprimes that come from Fermat's little theorem.
Definition
Fermat's little theorem states that if p is prime and a is coprime to p, then a^(p−1) − 1 is divisible by p. For an integer a > 1, if a composite integer x divides a^(x−1) − 1, then x is called a Fermat pseudoprime to base a.
In other words, a composite integer is a Fermat pseudoprime to base a if it successfully passes the Fermat primality test for the base a. The false statement that all numbers passing the Fermat primality test for base 2 are prime is called the Chinese hypothesis.
The smallest base-2 Fermat pseudoprime is 341. It is not a prime, since it equals 11·31, but it satisfies Fermat's little theorem: 2^340 ≡ 1 (mod 341), and thus passes the Fermat primality test for the base 2.
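This check is cheap to reproduce with built-in modular exponentiation; a Python sketch (the function name is illustrative):

```python
def is_fermat_probable_prime(n, base=2):
    """Fermat primality test: a prime n must satisfy base^(n-1) = 1 (mod n)."""
    return pow(base, n - 1, n) == 1

print(is_fermat_probable_prime(341))  # True, yet 341 is composite
print(341 == 11 * 31)                 # True: the smallest base-2 pseudoprime
```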
Pseudoprimes to base 2 are sometimes called Sarrus numbers, after P. F. Sarrus who discovered that 341 has this property, Poulet numbers, after P. Poulet who made a table of such numbers, or Fermatians.
A Fermat pseudoprime is often called a pseudoprime, with the modifier Fermat being understood.
An integer x that is a Fermat pseudoprime for all values of a that are coprime to x is called a Carmichael number.
Properties
Distribution
There are infinitely many pseudoprimes to any given base a > 1. In 1904, Cipolla showed how to produce an infinite number of pseudoprimes base a > 1: Let A = (a^p − 1)/(a − 1) and let B = (a^p + 1)/(a + 1), where p is any odd prime. Then n = AB is composite, and is a pseudoprime to base a. For example, if a = 2 and p = 5, then A = 31, B = 11, and n = 341 is a pseudoprime to base 2.
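Cipolla's construction is easy to replay in code; a Python sketch (the function name is illustrative, and p is assumed to be an odd prime as in the statement above):

```python
def cipolla_pseudoprime(a, p):
    """Cipolla's construction: n = A*B with A = (a^p - 1)/(a - 1) and
    B = (a^p + 1)/(a + 1) is a composite Fermat pseudoprime to base a."""
    A = (a**p - 1) // (a - 1)
    B = (a**p + 1) // (a + 1)
    return A * B

n = cipolla_pseudoprime(2, 5)
print(n)                      # 341, matching the example in the text
print(pow(2, n - 1, n) == 1)  # True: passes the base-2 Fermat test
```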
In fact, there are infinitely many strong pseudoprimes to any base greater than 1 and infinitely many Carmichael numbers, but they are comparatively rare. There are three pseudoprimes to base 2 below 1000, 245 below one million, and 21853 less than 25·10^9. There are 4842 strong pseudoprimes base 2 and 2163 Carmichael numbers below this limit.
Starting at 17·257, the product of consecutive Fermat numbers is a base-2 pseudoprime, and so are all Fermat composites and Mersenne composites.
Factorizations
The factorizations of the 60 Poulet numbers up to 60787, including 13 Carmichael numbers (in bold), are in the following table.
A Poulet number all of whose divisors d divide 2^d − 2 is called a super-Poulet number. There are infinitely many Poulet numbers which are not super-Poulet numbers.
Smallest Fermat pseudoprimes
The smallest pseudoprime for each base a ≤ 200 is given in the following table; the colors mark the number of prime factors. Unlike in the definition at the start of the article, pseudoprimes below a are excluded in the table.
List of Fermat pseudoprimes in fixed base n
For more information (bases 31 to 100) and for all bases up to 150, see the table of Fermat pseudoprimes (text in German).
https://en.wikipedia.org/wiki/Exponential%20distribution | In probability theory and statistics, the exponential distribution or negative exponential distribution is the probability distribution of the time between events in a Poisson point process, i.e., a process in which events occur continuously and independently at a constant average rate. It is a particular case of the gamma distribution. It is the continuous analogue of the geometric distribution, and it has the key property of being memoryless. In addition to being used for the analysis of Poisson point processes it is found in various other contexts.
The exponential distribution is not the same as the class of exponential families of distributions. This is a large class of probability distributions that includes the exponential distribution as one of its members, but also includes many other distributions, like the normal, binomial, gamma, and Poisson distributions.
Definitions
Probability density function
The probability density function (pdf) of an exponential distribution is
f(x; λ) = λe^(−λx) for x ≥ 0, and f(x; λ) = 0 for x < 0.
Here λ > 0 is the parameter of the distribution, often called the rate parameter. The distribution is supported on the interval [0, ∞). If a random variable X has this distribution, we write X ~ Exp(λ).
The exponential distribution exhibits infinite divisibility.
Cumulative distribution function
The cumulative distribution function is given by
F(x; λ) = 1 − e^(−λx) for x ≥ 0, and F(x; λ) = 0 for x < 0.
Alternative parametrization
The exponential distribution is sometimes parametrized in terms of the scale parameter β = 1/λ, which is also the mean:
f(x; β) = (1/β)e^(−x/β) for x ≥ 0.
Properties
Mean, variance, moments, and median
The mean or expected value of an exponentially distributed random variable X with rate parameter λ is given by
E[X] = 1/λ.
In light of the examples given below, this makes sense: if you receive phone calls at an average rate of 2 per hour, then you can expect to wait half an hour for every call.
The variance of X is given by
Var[X] = 1/λ^2,
so the standard deviation is equal to the mean.
The moments of X, for n ∈ ℕ, are given by
E[X^n] = n!/λ^n.
The central moments of X, for n ∈ ℕ, are given by
μ_n = !n/λ^n,
where !n is the subfactorial of n.
The median of X is given by
m[X] = ln(2)/λ < E[X],
where ln refers to the natural logarithm. Thus the absolute difference between the mean and median is
|E[X] − m[X]| = (1 − ln 2)/λ < 1/λ = σ[X],
in accordance with the median-mean inequality.
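A quick simulation sketch in Python (using the standard library's random.expovariate; the rate and sample size are illustrative choices) that checks the mean, standard deviation, and median against these formulas:

```python
import math
import random

rate = 2.0
draws = sorted(random.expovariate(rate) for _ in range(200_000))

mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / len(draws)
median = draws[len(draws) // 2]

print(mean, 1 / rate)              # mean ~ 1/lambda
print(math.sqrt(var), 1 / rate)    # standard deviation equals the mean
print(median, math.log(2) / rate)  # median ~ ln(2)/lambda
```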
Memorylessness property of exponential random variable
An exponentially distributed random variable T obeys the relation
Pr(T > s + t | T > s) = Pr(T > t) for all s, t ≥ 0.
This can be seen by considering the complementary cumulative distribution function:
Pr(T > s + t | T > s) = Pr(T > s + t)/Pr(T > s) = e^(−λ(s+t))/e^(−λs) = e^(−λt) = Pr(T > t).
When T is interpreted as the waiting time for an event to occur relative to some initial time, this relation implies that, if T is conditioned on a failure to observe the event over some initial period of time s, the distribution of the remaining waiting time is the same as the original unconditional distribution. For example, if an event has not occurred after 30 seconds, the conditional probability that occurrence will take at least 10 more seconds is equal to the unconditional probability of observing the event more than 10 seconds after the initial time.
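The memorylessness property can also be checked by simulation; a Python sketch (the rate and the cutoffs 10, 30 and 40 seconds are arbitrary illustrative choices):

```python
import random

rate = 0.05  # illustrative rate: mean waiting time 20 s
draws = [random.expovariate(rate) for _ in range(200_000)]

p_tail = sum(t > 10 for t in draws) / len(draws)
survivors = [t for t in draws if t > 30]
p_cond = sum(t > 40 for t in survivors) / len(survivors)

print(p_tail)  # P(T > 10), about exp(-0.5) ~ 0.61
print(p_cond)  # P(T > 40 | T > 30): approximately the same value
```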
The exponential distribution and the geome |
https://en.wikipedia.org/wiki/Geometric%20distribution | In probability theory and statistics, the geometric distribution is either one of two discrete probability distributions:
The probability distribution of the number X of Bernoulli trials needed to get one success, supported on the set {1, 2, 3, ...};
The probability distribution of the number Y = X − 1 of failures before the first success, supported on the set {0, 1, 2, ...}.
Which of these is called the geometric distribution is a matter of convention and convenience.
These two different geometric distributions should not be confused with each other. Often, the name shifted geometric distribution is adopted for the former one (the distribution of X); however, to avoid ambiguity, it is considered wise to indicate which is intended, by mentioning the support explicitly.
The geometric distribution gives the probability that the first occurrence of success requires k independent trials, each with success probability p. If the probability of success on each trial is p, then the probability that the k-th trial is the first success is
Pr(X = k) = (1 − p)^(k−1)·p
for k = 1, 2, 3, 4, ...
The above form of the geometric distribution is used for modeling the number of trials up to and including the first success. By contrast, the following form of the geometric distribution is used for modeling the number of failures until the first success:
Pr(Y = k) = Pr(X = k + 1) = (1 − p)^k·p
for k = 0, 1, 2, 3, ...
In either case, the sequence of probabilities is a geometric sequence.
For example, suppose an ordinary die is thrown repeatedly until the first time a "1" appears. The probability distribution of the number of times it is thrown is supported on the infinite set {1, 2, 3, ...} and is a geometric distribution with p = 1/6.
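A sketch of this die experiment in Python (function name and seed are illustrative):

```python
import random

rng = random.Random(0)

def throws_until_one():
    """Count die throws up to and including the first '1'."""
    n = 1
    while rng.randint(1, 6) != 1:
        n += 1
    return n

samples = [throws_until_one() for _ in range(100_000)]
print(sum(samples) / len(samples))                  # ~ 1/p = 6
print(sum(s == 1 for s in samples) / len(samples))  # ~ p = 1/6
```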
The geometric distribution is denoted by Geo(p), where 0 < p ≤ 1.
Definitions
Consider a sequence of trials, where each trial has only two possible outcomes (designated failure and success). The probability of success is assumed to be the same for each trial. In such a sequence of trials, the geometric distribution is useful to model the number of failures before the first success since the experiment can have an indefinite number of trials until success, unlike the binomial distribution which has a set number of trials. The distribution gives the probability that there are zero failures before the first success, one failure before the first success, two failures before the first success, and so on.
Assumptions: When is the geometric distribution an appropriate model?
The geometric distribution is an appropriate model if the following assumptions are true.
The phenomenon being modelled is a sequence of independent trials.
There are only two possible outcomes for each trial, often designated success or failure.
The probability of success, p, is the same for every trial.
If these conditions are true, then the geometric random variable Y is the count of the number of failures before the first success. The possible number of failures before the first success is 0, 1, 2, 3, and so on. In the graphs above, this formulation is shown on the right.
An alternative formulation is that the geometric random var |
https://en.wikipedia.org/wiki/Gerhard%20Gentzen | Gerhard Karl Erich Gentzen (24 November 1909 – 4 August 1945) was a German mathematician and logician. He made major contributions to the foundations of mathematics, proof theory, especially on natural deduction and sequent calculus. He died of starvation in a Czech prison camp in Prague in 1945, having been interned as a German national after the Second World War.
Life and career
Gentzen was a student of Paul Bernays at the University of Göttingen. Bernays was fired as "non-Aryan" in April 1933 and therefore Hermann Weyl formally acted as his supervisor. Gentzen joined the Sturmabteilung in November 1933, although he was by no means compelled to do so. Nevertheless, he kept in contact with Bernays until the beginning of the Second World War. In 1935, he corresponded with Abraham Fraenkel in Jerusalem and was implicated by the Nazi teachers' union as one who "keeps contacts to the Chosen People." In 1935 and 1936, Hermann Weyl, head of the Göttingen mathematics department in 1933 until his resignation under Nazi pressure, made strong efforts to bring him to the Institute for Advanced Study in Princeton.
Between November 1935 and 1939 he was an assistant of David Hilbert in Göttingen. Gentzen joined the Nazi Party in 1937. In April 1939 Gentzen swore the oath of loyalty to Adolf Hitler as part of his academic appointment. From 1943 he was a teacher at the German Charles-Ferdinand University of Prague. Under a contract from the SS, Gentzen worked for the V-2 project.
Gentzen was arrested during the citizens uprising against the occupying German forces on 5 May 1945. He, along with the rest of the staff of the German University in Prague were detained in a Soviet prison camp, where he died of starvation on 4 August 1945.
Work
Gentzen's main work was on the foundations of mathematics, in proof theory, specifically natural deduction and the sequent calculus. His cut-elimination theorem is the cornerstone of proof-theoretic semantics, and some philosophical remarks in his "Investigations into Logical Deduction", together with Ludwig Wittgenstein's later work, constitute the starting point for inferential role semantics.
One of Gentzen's papers had a second publication in the ideological Deutsche Mathematik that was founded by Ludwig Bieberbach who promoted "Aryan" mathematics.
Gentzen proved the consistency of the Peano axioms in a paper published in 1936. In his Habilitationsschrift, finished in 1939, he determined the proof-theoretical strength of Peano arithmetic. This was done by a direct proof of the unprovability of the principle of transfinite induction, used in his 1936 proof of consistency, within Peano arithmetic. The principle can, however, be expressed in arithmetic, so that a direct proof of Gödel's incompleteness theorem followed. Gödel used a coding procedure to construct an unprovable formula of arithmetic. Gentzen's proof was published in 1943 and marked the beginning of ordinal proof theory.
Publications
(Lecture h |
https://en.wikipedia.org/wiki/Kazimierz%20Kuratowski | Kazimierz Kuratowski (; 2 February 1896 – 18 June 1980) was a Polish mathematician and logician. He was one of the leading representatives of the Warsaw School of Mathematics. He worked as a professor at the University of Warsaw and at the Mathematical Institute of the Polish Academy of Sciences (IM PAN). Between 1946 and 1953, he served as President of the Polish Mathematical Society.
He is primarily known for his contributions to set theory, topology, measure theory and graph theory. Some of the notable mathematical concepts bearing Kuratowski's name include Kuratowski's theorem, Kuratowski closure axioms, Kuratowski-Zorn lemma and Kuratowski's intersection theorem.
Biography and studies
Kazimierz Kuratowski was born in Warsaw, (then part of Congress Poland controlled by the Russian Empire), on 2 February 1896, into an assimilated Jewish family. He was a son of Marek Kuratow, a barrister, and Róża Karzewska. He completed a Warsaw secondary school, which was named after general Paweł Chrzanowski. In 1913, he enrolled in an engineering course at the University of Glasgow in Scotland, in part because he did not wish to study in Russian; instruction in Polish was prohibited. He completed only one year of study when the outbreak of World War I precluded any further enrolment. In 1915, Russian forces withdrew from Warsaw and Warsaw University was reopened with Polish as the language of instruction. Kuratowski restarted his university education there the same year, this time in mathematics. He obtained his Ph.D. in 1921, in the newly established Second Polish Republic.
Doctoral thesis
In autumn 1921 Kuratowski was awarded the Ph.D. degree for his groundbreaking work. His thesis statement consisted of two parts. One was devoted to an axiomatic construction of topology via the closure axioms. This first part (republished in a slightly modified form in 1922) has been cited in hundreds of scientific articles.
The second part of Kuratowski's thesis was devoted to continua irreducible between two points. This was the subject of a French doctoral thesis written by Zygmunt Janiszewski. Since Janiszewski was deceased, Kuratowski's supervisor was Stefan Mazurkiewicz. Kuratowski's thesis solved certain problems in set theory raised by a Belgian mathematician, Charles-Jean Étienne Gustave Nicolas, Baron de la Vallée Poussin.
Academic career until World War II
Two years later, in 1923, Kuratowski was appointed deputy professor of mathematics at Warsaw University. He was then appointed a full professor of mathematics at Lwów Polytechnic in Lwów, in 1927. He was the head of the Mathematics department there until 1933. Kuratowski was also dean of the department twice. In 1929, Kuratowski became a member of the Warsaw Scientific Society.
While Kuratowski associated with many of the scholars of the Lwów School of Mathematics, such as Stefan Banach and Stanislaw Ulam, and the circle of mathematicians based around the Scottish Café he kept close connections with |
https://en.wikipedia.org/wiki/Simpson%27s%20paradox | Simpson's paradox is a phenomenon in probability and statistics in which a trend appears in several groups of data but disappears or reverses when the groups are combined. This result is often encountered in social-science and medical-science statistics, and is particularly problematic when frequency data are unduly given causal interpretations. The paradox can be resolved when confounding variables and causal relations are appropriately addressed in the statistical modeling (e.g., through cluster analysis).
Simpson's paradox has been used to illustrate the kind of misleading results that the misuse of statistics can generate.
Edward H. Simpson first described this phenomenon in a technical paper in 1951, but the statisticians Karl Pearson (in 1899) and Udny Yule (in 1903) had mentioned similar effects earlier. The name Simpson's paradox was introduced by Colin R. Blyth in 1972. It is also referred to as Simpson's reversal, the Yule–Simpson effect, the amalgamation paradox, or the reversal paradox.
Mathematician Jordan Ellenberg argues that Simpson's paradox is misnamed as "there's no contradiction involved, just two different ways to think about the same data" and suggests that its lesson "isn't really to tell us which viewpoint to take but to insist that we keep both the parts and the whole in mind at once."
Examples
UC Berkeley gender bias
One of the best-known examples of Simpson's paradox comes from a study of gender bias among graduate school admissions to University of California, Berkeley. The admission figures for the fall of 1973 showed that men applying were more likely than women to be admitted, and the difference was so large that it was unlikely to be due to chance.
However, when taking into account the information about the departments being applied to, the different rejection percentages reveal the different difficulty of getting into each department, and at the same time show that women tended to apply to more competitive departments with lower rates of admission, even among qualified applicants (such as in the English department), whereas men tended to apply to less competitive departments with higher rates of admission (such as in the engineering department). The pooled and corrected data showed a "small but statistically significant bias in favor of women".
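The reversal is easy to reproduce with toy numbers; the counts in the Python sketch below are made up for illustration and are not the actual Berkeley figures:

```python
# Made-up admissions counts (admitted, applicants), for illustration only.
# Within each department women are admitted at a higher rate, yet the
# pooled rate reverses because more women apply to the harder department.
data = {
    "easy dept": {"men": (80, 100), "women": (18, 20)},
    "hard dept": {"men": (4, 20),   "women": (30, 100)},
}

for dept, groups in data.items():
    for sex, (adm, app) in groups.items():
        print(f"{dept:9s} {sex:5s} {adm / app:.0%}")

for sex in ("men", "women"):
    adm = sum(data[d][sex][0] for d in data)
    app = sum(data[d][sex][1] for d in data)
    print(f"pooled    {sex:5s} {adm / app:.0%}")  # men 70%, women 40%
```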
The data from the six largest departments are listed below:
The entire data showed a total of 4 out of 85 departments to be significantly biased against women, while 6 were significantly biased against men (not all present in the 'six largest departments' table above). Notably, the numbers of biased departments were not the basis for the conclusion; rather, it was the gender admissions pooled across all departments, weighted by each department's rejection rate across all of its applicants.
Kidney stone treatment
Another example comes from a real-life medical study comparing the success rates of two treatments for kidney stones. The table below shows the suc |
https://en.wikipedia.org/wiki/Rafael%20E.%20N%C3%BA%C3%B1ez | Rafael E. Núñez is a professor of cognitive science at the University of California, San Diego and a proponent of embodied cognition. He co-authored Where Mathematics Comes From with George Lakoff.
External links
Academic home page
Rafael E. Núñez, Eve Sweetser (2006). "With the Future Behind Them: Convergent Evidence From Aymara Language and Gesture in the Crosslinguistic Comparison of Spatial Construals of Time". (An analysis of the temporal vision in the Aymara culture.)
Mathematical cognition researchers
American mathematicians
Living people
University of California, San Diego faculty
Year of birth missing (living people)
20th-century American writers
21st-century American writers |
https://en.wikipedia.org/wiki/Philosophy%20of%20mathematics | The philosophy of mathematics is the branch of philosophy that studies the assumptions, foundations, and implications of mathematics. It aims to understand the nature and methods of mathematics, and find out the place of mathematics in people's lives. The logical and structural nature of mathematics makes this branch of philosophy broad and unique.
The philosophy of mathematics has two major themes: mathematical realism and mathematical anti-realism.
History
The origin of mathematics is subject to arguments and disagreements. Whether the birth of mathematics was by chance or induced by necessity during the development of similar subjects, such as physics, remains an area of contention.
Many thinkers have contributed their ideas concerning the nature of mathematics. Today, some philosophers of mathematics aim to give accounts of this form of inquiry and its products as they stand, while others emphasize a role for themselves that goes beyond simple interpretation to critical analysis. There are traditions of mathematical philosophy in both Western philosophy and Eastern philosophy. Western philosophies of mathematics go as far back as Pythagoras, who described the theory "everything is mathematics" (mathematicism), Plato, who paraphrased Pythagoras, and studied the ontological status of mathematical objects, and Aristotle, who studied logic and issues related to infinity (actual versus potential).
Greek philosophy on mathematics was strongly influenced by their study of geometry. For example, at one time, the Greeks held the opinion that 1 (one) was not a number, but rather a unit of arbitrary length. A number was defined as a multitude. Therefore, 3, for example, represented a certain multitude of units, and was thus "truly" a number. At another point, a similar argument was made that 2 was not a number but a fundamental notion of a pair. These views come from the heavily geometric straight-edge-and-compass viewpoint of the Greeks: just as lines drawn in a geometric problem are measured in proportion to the first arbitrarily drawn line, so too are the numbers on a number line measured in proportion to the arbitrary first "number" or "one".
These earlier Greek ideas of numbers were later upended by the discovery of the irrationality of the square root of two. Hippasus, a disciple of Pythagoras, showed that the diagonal of a unit square was incommensurable with its (unit-length) edge: in other words he proved there was no existing (rational) number that accurately depicts the proportion of the diagonal of the unit square to its edge. This caused a significant re-evaluation of Greek philosophy of mathematics. According to legend, fellow Pythagoreans were so traumatized by this discovery that they murdered Hippasus to stop him from spreading his heretical idea. Simon Stevin was one of the first in Europe to challenge Greek ideas in the 16th century. Beginning with Leibniz, the focus shifted strongly to the relationship between mathematics and logic. T |
https://en.wikipedia.org/wiki/Banach%20fixed-point%20theorem | In mathematics, the Banach fixed-point theorem (also known as the contraction mapping theorem or contractive mapping theorem or Banach-Caccioppoli theorem) is an important tool in the theory of metric spaces; it guarantees the existence and uniqueness of fixed points of certain self-maps of metric spaces, and provides a constructive method to find those fixed points. It can be understood as an abstract formulation of Picard's method of successive approximations. The theorem is named after Stefan Banach (1892–1945) who first stated it in 1922.
Statement
Definition. Let (X, d) be a metric space. Then a map T : X → X is called a contraction mapping on X if there exists q ∈ [0, 1) such that
d(T(x), T(y)) ≤ q·d(x, y)
for all x, y ∈ X.
Banach Fixed Point Theorem. Let (X, d) be a non-empty complete metric space with a contraction mapping T : X → X. Then T admits a unique fixed-point x* in X (i.e. T(x*) = x*). Furthermore, x* can be found as follows: start with an arbitrary element x0 ∈ X and define a sequence (xn) by xn = T(xn−1) for n ≥ 1. Then xn → x*.
Remark 1. The following inequalities are equivalent and describe the speed of convergence:
d(x*, xn) ≤ (q^n/(1 − q))·d(x1, x0),
d(x*, xn+1) ≤ (q/(1 − q))·d(xn+1, xn),
d(x*, xn+1) ≤ q·d(x*, xn).
Any such value of q is called a Lipschitz constant for T, and the smallest one is sometimes called "the best Lipschitz constant" of T.
Remark 2. The condition d(T(x), T(y)) < d(x, y) for all x ≠ y is in general not enough to ensure the existence of a fixed point, as is shown by the map
T : [1, ∞) → [1, ∞) with T(x) = x + 1/x,
which lacks a fixed point. However, if X is compact, then this weaker assumption does imply the existence and uniqueness of a fixed point, that can be easily found as a minimizer of x ↦ d(x, T(x)); indeed, a minimizer exists by compactness, and has to be a fixed point of T. It then easily follows that the fixed point is the limit of any sequence of iterations of T.
Remark 3. When using the theorem in practice, the most difficult part is typically to define X properly so that T(X) ⊆ X.
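A minimal Python sketch of the iteration in the theorem, applied to the cosine map on [0, 1] (an illustrative choice, not an example from the article; cos is a contraction there because |cos'| = |sin| ≤ sin(1) < 1 on that interval):

```python
import math

def fixed_point(T, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = T(x_n) until successive terms agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence")

# cos maps [0, 1] into itself and is a contraction there, so the
# iteration converges to the unique solution of cos(x) = x.
print(fixed_point(math.cos, 0.5))  # ~ 0.7390851332151607
```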
Proof
Let x0 ∈ X be arbitrary and define a sequence (xn) by setting xn = T(xn−1). We first note that for all n ∈ ℕ we have the inequality
d(xn+1, xn) ≤ q^n·d(x1, x0).
This follows by induction on n, using the fact that T is a contraction mapping. Then we can show that (xn) is a Cauchy sequence. In particular, let m, n ∈ ℕ such that m > n:
d(xm, xn) ≤ d(xm, xm−1) + ⋯ + d(xn+1, xn) ≤ (q^(m−1) + ⋯ + q^n)·d(x1, x0) ≤ q^n·d(x1, x0)/(1 − q).
Let ε > 0 be arbitrary. Since q ∈ [0, 1), we can find a large N ∈ ℕ so that
q^N < ε(1 − q)/d(x1, x0).
Therefore, by choosing m and n greater than N we may write:
d(xm, xn) ≤ q^n·d(x1, x0)/(1 − q) < ε.
This proves that the sequence is Cauchy. By completeness of (X, d), the sequence has a limit x* ∈ X. Furthermore, x* must be a fixed point of T:
x* = lim xn = lim T(xn−1) = T(lim xn−1) = T(x*).
As a contraction mapping, T is continuous, so bringing the limit inside T was justified. Lastly, T cannot have more than one fixed point in (X, d), since any pair of distinct fixed points p1 and p2 would contradict the contraction of T:
d(p1, p2) = d(T(p1), T(p2)) ≤ q·d(p1, p2) < d(p1, p2).
Applications
A standard application is the proof of the Picard–Lindelöf theorem about the existence and uniqueness of solutions to certain ordinary differential equations. The sought solution of the differential equation is expressed as a fixed point of a suitable integral operator which changes continuous functions into continuous functions. The Banach fixed-point theorem is then used to show that this integral operator has a unique fixed point.
One con |
https://en.wikipedia.org/wiki/Euler%27s%20identity | In mathematics, Euler's identity (also known as Euler's equation) is the equality
e^(iπ) + 1 = 0,
where
e is Euler's number, the base of natural logarithms,
i is the imaginary unit, which by definition satisfies i^2 = −1, and
π is pi, the ratio of the circumference of a circle to its diameter.
Euler's identity is named after the Swiss mathematician Leonhard Euler. It is a special case of Euler's formula e^(ix) = cos x + i sin x when evaluated for x = π. Euler's identity is considered to be an exemplar of mathematical beauty as it shows a profound connection between the most fundamental numbers in mathematics. In addition, it is directly used in a proof that π is transcendental, which implies the impossibility of squaring the circle.
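The identity can be checked numerically, up to floating-point rounding; a short Python sketch:

```python
import cmath
import math

z = cmath.exp(1j * math.pi) + 1
print(z)                # ~ 1.2246e-16j, i.e. zero up to rounding error
print(abs(z) < 1e-15)   # True
```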
Mathematical beauty
Euler's identity is often cited as an example of deep mathematical beauty. Three of the basic arithmetic operations occur exactly once each: addition, multiplication, and exponentiation. The identity also links five fundamental mathematical constants:
The number 0, the additive identity.
The number 1, the multiplicative identity.
The number π (π = 3.1415...), the fundamental circle constant.
The number e (e = 2.718...), also known as Euler's number, which occurs widely in mathematical analysis.
The number i, the imaginary unit of the complex numbers.
Furthermore, the equation is given in the form of an expression set equal to zero, which is common practice in several areas of mathematics.
Stanford University mathematics professor Keith Devlin has said, "like a Shakespearean sonnet that captures the very essence of love, or a painting that brings out the beauty of the human form that is far more than just skin deep, Euler's equation reaches down into the very depths of existence". And Paul Nahin, a professor emeritus at the University of New Hampshire, who has written a book dedicated to Euler's formula and its applications in Fourier analysis, describes Euler's identity as being "of exquisite beauty".
Mathematics writer Constance Reid has opined that Euler's identity is "the most famous formula in all mathematics". And Benjamin Peirce, a 19th-century American philosopher, mathematician, and professor at Harvard University, after proving Euler's identity during a lecture, stated that the identity "is absolutely paradoxical; we cannot understand it, and we don't know what it means, but we have proved it, and therefore we know it must be the truth".
A poll of readers conducted by The Mathematical Intelligencer in 1990 named Euler's identity as the "most beautiful theorem in mathematics". In another poll of readers that was conducted by Physics World in 2004, Euler's identity tied with Maxwell's equations (of electromagnetism) as the "greatest equation ever".
At least three books in popular mathematics have been published about Euler's identity:
Dr. Euler's Fabulous Formula: Cures Many Mathematical Ills, by Paul Nahin (2011)
A Most Elegant Equation: Euler's formula and the beauty of mathematics, by David Stipp (2017)
Euler's Pioneering Equation: The m |
https://en.wikipedia.org/wiki/Sacred%20geometry | Sacred geometry ascribes symbolic and sacred meanings to certain geometric shapes and certain geometric proportions. It is associated with the belief of a divine creator of the universal geometer. The geometry used in the design and construction of religious structures such as churches, temples, mosques, religious monuments, altars, and tabernacles has sometimes been considered sacred. The concept applies also to sacred spaces such as temenoi, sacred groves, village greens, pagodas and holy wells, Mandala Gardens and the creation of religious and spiritual art.
As worldview and cosmology
The belief that a god created the universe according to a geometric plan has ancient origins. Plutarch attributed the belief to Plato, writing that "Plato said god geometrizes continually" (Convivialium disputationum, liber 8,2). In modern times, the mathematician Carl Friedrich Gauss adapted this quote, saying "God arithmetizes".
Johannes Kepler (1571–1630) believed in the geometric underpinnings of the cosmos. Harvard mathematician Shing-Tung Yau expressed a belief in the centrality of geometry in 2010:
"Lest one conclude that geometry is little more than a well-calibrated ruler – and this is no knock against the ruler, which happens to be a technology I admire – geometry is one of the main avenues available to us for probing the universe. Physics and cosmology have been, almost by definition, absolutely crucial for making sense of the universe. Geometry's role in this may be less obvious, but is equally vital. I would go so far as to say that geometry not only deserves a place at the table alongside physics and cosmology, but in many ways it is the table."
Natural forms
According to Stephen Skinner, the study of sacred geometry has its roots in the study of nature, and the mathematical principles at work therein. Many forms observed in nature can be related to geometry; for example, the chambered nautilus grows at a constant rate and so its shell forms a logarithmic spiral to accommodate that growth without changing shape. Also, honeybees construct hexagonal cells to hold their honey. These and other correspondences are sometimes interpreted in terms of sacred geometry and considered to be further proof of the natural significance of geometric forms.
Representations in Art and architecture
Geometric ratios, and geometric figures were often employed in the designs of ancient Egyptian, ancient Indian, Greek and Roman architecture. Medieval European cathedrals also incorporated symbolic geometry. Indian and Himalayan spiritual communities often constructed temples and fortifications on design plans of mandala and yantra. Mandala Vaatikas or Sacred Gardens were designed using the same principles.
Many of the sacred geometry principles of the human body and of ancient architecture were compiled into the Vitruvian Man drawing by Leonardo da Vinci. The latter drawing was itself based on the much older writings of the Roman architect Vitruvius.
In Buddhism
|
https://en.wikipedia.org/wiki/Continued%20fraction | In mathematics, a continued fraction is an expression obtained through an iterative process of representing a number as the sum of its integer part and the reciprocal of another number, then writing this other number as the sum of its integer part and another reciprocal, and so on. In a finite continued fraction (or terminated continued fraction), the iteration/recursion is terminated after finitely many steps by using an integer in lieu of another continued fraction. In contrast, an infinite continued fraction is an infinite expression. In either case, all integers in the sequence, other than the first, must be positive. The integers are called the coefficients or terms of the continued fraction.
It is generally assumed that the numerator of all of the fractions is 1. If arbitrary values and/or functions are used in place of one or more of the numerators or the integers in the denominators, the resulting expression is a generalized continued fraction. When it is necessary to distinguish the first form from generalized continued fractions, the former may be called a simple or regular continued fraction, or said to be in canonical form.
Continued fractions have a number of remarkable properties related to the Euclidean algorithm for integers or real numbers. Every rational number has two closely related expressions as a finite continued fraction, whose coefficients can be determined by applying the Euclidean algorithm to . The numerical value of an infinite continued fraction is irrational; it is defined from its infinite sequence of integers as the limit of a sequence of values for finite continued fractions. Each finite continued fraction of the sequence is obtained by using a finite prefix of the infinite continued fraction's defining sequence of integers. Moreover, every irrational number is the value of a unique infinite regular continued fraction, whose coefficients can be found using the non-terminating version of the Euclidean algorithm applied to the incommensurable values and 1. This way of expressing real numbers (rational and irrational) is called their continued fraction representation.
The term continued fraction may also refer to representations of rational functions, arising in their analytic theory. For this use of the term, see Padé approximation and Chebyshev rational functions.
Motivation and notation
Consider, for example, the rational number 415/93, which is around 4.4624. As a first approximation, start with 4, which is the integer part; 415/93 = 4 + 43/93. The fractional part, 43/93, is the reciprocal of 93/43, which is about 2.1628. Use the integer part, 2, as an approximation for the reciprocal to obtain a second approximation of 4 + 1/2 = 4.5;
the remaining fractional part of 93/43, namely 7/43, is the reciprocal of 43/7, and 43/7 is around 6.1429. Use 6 as an approximation for this to obtain 2 + 1/6 as an approximation for 93/43 and 4 + 1/(2 + 1/6), about 4.4615, as the third approximation. Further, 43/7 = 6 + 1/7. Finally, the fractional part, 1/7, is the reciprocal of 7, so its approximation in this scheme, 7, is exact and produces the exact expression 415/93 = 4 + 1/(2 + 1/(6 + 1/7)).
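The procedure just described is the Euclidean algorithm in disguise; a Python sketch (the function name is illustrative):

```python
from fractions import Fraction

def continued_fraction(x: Fraction):
    """Coefficients of the simple continued fraction of a rational number,
    obtained by repeatedly splitting off the integer part and taking the
    reciprocal of what remains."""
    coeffs = []
    while True:
        a, rem = divmod(x.numerator, x.denominator)
        coeffs.append(a)
        if rem == 0:
            return coeffs
        x = Fraction(x.denominator, rem)  # reciprocal of the fractional part

print(continued_fraction(Fraction(415, 93)))  # [4, 2, 6, 7]
```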
https://en.wikipedia.org/wiki/Catalan%27s%20constant | In mathematics, Catalan's constant G is defined by
G = β(2) = Σ_{n≥0} (−1)^n/(2n + 1)^2 = 1/1^2 − 1/3^2 + 1/5^2 − 1/7^2 + ⋯,
where β is the Dirichlet beta function. Its numerical value is approximately
G ≈ 0.915965594...
It is not known whether G is irrational, let alone transcendental. G has been called "arguably the most basic constant whose irrationality and transcendence (though strongly suspected) remain unproven".
Catalan's constant was named after Eugène Charles Catalan, who found quickly-converging series for its calculation and published a memoir on it in 1865.
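Since the defining series is alternating, a plain partial sum already gives many digits (the error is below the first omitted term); a Python sketch:

```python
def catalan_partial_sum(n_terms=200_000):
    """Partial sum of beta(2) = sum_{k>=0} (-1)^k / (2k+1)^2.
    For an alternating series the error is below the first omitted term,
    here roughly 1/(2*n_terms)^2 ~ 6e-12."""
    return sum((-1) ** k / (2 * k + 1) ** 2 for k in range(n_terms))

print(catalan_partial_sum())  # 0.91596559417..., Catalan's constant G
```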
Uses
In low-dimensional topology, Catalan's constant is 1/4 of the volume of an ideal hyperbolic octahedron, and therefore 1/4 of the hyperbolic volume of the complement of the Whitehead link. It is 1/8 of the volume of the complement of the Borromean rings.
In combinatorics and statistical mechanics, it arises in connection with counting domino tilings, spanning trees, and Hamiltonian cycles of grid graphs.
In number theory, Catalan's constant appears in a conjectured formula for the asymptotic number of primes of the form n^2 + 1, according to Hardy and Littlewood's Conjecture F. However, it is an unsolved problem (one of Landau's problems) whether there are even infinitely many primes of this form.
Catalan's constant also appears in the calculation of the mass distribution of spiral galaxies.
Known digits
The number of known digits of Catalan's constant has increased dramatically during the last decades. This is due both to the increase of performance of computers as well as to algorithmic improvements.
Integral identities
As Seán Stewart writes, "There is a rich and seemingly endless source of definite integrals that can be equated to or expressed in terms of Catalan's constant." Some of these expressions include:
where the last three formulas are related to Malmsten's integrals.
If is the complete elliptic integral of the first kind, as a function of the elliptic modulus , then
If is the complete elliptic integral of the second kind, as a function of the elliptic modulus , then
With the gamma function
The integral
is a known special function, called the inverse tangent integral, and was extensively studied by Srinivasa Ramanujan.
Relation to other special functions
G appears in values of the second polygamma function, also called the trigamma function, at fractional arguments:
ψ1(1/4) = π^2 + 8G,
ψ1(3/4) = π^2 − 8G.
Simon Plouffe gives an infinite collection of identities between the trigamma function, π^2 and Catalan's constant; these are expressible as paths on a graph.
Catalan's constant occurs frequently in relation to the Clausen function, the inverse tangent integral, the inverse sine integral, the Barnes -function, as well as integrals and series summable in terms of the aforementioned functions.
As a particular example, by first expressing the inverse tangent integral in its closed form – in terms of Clausen functions – and then expressing those Clausen functions in terms of the Barnes -function, the following expression is obtained (see Clausen function for more):
If one defines t |
https://en.wikipedia.org/wiki/Euler%20numbers | In mathematics, the Euler numbers are a sequence En of integers defined by the Taylor series expansion
1/cosh t = 2/(e^t + e^(−t)) = Σ_{n≥0} (En/n!)·t^n,
where cosh t is the hyperbolic cosine function. The Euler numbers are related to a special value of the Euler polynomials, namely:
En = 2^n·En(1/2).
The Euler numbers appear in the Taylor series expansions of the secant and hyperbolic secant functions. The latter is the function in the definition. They also occur in combinatorics, specifically when counting the number of alternating permutations of a set with an even number of elements.
Examples
The odd-indexed Euler numbers are all zero. The even-indexed ones have alternating signs. Some values are:
E0 = 1
E2 = −1
E4 = 5
E6 = −61
E8 = 1385
E10 = −50521
E12 = 2702765
E14 = −199360981
E16 = 19391512145
E18 = −2404879675441
Some authors re-index the sequence in order to omit the odd-numbered Euler numbers with value zero, or change all signs to positive. This article adheres to the convention adopted above.
Explicit formulas
In terms of Stirling numbers of the second kind
The following two formulas express the Euler numbers in terms of Stirling numbers of the second kind:
where denotes the Stirling numbers of the second kind, and denotes the rising factorial.
As a double sum
The following two formulas express the Euler numbers as double sums:
As an iterated sum
An explicit formula for Euler numbers is:
where i denotes the imaginary unit with i^2 = −1.
As a sum over partitions
The Euler number can be expressed as a sum over the even partitions of ,
as well as a sum over the odd partitions of ,
where in both cases and
is a multinomial coefficient. The Kronecker deltas in the above formulas restrict the sums over the s to and to , respectively.
As an example,
As a determinant
is given by the determinant
As an integral
is also given by the following integrals:
Congruences
W. Zhang obtained the following combinational identities concerning the Euler numbers, for any prime , we have
W. Zhang and Z. Xu proved that, for any prime and integer , we have
where is the Euler's totient function.
Asymptotic approximation
The Euler numbers grow quite rapidly for large indices as
they have the following lower bound
Euler zigzag numbers
The Taylor series of sec x + tan x is
Σ_{n≥0} (An/n!)·x^n,
where An is the Euler zigzag numbers, beginning with
1, 1, 1, 2, 5, 16, 61, 272, 1385, 7936, 50521, 353792, 2702765, 22368256, 199360981, 1903757312, 19391512145, 209865342976, 2404879675441, 29088885112832, ...
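These values can be generated with the boustrophedon (Seidel–Entringer) triangle; a Python sketch (this construction is an assumption, not described in this article):

```python
def zigzag_numbers(count):
    """First `count` Euler zigzag numbers A_0, A_1, ... via the
    boustrophedon triangle: each row is the previous row reversed and
    cumulatively summed, starting from a leading 0."""
    result = [1]  # A_0 = 1
    row = [1]
    for _ in range(count - 1):
        prev = row[::-1]
        row = [0]
        for v in prev:
            row.append(row[-1] + v)
        result.append(row[-1])
    return result

print(zigzag_numbers(11))
# [1, 1, 1, 2, 5, 16, 61, 272, 1385, 7936, 50521]
```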
For all even n,
An = (−1)^(n/2)·En,
where En is the Euler number; and for all odd n,
An = (−1)^((n−1)/2)·2^(n+1)·(2^(n+1) − 1)·B(n+1)/(n + 1),
where Bn is the Bernoulli number.
For every n,
See also
Bell number
Bernoulli number
Dirichlet beta function
Euler–Mascheroni constant
References
External links
Eponymous numbers in mathematics
Integer sequences
Leonhard Euler |
https://en.wikipedia.org/wiki/Half-line | Half-line may refer to:
Half-line (geometry), half of a line
Alliterative verse#Metrical form, half of a line of poetry |
https://en.wikipedia.org/wiki/Logarithmic%20integral%20function | In mathematics, the logarithmic integral function or integral logarithm li(x) is a special function. It is relevant in problems of physics and has number theoretic significance. In particular, according to the prime number theorem, it is a very good approximation to the prime-counting function, which is defined as the number of prime numbers less than or equal to a given value .
Integral representation
The logarithmic integral has an integral representation defined for all positive real numbers x ≠ 1 by the definite integral
li(x) = ∫_0^x dt/ln t.
Here, ln denotes the natural logarithm. The function 1/ln t has a singularity at t = 1, and the integral for x > 1 is interpreted as a Cauchy principal value,
li(x) = lim_{ε→0+} ( ∫_0^(1−ε) dt/ln t + ∫_(1+ε)^x dt/ln t ).
Offset logarithmic integral
The offset logarithmic integral or Eulerian logarithmic integral is defined as
Li(x) = ∫_2^x dt/ln t = li(x) − li(2).
As such, the integral representation has the advantage of avoiding the singularity in the domain of integration.
Special values
The function li(x) has a single positive zero; it occurs at x ≈ 1.45136 92348 83381 05028 39684 85892 02744 94930... ; this number is known as the Ramanujan–Soldner constant.
li(Li^-1(0)) = li(2) ≈ 1.045163 780117 492784 844588 889194 613136 522615 578151...
This is −Γ(0, −ln 2), where Γ(a, x) is the incomplete gamma function. It must be understood as the Cauchy principal value of the function.
Series representation
The function li(x) is related to the exponential integral Ei(x) via the equation
li(x) = Ei(ln x),
which is valid for x > 0. This identity provides a series representation of li(x) as
li(e^u) = Ei(u) = γ + ln|u| + Σ_{n≥1} u^n/(n·n!) for u ≠ 0,
where γ ≈ 0.57721 56649 01532 ... is the Euler–Mascheroni constant. A more rapidly convergent series by Ramanujan is
li(x) = γ + ln ln x + √x Σ_{n≥1} ( (−1)^(n−1)·(ln x)^n / (n!·2^(n−1)) ) Σ_{k=0}^{⌊(n−1)/2⌋} 1/(2k + 1).
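A Python sketch of li via the Ei series above (the number of terms and the hard-coded value of γ are illustrative choices):

```python
import math

def li(x, terms=200):
    """Logarithmic integral via li(x) = Ei(ln x), using the series
    Ei(u) = gamma + ln|u| + sum_{k>=1} u^k / (k * k!) for u != 0."""
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    u = math.log(x)
    total = gamma + math.log(abs(u))
    term = 1.0
    for k in range(1, terms + 1):
        term *= u / k            # term is now u^k / k!
        total += term / k
    return total

print(li(2))                   # ~ 1.04516378, the value quoted above
print(li(1.4513692348883381))  # ~ 0: the Ramanujan-Soldner constant
```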
Asymptotic expansion
The asymptotic behavior for x → ∞ is
li(x) = O(x/ln x),
where O is the big O notation. The full asymptotic expansion is
li(x) ~ (x/ln x) Σ_{k≥0} k!/(ln x)^k,
or
li(x)/(x/ln x) ~ 1 + 1/ln x + 2/(ln x)^2 + 6/(ln x)^3 + ⋯.
This gives the following more accurate asymptotic behaviour:
li(x) − x/ln x = O(x/(ln x)^2).
As an asymptotic expansion, this series is not convergent: it is a reasonable approximation only if the series is truncated at a finite number of terms, and only large values of x are employed. This expansion follows directly from the asymptotic expansion for the exponential integral.
This implies e.g. that we can bracket li as:
for all .
Number theoretic significance
The logarithmic integral is important in number theory, appearing in estimates of the number of prime numbers less than a given value. For example, the prime number theorem states that:
π(x) ~ li(x),
where π(x) denotes the number of primes smaller than or equal to x.
Assuming the Riemann hypothesis, we get the even stronger:
|li(x) − π(x)| = O(√x · ln x).
In fact, the Riemann hypothesis is equivalent to the statement that:
|li(x) − π(x)| = O(x^(1/2+a))
for any a > 0.
For small x, li(x) > π(x), but the difference changes sign an infinite number of times as x increases, and the first time this happens is somewhere between 10^19 and 1.4×10^316.
See also
Jørgen Pedersen Gram
Skewes' number
List of integrals of logarithmic functions
References
Special hypergeometric functions
Integrals |
https://en.wikipedia.org/wiki/Closed%20set | In geometry, topology, and related branches of mathematics, a closed set is a set whose complement is an open set. In a topological space, a closed set can be defined as a set which contains all its limit points. In a complete metric space, a closed set is a set which is closed under the limit operation.
This should not be confused with a closed manifold.
Equivalent definitions
By definition, a subset A of a topological space (X, τ) is called closed if its complement X ∖ A is an open subset of (X, τ); that is, if X ∖ A ∈ τ. A set is closed in X if and only if it is equal to its closure in X. Equivalently, a set is closed if and only if it contains all of its limit points. Yet another equivalent definition is that a set is closed if and only if it contains all of its boundary points.
Every subset A ⊆ X is always contained in its (topological) closure in X, which is denoted by cl_X A; that is, if A ⊆ X then A ⊆ cl_X A. Moreover, A is a closed subset of X if and only if A = cl_X A.
An alternative characterization of closed sets is available via sequences and nets. A subset A of a topological space X is closed in X if and only if every limit of every net of elements of A also belongs to A. In a first-countable space (such as a metric space), it is enough to consider only convergent sequences, instead of all nets. One value of this characterization is that it may be used as a definition in the context of convergence spaces, which are more general than topological spaces. Notice that this characterization also depends on the surrounding space X, because whether or not a sequence or net converges in X depends on what points are present in X.
A point x in X is said to be close to a subset A ⊆ X if x ∈ cl_X A (or equivalently, if x belongs to the closure of A in the topological subspace A ∪ {x}, meaning x ∈ cl_(A ∪ {x}) A, where A ∪ {x} is endowed with the subspace topology induced on it by X).
Because the closure of A in X is thus the set of all points in X that are close to A, this terminology allows for a plain English description of closed subsets:
a subset is closed if and only if it contains every point that is close to it.
In terms of net convergence, a point x is close to a subset A if and only if there exists some net (valued) in A that converges to x.
If B is a topological subspace of some other topological space C, in which case C is called a topological super-space of B, then there might exist some point in C ∖ B that is close to A (although not an element of B), which is how it is possible for a subset A ⊆ B to be closed in B but to not be closed in the "larger" surrounding super-space C.
If A ⊆ B and if C is any topological super-space of B, then A is always a (potentially proper) subset of cl_C A, which denotes the closure of A in C; indeed, even if A is a closed subset of B (which happens if and only if A = cl_B A), it is nevertheless still possible for A to be a proper subset of cl_C A. However, A is a closed subset of B if and only if A = B ∩ cl_C A for some (or equivalently, for every) topological super-space C of B.
Closed sets can also be used to characterize continuous functions: a map f : X → Y is continuous if and only if f(cl_X A) ⊆ cl_Y(f(A)) for every subset A ⊆ X; this can be reworded in plain English as: f is continuous if and only if it maps points that are close to a subset A to points that are close to f(A).
https://en.wikipedia.org/wiki/Monster%20group | In the area of abstract algebra known as group theory, the monster group M (also known as the Fischer–Griess monster, or the friendly giant) is the largest sporadic simple group, having order
2^46 · 3^20 · 5^9 · 7^6 · 11^2 · 13^3 · 17 · 19 · 23 · 29 · 31 · 41 · 47 · 59 · 71
= 808,017,424,794,512,875,886,459,904,961,710,757,005,754,368,000,000,000
≈ 8 × 10^53.
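The factorization and the decimal order can be checked against each other in a few lines of Python:

```python
# Multiply out the prime factorization and compare with the stated order.
factorization = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3,
                 17: 1, 19: 1, 23: 1, 29: 1, 31: 1,
                 41: 1, 47: 1, 59: 1, 71: 1}

order = 1
for p, e in factorization.items():
    order *= p ** e

print(order == 808017424794512875886459904961710757005754368000000000)  # True
print(f"{order:.1e}")  # about 8.1e+53
```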
The finite simple groups have been completely classified. Every such group belongs to one of 18 countably infinite families or is one of 26 sporadic groups that do not follow such a systematic pattern. The monster group contains 20 sporadic groups (including itself) as subquotients. Robert Griess, who proved the existence of the monster in 1982, has called those 20 groups the happy family, and the remaining six exceptions pariahs.
It is difficult to give a good constructive definition of the monster because of its complexity. Martin Gardner wrote a popular account of the monster group in his June 1980 Mathematical Games column in Scientific American.
History
The monster was predicted by Bernd Fischer (unpublished, about 1973) and Robert Griess as a simple group containing a double cover of Fischer's baby monster group as a centralizer of an involution. Within a few months, the order of M was found by Griess using the Thompson order formula, and Fischer, Conway, Norton and Thompson discovered other groups as subquotients, including many of the known sporadic groups, and two new ones: the Thompson group and the Harada–Norton group. The character table of the monster, a 194-by-194 array, was calculated in 1979 by Fischer and Donald Livingstone using computer programs written by Michael Thorne. It was not clear in the 1970s whether the monster actually existed. Griess constructed M as the automorphism group of the Griess algebra, a 196,884-dimensional commutative nonassociative algebra over the real numbers; he first announced his construction in Ann Arbor on January 14, 1980. In his 1982 paper, he referred to the monster as the Friendly Giant, but this name has not been generally adopted. John Conway and Jacques Tits subsequently simplified this construction.
Griess's construction showed that the monster exists. Thompson showed that its uniqueness (as a simple group satisfying certain conditions coming from the classification of finite simple groups) would follow from the existence of a 196,883-dimensional faithful representation. A proof of the existence of such a representation was announced by Norton, though he never published the details. Griess, Meierfrankenfeld, and Segev gave the first complete published proof of the uniqueness of the monster (more precisely, they showed that a group with the same centralizers of involutions as the monster is isomorphic to the monster).
The monster was a culmination of the development of sporadic simple groups and can be built from any two of three subquotients: the Fischer group Fi24, the baby monster, and the Conway group Co1.
The Schur multiplier and the outer automorphism group of |
https://en.wikipedia.org/wiki/Atlas%20%28topology%29 | In mathematics, particularly topology, an atlas is a concept used to describe a manifold. An atlas consists of individual charts that, roughly speaking, describe individual regions of the manifold. If the manifold is the surface of the Earth, then an atlas has its more common meaning. In general, the notion of atlas underlies the formal definition of a manifold and related structures such as vector bundles and other fiber bundles.
Charts
The definition of an atlas depends on the notion of a chart. A chart for a topological space M (also called a coordinate chart, coordinate patch, coordinate map, or local frame) is a homeomorphism from an open subset U of M to an open subset of a Euclidean space. The chart is traditionally recorded as the ordered pair .
Formal definition of atlas
An atlas for a topological space M is an indexed family {(Uα, φα) : α ∈ A} of charts on M which covers M (that is, the union of the Uα equals M). If for some fixed n, the image of each chart is an open subset of n-dimensional Euclidean space, then M is said to be an n-dimensional manifold.
The plural of atlas is atlases, although some authors use atlantes.
An atlas on an n-dimensional manifold M is called an adequate atlas if the image of each chart is either R^n or R^n_+, the family {Uα} is a locally finite open cover of M, and the sets φα^(−1)(B1) together cover M, where B1 is the open ball of radius 1 centered at the origin and R^n_+ is the closed half space. Every second-countable manifold admits an adequate atlas. Moreover, if V is an open covering of the second-countable manifold M, then there is an adequate atlas on M such that {Uα} is a refinement of V.
Transition maps
A transition map provides a way of comparing two charts of an atlas. To make this comparison, we consider the composition of one chart with the inverse of the other. This composition is not well-defined unless we restrict both charts to the intersection of their domains of definition. (For example, if we have a chart of Europe and a chart of Russia, then we can compare these two charts on their overlap, namely the European part of Russia.)
To be more precise, suppose that (Uα, φα) and (Uβ, φβ) are two charts for a manifold M such that Uα ∩ Uβ is non-empty.
The transition map ταβ : φα(Uα ∩ Uβ) → φβ(Uα ∩ Uβ) is the map defined by
ταβ = φβ ∘ φα⁻¹.
Note that since φα and φβ are both homeomorphisms, the transition map ταβ is also a homeomorphism.
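As a concrete illustration, the following is a minimal Python sketch of two angle charts covering the unit circle and of the transition map between them; the chart names phi_alpha and phi_beta and the specific charts chosen are illustrative assumptions, not part of the definition above.

```python
import math

# Chart phi_alpha: S^1 minus the point (-1, 0), with image (-pi, pi).
def phi_alpha(p):
    x, y = p
    return math.atan2(y, x)

# Chart phi_beta: S^1 minus the point (1, 0), with image (0, 2*pi).
def phi_beta(p):
    x, y = p
    theta = math.atan2(y, x)
    return theta if theta > 0 else theta + 2 * math.pi

# Inverse of phi_alpha: angle -> point on the circle.
def phi_alpha_inv(theta):
    return (math.cos(theta), math.sin(theta))

# Transition map tau = phi_beta composed with phi_alpha^{-1} on the overlap.
def transition(theta):
    return phi_beta(phi_alpha_inv(theta))

print(transition(math.pi / 2))   # identity on the upper half: 1.5707...
print(transition(-math.pi / 2))  # shift by 2*pi on the lower half: 4.7123...
```

On each connected component of the overlap this transition map is either the identity or a shift by 2π, so it is in particular a homeomorphism (indeed smooth), as the general statement above guarantees.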
More structure
One often desires more structure on a manifold than simply the topological structure. For example, if one would like an unambiguous notion of differentiation of functions on a manifold, then it is necessary to construct an atlas whose transition functions are differentiable. Such a manifold is called differentiable. Given a differentiable manifold, one can unambiguously define the notion of tangent vectors and then directional derivatives.
If each transition function is a smooth map, then the atlas is called a smooth atlas, and the manifold itself is called smooth. Alternatively, one could require that the transition maps have only k continuous derivatives, in which case the atlas is said to be Cᵏ.
Very generally, if each transition function belongs to a pseudogroup 𝒢 of homeomorphisms of Euclidean space, then the atlas is called a 𝒢-atlas.
https://en.wikipedia.org/wiki/Simple%20group | In mathematics, a simple group is a nontrivial group whose only normal subgroups are the trivial group and the group itself. A group that is not simple can be broken into two smaller groups, namely a nontrivial normal subgroup and the corresponding quotient group. This process can be repeated, and for finite groups one eventually arrives at uniquely determined simple groups, by the Jordan–Hölder theorem.
The complete classification of finite simple groups, completed in 2004, is a major milestone in the history of mathematics.
Examples
Finite simple groups
The cyclic group G = Z/3Z of congruence classes modulo 3 (see modular arithmetic) is simple. If H is a subgroup of this group, its order (the number of elements) must be a divisor of the order of G, which is 3. Since 3 is prime, its only divisors are 1 and 3, so either H is G, or H is the trivial group. On the other hand, the group G = Z/12Z is not simple. The set H of congruence classes of 0, 4, and 8 modulo 12 is a subgroup of order 3, and it is a normal subgroup since any subgroup of an abelian group is normal. Similarly, the additive group Z of the integers is not simple; the set of even integers is a non-trivial proper normal subgroup.
One may use the same kind of reasoning for any abelian group, to deduce that the only simple abelian groups are the cyclic groups of prime order. The classification of nonabelian simple groups is far less trivial. The smallest nonabelian simple group is the alternating group A5 of order 60, and every simple group of order 60 is isomorphic to A5. The second smallest nonabelian simple group is the projective special linear group PSL(2,7) of order 168, and every simple group of order 168 is isomorphic to PSL(2,7).
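For small orders the abelian case can be verified by brute force. The following Python sketch (an illustrative check, not an algorithm from the literature) enumerates the subgroups of the cyclic group Zn, one for each divisor of n; since every subgroup of an abelian group is normal, Zn is simple exactly when no proper nontrivial subgroup exists, i.e. exactly when n is prime.

```python
# Subgroups of Z/nZ: one for each divisor d of n, namely the multiples of d.
def subgroups_of_Zn(n):
    return [set(range(0, n, d)) for d in range(1, n + 1) if n % d == 0]

def is_simple_Zn(n):
    """Simple iff there is no subgroup H with 1 < |H| < n."""
    return not any(1 < len(H) < n for H in subgroups_of_Zn(n))

for n in range(2, 13):
    print(n, is_simple_Zn(n))   # True exactly for n = 2, 3, 5, 7, 11
# For n = 12 the subgroup {0, 4, 8} of order 3 appears, as in the text above.
```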
Infinite simple groups
The infinite alternating group A∞, i.e. the group of even finitely supported permutations of the integers, is simple. This group can be written as the increasing union of the finite simple groups An with respect to the standard embeddings An → An+1. Another family of examples of infinite simple groups is given by PSLn(F), where F is an infinite field and n ≥ 2.
It is much more difficult to construct finitely generated infinite simple groups. The first existence result is non-explicit; it is due to Graham Higman and consists of simple quotients of the Higman group. Explicit examples, which turn out to be finitely presented, include the infinite Thompson groups and . Finitely presented torsion-free infinite simple groups were constructed by Burger and Mozes.
Classification
There is as yet no known classification for general (infinite) simple groups, and no such classification is expected.
Finite simple groups
The finite simple groups are important because in a certain sense they are the "basic building blocks" of all finite groups, somewhat similar to the way prime numbers are the basic building blocks of the integers. This is expressed by the Jordan–Hölder theorem which states that any two composition series of a given group have the same length and the same factors, up to permutation and isomorphism.
https://en.wikipedia.org/wiki/Srinivasa%20Ramanujan | Srinivasa Ramanujan (born Srinivasa Ramanujan Aiyangar; 22 December 1887 – 26 April 1920) was an Indian mathematician. Though he had almost no formal training in pure mathematics, he made substantial contributions to mathematical analysis, number theory, infinite series, and continued fractions, including solutions to mathematical problems then considered unsolvable.
Ramanujan initially developed his own mathematical research in isolation. According to Hans Eysenck, "he tried to interest the leading professional mathematicians in his work, but failed for the most part. What he had to show them was too novel, too unfamiliar, and additionally presented in unusual ways; they could not be bothered". Seeking mathematicians who could better understand his work, in 1913 he began a postal correspondence with the English mathematician G. H. Hardy at the University of Cambridge, England. Recognising Ramanujan's work as extraordinary, Hardy arranged for him to travel to Cambridge. In his notes, Hardy commented that Ramanujan had produced groundbreaking new theorems, including some that "defeated me completely; I had never seen anything in the least like them before", and some recently proven but highly advanced results.
During his short life, Ramanujan independently compiled nearly 3,900 results (mostly identities and equations). Many were completely novel; his original and highly unconventional results, such as the Ramanujan prime, the Ramanujan theta function, partition formulae and mock theta functions, have opened entire new areas of work and inspired a vast amount of further research. Of his thousands of results, all but a dozen or two have now been proven correct. The Ramanujan Journal, a scientific journal, was established to publish work in all areas of mathematics influenced by Ramanujan, and his notebooks—containing summaries of his published and unpublished results—have been analysed and studied for decades since his death as a source of new mathematical ideas. As late as 2012, researchers continued to discover that mere comments in his writings about "simple properties" and "similar outputs" for certain findings were themselves profound and subtle number theory results that remained unsuspected until nearly a century after his death. He became one of the youngest Fellows of the Royal Society and only the second Indian member, and the first Indian to be elected a Fellow of Trinity College, Cambridge. Of his original letters, Hardy stated that a single look was enough to show they could have been written only by a mathematician of the highest calibre, comparing Ramanujan to mathematical geniuses such as Euler and Jacobi.
In 1919, ill health—now believed to have been hepatic amoebiasis (a complication from episodes of dysentery many years previously)—compelled Ramanujan's return to India, where he died in 1920 at the age of 32. His last letters to Hardy, written in January 1920, show that he was still continuing to produce new mathematical id |
https://en.wikipedia.org/wiki/List%20of%20small%20groups | The following list in mathematics contains the finite groups of small order up to group isomorphism.
Counts
For n = 1, 2, … the number of nonisomorphic groups of order n is
1, 1, 1, 2, 1, 2, 1, 5, 2, 2, 1, 5, 1, 2, 1, 14, 1, 5, 1, 5, ...
Glossary
Each group is named by the Small Groups library as Goi, where o is the order of the group, and i is the index used to label the group within that order.
Common group names:
Zn: the cyclic group of order n (the notation Cn is also used; it is isomorphic to the additive group of Z/nZ)
Dihn: the dihedral group of order 2n (often the notation Dn or D2n is used)
K4: the Klein four-group of order 4, same as Z2 × Z2 and Dih2
D2n: the dihedral group of order 2n, the same as Dihn (notation used in section List of small non-abelian groups)
Sn: the symmetric group of degree n, containing the n! permutations of n elements
An: the alternating group of degree n, containing the even permutations of n elements, of order 1 for n ≤ 2, and order n!/2 otherwise
Dicn or Q4n: the dicyclic group of order 4n
Q8: the quaternion group of order 8, also Dic2
The notations Zn and Dihn have the advantage that point groups in three dimensions Cn and Dn do not have the same notation. There are more isometry groups than these two, of the same abstract group type.
The notation G × H denotes the direct product of the two groups; Gn denotes the direct product of a group with itself n times. G ⋊ H denotes a semidirect product where H acts on G; this may also depend on the choice of action of H on G.
Abelian and simple groups are noted. (For groups of order n < 60, the simple groups are precisely the cyclic groups Zn, for prime n.) The equality sign ("=") denotes isomorphism.
The identity element in the cycle graphs is represented by the black circle. The lowest order for which the cycle graph does not uniquely represent a group is order 16.
In the lists of subgroups, the trivial group and the group itself are not listed. Where there are several isomorphic subgroups, the number of such subgroups is indicated in parentheses.
Angle brackets <relations> show the presentation of a group.
List of small abelian groups
The finite abelian groups are either cyclic groups, or direct products thereof; see Abelian group. The numbers of nonisomorphic abelian groups of orders n = 1, 2, ... are
1, 1, 1, 2, 1, 1, 1, 3, 2, 1, 1, 2, 1, 1, 1, 5, 1, 2, 1, 2, ...
List of small non-abelian groups
The numbers of non-abelian groups, by order, are counted by . However, many orders have no non-abelian groups. The orders for which a non-abelian group exists are
6, 8, 10, 12, 14, 16, 18, 20, 21, 22, 24, 26, 27, 28, 30, 32, 34, 36, 38, 39, 40, 42, 44, 46, 48, 50, ...
Classifying groups of small order
Small groups of prime power order pn are given as follows:
Order p: The only group is cyclic.
Order p2: There are just two groups, both abelian.
Order p3: There are three abelian groups, and two non-abelian groups.
https://en.wikipedia.org/wiki/Vector%20quantization | Vector quantization (VQ) is a classical quantization technique from signal processing that allows the modeling of probability density functions by the distribution of prototype vectors. It was originally used for data compression. It works by dividing a large set of points (vectors) into groups having approximately the same number of points closest to them. Each group is represented by its centroid point, as in k-means and some other clustering algorithms. In simpler terms, vector quantization chooses a set of points to represent a larger set of points.
The density matching property of vector quantization is powerful, especially for identifying the density of large and high-dimensional data. Since data points are represented by the index of their closest centroid, commonly occurring data have low error, and rare data high error. This is why VQ is suitable for lossy data compression. It can also be used for lossy data correction and density estimation.
Vector quantization is based on the competitive learning paradigm, so it is closely related to the self-organizing map model and to sparse coding models used in deep learning algorithms such as autoencoder.
Training
The simplest training algorithm for vector quantization is the following (a code sketch appears after these steps):
Pick a sample point at random
Move the nearest quantization vector centroid towards this sample point, by a small fraction of the distance
Repeat
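A minimal Python sketch of this loop follows; the synthetic two-dimensional Gaussian data, the number of centroids, and the step size eps are illustrative assumptions rather than part of the description above.

```python
import random

random.seed(0)
data = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(1000)]
centroids = random.sample(data, 8)   # arbitrary initialization from the data
eps = 0.05                           # the "small fraction of the distance"

def dist2(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

for _ in range(10_000):
    x = random.choice(data)          # pick a sample point at random
    # find the nearest quantization vector centroid ...
    i = min(range(len(centroids)), key=lambda j: dist2(x, centroids[j]))
    cx, cy = centroids[i]
    # ... and move it towards the sample by a fraction eps of the distance
    centroids[i] = (cx + eps * (x[0] - cx), cy + eps * (x[1] - cy))
```

After enough iterations the centroids settle where the data are dense, which is the density-matching behaviour described earlier.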
A more sophisticated algorithm reduces the bias in the density matching estimation, and ensures that all points are used, by including an extra sensitivity parameter sᵢ for each centroid cᵢ:
Increase each centroid's sensitivity sᵢ by a small amount
Pick a sample point P at random
For each quantization vector centroid cᵢ, let d(P, cᵢ) denote the distance of P and cᵢ
Find the centroid cᵢ for which d(P, cᵢ) − sᵢ is the smallest
Move cᵢ towards P by a small fraction of the distance
Set sᵢ to zero
Repeat
It is desirable to use a cooling schedule to produce convergence: see Simulated annealing. Another (simpler) method is LBG which is based on K-Means.
The algorithm can be iteratively updated with 'live' data, rather than by picking random points from a data set, but this will introduce some bias if the data are temporally correlated over many samples.
Applications
Vector quantization is used for lossy data compression, lossy data correction, pattern recognition, density estimation and clustering.
Lossy data correction, or prediction, is used to recover data missing from some dimensions. It is done by finding the nearest group with the data dimensions available, then predicting the result based on the values for the missing dimensions, assuming that they will have the same value as the group's centroid.
For density estimation, the area/volume that is closer to a particular centroid than to any other is inversely proportional to the density (due to the density matching property of the algorithm).
Use in data compression
Vector quantization, also called "block quantization" or "pattern matching quantization" is often used i |
https://en.wikipedia.org/wiki/Stochastic%20process | In probability theory and related fields, a stochastic or random process is a mathematical object usually defined as a sequence of random variables, where the index of the sequence has the interpretation of time. Stochastic processes are widely used as mathematical models of systems and phenomena that appear to vary in a random manner. Examples include the growth of a bacterial population, an electrical current fluctuating due to thermal noise, or the movement of a gas molecule. Stochastic processes have applications in many disciplines such as biology, chemistry, ecology, neuroscience, physics, image processing, signal processing, control theory, information theory, computer science, and telecommunications. Furthermore, seemingly random changes in financial markets have motivated the extensive use of stochastic processes in finance.
Applications and the study of phenomena have in turn inspired the proposal of new stochastic processes. Examples of such stochastic processes include the Wiener process or Brownian motion process, used by Louis Bachelier to study price changes on the Paris Bourse, and the Poisson process, used by A. K. Erlang to study the number of phone calls occurring in a certain period of time. These two stochastic processes are considered the most important and central in the theory of stochastic processes, and were discovered repeatedly and independently, both before and after Bachelier and Erlang, in different settings and countries.
The term random function is also used to refer to a stochastic or random process, because a stochastic process can also be interpreted as a random element in a function space. The terms stochastic process and random process are used interchangeably, often with no specific mathematical space for the set that indexes the random variables. But often these two terms are used when the random variables are indexed by the integers or an interval of the real line. If the random variables are indexed by the Cartesian plane or some higher-dimensional Euclidean space, then the collection of random variables is usually called a random field instead. The values of a stochastic process are not always numbers and can be vectors or other mathematical objects.
Based on their mathematical properties, stochastic processes can be grouped into various categories, which include random walks, martingales, Markov processes, Lévy processes, Gaussian processes, random fields, renewal processes, and branching processes. The study of stochastic processes uses mathematical knowledge and techniques from probability, calculus, linear algebra, set theory, and topology as well as branches of mathematical analysis such as real analysis, measure theory, Fourier analysis, and functional analysis. The theory of stochastic processes is considered to be an important contribution to mathematics and it continues to be an active topic of research for both theoretical reasons and applications.
Introduction
A stochastic or random pro |
https://en.wikipedia.org/wiki/Chance | Chance may refer to:
Mathematics and Science
In mathematics, likelihood of something (by way of the Likelihood function or Probability density function).
Chance (statistics magazine)
Places
Chance, Kentucky, US
Chance, Maryland, US
Chance, Oklahoma, US
Chance, South Dakota, US
Chance, Virginia, US
Chancé, a commune in Brittany, France
People
Chance (name), a given name and surname
Chance the Rapper (born 1993), Chicago hip hop recording artist
Kamal Givens or Chance (born 1981), American rapper and reality-show contestant
Chancellor, formerly Chance (born 1986), American singer-songwriter and record producer
Arts and entertainment
Film and television
Chance (1984 film), a Russian science fiction comedy film
Chance (1990 film), an action film starring Lawrence Hilton-Jacobs and Dan Haggerty
Chance (2002 film), directed by and starring Amber Benson
Chance (2009 film), directed by Abner Benaim
Chance (2019 film), an American computer-animated film
Chance (2020 film), starring Matthew Modine
"Chance" (Fear Itself), a TV series episode
Chance (TV series), a 2016 American thriller/drama television series
Music
Groups and labels
Chance (band), an American country music group
Chance Records, an American record label
Albums
Chance (Manfred Mann's Earth Band album)
Chance (Candi Staton album)
Songs
"Chance" (Act song)
"Chance" (Big Country song)
"Chance!" (Koharu Kusumi song)
"Chance" (Miho Komatsu song), 1998
"Chance" (Sylvie Vartan song), 1963
"Chance", by DC Talk from Intermission: the Greatest Hits
"Chance" (Savatage song)
"Chance!", by Uverworld from Timeless
"Chance!", by Yui Asaka
"A Chance", by Kenny Chesney
"Chance" by Hayley Kiyoko from Panorama, 2022
Other arts and entertainment
Chance (Conrad novel), a 1913 novel by Joseph Conrad
Chance (Parker novel), a 1996 novel by Robert B. Parker
Chance (comics), two different characters from the Marvel Comics universe
Chance, a space in the game Monopoly
Life (video games), also sometimes called a chance
Other uses
Chance (philosophy) or indeterminism
Chance (baseball), a defensive statistic
Chance (ship), a number of ships of this name
Chance, a celebrity Brahman bull whose clone was named Second Chance
Chance Brothers, a glass company
Chance Rides, an amusement park ride and roller coaster manufacturer
Optima Bus Corporation, formerly Chance Coach, Inc.
See also
Chances (disambiguation)
Second Chance (disambiguation)
Second Chances (disambiguation) |
https://en.wikipedia.org/wiki/Union%20%28set%20theory%29 | In set theory, the union (denoted by ∪) of a collection of sets is the set of all elements in the collection. It is one of the fundamental operations through which sets can be combined and related to each other.
A nullary union refers to a union of zero (0) sets and it is by definition equal to the empty set.
For explanation of the symbols used in this article, refer to the table of mathematical symbols.
Union of two sets
The union of two sets A and B is the set of elements which are in A, in B, or in both A and B. In set-builder notation,
A ∪ B = {x : x ∈ A or x ∈ B}.
For example, if A = {1, 3, 5, 7} and B = {1, 2, 4, 6, 7} then A ∪ B = {1, 2, 3, 4, 5, 6, 7}. A more elaborate example (involving two infinite sets) is:
A = {x | x is an even integer larger than 1}
B = {x | x is an odd integer larger than 1}
A ∪ B = {x | x is an integer larger than 1}
As another example, the number 9 is not contained in the union of the set of prime numbers {2, 3, 5, 7, 11, ...} and the set of even numbers {2, 4, 6, 8, 10, ...}, because 9 is neither prime nor even.
Sets cannot have duplicate elements, so the union of the sets {1, 2, 3} and {2, 3, 4} is {1, 2, 3, 4}. Multiple occurrences of identical elements have no effect on the cardinality of a set or its contents.
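For readers who want to experiment, Python's built-in set type implements the union operation directly; this short sketch reuses the sets from the examples above.

```python
A = {1, 3, 5, 7}
B = {1, 2, 4, 6, 7}
print(A | B)              # {1, 2, 3, 4, 5, 6, 7}
print(A.union(B))         # the same union, via a method call

# Duplicate elements collapse, as described above:
print({1, 2, 3} | {2, 3, 4})   # {1, 2, 3, 4}
```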
Algebraic properties
Binary union is an associative operation; that is, for any sets A, B, and C,
A ∪ (B ∪ C) = (A ∪ B) ∪ C.
Thus, the parentheses may be omitted without ambiguity: either of the above can be written as A ∪ B ∪ C. Also, union is commutative, so the sets can be written in any order.
The empty set is an identity element for the operation of union. That is, A ∪ ∅ = A, for any set A. Also, the union operation is idempotent: A ∪ A = A. All these properties follow from analogous facts about logical disjunction.
Intersection distributes over union
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
and union distributes over intersection
A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C).
The power set of a set U, together with the operations given by union, intersection, and complementation, is a Boolean algebra. In this Boolean algebra, union can be expressed in terms of intersection and complementation by the formula
A ∪ B = (Aᶜ ∩ Bᶜ)ᶜ,
where the superscript ᶜ denotes the complement in the universal set U.
Finite unions
One can take the union of several sets simultaneously. For example, the union of three sets A, B, and C contains all elements of A, all elements of B, and all elements of C, and nothing else. Thus, x is an element of A ∪ B ∪ C if and only if x is in at least one of A, B, and C.
A finite union is the union of a finite number of sets; the phrase does not imply that the union set is a finite set.
Arbitrary unions
The most general notion is the union of an arbitrary collection of sets, sometimes called an infinitary union. If M is a set or class whose elements are sets, then x is an element of the union of M if and only if there is at least one element A of M such that x is an element of A. In symbols:
x ∈ ⋃M ⟺ ∃A ∈ M, x ∈ A.
This idea subsumes the preceding sections—for example, A ∪ B ∪ C is the union of the collection {A, B, C}. Also, if M is the empty collection, then the union of M is the empty set.
Notations
The notation for the general concept can va |
https://en.wikipedia.org/wiki/Fibonacci%20coding | In mathematics and computing, Fibonacci coding is a universal code which encodes positive integers into binary code words. It is one example of representations of integers based on Fibonacci numbers. Each code word ends with "11" and contains no other instances of "11" before the end.
The Fibonacci code is closely related to the Zeckendorf representation, a positional numeral system that uses Zeckendorf's theorem and has the property that no number has a representation with consecutive 1s. The Fibonacci code word for a particular integer is exactly the integer's Zeckendorf representation with the order of its digits reversed and an additional "1" appended to the end.
Definition
For a number N, if d(0), d(1), …, d(k−1), d(k) represent the digits of the code word representing N then we have:
N = d(0)F(2) + d(1)F(3) + ⋯ + d(k−1)F(k+1), and d(k−1) = d(k) = 1,
where F(i) is the ith Fibonacci number, and so F(i+2) is the ith distinct Fibonacci number starting with 1, 2, 3, 5, 8, 13, …. The last bit d(k) is always an appended bit of 1 and does not carry place value.
It can be shown that such a coding is unique, and the only occurrence of "11" in any code word is at the end i.e. d(k−1) and d(k). The penultimate bit is the most significant bit and the first bit is the least significant bit. Also leading zeros cannot be omitted as they can in e.g. decimal numbers.
The first few Fibonacci codes are shown below, and also their so-called implied probability, the value for each number that has a minimum-size code in Fibonacci coding.
To encode an integer N:
Find the largest Fibonacci number equal to or less than N; subtract this number from N, keeping track of the remainder.
If the number subtracted was the ith Fibonacci number F(i), put a 1 in place i−2 in the code word (counting the leftmost digit as place 0).
Repeat the previous steps, substituting the remainder for N, until a remainder of 0 is reached.
Place an additional 1 after the rightmost digit in the code word.
To decode a code word, remove the final "1", assign to the remaining bits the values 1, 2, 3, 5, 8, 13, … (the Fibonacci numbers), and sum the values of the "1" bits; both the encoding and decoding procedures are sketched in code below.
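The following Python sketch implements both procedures directly from the descriptions above (the helper names are illustrative):

```python
def fib_encode(n):
    """Fibonacci code word of a positive integer n, as a string of bits."""
    fibs = [1, 2]                        # distinct Fibonacci numbers F(2), F(3), ...
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    bits = [0] * (len(fibs) - 1)         # the last Fibonacci number exceeds n
    remainder = n
    for i in range(len(bits) - 1, -1, -1):
        if fibs[i] <= remainder:         # greedy: largest Fibonacci <= remainder
            bits[i] = 1
            remainder -= fibs[i]
    return ''.join(map(str, bits)) + '1' # append the final "1"

def fib_decode(code):
    """Positive integer represented by a Fibonacci code word."""
    bits = code[:-1]                     # remove the final "1"
    fibs = [1, 2]
    while len(fibs) < len(bits):
        fibs.append(fibs[-1] + fibs[-2])
    return sum(f for f, b in zip(fibs, bits) if b == '1')

print([fib_encode(n) for n in range(1, 7)])
# ['11', '011', '0011', '1011', '00011', '10011']
assert all(fib_decode(fib_encode(n)) == n for n in range(1, 1000))
```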
Comparison with other universal codes
Fibonacci coding has a useful property that sometimes makes it attractive in comparison to other universal codes: it is an example of a self-synchronizing code, making it easier to recover data from a damaged stream. With most other universal codes, if a single bit is altered, none of the data that comes after it will be correctly read. With Fibonacci coding, on the other hand, a changed bit may cause one token to be read as two, or cause two tokens to be read incorrectly as one, but reading a "0" from the stream will stop the errors from propagating further. Since the only stream that has no "0" in it is a stream of "11" tokens, the total edit distance between a stream damaged by a single bit error and the original stream is at most three.
This approach—encoding using sequence of symbols, in which some patterns (like "11") are forbidden, can be freely generalized.
Example
The followi |
https://en.wikipedia.org/wiki/Great%20circle | In mathematics, a great circle or orthodrome is the circular intersection of a sphere and a plane passing through the sphere's center point.
Any arc of a great circle is a geodesic of the sphere, so that great circles in spherical geometry are the natural analog of straight lines in Euclidean space. For any pair of distinct non-antipodal points on the sphere, there is a unique great circle passing through both. (Every great circle through any point also passes through its antipodal point, so there are infinitely many great circles through two antipodal points.) The shorter of the two great-circle arcs between two distinct points on the sphere is called the minor arc, and is the shortest surface-path between them. Its arc length is the great-circle distance between the points (the intrinsic distance on a sphere), and is proportional to the measure of the central angle formed by the two points and the center of the sphere.
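As a worked numerical illustration of the proportionality between arc length and central angle, the following Python sketch computes a great-circle distance from latitude/longitude pairs via the spherical law of cosines; the coordinates and the mean Earth radius used are illustrative assumptions.

```python
import math

def central_angle(lat1, lon1, lat2, lon2):
    """Central angle in radians between two points given in degrees."""
    p1, l1, p2, l2 = map(math.radians, (lat1, lon1, lat2, lon2))
    return math.acos(math.sin(p1) * math.sin(p2)
                     + math.cos(p1) * math.cos(p2) * math.cos(l2 - l1))

EARTH_RADIUS_KM = 6371.0   # mean radius; an illustrative value

# Paris (48.85 N, 2.35 E) to New York (40.71 N, 74.01 W):
angle = central_angle(48.85, 2.35, 40.71, -74.01)
print(EARTH_RADIUS_KM * angle)   # about 5800 km
```

The distance is the radius times the central angle, exactly as stated above; for nearly antipodal or very close points a numerically safer formula (such as the haversine formula) is usually preferred.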
A great circle is the largest circle that can be drawn on any given sphere. Any diameter of any great circle coincides with a diameter of the sphere, and therefore every great circle is concentric with the sphere and shares the same radius. Any other circle of the sphere is called a small circle, and is the intersection of the sphere with a plane not passing through its center. Small circles are the spherical-geometry analog of circles in Euclidean space.
Every circle in Euclidean 3-space is a great circle of exactly one sphere.
The disk bounded by a great circle is called a great disk: it is the intersection of a ball and a plane passing through its center.
In higher dimensions, the great circles on the n-sphere are the intersection of the n-sphere with 2-planes that pass through the origin in the Euclidean space .
Derivation of shortest paths
To prove that the minor arc of a great circle is the shortest path connecting two points on the surface of a sphere, one can apply calculus of variations to it.
Consider the class of all regular paths from a point p to another point q. Introduce spherical coordinates so that p coincides with the north pole. Any curve on the sphere that does not intersect either pole, except possibly at the endpoints, can be parametrized by
θ = θ(t), φ = φ(t), a ≤ t ≤ b,
provided we allow φ to take on arbitrary real values. The infinitesimal arc length in these coordinates is
ds = r √(θ′² + φ′² sin²θ) dt.
So the length of a curve γ from p to q is a functional of the curve given by
S[γ] = r ∫ₐᵇ √(θ′² + φ′² sin²θ) dt.
According to the Euler–Lagrange equation, S[γ] is minimized if and only if
sin²θ · φ′ / √(θ′² + φ′² sin²θ) = C,
where C is a t-independent constant, and
sinθ cosθ · φ′² / √(θ′² + φ′² sin²θ) = d/dt [θ′ / √(θ′² + φ′² sin²θ)].
From the first equation of these two, it can be obtained that
φ′ = C θ′ / (sinθ √(sin²θ − C²)).
Integrating both sides and considering the boundary condition, the real solution of C is zero. Thus, φ′ = 0 and θ can be any value between 0 and θ₀, indicating that the curve must lie on a meridian of the sphere. In a Cartesian coordinate system, this is
x sin φ₀ − y cos φ₀ = 0,
which is a plane through the origin, i.e., the center of the sphere.
Applications
Some examples of great circles on the celestial sphere include the celestial horiz |
https://en.wikipedia.org/wiki/Maximal%20ideal | In mathematics, more specifically in ring theory, a maximal ideal is an ideal that is maximal (with respect to set inclusion) amongst all proper ideals. In other words, I is a maximal ideal of a ring R if there are no other ideals contained between I and R.
Maximal ideals are important because the quotients of rings by maximal ideals are simple rings, and in the special case of unital commutative rings they are also fields.
In noncommutative ring theory, a maximal right ideal is defined analogously as being a maximal element in the poset of proper right ideals, and similarly, a maximal left ideal is defined to be a maximal element of the poset of proper left ideals. Since a one-sided maximal ideal A is not necessarily two-sided, the quotient R/A is not necessarily a ring, but it is a simple module over R. If R has a unique maximal right ideal, then R is known as a local ring, and the maximal right ideal is also the unique maximal left and unique maximal two-sided ideal of the ring, and is in fact the Jacobson radical J(R).
It is possible for a ring to have a unique maximal two-sided ideal and yet lack unique maximal one-sided ideals: for example, in the ring of 2 by 2 square matrices over a field, the zero ideal is a maximal two-sided ideal, but there are many maximal right ideals.
Definition
There are other equivalent ways of expressing the definition of maximal one-sided and maximal two-sided ideals. Given a ring R and a proper ideal I of R (that is I ≠ R), I is a maximal ideal of R if any of the following equivalent conditions hold:
There exists no other proper ideal J of R so that I ⊊ J.
For any ideal J with I ⊆ J, either J = I or J = R.
The quotient ring R/I is a simple ring.
There is an analogous list for one-sided ideals, for which only the right-hand versions will be given. For a right ideal A of a ring R, the following conditions are equivalent to A being a maximal right ideal of R:
There exists no other proper right ideal B of R so that A ⊊ B.
For any right ideal B with A ⊆ B, either B = A or B = R.
The quotient module R/A is a simple right R-module.
Maximal right/left/two-sided ideals are the dual notion to that of minimal ideals.
Examples
If F is a field, then the only maximal ideal is {0}.
In the ring Z of integers, the maximal ideals are the principal ideals generated by a prime number.
More generally, all nonzero prime ideals are maximal in a principal ideal domain.
The ideal (2, x) is a maximal ideal in the ring Z[x]. Generally, the maximal ideals of Z[x] are of the form (p, f(x)) where p is a prime number and f(x) is a polynomial in Z[x] which is irreducible modulo p.
Every prime ideal is a maximal ideal in a Boolean ring, i.e., a ring consisting of only idempotent elements. In fact, every prime ideal is maximal in a commutative ring R whenever there exists an integer n > 1 such that xⁿ = x for any x ∈ R.
The maximal ideals of the polynomial ring C[x] are the principal ideals generated by x − c for some c ∈ C.
More generally, the maximal ideals of the polynomial ring K[x₁, …, xₙ] over an algebraically closed field K are the ideals of the form (x₁ − a₁, …, xₙ − aₙ); this result is the weak Nullstellensatz.
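The integer example can be spot-checked computationally: nZ is a maximal ideal of Z exactly when the quotient Z/nZ is a field, i.e. when every nonzero residue class is invertible. A minimal Python sketch (relying on pow(k, -1, n), which computes modular inverses in Python 3.8+):

```python
def is_field_Zn(n):
    """True iff every nonzero class in Z/nZ has a multiplicative inverse."""
    try:
        for k in range(1, n):
            pow(k, -1, n)      # raises ValueError when k has no inverse mod n
        return True
    except ValueError:
        return False

for n in range(2, 13):
    print(n, is_field_Zn(n))   # True exactly for the primes 2, 3, 5, 7, 11
```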
https://en.wikipedia.org/wiki/Congruence%20relation | In abstract algebra, a congruence relation (or simply congruence) is an equivalence relation on an algebraic structure (such as a group, ring, or vector space) that is compatible with the structure in the sense that algebraic operations done with equivalent elements will yield equivalent elements. Every congruence relation has a corresponding quotient structure, whose elements are the equivalence classes (or congruence classes) for the relation.
Basic example
The prototypical example of a congruence relation is congruence modulo n on the set of integers. For a given positive integer n, two integers a and b are called congruent modulo n, written
a ≡ b (mod n)
if a − b is divisible by n (or equivalently if a and b have the same remainder when divided by n).
For example, 37 and 57 are congruent modulo 10,
37 ≡ 57 (mod 10)
since 37 − 57 = −20 is a multiple of 10, or equivalently since both 37 and 57 have a remainder of 7 when divided by 10.
Congruence modulo n (for a fixed n) is compatible with both addition and multiplication on the integers. That is,
if a₁ ≡ a₂ (mod n)
and b₁ ≡ b₂ (mod n)
then a₁ + b₁ ≡ a₂ + b₂ (mod n)
and a₁b₁ ≡ a₂b₂ (mod n).
The corresponding addition and multiplication of equivalence classes is known as modular arithmetic. From the point of view of abstract algebra, congruence modulo n is a congruence relation on the ring of integers, and arithmetic modulo n occurs on the corresponding quotient ring.
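These compatibility properties are easy to spot-check numerically. The following Python sketch uses the pair 37 ≡ 57 (mod 10) from above together with a second, arbitrarily chosen pair (an illustrative assumption):

```python
n = 10
a1, a2 = 37, 57    # a1 ≡ a2 (mod n), as in the example above
b1, b2 = 4, 14     # b1 ≡ b2 (mod n), chosen for illustration

assert (a1 - a2) % n == 0 and (b1 - b2) % n == 0
assert ((a1 + b1) - (a2 + b2)) % n == 0   # compatibility with addition
assert (a1 * b1 - a2 * b2) % n == 0       # compatibility with multiplication
print((a1 * b1) % n, (a2 * b2) % n)       # 8 8
```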
Definition
The definition of a congruence depends on the type of algebraic structure under consideration. Particular definitions of congruence can be made for groups, rings, vector spaces, modules, semigroups, lattices, and so forth. The common theme is that a congruence is an equivalence relation on an algebraic object that is compatible with the algebraic structure, in the sense that the operations are well-defined on the equivalence classes.
Example: Groups
For example, a group is an algebraic object consisting of a set together with a single binary operation, satisfying certain axioms. If G is a group with operation ∗, a congruence relation on G is an equivalence relation ≡ on the elements of G satisfying
g₁ ≡ g₂ and h₁ ≡ h₂ ⟹ g₁ ∗ h₁ ≡ g₂ ∗ h₂
for all g₁, g₂, h₁, h₂ ∈ G. For a congruence on a group, the equivalence class containing the identity element is always a normal subgroup, and the other equivalence classes are the other cosets of this subgroup. Together, these equivalence classes are the elements of a quotient group.
Example: Rings
When an algebraic structure includes more than one operation, congruence relations are required to be compatible with each operation. For example, a ring possesses both addition and multiplication, and a congruence relation on a ring must satisfy
r₁ + s₁ ≡ r₂ + s₂ and r₁s₁ ≡ r₂s₂
whenever r₁ ≡ r₂ and s₁ ≡ s₂. For a congruence on a ring, the equivalence class containing 0 is always a two-sided ideal, and the two operations on the set of equivalence classes define the corresponding quotient ring.
General
The general notion of a congruence relation can be formally defined in the context of universal algebra, a field which studies ideas common to all algebraic structures. In this setting, a relation on a g |
https://en.wikipedia.org/wiki/Golomb%20ruler | In mathematics, a Golomb ruler is a set of marks at integer positions along a ruler such that no two pairs of marks are the same distance apart. The number of marks on the ruler is its order, and the largest distance between two of its marks is its length. Translation and reflection of a Golomb ruler are considered trivial, so the smallest mark is customarily put at 0 and the next mark at the smaller of its two possible values. Golomb rulers can be viewed as a one-dimensional special case of Costas arrays.
The Golomb ruler was named for Solomon W. Golomb and discovered independently by Sidon (1932) and Babcock (1953). Sophie Piccard also published early research on these sets, in 1939, stating as a theorem the claim that two Golomb rulers with the same distance set must be congruent. This turned out to be false for six-point rulers, but true otherwise.
There is no requirement that a Golomb ruler be able to measure all distances up to its length, but if it does, it is called a perfect Golomb ruler. It has been proved that no perfect Golomb ruler exists for five or more marks. A Golomb ruler is optimal if no shorter Golomb ruler of the same order exists. Creating Golomb rulers is easy, but proving the optimal Golomb ruler (or rulers) for a specified order is computationally very challenging.
Distributed.net has completed distributed massively parallel searches for optimal order-24 through order-28 Golomb rulers, each time confirming the suspected candidate ruler.
Currently, the complexity of finding optimal Golomb rulers (OGRs) of arbitrary order n (where n is given in unary) is unknown. In the past there was some speculation that it is an NP-hard problem. Problems related to the construction of Golomb rulers are provably shown to be NP-hard, where it is also noted that no known NP-complete problem has similar flavor to finding Golomb rulers.
Definitions
Golomb rulers as sets
A set of integers A = {a₁, a₂, …, aₘ} where a₁ < a₂ < ⋯ < aₘ is a Golomb ruler if and only if
for all i, j, k, l ∈ {1, 2, …, m} such that i ≠ j and k ≠ l, aᵢ − aⱼ = aₖ − aₗ ⟺ i = k and j = l.
The order of such a Golomb ruler is m and its length is aₘ − a₁. The canonical form has a₁ = 0 and, if m > 1, a₂ − a₁ < aₘ − aₘ₋₁. Such a form can be achieved through translation and reflection.
Golomb rulers as functions
An injective function f : {1, 2, …, m} → {0, 1, …, n} with f(1) = 0 and f(m) = n is a Golomb ruler if and only if
for all i, j, k, l ∈ {1, 2, …, m} such that i ≠ j and k ≠ l, f(i) − f(j) = f(k) − f(l) ⟺ i = k and j = l.
The order of such a Golomb ruler is m and its length is n. The canonical form has
f(2) − f(1) < f(m) − f(m−1) if m > 1.
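Either definition translates directly into a short test: a set of marks is a Golomb ruler exactly when all pairwise differences are distinct. A minimal Python sketch (the example rulers are illustrative):

```python
from itertools import combinations

def is_golomb_ruler(marks):
    """True iff all positive pairwise differences of the marks are distinct."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

print(is_golomb_ruler({0, 1, 4, 9, 11}))   # True: an optimal order-5 ruler
print(is_golomb_ruler({0, 1, 2, 5}))       # False: 1 occurs as 1-0 and 2-1
```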
Optimality
A Golomb ruler of order m with length n may be optimal in either of two respects:
It may be optimally dense, exhibiting maximal m for the specific value of n,
It may be optimally short, exhibiting minimal n for the specific value of m.
The general term optimal Golomb ruler is used to refer to the second type of optimality.
Practical applications
Information theory and error correction
Golomb rulers are used in information theory in connection with error-correcting codes.
Radio frequency selection
Golomb rulers are used in the selection of radio frequencies to reduce the effects of intermodulation interference with both terrestrial and extraterrestrial applications.
Radio antenna placement |
https://en.wikipedia.org/wiki/Point | A point is a small dot or the sharp tip of something. Point or points may refer to:
Mathematics
Point (geometry), an entity that has a location in space or on a plane, but has no extent; more generally, an element of some abstract topological space
Point, or Element (category theory), generalizes the set-theoretic concept of an element of a set to an object of any category
Critical point (mathematics), a stationary point of a function of an arbitrary number of variables
Decimal point
Point-free geometry
Stationary point, a point in the domain of a single-valued function where the value of the function ceases to change
Places
Point, Lewis, a peninsula in the Outer Hebrides, Scotland
Point, Texas, a city in Rains County, Texas, United States
Point, the NE tip and a ferry terminal of Lismore, Inner Hebrides, Scotland
Points, West Virginia, an unincorporated community in the United States
Business and finance
Point (loyalty program), a type of virtual currency in common use among mercantile loyalty programs, globally
Point (mortgage), a percentage sometimes referred to as a form of pre-paid interest used to reduce interest rates in a mortgage loan
Basis point, 1/100 of one percent, denoted bp, bps, and ‱
Percentage points, used to measure a change in percentage absolutely
Pivot point (technical analysis), a price level of significance in analysis of a financial market that is used as a predictive indicator of market movement
"Points", the term for profit sharing in the American film industry, where creatives involved in making the film get a defined percentage of the net profits or even gross receipts
Royalty points, a way of sharing profit between companies and unit holders
Vigorish point, the commission charged on a gambling bet or loanshark's loan
Measurement units
Point (gemstone), 2 milligrams, or one hundredth of a carat
Point (typography), a measurement used in printing, the meaning of which has changed over time
Point, in hunting, the number of antler tips on the hunted animal (e.g. 9 point buck)
Point, for describing paper-stock thickness, a synonym of mil and thou (one thousandth of an inch)
Point, a hundredth of an inch or 0.254 mm, a unit of measurement formerly used for rainfall in Australia
Paris point, 2/3 cm, used for shoe sizes
Points of the compass, one of the 32 directions on a traditional compass, equal to one eighth of a right angle (11.25 degrees)
Sports
Point (American football)
Point (basketball)
Point (ice hockey)
Point (pickleball)
Point (tennis)
Point, fielding (cricket)
Point, in sports; see Score (sport)
Point guard, in basketball
Points (association football)
Points decision, in boxing and some other fighting sports
The point (ice hockey), the location of an ice hockey player
Technology and transport
Point, a data element in a SCADA system representing a single input or output
Points, a contact breaker in an ignition system
Points, a railroad switch (British English)
Points, the clock pos |
https://en.wikipedia.org/wiki/Semidirect%20product | In mathematics, specifically in group theory, the concept of a semidirect product is a generalization of a direct product. There are two closely related concepts of semidirect product:
an inner semidirect product is a particular way in which a group can be made up of two subgroups, one of which is a normal subgroup.
an outer semidirect product is a way to construct a new group from two given groups by using the Cartesian product as a set and a particular multiplication operation.
As with direct products, there is a natural equivalence between inner and outer semidirect products, and both are commonly referred to simply as semidirect products.
For finite groups, the Schur–Zassenhaus theorem provides a sufficient condition for the existence of a decomposition as a semidirect product (also known as splitting extension).
Inner semidirect product definitions
Given a group G with identity element e, a subgroup H, and a normal subgroup N ⊴ G, the following statements are equivalent:
G is the product of subgroups, G = NH, and these subgroups have trivial intersection: N ∩ H = {e}.
For every g ∈ G, there are unique n ∈ N and h ∈ H such that g = nh.
For every g ∈ G, there are unique h ∈ H and n ∈ N such that g = hn.
The composition π ∘ i of the natural embedding i : H → G with the natural projection π : G → G/N is an isomorphism between H and the quotient group G/N.
There exists a homomorphism G → H that is the identity on H and whose kernel is N. In other words, there is a split exact sequence
1 → N → G → H → 1
of groups (which is also known as a group extension of H by N).
If any of these statements holds (and hence all of them hold, by their equivalence), we say G is the semidirect product of N and H, written
G = N ⋊ H
or
G = H ⋉ N,
or that G splits over N; one also says that G is a semidirect product of H acting on N, or even a semidirect product of H and N. To avoid ambiguity, it is advisable to specify which is the normal subgroup.
If G = N ⋊ H, then there is a group homomorphism φ : H → Aut(N) given by φ(h)(n) = hnh⁻¹, and for g = nh, g′ = n′h′, we have gg′ = nhn′h′ = n(hn′h⁻¹)hh′ = nφ(h)(n′) hh′.
Inner and outer semidirect products
Let us first consider the inner semidirect product. In this case, for a group G, consider its normal subgroup N and the subgroup H (not necessarily normal). Assume that the
conditions on the list above hold. Let Aut(N) denote the group of all automorphisms of N, which is a group under composition. Construct a group homomorphism φ : H → Aut(N) defined by conjugation,
φ(h)(n) = hnh⁻¹, for all h in H and n in N.
In this way we can construct a group G′ = (N, H) with group operation defined as
(n₁, h₁) · (n₂, h₂) = (n₁ φ(h₁)(n₂), h₁h₂)
for n₁, n₂ in N and h₁, h₂ in H.
The subgroups N and H determine G up to isomorphism, as we will show later. In this way, we can construct the group G from its subgroups. This kind of construction is called an inner semidirect product (also known as internal semidirect product).
Let us now consider the outer semidirect product. Given any two groups N and H and a group homomorphism φ : H → Aut(N), we can construct a new group N ⋊φ H, called the outer semidirect product of N and H with respect to φ, defined as follows: the underlying set is the Cartesian product N × H, and the group operation ∙ is determined by the homomorphism φ:
(n₁, h₁) ∙ (n₂, h₂) = (n₁ φ(h₁)(n₂), h₁h₂)
for n₁, n₂ in N and h₁, h₂ in H.
This defines a group in which the identity element is (eN, eH) and the inverse of the element (n, h) is (φ(h⁻¹)(n⁻¹), h⁻¹). Pairs (n, eH) form a normal subgroup isomorphic to N, while pairs (eN, h) form a subgroup isomorphic to H.
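As a small computational illustration of the multiplication rule just given, the following Python sketch builds N ⋊φ H for N = Z/4Z and H = Z/2Z, with the nontrivial element of H acting by inversion; the result is the dihedral group of order 8. The encoding of elements as pairs of integers is an illustrative choice.

```python
from itertools import product

N, H = 4, 2   # orders of Z/4Z and Z/2Z

def phi(h, n):
    """Action of h in Z/2Z on n in Z/4Z: identity for h = 0, inversion for h = 1."""
    return n % N if h == 0 else (-n) % N

def multiply(g1, g2):
    """(n1, h1) . (n2, h2) = (n1 + phi(h1)(n2), h1 + h2)."""
    (n1, h1), (n2, h2) = g1, g2
    return ((n1 + phi(h1, n2)) % N, (h1 + h2) % H)

elements = list(product(range(N), range(H)))
r, s = (1, 0), (0, 1)                # a rotation and a reflection
print(multiply(s, multiply(r, s)))   # (3, 0): s r s = r^-1, the dihedral relation
assert all(multiply(multiply(a, b), c) == multiply(a, multiply(b, c))
           for a in elements for b in elements for c in elements)
```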
https://en.wikipedia.org/wiki/Random%20sequence | The concept of a random sequence is essential in probability theory and statistics. The concept generally relies on the notion of a sequence of random variables and many statistical discussions begin with the words "let X1,...,Xn be independent random variables...". Yet as D. H. Lehmer stated in 1951: "A random sequence is a vague notion... in which each term is unpredictable to the uninitiated and whose digits pass a certain number of tests traditional with statisticians".
Axiomatic probability theory deliberately avoids a definition of a random sequence. Traditional probability theory does not state if a specific sequence is random, but generally proceeds to discuss the properties of random variables and stochastic sequences assuming some definition of randomness. The Bourbaki school considered the statement "let us consider a random sequence" an abuse of language.
Early history
Émile Borel was one of the first mathematicians to formally address randomness in 1909. In 1919 Richard von Mises gave the first definition of algorithmic randomness, which was inspired by the law of large numbers, although he used the term collective rather than random sequence. Using the concept of the impossibility of a gambling system, von Mises defined an infinite sequence of zeros and ones as random if it is not biased by having the frequency stability property i.e. the frequency of zeros goes to 1/2 and every sub-sequence we can select from it by a "proper" method of selection is also not biased.
The sub-sequence selection criterion imposed by von Mises is important, because although 0101010101... is not biased, by selecting the odd positions, we get 000000... which is not random. Von Mises never totally formalized his definition of a proper selection rule for sub-sequences, but in 1940 Alonzo Church defined it as any recursive function which, having read the first N elements of the sequence, decides if it wants to select element number N + 1. Church was a pioneer in the field of computable functions, and the definition he made relied on the Church–Turing thesis for computability. This definition is often called Mises–Church randomness.
Modern approaches
During the 20th century various technical approaches to defining random sequences were developed and now three distinct paradigms can be identified. In the mid 1960s, A. N. Kolmogorov and D. W. Loveland independently proposed a more permissive selection rule. In their view Church's recursive function definition was too restrictive in that it read the elements in order. Instead they proposed a rule based on a partially computable process which having read any N elements of the sequence, decides if it wants to select another element which has not been read yet. This definition is often called Kolmogorov–Loveland stochasticity. But this method was considered too weak by Alexander Shen who showed that there is a Kolmogorov–Loveland stochastic sequence which does not conform to the general notion of randomness.
In |
https://en.wikipedia.org/wiki/Bounded%20set | In mathematical analysis and related areas of mathematics, a set is called bounded if it is, in a certain sense, of finite measure. Conversely, a set which is not bounded is called unbounded. The word "bounded" makes no sense in a general topological space without a corresponding metric.
Boundary is a distinct concept: for example, a circle in isolation is a boundaryless bounded set, while the half plane is unbounded yet has a boundary.
A bounded set is not necessarily a closed set and vice versa. For example, the subset S of the 2-dimensional real space R² lying between the two parabolic curves x² + 1 and x² − 1, defined in a Cartesian coordinate system, is closed (the bounding curves belong to it) but not bounded (so unbounded).
Definition in the real numbers
A set S of real numbers is called bounded from above if there exists some real number k (not necessarily in S) such that k ≥ s for all s in S. The number k is called an upper bound of S. The terms bounded from below and lower bound are similarly defined.
A set S is bounded if it has both upper and lower bounds. Therefore, a set of real numbers is bounded if it is contained in a finite interval.
Definition in a metric space
A subset S of a metric space (M, d) is bounded if there exists r > 0 such that for all s and t in S, we have d(s, t) < r. The metric space (M, d) is a bounded metric space (or d is a bounded metric) if M is bounded as a subset of itself.
Total boundedness implies boundedness. For subsets of Rn the two are equivalent.
A metric space is compact if and only if it is complete and totally bounded.
A subset of Euclidean space Rn is compact if and only if it is closed and bounded. This is also called the Heine–Borel theorem.
Boundedness in topological vector spaces
In topological vector spaces, a different definition for bounded sets exists which is sometimes called von Neumann boundedness. If the topology of the topological vector space is induced by a metric which is homogeneous, as in the case of a metric induced by the norm of normed vector spaces, then the two definitions coincide.
Boundedness in order theory
A set of real numbers is bounded if and only if it has an upper and lower bound. This definition is extendable to subsets of any partially ordered set. Note that this more general concept of boundedness does not correspond to a notion of "size".
A subset S of a partially ordered set P is called bounded above if there is an element k in P such that k ≥ s for all s in S. The element k is called an upper bound of S. The concepts of bounded below and lower bound are defined similarly. (See also upper and lower bounds.)
A subset S of a partially ordered set P is called bounded if it has both an upper and a lower bound, or equivalently, if it is contained in an interval. Note that this is not just a property of the set S but also one of the set S as subset of P.
A bounded poset P (that is, by itself, not as subset) is one that has a least element and a greatest element. Note that th |
https://en.wikipedia.org/wiki/Monotonic%20function | In mathematics, a monotonic function (or monotone function) is a function between ordered sets that preserves or reverses the given order. This concept first arose in calculus, and was later generalized to the more abstract setting of order theory.
In calculus and analysis
In calculus, a function defined on a subset of the real numbers with real values is called monotonic if and only if it is either entirely non-increasing, or entirely non-decreasing. That is, as per Fig. 1, a function that increases monotonically does not exclusively have to increase, it simply must not decrease.
A function f is called monotonically increasing (also increasing or non-decreasing) if for all x and y such that x ≤ y one has f(x) ≤ f(y), so f preserves the order (see Figure 1). Likewise, a function is called monotonically decreasing (also decreasing or non-increasing) if, whenever x ≤ y, then f(x) ≥ f(y), so it reverses the order (see Figure 2).
If the order ≤ in the definition of monotonicity is replaced by the strict order <, one obtains a stronger requirement. A function with this property is called strictly increasing (also increasing). Again, by inverting the order symbol, one finds a corresponding concept called strictly decreasing (also decreasing). A function with either property is called strictly monotone. Functions that are strictly monotone are one-to-one (because for x not equal to y, either x < y or x > y and so, by monotonicity, either f(x) < f(y) or f(x) > f(y), thus f(x) ≠ f(y)).
To avoid ambiguity, the terms weakly monotone, weakly increasing and weakly decreasing are often used to refer to non-strict monotonicity.
The terms "non-decreasing" and "non-increasing" should not be confused with the (much weaker) negative qualifications "not decreasing" and "not increasing". For example, the non-monotonic function shown in figure 3 first falls, then rises, then falls again. It is therefore not decreasing and not increasing, but it is neither non-decreasing nor non-increasing.
A function is said to be absolutely monotonic over an interval if the derivatives of all orders of are nonnegative or all nonpositive at all points on the interval.
Inverse of function
All strictly monotonic functions are invertible because they are guaranteed to have a one-to-one mapping from their range to their domain.
However, functions that are only weakly monotone are not invertible because they are constant on some interval (and therefore are not one-to-one).
A function may be strictly monotonic over a limited range of values and thus have an inverse on that range even though it is not strictly monotonic everywhere. For example, if y = g(x) is strictly increasing on the range [a, b], then it has an inverse x = h(y) on the range [g(a), g(b)].
The term monotonic is sometimes used in place of strictly monotonic, so a source may state that all monotonic functions are invertible when they really mean that all strictly monotonic functions are invertible.
Monotonic transformation
The term monotonic transformation (or monotone transformation) may also cause confusion because i |
https://en.wikipedia.org/wiki/IAL | IAL may refer to:
Intel Architecture Labs, a research arm of Intel Corporation during the 1990s
International Advanced Levels, an academic qualification offered by Edexcel
International Algebraic Language or ALGOL 58
International Artists' Lodge, trade union in Germany
International auxiliary language, a language for communication between people who do not share a native language
Institute for Adult Learning, an autonomous institute based in Singapore.
Iâl or Ial or Yale, a commote in Medieval Wales. |
https://en.wikipedia.org/wiki/Algebraic%20notation%20%28chess%29 | Algebraic notation is the standard method for recording and describing the moves in a game of chess. It is based on a system of coordinates to uniquely identify each square on the board. It is used by most books, magazines, and newspapers.
An early form of algebraic notation was invented by the Syrian player Philip Stamma in the 18th century. In the 19th century, it came into general use in German chess literature, and was subsequently adopted in Russian chess literature. In English-speaking countries, the parallel method of descriptive notation was generally used in chess publications until the 1980s. A few players still use descriptive notation, but it is no longer recognized by FIDE, the international chess governing body.
The term "algebraic notation" may be considered a misnomer, as the system is unrelated to algebra.
Naming the squares
Each square of the board is identified by a unique coordinate pair—a letter and a number—from White's point of view. The vertical columns of squares, called files, are labeled a through h from White's left (the queenside) to right (the kingside). The horizontal rows of squares, called ranks, are numbered 1 to 8 starting from White's side of the board. Thus each square has a unique identification of file letter followed by rank number. For example, the initial square of White's king is designated as "e1".
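The coordinate scheme is straightforward to mechanize; the following Python sketch (the function names are illustrative) converts between square names and zero-based file/rank indices.

```python
def square_to_indices(square):
    """'e1' -> (4, 0): files a-h map to 0-7, ranks 1-8 map to 0-7."""
    return ord(square[0]) - ord('a'), int(square[1]) - 1

def indices_to_square(file_index, rank_index):
    return chr(ord('a') + file_index) + str(rank_index + 1)

print(square_to_indices('e1'))   # (4, 0), White king's initial square
print(indices_to_square(3, 7))   # 'd8', Black queen's initial square
```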
Naming the pieces
Each piece type (other than pawns) is identified by an uppercase letter. English-speaking players use the letters K for king, Q for queen, R for rook, B for bishop, and N for knight. Different initial letters are used by other languages.
In chess literature, especially that intended for an international audience, the language-specific letters are often replaced by universally recognized piece symbols; for example, ♞c6 in place of Nc6. This style is known as figurine algebraic notation. The Unicode Miscellaneous Symbols set includes all the symbols necessary for figurine algebraic notation.
Notation for moves
In standard (or short form) algebraic notation, each move of a piece is indicated by the piece's uppercase letter, plus the coordinates of the destination square. For example, Be5 (bishop moves to e5), Nf3 (knight moves to f3). For pawn moves, a letter indicating pawn is not used, only the destination square is given. For example, c5 (pawn moves to c5).
Captures
When a piece makes a capture, an "x" is inserted immediately before the destination square. For example, Bxe5 (bishop captures the piece on e5). When a pawn makes a capture, the file from which the pawn departed is used to identify the pawn. For example, exd5 (pawn on the e-file captures the piece on d5).
En passant captures are indicated by specifying the capturing pawn's file of departure, the "x", the destination square (not the square of the captured pawn), and (optionally) the suffix "e.p." indicating the capture was en passant. For example, exd6 e.p.
Sometimes a multiplication sign (×) or a colon (:) is used instead of "x", either in the middle (B:e |
https://en.wikipedia.org/wiki/Mathematical%20analysis | Analysis is the branch of mathematics dealing with continuous functions, limits, and related theories, such as differentiation, integration, measure, infinite sequences, series, and analytic functions.
These theories are usually studied in the context of real and complex numbers and functions. Analysis evolved from calculus, which involves the elementary concepts and techniques of analysis.
Analysis may be distinguished from geometry; however, it can be applied to any space of mathematical objects that has a definition of nearness (a topological space) or specific distances between objects (a metric space).
History
Ancient
Mathematical analysis formally developed in the 17th century during the Scientific Revolution, but many of its ideas can be traced back to earlier mathematicians. Early results in analysis were implicitly present in the early days of ancient Greek mathematics. For instance, an infinite geometric sum is implicit in Zeno's paradox of the dichotomy. (Strictly speaking, the point of the paradox is to deny that the infinite sum exists.) Later, Greek mathematicians such as Eudoxus and Archimedes made more explicit, but informal, use of the concepts of limits and convergence when they used the method of exhaustion to compute the area and volume of regions and solids. The explicit use of infinitesimals appears in Archimedes' The Method of Mechanical Theorems, a work rediscovered in the 20th century. In Asia, the Chinese mathematician Liu Hui used the method of exhaustion in the 3rd century CE to find the area of a circle. From Jain literature, it appears that Hindus were in possession of the formulae for the sum of the arithmetic and geometric series as early as the 4th century BCE.
Ācārya Bhadrabāhu uses the sum of a geometric series in his Kalpasūtra in 433 BCE. In Indian mathematics, particular instances of arithmetic series have been found to implicitly occur in Vedic Literature as early as 2000 BCE.
Medieval
Zu Chongzhi established a method that would later be called Cavalieri's principle to find the volume of a sphere in the 5th century. In the 12th century, the Indian mathematician Bhāskara II gave examples of derivatives and used what is now known as Rolle's theorem.
In the 14th century, Madhava of Sangamagrama developed infinite series expansions, now called Taylor series, of functions such as sine, cosine, tangent and arctangent. Alongside his development of Taylor series of trigonometric functions, he also estimated the magnitude of the error terms resulting of truncating these series, and gave a rational approximation of some infinite series. His followers at the Kerala School of Astronomy and Mathematics further expanded his works, up to the 16th century.
Modern
Foundations
The modern foundations of mathematical analysis were established in 17th century Europe. This began when Fermat and Descartes developed analytic geometry, which is the precursor to modern calculus. Fermat's method of adequality allowed him to determine the maxima and minima of functions and the tangents of curves.
https://en.wikipedia.org/wiki/Ring%20%28mathematics%29 | In mathematics, rings are algebraic structures that generalize fields: multiplication need not be commutative and multiplicative inverses need not exist. In other words, a ring is a set equipped with two binary operations satisfying properties analogous to those of addition and multiplication of integers. Ring elements may be numbers such as integers or complex numbers, but they may also be non-numerical objects such as polynomials, square matrices, functions, and power series.
Formally, a ring is an abelian group whose operation is called addition, with a second binary operation called multiplication that is associative, is distributive over the addition operation, and has a multiplicative identity element. (Some authors use the term "rng" with a missing "i" to refer to the more general structure that omits this last requirement.)
Whether a ring is commutative (that is, whether the order in which two elements are multiplied might change the result) has profound implications on its behavior. Commutative algebra, the theory of commutative rings, is a major branch of ring theory. Its development has been greatly influenced by problems and ideas of algebraic number theory and algebraic geometry. The simplest commutative rings are those that admit division by non-zero elements; such rings are called fields.
Examples of commutative rings include the set of integers with their standard addition and multiplication, the set of polynomials with their addition and multiplication, the coordinate ring of an affine algebraic variety, and the ring of integers of a number field. Examples of noncommutative rings include the ring of n × n real square matrices with n ≥ 2, group rings in representation theory, operator algebras in functional analysis, rings of differential operators, and cohomology rings in topology.
The conceptualization of rings spanned the 1870s to the 1920s, with key contributions by Dedekind, Hilbert, Fraenkel, and Noether. Rings were first formalized as a generalization of Dedekind domains that occur in number theory, and of polynomial rings and rings of invariants that occur in algebraic geometry and invariant theory. They later proved useful in other branches of mathematics such as geometry and analysis.
Definition
A ring is a set R equipped with two binary operations + (addition) and ⋅ (multiplication) satisfying the following three sets of axioms, called the ring axioms:
R is an abelian group under addition, meaning that:
(a + b) + c = a + (b + c) for all a, b, c in R (that is, + is associative).
a + b = b + a for all a, b in R (that is, + is commutative).
There is an element 0 in R such that a + 0 = a for all a in R (that is, 0 is the additive identity).
For each a in R there exists −a in R such that a + (−a) = 0 (that is, −a is the additive inverse of a).
R is a monoid under multiplication, meaning that:
(a ⋅ b) ⋅ c = a ⋅ (b ⋅ c) for all a, b, c in R (that is, ⋅ is associative).
There is an element 1 in R such that a ⋅ 1 = a and 1 ⋅ a = a for all a in R (that is, 1 is the multiplicative identity).
Multiplication is distributive with respect to addition, meaning that:
a ⋅ (b + c) = (a ⋅ b) + (a ⋅ c) for all a, b, c in R (left distributivity), and (b + c) ⋅ a = (b ⋅ a) + (c ⋅ a) for all a, b, c in R (right distributivity).
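As a concrete illustration, the following brute-force Python sketch (our own toy check, with n = 6 chosen arbitrarily) verifies all of these axioms for the ring of integers modulo n:

    from itertools import product

    n = 6
    R = range(n)
    add = lambda a, b: (a + b) % n
    mul = lambda a, b: (a * b) % n

    assert all(add(add(a, b), c) == add(a, add(b, c)) for a, b, c in product(R, R, R))  # + associative
    assert all(add(a, b) == add(b, a) for a, b in product(R, R))                        # + commutative
    assert all(add(a, 0) == a for a in R)                                               # additive identity
    assert all(any(add(a, b) == 0 for b in R) for a in R)                               # additive inverses
    assert all(mul(mul(a, b), c) == mul(a, mul(b, c)) for a, b, c in product(R, R, R))  # * associative
    assert all(mul(a, 1) == a == mul(1, a) for a in R)                                  # multiplicative identity
    assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c)) for a, b, c in product(R, R, R))  # left distributivity
    assert all(mul(add(b, c), a) == add(mul(b, a), mul(c, a)) for a, b, c in product(R, R, R))  # right distributivity
    print("Z_" + str(n), "satisfies the ring axioms")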
https://en.wikipedia.org/wiki/Gottlob%20Frege | Friedrich Ludwig Gottlob Frege (8 November 1848 – 26 July 1925) was a German philosopher, logician, and mathematician. He was a mathematics professor at the University of Jena, and is understood by many to be the father of analytic philosophy, concentrating on the philosophy of language, logic, and mathematics. Though he was largely ignored during his lifetime, Giuseppe Peano (1858–1932), Bertrand Russell (1872–1970), and, to some extent, Ludwig Wittgenstein (1889–1951) introduced his work to later generations of philosophers. Frege is widely considered to be the greatest logician since Aristotle, and one of the most profound philosophers of mathematics ever.
His contributions include the development of modern logic in the Begriffsschrift and work in the foundations of mathematics. His book the Foundations of Arithmetic is the seminal text of the logicist project, and is cited by Michael Dummett as where to pinpoint the linguistic turn. His philosophical papers "On Sense and Reference" and "The Thought" are also widely cited. The former argues for two different types of meaning and descriptivism. In Foundations and "The Thought", Frege argues for Platonism against psychologism or formalism, concerning numbers and propositions respectively. Russell's paradox undermined the logicist project by showing Frege's Basic Law V in the Foundations to be false.
Life
Childhood (1848–69)
Frege was born in 1848 in Wismar, Mecklenburg-Schwerin (today part of Mecklenburg-Vorpommern). His father Carl (Karl) Alexander Frege (1809–1866) was the co-founder and headmaster of a girls' high school until his death. After Carl's death, the school was led by Frege's mother Auguste Wilhelmine Sophie Frege (née Bialloblotzky, 12 January 1815 – 14 October 1898); her mother was Auguste Amalia Maria Ballhorn, a descendant of Philipp Melanchthon and her father was Johann Heinrich Siegfried Bialloblotzky, a descendant of a Polish noble family who left Poland in the 17th century. Frege was a Lutheran.
In childhood, Frege encountered philosophies that would guide his future scientific career. For example, his father wrote a textbook on the German language for children aged 9–13, entitled Hülfsbuch zum Unterrichte in der deutschen Sprache für Kinder von 9 bis 13 Jahren (2nd ed., Wismar 1850; 3rd ed., Wismar and Ludwigslust: Hinstorff, 1862) (Help book for teaching German to children from 9 to 13 years old), the first section of which dealt with the structure and logic of language.
Frege studied at the Große Stadtschule in Wismar and graduated in 1869. His teacher Gustav Adolf Leo Sachse (5 November 1843 – 1 September 1909), who was a poet, played the most important role in determining Frege's future scientific career, encouraging him to continue his studies at the University of Jena.
Studies at University (1869–74)
Frege matriculated at the University of Jena in the spring of 1869 as a citizen of the North German Confederation. In the four semesters of his studies he attended approximately twenty courses of lectures, most of them on mathematics and physics.
https://en.wikipedia.org/wiki/Graph | Graph may refer to:
Mathematics
Graph (discrete mathematics), a structure made of vertices and edges
Graph theory, the study of such graphs and their properties
Graph (topology), a topological space resembling a graph in the sense of discrete mathematics
Graph of a function
Graph of a relation
Graph paper
Chart, a means of representing data (also called a graph)
Computing
Graph (abstract data type), an abstract data type representing relations or connections
graph (Unix), Unix command-line utility
Conceptual graph, a model for knowledge representation and reasoning
Microsoft Graph, a Microsoft API developer platform that connects multiple services and devices
Other uses
HMS Graph, a submarine of the UK Royal Navy
See also
Complex network
Graf
Graff (disambiguation)
Graph database
Grapheme, in linguistics
Graphemics
Graphic (disambiguation)
-graphy (suffix from the Greek for "describe," "write" or "draw")
List of information graphics software
Statistical graphics
https://en.wikipedia.org/wiki/Gaussian%20integer | In number theory, a Gaussian integer is a complex number whose real and imaginary parts are both integers. The Gaussian integers, with ordinary addition and multiplication of complex numbers, form an integral domain, usually written as Z[i] or ℤ[i].
Gaussian integers share many properties with integers: they form a Euclidean domain, and have thus a Euclidean division and a Euclidean algorithm; this implies unique factorization and many related properties. However, Gaussian integers do not have a total ordering that respects arithmetic.
Gaussian integers are algebraic integers and form the simplest ring of quadratic integers.
Gaussian integers are named after the German mathematician Carl Friedrich Gauss.
Basic definitions
The Gaussian integers are the set Z[i] = {a + bi : a, b ∈ ℤ}, where i² = −1.
In other words, a Gaussian integer is a complex number such that its real and imaginary parts are both integers.
Since the Gaussian integers are closed under addition and multiplication, they form a commutative ring, which is a subring of the field of complex numbers. It is thus an integral domain.
When considered within the complex plane, the Gaussian integers constitute the 2-dimensional integer lattice.
The conjugate of a Gaussian integer a + bi is the Gaussian integer a − bi.
The norm of a Gaussian integer is its product with its conjugate: N(a + bi) = (a + bi)(a − bi) = a² + b².
The norm of a Gaussian integer is thus the square of its absolute value as a complex number. The norm of a Gaussian integer is a nonnegative integer, which is a sum of two squares. Thus a norm cannot be of the form 4k + 3, with k integer.
The norm is multiplicative, that is, one has
N(zw) = N(z) N(w)
for every pair of Gaussian integers z, w. This can be shown directly, or by using the multiplicative property of the modulus of complex numbers.
The units of the ring of Gaussian integers (that is, the Gaussian integers whose multiplicative inverse is also a Gaussian integer) are precisely the Gaussian integers with norm 1, that is, 1, −1, i, and −i.
Euclidean division
Gaussian integers have a Euclidean division (division with remainder) similar to that of integers and polynomials. This makes the Gaussian integers a Euclidean domain, and implies that Gaussian integers share with integers and polynomials many important properties such as the existence of a Euclidean algorithm for computing greatest common divisors, Bézout's identity, the principal ideal property, Euclid's lemma, the unique factorization theorem, and the Chinese remainder theorem, all of which can be proved using only Euclidean division.
A Euclidean division algorithm takes, in the ring of Gaussian integers, a dividend a and divisor b ≠ 0, and produces a quotient q and remainder r such that
a = bq + r and N(r) < N(b).
In fact, one may make the remainder smaller:
a = bq + r and N(r) ≤ N(b)/2.
Even with this better inequality, the quotient and the remainder are not necessarily unique, but one may refine the choice to ensure uniqueness.
To prove this, one may consider the complex number quotient x + iy = a/b. There are unique integers m and n such that −1/2 < x − m ≤ 1/2 and −1/2 < y − n ≤ 1/2, and thus N(x − m + i(y − n)) ≤ 1/2. Taking q = m + in, one has
a = bq + r,
with
r = b(x − m + i(y − n))
and
N(r) ≤ N(b)/2.
The choice of x − m and y − n in a semi-open interval is required for uniqueness.
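The rounding argument above translates directly into code. Here is a minimal Python sketch (function names are our own) using built-in complex numbers with integer real and imaginary parts:

    def norm(z):
        # N(a + bi) = a^2 + b^2
        return round(z.real) ** 2 + round(z.imag) ** 2

    def gauss_divmod(a, b):
        # Round the exact quotient a/b to the nearest Gaussian integer.
        z = a / b
        q = complex(round(z.real), round(z.imag))
        return q, a - q * b

    a, b = complex(27, 23), complex(8, 1)
    q, r = gauss_divmod(a, b)
    print(q, r, norm(r) <= norm(b) / 2)   # q = 4+2i, r = -3+3i, the remainder bound holds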
https://en.wikipedia.org/wiki/Normal%20space | In topology and related branches of mathematics, a normal space is a topological space X that satisfies Axiom T4: every two disjoint closed sets of X have disjoint open neighborhoods. A normal Hausdorff space is also called a T4 space. These conditions are examples of separation axioms and their further strengthenings define completely normal Hausdorff spaces, or T5 spaces, and perfectly normal Hausdorff spaces, or T6 spaces.
Definitions
A topological space X is a normal space if, given any disjoint closed sets E and F, there are neighbourhoods U of E and V of F that are also disjoint. More intuitively, this condition says that E and F can be separated by neighbourhoods.
A T4 space is a T1 space X that is normal; this is equivalent to X being normal and Hausdorff.
A completely normal space, or hereditarily normal space, is a topological space X such that every subspace of X is a normal space. It turns out that X is completely normal if and only if every two separated sets can be separated by neighbourhoods. Also, X is completely normal if and only if every open subset of X is normal with the subspace topology.
A T5 space, or completely T4 space, is a completely normal T1 space X, which implies that X is Hausdorff; equivalently, every subspace of X must be a T4 space.
A perfectly normal space is a topological space in which every two disjoint closed sets E and F can be precisely separated by a function, in the sense that there is a continuous function f from X to the interval [0, 1] such that f⁻¹(0) = E and f⁻¹(1) = F. This is a stronger separation property than normality, as by Urysohn's lemma disjoint closed sets in a normal space can be separated by a function, in the sense of f = 0 on E and f = 1 on F, but not precisely separated in general. It turns out that X is perfectly normal if and only if X is normal and every closed set is a Gδ set. Equivalently, X is perfectly normal if and only if every closed set is the zero set of a continuous function. The equivalence between these three characterizations is called Vedenissoff's theorem. Every perfectly normal space is completely normal, because perfect normality is a hereditary property.
A T6 space, or perfectly T4 space, is a perfectly normal Hausdorff space.
Note that the terms "normal space" and "T4" and derived concepts occasionally have a different meaning. (Nonetheless, "T5" always means the same as "completely T4", whatever that may be.) The definitions given here are the ones usually used today. For more on this issue, see History of the separation axioms.
Terms like "normal regular space" and "normal Hausdorff space" also turn up in the literature—they simply mean that the space both is normal and satisfies the other condition mentioned. In particular, a normal Hausdorff space is the same thing as a T4 space. Given the historical confusion of the meaning of the terms, verbal descriptions when applicable are helpful, that is, "normal Hausdorff" instead of "T4", or "completely normal Hausdorff" instead of "T5".
Fully normal spaces and fully T4 spaces
https://en.wikipedia.org/wiki/Paracompact%20space | In mathematics, a paracompact space is a topological space in which every open cover has an open refinement that is locally finite. These spaces were introduced by Dieudonné (1944). Every compact space is paracompact. Every paracompact Hausdorff space is normal, and a Hausdorff space is paracompact if and only if it admits partitions of unity subordinate to any open cover. Sometimes paracompact spaces are defined so as to always be Hausdorff.
Every closed subspace of a paracompact space is paracompact. While compact subsets of Hausdorff spaces are always closed, this is not true for paracompact subsets. A space such that every subspace of it is a paracompact space is called hereditarily paracompact. This is equivalent to requiring that every open subspace be paracompact.
The notion of paracompact space is also studied in pointless topology, where it is more well-behaved. For example, the product of any number of paracompact locales is a paracompact locale, but the product of two paracompact spaces may not be paracompact. Compare this to Tychonoff's theorem, which states that the product of any collection of compact topological spaces is compact. However, the product of a paracompact space and a compact space is always paracompact.
Every metric space is paracompact. A topological space is metrizable if and only if it is a paracompact and locally metrizable Hausdorff space.
Definition
A cover of a set X is a collection of subsets of X whose union contains X. In symbols, if U = {Uα : α ∈ A} is an indexed family of subsets of X, then U is a cover of X if X ⊆ ⋃α∈A Uα.
A cover of a topological space is open if all its members are open sets. A refinement of a cover of a space is a new cover of the same space such that every set in the new cover is a subset of some set in the old cover. In symbols, the cover V is a refinement of the cover U if and only if, for every V in V, there exists some U in U such that V ⊆ U.
An open cover of a space is locally finite if every point of the space has a neighborhood that intersects only finitely many sets in the cover. In symbols, U = {Uα : α ∈ A} is locally finite if and only if, for any x in X, there exists some neighbourhood V(x) of x such that the set
{α ∈ A : Uα ∩ V(x) ≠ ∅}
is finite. A topological space is now said to be paracompact if every open cover has a locally finite open refinement.
This definition extends verbatim to locales, with the exception of locally finite: an open cover of X is locally finite iff the set of opens that intersect only finitely many opens in the cover also forms a cover of X. Note that an open cover on a topological space is locally finite iff it is a locally finite cover of the underlying locale.
Examples
Every compact space is paracompact.
Every regular Lindelöf space is paracompact. In particular, every locally compact Hausdorff second-countable space is paracompact.
The Sorgenfrey line is paracompact, even though it is neither compact, locally compact, second countable, nor metrizable.
Every CW complex is paracompact.
(Theorem of A. H. Stone) Every metric space is paracompact.
https://en.wikipedia.org/wiki/Locally%20compact%20space | In topology and related branches of mathematics, a topological space is called locally compact if, roughly speaking, each small portion of the space looks like a small portion of a compact space. More precisely, it is a topological space in which every point has a compact neighborhood.
In mathematical analysis locally compact spaces that are Hausdorff are of particular interest; they are abbreviated as LCH spaces.
Formal definition
Let X be a topological space. Most commonly X is called locally compact if every point x of X has a compact neighbourhood, i.e., there exists an open set U and a compact set K, such that x ∈ U ⊆ K.
There are other common definitions: They are all equivalent if X is a Hausdorff space (or preregular). But they are not equivalent in general:
1. every point of X has a compact neighbourhood.
2. every point of X has a closed compact neighbourhood.
2′. every point of X has a relatively compact neighbourhood.
2″. every point of X has a local base of relatively compact neighbourhoods.
3. every point of X has a local base of compact neighbourhoods.
4. every point of X has a local base of closed compact neighbourhoods.
5. X is Hausdorff and satisfies any (or equivalently, all) of the previous conditions.
Logical relations among the conditions:
Each condition implies (1).
Conditions (2), (2′), (2″) are equivalent.
Neither of conditions (2), (3) implies the other.
Condition (4) implies (2) and (3).
Compactness implies conditions (1) and (2), but not (3) or (4).
Condition (1) is probably the most commonly used definition, since it is the least restrictive and the others are equivalent to it when X is Hausdorff. This equivalence is a consequence of the facts that compact subsets of Hausdorff spaces are closed, and closed subsets of compact spaces are compact. Spaces satisfying (1) are also called weakly locally compact, as they satisfy the weakest of the conditions here.
As they are defined in terms of relatively compact sets, spaces satisfying (2), (2′), (2″) can more specifically be called locally relatively compact. Steen & Seebach call (2), (2′), (2″) strongly locally compact to contrast with property (1), which they call locally compact.
Spaces satisfying condition (4) are exactly the locally compact regular spaces. Indeed, such a space is regular, as every point has a local base of closed neighbourhoods. Conversely, in a regular locally compact space suppose a point x has a compact neighbourhood K. By regularity, given an arbitrary neighbourhood U of x, there is a closed neighbourhood V of x contained in K ∩ U, and V is compact as a closed set in a compact set.
Condition (5) is used, for example, in Bourbaki. Any space that is locally compact (in the sense of condition (1)) and also Hausdorff automatically satisfies all the conditions above. Since in most applications locally compact spaces are also Hausdorff, these locally compact Hausdorff (LCH) spaces will thus be the spaces that this article is primarily concerned with.
Examples and counterexamples
Compact Hausdorff spaces
https://en.wikipedia.org/wiki/Nowhere%20dense%20set | In mathematics, a subset of a topological space is called nowhere dense or rare if its closure has empty interior. In a very loose sense, it is a set whose elements are not tightly clustered (as defined by the topology on the space) anywhere. For example, the integers are nowhere dense among the reals, whereas the interval (0, 1) is not nowhere dense.
A countable union of nowhere dense sets is called a meagre set. Meagre sets play an important role in the formulation of the Baire category theorem, which is used in the proof of several fundamental results of functional analysis.
Definition
Density nowhere can be characterized in different (but equivalent) ways. The simplest definition is the one from density:
A subset S of a topological space X is said to be dense in another set U if the intersection S ∩ U is a dense subset of U. S is nowhere dense or rare in X if S is not dense in any nonempty open subset of X.
Expanding out the negation of density, it is equivalent to require that each nonempty open set contains a nonempty open subset disjoint from S. It suffices to check either condition on a base for the topology on X. In particular, density nowhere in ℝ is often described as being dense in no open interval.
Definition by closure
The second definition above is equivalent to requiring that the closure, cl S, cannot contain any nonempty open set. This is the same as saying that the interior of the closure of S is empty; that is, int(cl S) = ∅. Alternatively, the complement of the closure, X ∖ cl S, must be a dense subset of X; in other words, the exterior of S is dense in X.
Properties
The notion of nowhere dense set is always relative to a given surrounding space. Suppose A ⊆ Y ⊆ X, where Y has the subspace topology induced from X. The set A may be nowhere dense in X but not nowhere dense in Y. Notably, a set is always dense in its own subspace topology. So if A is nonempty, it will not be nowhere dense as a subset of itself. However the following results hold:
If A is nowhere dense in Y, then A is nowhere dense in X.
If Y is open in X, then A is nowhere dense in Y if and only if A is nowhere dense in X.
If Y is dense in X, then A is nowhere dense in Y if and only if A is nowhere dense in X.
A set is nowhere dense if and only if its closure is.
Every subset of a nowhere dense set is nowhere dense, and a finite union of nowhere dense sets is nowhere dense. Thus the nowhere dense sets form an ideal of sets, a suitable notion of negligible set. In general they do not form a 𝜎-ideal, as meager sets, which are the countable unions of nowhere dense sets, need not be nowhere dense. For example, the set of rational numbers ℚ is not nowhere dense in ℝ.
The boundary of every open set and of every closed set is closed and nowhere dense. A closed set is nowhere dense if and only if it is equal to its boundary, if and only if it is equal to the boundary of some open set (for example the open set can be taken as the complement of the set). An arbitrary set is nowhere dense if and only if it is a subset of the boundary of some open set (for example the open set can be taken as the exterior of the set).
https://en.wikipedia.org/wiki/Partition%20of%20unity | In mathematics, a partition of unity of a topological space X is a set R of continuous functions from X to the unit interval [0, 1] such that for every point x in X:
there is a neighbourhood of x where all but a finite number of the functions of R are 0, and
the sum of all the function values at x is 1, i.e., ∑_{ρ∈R} ρ(x) = 1.
Partitions of unity are useful because they often allow one to extend local constructions to the whole space. They are also important in the interpolation of data, in signal processing, and the theory of spline functions.
Existence
The existence of partitions of unity assumes two distinct forms:
Given any open cover {Ui}i∈I of a space, there exists a partition {ρi}i∈I indexed over the same set I such that supp ρi ⊆ Ui. Such a partition is said to be subordinate to the open cover {Ui}i.
If the space is locally compact, given any open cover {Ui}i∈I of a space, there exists a partition {ρj}j∈J indexed over a possibly distinct index set J such that each ρj has compact support and, for each j ∈ J, supp ρj ⊆ Ui for some i ∈ I.
Thus one chooses either to have the supports indexed by the open cover, or compact supports. If the space is compact, then there exist partitions satisfying both requirements.
A finite open cover always has a continuous partition of unity subordinated to it, provided the space is locally compact and Hausdorff.
Paracompactness of the space is a necessary condition to guarantee the existence of a partition of unity subordinate to any open cover. Depending on the category to which the space belongs, it may also be a sufficient condition. The construction uses mollifiers (bump functions), which exist in continuous and smooth manifolds, but not in analytic manifolds. Thus for an open cover of an analytic manifold, an analytic partition of unity subordinate to that open cover generally does not exist. See analytic continuation.
If {ρi} and {τj} are partitions of unity for spaces X and Y, respectively, then the set of all pairs {ρi ⊗ τj} is a partition of unity for the cartesian product space X × Y. The tensor product of functions acts as (ρ ⊗ τ)(x, y) = ρ(x) τ(y).
Example
We can construct a partition of unity on the circle S¹ by looking at a chart on the complement of a point p ∈ S¹, sending S¹ ∖ {p} to ℝ with center q ≠ p. Now, let Φ be a bump function on ℝ defined by Φ(x) = exp(1/(x² − 1)) for |x| < 1 and Φ(x) = 0 otherwise; then both this function and 1 − Φ can be extended uniquely onto S¹ by setting Φ(p) = 0. Then the set {Φ, 1 − Φ} forms a partition of unity over S¹.
Variant definitions
Sometimes a less restrictive definition is used: the sum of all the function values at a particular point is only required to be positive, rather than 1, for each point in the space. However, given such a set of functions {ψi}, one can obtain a partition of unity in the strict sense by dividing by the sum; the partition becomes {ψi / σ}, where σ(x) = ∑i ψi(x), which is well defined since at each point only a finite number of terms are nonzero. Even further, some authors drop the requirement that the supports be locally finite, requiring only that ∑i ψi(x) < ∞ for all x.
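A minimal numerical sketch of this normalization on the real line, using two overlapping smooth bumps of our own choosing (only points covered by at least one bump are evaluated):

    import math

    def bump(x, center, radius=1.0):
        # Smooth bump supported on (center - radius, center + radius).
        t = (x - center) / radius
        return math.exp(-1.0 / (1.0 - t * t)) if abs(t) < 1 else 0.0

    centers = [0.0, 1.0]   # supports (-1, 1) and (0, 2) together cover (-1, 2)

    def partition(x):
        values = [bump(x, c) for c in centers]
        s = sum(values)                  # positive wherever the supports cover x
        return [v / s for v in values]   # dividing by the sum gives a strict partition

    for x in [0.25, 0.5, 0.75]:
        print(x, partition(x), sum(partition(x)))   # each sum is 1 (up to rounding)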
Applications
A partition of unity can be used to define the integral (with respect to a volume form) of a function defined over a manifold.
https://en.wikipedia.org/wiki/Statistician | A statistician is a person who works with theoretical or applied statistics. The profession exists in both the private and public sectors.
It is common to combine statistical knowledge with expertise in other subjects, and statisticians may work as employees or as statistical consultants.
Nature of the work
According to the United States Bureau of Labor Statistics, as of 2014, 26,970 jobs were classified as statistician in the United States. Of these people, approximately 30 percent worked for governments (federal, state, or local). As of October 2021, the median pay for statisticians in the United States was $92,270.
Additionally, there is a substantial number of people who use statistics and data analysis in their work but have job titles other than statistician, such as actuaries, applied mathematicians, economists, data scientists, data analysts (predictive analytics), financial analysts, psychometricians, sociologists, epidemiologists, and quantitative psychologists. Statisticians are included with the professions in various national and international occupational classifications.
In many countries, including the United States, employment in the field requires either a master's degree in statistics or a related field or a PhD.
According to one industry professional, "Typical work includes collaborating with scientists, providing mathematical modeling, simulations, designing randomized experiments and randomized sampling plans, analyzing experimental or survey results, and forecasting future events (such as sales of a product)."
According to the BLS, "Overall employment is projected to grow 33% from 2016 to 2026, much faster than average for all occupations. Businesses will need these workers to analyze the increasing volume of digital and electronic data." In October 2021, CNBC rated it the fastest-growing job in science and technology of the next decade, with a projected growth rate of 35.40%.
See also
List of statisticians
History of statistics
Data science
References
External links
Statistician entry, Occupational Outlook Handbook, U.S. Bureau of Labor Statistics
Careers Center, American Statistical Association
Careers information, Royal Statistical Society (UK)
Listing of tasks and duties - The International Standard Classification of Occupations (ISCO)
Listings of nature of work etc - O*NET
https://en.wikipedia.org/wiki/Henri%20Poincar%C3%A9 | Jules Henri Poincaré (29 April 1854 – 17 July 1912) was a French mathematician, theoretical physicist, engineer, and philosopher of science. He is often described as a polymath, and in mathematics as "The Last Universalist", since he excelled in all fields of the discipline as it existed during his lifetime. Due to his scientific success, influence and his discoveries, he has been deemed "the philosopher par excellence of modern science."
As a mathematician and physicist, he made many original fundamental contributions to pure and applied mathematics, mathematical physics, and celestial mechanics. In his research on the three-body problem, Poincaré became the first person to discover a chaotic deterministic system which laid the foundations of modern chaos theory. He is also considered to be one of the founders of the field of topology.
Poincaré made clear the importance of paying attention to the invariance of laws of physics under different transformations, and was the first to present the Lorentz transformations in their modern symmetrical form. Poincaré discovered the remaining relativistic velocity transformations and recorded them in a letter to Hendrik Lorentz in 1905. Thus he obtained perfect invariance of all of Maxwell's equations, an important step in the formulation of the theory of special relativity. In 1905, Poincaré first proposed gravitational waves (ondes gravifiques) emanating from a body and propagating at the speed of light as being required by the Lorentz transformations. In 1912, he wrote an influential paper which provided a mathematical argument for quantum mechanics.
The Poincaré group used in physics and mathematics was named after him.
Early in the 20th century he formulated the Poincaré conjecture that became over time one of the famous unsolved problems in mathematics until it was solved in 2002–2003 by Grigori Perelman.
Life
Poincaré was born on 29 April 1854 in Cité Ducale neighborhood, Nancy, Meurthe-et-Moselle, into an influential French family. His father Léon Poincaré (1828–1892) was a professor of medicine at the University of Nancy. His younger sister Aline married the spiritual philosopher Émile Boutroux. Another notable member of Henri's family was his cousin, Raymond Poincaré, a fellow member of the Académie française, who was President of France from 1913 to 1920, and three-time Prime Minister of France between 1913 and 1929.
Education
During his childhood he was seriously ill for a time with diphtheria and received special instruction from his mother, Eugénie Launois (1830–1897).
In 1862, Henri entered the Lycée in Nancy (now renamed the Lycée Henri-Poincaré in his honour, along with Henri Poincaré University, also in Nancy). He spent eleven years at the Lycée and during this time he proved to be one of the top students in every topic he studied. He excelled in written composition. His mathematics teacher described him as a "monster of mathematics" and he won first prizes in the concours général, a competition between the best pupils from all the lycées across France.
https://en.wikipedia.org/wiki/Wolfram%20Mathematica | Wolfram Mathematica is a software system with built-in libraries for several areas of technical computing that allow machine learning, statistics, symbolic computation, data manipulation, network analysis, time series analysis, NLP, optimization, plotting functions and various types of data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other programming languages. It was conceived by Stephen Wolfram, and is developed by Wolfram Research of Champaign, Illinois. The Wolfram Language is the programming language used in Mathematica. Mathematica 1.0 was released on June 23, 1988 in Champaign, Illinois and Santa Clara, California.
Notebook interface
Mathematica is split into two parts: the kernel and the front end. The kernel interprets expressions (Wolfram Language code) and returns result expressions, which can then be displayed by the front end.
The original front end, designed by Theodore Gray in 1988, consists of a notebook interface and allows the creation and editing of notebook documents that can contain code, plaintext, images, and graphics.
Alternatives to the Mathematica front end include Wolfram Workbench—an Eclipse-based integrated development environment (IDE) that was introduced in 2006. It provides project-based code development tools for Mathematica, including revision management, debugging, profiling, and testing.
There is also a plugin for IntelliJ IDEA-based IDEs to work with Wolfram Language code that in addition to syntax highlighting can analyze and auto-complete local variables and defined functions. The Mathematica Kernel also includes a command line front end.
Other interfaces include JMath, based on GNU Readline and WolframScript which runs self-contained Mathematica programs (with arguments) from the UNIX command line.
The file extension for Mathematica files is .nb and .m for configuration files.
Mathematica is designed to be fully stable and backwards compatible with previous versions.
High-performance computing
Capabilities for high-performance computing were extended with the introduction of packed arrays in version 4 (1999) and sparse matrices (version 5, 2003), and by adopting the GNU Multiple Precision Arithmetic Library to evaluate high-precision arithmetic.
Version 5.2 (2005) added automatic multi-threading when computations are performed on multi-core computers. This release included CPU-specific optimized libraries. In addition Mathematica is supported by third party specialist acceleration hardware such as ClearSpeed.
In 2002, gridMathematica was introduced to allow user level parallel programming on heterogeneous clusters and multiprocessor systems and in 2008 parallel computing technology was included in all Mathematica licenses including support for grid technology such as Windows HPC Server 2008, Microsoft Compute Cluster Server and Sun Grid.
Support for CUDA and OpenCL GPU hardware was added in 2010.
Extensions
As of Version 13, there a |
https://en.wikipedia.org/wiki/Cox%27s%20theorem | Cox's theorem, named after the physicist Richard Threlkeld Cox, is a derivation of the laws of probability theory from a certain set of postulates. This derivation justifies the so-called "logical" interpretation of probability, as the laws of probability derived by Cox's theorem are applicable to any proposition. Logical (also known as objective Bayesian) probability is a type of Bayesian probability. Other forms of Bayesianism, such as the subjective interpretation, are given other justifications.
Cox's assumptions
Cox wanted his system to satisfy the following conditions:
Divisibility and comparability – The plausibility of a proposition is a real number and is dependent on information we have related to the proposition.
Common sense – Plausibilities should vary sensibly with the assessment of plausibilities in the model.
Consistency – If the plausibility of a proposition can be derived in many ways, all the results must be equal.
The postulates as stated here are taken from Arnborg and Sjödin.
"Common sense" includes consistency with Aristotelian logic in the sense that logically equivalent propositions shall have the same plausibility.
The postulates as originally stated by Cox were not mathematically rigorous (although more so than the informal description above), as noted by Halpern. However it appears to be possible to augment them with various mathematical assumptions made either implicitly or explicitly by Cox to produce a valid proof.
Cox's notation:
The plausibility of a proposition A given some related information X is denoted by A | X.
Cox's postulates and functional equations are:
The plausibility of the conjunction A ∧ B of two propositions A, B, given some related information X, is determined by the plausibility of A given X and that of B given A ∧ X.
In form of a functional equation: A ∧ B | X = g(A | X, B | A ∧ X).
Because of the associative nature of the conjunction in propositional logic, the consistency with logic gives a functional equation saying that the function g is an associative binary operation.
Additionally, Cox postulates the function g to be monotonic.
All strictly increasing associative binary operations on the real numbers are isomorphic to multiplication of numbers in a subinterval of [0, +∞], which means that there is a monotonic function w mapping plausibilities to [0, +∞] such that w(A ∧ B | X) = w(A | X) w(B | A ∧ X).
In case A given X is certain, we have A ∧ B | X = B | X and B | A ∧ X = B | X due to the requirement of consistency. The general equation then leads to
w(B | X) = w(A | X) w(B | X).
This shall hold for any proposition B, which leads to
w(A | X) = 1.
In case A given X is impossible, we have A ∧ B | X = A | X and A | B ∧ X = A | X due to the requirement of consistency. The general equation (with the A and B factors switched) then leads to
w(A | X) = w(B | X) w(A | X).
This shall hold for any proposition B, which, without loss of generality, leads to a solution
w(A | X) = 0.
Due to the requirement of monotonicity, this means that w maps plausibilities to the interval [0, 1].
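Ordinary probabilities satisfy the product rule singled out by this derivation; a small Python check on an invented joint distribution over two propositions A and B (all numbers are ours, chosen for illustration):

    # Invented joint distribution P(A, B).
    p = {(True, True): 0.12, (True, False): 0.28,
         (False, True): 0.18, (False, False): 0.42}

    p_A = p[(True, True)] + p[(True, False)]   # plays the role of w(A|X)
    p_AB = p[(True, True)]                     # w(A and B|X)
    p_B_given_A = p_AB / p_A                   # w(B|A and X)

    # The product rule w(A and B|X) = w(A|X) * w(B|A and X) holds.
    assert abs(p_AB - p_A * p_B_given_A) < 1e-12
    print(p_A, p_B_given_A, p_AB)   # 0.4 0.3 0.12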
The plausibility of a proposition determines the plausibility of the proposition's negation.
This postulates the existence of a function f such that w(¬A | X) = f(w(A | X)).
Because "a double negative is an affirmative", consistency requires f to be an involution, i.e., f(f(x)) = x for every x in [0, 1].
https://en.wikipedia.org/wiki/Interval%20%28mathematics%29 | In mathematics, a (real) interval is the set of all real numbers lying between two fixed endpoints with no "gaps". Each endpoint is either a real number or positive or negative infinity, indicating the interval extends without a bound. An interval can contain neither endpoint, either endpoint, or both endpoints.
For example, the set of real numbers consisting of 0, 1, and all numbers in between is an interval, denoted [0, 1] and called the unit interval; the set of all positive real numbers is an interval, denoted (0, ∞); the set of all real numbers is an interval, denoted (−∞, ∞); and any single real number a is an interval, denoted [a, a].
Intervals are ubiquitous in mathematical analysis. For example, they occur implicitly in the epsilon-delta definition of continuity; the intermediate value theorem asserts that the image of an interval by a continuous function is an interval; integrals of real functions are defined over an interval; etc.
Interval arithmetic consists of computing with intervals instead of real numbers for providing a guaranteed enclosure of the result of a numerical computation, even in the presence of uncertainties of input data and rounding errors.
Intervals are likewise defined on an arbitrary totally ordered set, such as integers or rational numbers. The notation of integer intervals is considered in the special section below.
Definitions
An interval is a subset of the real numbers that contains all real numbers lying between any two numbers of the subset.
The endpoints of an interval are its supremum, and its infimum, if they exist as real numbers. If the infimum does not exist, one often says that the corresponding endpoint is −∞. Similarly, if the supremum does not exist, one says that the corresponding endpoint is +∞.
Intervals are completely determined by their endpoints and whether each endpoint belongs to the interval. This is a consequence of the least-upper-bound property of the real numbers. This characterization is used to specify intervals by means of interval notation, which is described below.
An open interval does not include any endpoint, and is indicated with parentheses. For example, (0, 1) is the interval of all real numbers greater than 0 and less than 1. (This interval can also be denoted by ]0, 1[, see below.) The open interval (0, +∞) consists of real numbers greater than 0, i.e., positive real numbers. The open intervals are thus one of the forms
(a, b), (−∞, b), (a, +∞), (−∞, +∞),
where a and b are real numbers such that a ≤ b. When a = b in the first case, the resulting interval is the empty set ∅, which is a degenerate interval (see below). The open intervals are those intervals that are open sets for the usual topology on the real numbers.
A closed interval is an interval that includes all its endpoints and is denoted with square brackets. For example, [0, 1] means greater than or equal to 0 and less than or equal to 1. Closed intervals have one of the following forms, in which a and b are real numbers such that a ≤ b:
[a, b], (−∞, b], [a, +∞), (−∞, +∞).
The closed intervals are those intervals that are closed sets for the usual topology on the real numbers. The empty set and ℝ are the only intervals that are both open and closed.
https://en.wikipedia.org/wiki/Conjugacy%20class | In mathematics, especially group theory, two elements a and b of a group are conjugate if there is an element g in the group such that b = gag⁻¹. This is an equivalence relation whose equivalence classes are called conjugacy classes. In other words, each conjugacy class is closed under b = gag⁻¹ for all elements g in the group.
Members of the same conjugacy class cannot be distinguished by using only the group structure, and therefore share many properties. The study of conjugacy classes of non-abelian groups is fundamental for the study of their structure. For an abelian group, each conjugacy class is a set containing one element (singleton set).
Functions that are constant for members of the same conjugacy class are called class functions.
Definition
Let G be a group. Two elements a, b ∈ G are conjugate if there exists an element g ∈ G such that gag⁻¹ = b, in which case b is called a conjugate of a and a is called a conjugate of b.
In the case of the general linear group GL(n) of invertible matrices, the conjugacy relation is called matrix similarity.
It can be easily shown that conjugacy is an equivalence relation and therefore partitions G into equivalence classes. (This means that every element of the group belongs to precisely one conjugacy class, and the classes Cl(a) and Cl(b) are equal if and only if a and b are conjugate, and disjoint otherwise.) The equivalence class that contains the element a is
Cl(a) = { gag⁻¹ : g ∈ G }
and is called the conjugacy class of a. The class number of G is the number of distinct (nonequivalent) conjugacy classes. All elements belonging to the same conjugacy class have the same order.
Conjugacy classes may be referred to by describing them, or more briefly by abbreviations such as "6A", meaning "a certain conjugacy class with elements of order 6", and "6B" would be a different conjugacy class with elements of order 6; the conjugacy class 1A is the conjugacy class of the identity which has order 1. In some cases, conjugacy classes can be described in a uniform way; for example, in the symmetric group they can be described by cycle type.
Examples
The symmetric group S3, consisting of the 6 permutations of three elements, has three conjugacy classes:
No change (abc → abc). The single member has order 1.
Transposing two elements (abc → acb, abc → bac, abc → cba). The 3 members all have order 2.
A cyclic permutation of all three elements (abc → bca, abc → cab). The 2 members both have order 3.
These three classes also correspond to the classification of the isometries of an equilateral triangle.
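These classes can also be recovered by brute force from the definition b = gag⁻¹; a short Python sketch (our own toy computation, representing permutations of {0, 1, 2} as tuples):

    from itertools import permutations

    def compose(f, g):
        # (f o g)(i) = f(g(i))
        return tuple(f[g[i]] for i in range(len(f)))

    def inverse(f):
        inv = [0] * len(f)
        for i, fi in enumerate(f):
            inv[fi] = i
        return tuple(inv)

    S3 = list(permutations(range(3)))
    classes, seen = [], set()
    for a in S3:
        if a not in seen:
            cls = {compose(compose(g, a), inverse(g)) for g in S3}   # all g a g^-1
            classes.append(cls)
            seen |= cls
    print([len(c) for c in classes])   # [1, 3, 2]: identity, transpositions, 3-cycles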
The symmetric group S4, consisting of the 24 permutations of four elements, has five conjugacy classes, listed with their description, cycle type, member order, and members:
No change. Cycle type = [1⁴]. Order = 1. Members = { (1, 2, 3, 4) }. The single row containing this conjugacy class is shown as a row of black circles in the adjacent table.
Interchanging two (other two remain unchanged). Cycle type = [1²2¹]. Order = 2. Members = { (1, 2, 4, 3), (1, 4, 3, 2), (1, 3, 2, 4), (4, 2, 3, 1), (3, 2, 1, 4), (2, 1, 3, 4) }. The 6 rows containing this conjugacy class are highlighted in green in the adjacent table.
https://en.wikipedia.org/wiki/Urysohn%27s%20lemma | In topology, Urysohn's lemma is a lemma that states that a topological space is normal if and only if any two disjoint closed subsets can be separated by a continuous function.
Urysohn's lemma is commonly used to construct continuous functions with various properties on normal spaces. It is widely applicable since all metric spaces and all compact Hausdorff spaces are normal. The lemma is generalised by (and usually used in the proof of) the Tietze extension theorem.
The lemma is named after the mathematician Pavel Samuilovich Urysohn.
Discussion
Two subsets A and B of a topological space X are said to be separated by neighbourhoods if there are neighbourhoods U of A and V of B that are disjoint. In particular A and B are necessarily disjoint.
Two plain subsets A and B are said to be separated by a continuous function if there exists a continuous function f from X into the unit interval [0, 1] such that f(a) = 0 for all a in A and f(b) = 1 for all b in B. Any such function is called a Urysohn function for A and B. In particular A and B are necessarily disjoint.
It follows that if two subsets A and B are separated by a function then so are their closures. Also it follows that if two subsets A and B are separated by a function then A and B are separated by neighbourhoods.
A normal space is a topological space in which any two disjoint closed sets can be separated by neighbourhoods. Urysohn's lemma states that a topological space is normal if and only if any two disjoint closed sets can be separated by a continuous function.
The sets A and B need not be precisely separated by f, i.e., it is not necessary and guaranteed that f(x) ≠ 0 and f(x) ≠ 1 for x outside A and B. A topological space in which every two disjoint closed subsets A and B are precisely separated by a continuous function is perfectly normal.
Urysohn's lemma has led to the formulation of other topological properties such as the 'Tychonoff property' and 'completely Hausdorff spaces'. For example, a corollary of the lemma is that normal T1 spaces are Tychonoff.
Formal statement
A topological space X is normal if and only if, for any two non-empty closed disjoint subsets A and B of X, there exists a continuous map f : X → [0, 1] such that f(A) = {0} and f(B) = {1}.
Proof sketch
The proof proceeds by repeatedly applying the following alternate characterization of normality. If X is a normal space, U is an open subset of X, and A ⊆ U is closed, then there exists an open V and a closed K such that A ⊆ V ⊆ K ⊆ U.
Let A and B be disjoint closed subsets of X. The main idea of the proof is to repeatedly apply this characterization of normality to A and the open set X ∖ B, continuing with the new sets built on every step.
The sets we build are indexed by dyadic fractions. For every dyadic fraction r ∈ (0, 1), we construct an open subset U(r) and a closed subset V(r) of X such that:
A ⊆ U(r) and V(r) ⊆ X ∖ B for all r,
U(r) ⊆ V(r) for all r,
For r < s, V(r) ⊆ U(s).
Intuitively, the sets U(r) and V(r) expand outwards in layers from A: A ⊆ U(r) ⊆ V(r) ⊆ U(s) ⊆ V(s) ⊆ X ∖ B whenever r < s.
This construction proceeds by mathematical induction. For the base step, we define two extra sets U(1) = X ∖ B and V(0) = A.
Now assume that n ≥ 1 and that the sets U(r) and V(r) have already been constructed for all dyadic fractions r with denominator dividing 2^(n−1).
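The excerpt breaks off during the induction; for orientation, a hedged note on how such constructions are standardly completed (this step is not spelled out in the text above): once U(r) exists for every dyadic r, one defines

    f(x) = \inf\{\, r : x \in U(r) \,\}, \qquad f(x) = 1 \text{ if } x \in U(r) \text{ for no } r,

and checks that f is continuous, with f = 0 on A (since A ⊆ U(r) for every r) and f = 1 on B (since B meets no U(r)).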
https://en.wikipedia.org/wiki/Unit%20interval | In mathematics, the unit interval is the closed interval [0, 1], that is, the set of all real numbers that are greater than or equal to 0 and less than or equal to 1. It is often denoted I (capital letter I). In addition to its role in real analysis, the unit interval is used to study homotopy theory in the field of topology.
In the literature, the term "unit interval" is sometimes applied to the other shapes that an interval from 0 to 1 could take: (0, 1], [0, 1), and (0, 1). However, the notation I is most commonly reserved for the closed interval [0, 1].
Properties
The unit interval is a complete metric space, homeomorphic to the extended real number line. As a topological space, it is compact, contractible, path connected and locally path connected. The Hilbert cube is obtained by taking a topological product of countably many copies of the unit interval.
In mathematical analysis, the unit interval is a one-dimensional analytical manifold whose boundary consists of the two points 0 and 1. Its standard orientation goes from 0 to 1.
The unit interval is a totally ordered set and a complete lattice (every subset of the unit interval has a supremum and an infimum).
Cardinality
The size or cardinality of a set is the number of elements it contains.
The unit interval is a subset of the real numbers ℝ. However, it has the same size as the whole set: the cardinality of the continuum. Since the real numbers can be used to represent points along an infinitely long line, this implies that a line segment of length 1, which is a part of that line, has the same number of points as the whole line. Moreover, it has the same number of points as a square of area 1, as a cube of volume 1, and even as an unbounded n-dimensional Euclidean space (see Space filling curve).
The number of elements (either real numbers or points) in all the above-mentioned sets is uncountable, as it is strictly greater than the number of natural numbers.
Orientation
The unit interval is a curve. The open interval (0, 1) is a subset of the positive real numbers and inherits an orientation from them. The orientation is reversed when the interval is entered from 1, such as in the integral ∫₁ˣ dt/t used to define the natural logarithm for x in the interval, thus yielding negative values for the logarithm of such x. In fact, this integral is evaluated as a signed area, yielding negative area over the unit interval due to the reversed orientation there.
Generalizations
The interval [−1, 1], with length two, demarcated by the positive and negative units, occurs frequently, such as in the range of the trigonometric functions sine and cosine and the hyperbolic function tanh. This interval may be used for the domain of inverse functions. For instance, when θ is restricted to [−π/2, π/2] then sin θ is in this interval and arcsine is defined there.
Sometimes, the term "unit interval" is used to refer to objects that play a role in various branches of mathematics analogous to the role that [0, 1] plays in homotopy theory. For example, in the theory of quivers, the analogue of the unit interval is the graph whose vertex set is {0, 1} and which contains a single edge e whose source is 0 and whose target is 1.
https://en.wikipedia.org/wiki/Divisor | In mathematics, a divisor of an integer n, also called a factor of n, is an integer m that may be multiplied by some integer to produce n. In this case, one also says that n is a multiple of m. An integer n is divisible or evenly divisible by another integer m if m is a divisor of n; this implies dividing n by m leaves no remainder.
Definition
An integer n is divisible by a nonzero integer m if there exists an integer k such that n = km. This is written as m ∣ n.
Other ways of saying the same thing are that m divides n, m is a divisor of n, m is a factor of n, and n is a multiple of m. If m does not divide n, then the notation is m ∤ n.
Usually, m is required to be nonzero, but n is allowed to be zero. With this convention, m ∣ 0 for every nonzero integer m. Some definitions omit the requirement that m be nonzero.
General
Divisors can be negative as well as positive, although often the term is restricted to positive divisors. For example, there are six divisors of 4; they are 1, 2, 4, −1, −2, and −4, but only the positive ones (1, 2, and 4) would usually be mentioned.
1 and −1 divide (are divisors of) every integer. Every integer (and its negation) is a divisor of itself. Integers divisible by 2 are called even, and integers not divisible by 2 are called odd.
1, −1, n and −n are known as the trivial divisors of n. A divisor of n that is not a trivial divisor is known as a non-trivial divisor (or strict divisor). A nonzero integer with at least one non-trivial divisor is known as a composite number, while the units −1 and 1 and prime numbers have no non-trivial divisors.
There are divisibility rules that allow one to recognize certain divisors of a number from the number's digits.
Examples
7 is a divisor of 42 because 7 × 6 = 42, so we can say 7 ∣ 42. It can also be said that 42 is divisible by 7, 42 is a multiple of 7, 7 divides 42, or 7 is a factor of 42.
The non-trivial divisors of 6 are 2, −2, 3, −3.
The positive divisors of 42 are 1, 2, 3, 6, 7, 14, 21, 42.
The set of all positive divisors of 60, {1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60}, partially ordered by divisibility, has the Hasse diagram: [diagram not reproduced in this extract]
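The divisor sets quoted above are cheap to enumerate by trial division; a minimal Python sketch (the helper name is ours):

    def positive_divisors(n):
        # Collect both members of each factor pair d, n // d with d <= sqrt(n).
        divs = set()
        d = 1
        while d * d <= n:
            if n % d == 0:
                divs.update({d, n // d})
            d += 1
        return sorted(divs)

    print(positive_divisors(42))   # [1, 2, 3, 6, 7, 14, 21, 42]
    print(positive_divisors(60))   # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]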
Further notions and facts
There are some elementary rules:
If m ∣ n and n ∣ p, then m ∣ p, i.e. divisibility is a transitive relation.
If m ∣ n and n ∣ m, then m = n or m = −n.
If m ∣ a and m ∣ b, then m ∣ (a + b) holds, as does m ∣ (a − b). However, if m ∣ a and n ∣ a, then (m + n) ∣ a does not always hold (e.g. 2 ∣ 6 and 3 ∣ 6 but 5 does not divide 6).
If m ∣ bc, and gcd(m, b) = 1, then m ∣ c. This is called Euclid's lemma.
If p is a prime number and p ∣ ab then p ∣ a or p ∣ b.
A positive divisor of n that is different from n is called a proper divisor or an aliquot part of n. A number that does not evenly divide n but leaves a remainder is sometimes called an aliquant part of n.
An integer whose only proper divisor is 1 is called a prime number. Equivalently, a prime number is a positive integer that has exactly two positive factors: 1 and itself.
Any positive divisor of n is a product of prime divisors of n raised to some power. This is a consequence of the fundamental theorem of arithmetic.
A number n is said to be perfect if it equals the sum of its proper divisors, deficient if the sum of its proper divisors is less than n, and abundant if this sum exceeds n.
https://en.wikipedia.org/wiki/Pascal%27s%20triangle | In mathematics, Pascal's triangle is a triangular array of the binomial coefficients arising in probability theory, combinatorics, and algebra. In much of the Western world, it is named after the French mathematician Blaise Pascal, although other mathematicians studied it centuries before him in Persia, India, China, Germany, and Italy.
The rows of Pascal's triangle are conventionally enumerated starting with row at the top (the 0th row). The entries in each row are numbered from the left beginning with and are usually staggered relative to the numbers in the adjacent rows. The triangle may be constructed in the following manner: In row 0 (the topmost row), there is a unique nonzero entry 1. Each entry of each subsequent row is constructed by adding the number above and to the left with the number above and to the right, treating blank entries as 0. For example, the initial number of row 1 (or any other row) is 1 (the sum of 0 and 1), whereas the numbers 1 and 3 in row 3 are added to produce the number 4 in row 4.
Formula
The entry in the nth row and kth column of Pascal's triangle is denoted C(n, k). For example, the unique nonzero entry in the topmost row is C(0, 0) = 1. With this notation, the construction of the previous paragraph may be written as follows:
C(n, k) = C(n − 1, k − 1) + C(n − 1, k),
for any non-negative integer n and any integer k between 0 and n, inclusive. This recurrence for the binomial coefficients is known as Pascal's rule.
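Pascal's rule gives an immediate way to generate the triangle row by row; a short Python sketch:

    def pascal_rows(n_rows):
        row = [1]
        for _ in range(n_rows):
            yield row
            # Each new entry is the sum of the two entries above it,
            # with blanks beyond the ends treated as 0.
            row = [1] + [row[k - 1] + row[k] for k in range(1, len(row))] + [1]

    for r in pascal_rows(5):
        print(r)
    # [1], [1, 1], [1, 2, 1], [1, 3, 3, 1], [1, 4, 6, 4, 1]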
History
The pattern of numbers that forms Pascal's triangle was known well before Pascal's time. In the Islamic world, the Persian mathematician Al-Karaji (953–1029) wrote a now-lost book which contained the first formulation of the binomial coefficients and the first description of Pascal's triangle. It was later repeated by Omar Khayyám (1048–1131), another Persian mathematician; thus the triangle is also referred to as the Khayyam's triangle () in Iran. Several theorems related to the triangle were known, including the binomial theorem. Khayyam used a method of finding nth roots based on the binomial expansion, and therefore on the binomial coefficients.
Pascal's triangle was known in China during the early 11th century as a result of the work of the Chinese mathematician Jia Xian (1010–1070). During the 13th century, Yang Hui (1238–1298) presented the triangle and hence it is still known as Yang Hui's triangle () in China.
In Europe, Pascal's triangle appeared for the first time in the Arithmetic of Jordanus de Nemore (13th century).
The binomial coefficients were calculated by Gersonides during the early 14th century, using the multiplicative formula for them. Petrus Apianus (1495–1552) published the full triangle on the frontispiece of his book on business calculations in 1527. Michael Stifel published a portion of the triangle (from the second to the middle column in each row) in 1544, describing it as a table of figurate numbers. In Italy, Pascal's triangle is referred to as Tartaglia's triangle, named for the Italian algebraist Niccolò Fontana Tartaglia (1500–157 |
https://en.wikipedia.org/wiki/Bayes%27%20theorem | In probability theory and statistics, Bayes' theorem (alternatively Bayes' law or Bayes' rule), named after Thomas Bayes, describes the probability of an event, based on prior knowledge of conditions that might be related to the event. For example, if the risk of developing health problems is known to increase with age, Bayes' theorem allows the risk to an individual of a known age to be assessed more accurately by conditioning it relative to their age, rather than simply assuming that the individual is typical of the population as a whole.
One of the many applications of Bayes' theorem is Bayesian inference, a particular approach to statistical inference. When applied, the probabilities involved in the theorem may have different probability interpretations. With Bayesian probability interpretation, the theorem expresses how a degree of belief, expressed as a probability, should rationally change to account for the availability of related evidence. Bayesian inference is fundamental to Bayesian statistics, being considered by one authority as; "to the theory of probability what Pythagoras's theorem is to geometry."
History
Bayes' theorem is named after the Reverend Thomas Bayes, also a statistician and philosopher. Bayes used conditional probability to provide an algorithm (his Proposition 9) that uses evidence to calculate limits on an unknown parameter. His work was published in 1763 as An Essay towards solving a Problem in the Doctrine of Chances. Bayes studied how to compute a distribution for the probability parameter of a binomial distribution (in modern terminology). On Bayes's death his family transferred his papers to a friend, the minister, philosopher, and mathematician Richard Price.
Over two years, Richard Price significantly edited the unpublished manuscript, before sending it to a friend who read it aloud at the Royal Society on 23 December 1763. Price edited Bayes's major work "An Essay towards solving a Problem in the Doctrine of Chances" (1763), which appeared in Philosophical Transactions, and contains Bayes' theorem. Price wrote an introduction to the paper which provides some of the philosophical basis of Bayesian statistics and chose one of the two solutions offered by Bayes. In 1765, Price was elected a Fellow of the Royal Society in recognition of his work on the legacy of Bayes. On 27 April a letter sent to his friend Benjamin Franklin was read out at the Royal Society, and later published, where Price applies this work to population and computing 'life-annuities'.
Independently of Bayes, Pierre-Simon Laplace in 1774, and later in his 1812 Théorie analytique des probabilités, used conditional probability to formulate the relation of an updated posterior probability from a prior probability, given evidence. He reproduced and extended Bayes's results in 1774, apparently unaware of Bayes's work. The Bayesian interpretation of probability was developed mainly by Laplace.
About 200 years later, Sir Harold Jeffreys pu |
https://en.wikipedia.org/wiki/Bayesian%20inference | Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available. Bayesian inference is an important technique in statistics, and especially in mathematical statistics. Bayesian updating is particularly important in the dynamic analysis of a sequence of data. Bayesian inference has found application in a wide range of activities, including science, engineering, philosophy, medicine, sport, and law. In the philosophy of decision theory, Bayesian inference is closely related to subjective probability, often called "Bayesian probability".
Introduction to Bayes' rule
Formal explanation
Bayesian inference derives the posterior probability as a consequence of two antecedents: a prior probability and a "likelihood function" derived from a statistical model for the observed data. Bayesian inference computes the posterior probability according to Bayes' theorem:
P(H | E) = P(E | H) · P(H) / P(E)
where
H stands for any hypothesis whose probability may be affected by data (called evidence below). Often there are competing hypotheses, and the task is to determine which is the most probable.
P(H), the prior probability, is the estimate of the probability of the hypothesis H before the data E, the current evidence, is observed.
E, the evidence, corresponds to new data that were not used in computing the prior probability.
P(H | E), the posterior probability, is the probability of H given E, i.e., after E is observed. This is what we want to know: the probability of a hypothesis given the observed evidence.
P(E | H) is the probability of observing E given H and is called the likelihood. As a function of E with H fixed, it indicates the compatibility of the evidence with the given hypothesis. The likelihood function is a function of the evidence, E, while the posterior probability is a function of the hypothesis, H.
P(E) is sometimes termed the marginal likelihood or "model evidence". This factor is the same for all possible hypotheses being considered (as is evident from the fact that the hypothesis H does not appear anywhere in the symbol, unlike for all the other factors) and hence does not factor into determining the relative probabilities of different hypotheses.
For different values of H, only the factors P(H) and P(E | H), both in the numerator, affect the value of P(H | E); the posterior probability of a hypothesis is proportional to its prior probability (its inherent likeliness) and the newly acquired likelihood (its compatibility with the new observed evidence).
Bayes' rule can also be written as follows:
P(H | E) = P(E | H) · P(H) / (P(E | H) · P(H) + P(E | ¬H) · P(¬H))
because
P(E) = P(E | H) · P(H) + P(E | ¬H) · P(¬H)
and
P(H) + P(¬H) = 1,
where ¬H is "not H", the logical negation of H.
One quick and easy way to remember the equation would be to use the rule of multiplication:
P(E ∩ H) = P(E | H) · P(H) = P(H | E) · P(E).
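A small numeric sketch of the expanded form above, assuming a made-up screening scenario (the 1% prior and the 95%/5% likelihoods are illustration values, not from the article):

```python
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|not H)P(not H))."""
    p_not_h = 1.0 - p_h                                  # P(H) + P(not H) = 1
    p_e = p_e_given_h * p_h + p_e_given_not_h * p_not_h  # marginal P(E)
    return p_e_given_h * p_h / p_e

# Hypothetical prior of 1%, likelihood 95% under H and 5% under not-H:
print(posterior(0.01, 0.95, 0.05))  # ~0.161: the evidence lifts 1% to ~16%
```

Note how the shared denominator P(E) only rescales the result; the relative ordering of hypotheses is fixed by the numerators, as stated above.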
Alternatives to Bayesian updating
Bayesian updating is widely used and computationally convenient. However, it is not the only updating rule that might be considered rational.
Ian Hacking noted that traditional "Dutch book" arguments did not specify Bayesian u |
https://en.wikipedia.org/wiki/Sophist | A sophist (Greek: σοφιστής, sophistes) was a teacher in ancient Greece in the fifth and fourth centuries BC. Sophists specialized in one or more subject areas, such as philosophy, rhetoric, music, athletics and mathematics. They taught arete, "virtue" or "excellence", predominantly to young statesmen and nobility.
Etymology
The Greek word σοφός (sophos, "wise") is related to the noun σοφία (sophia, "wisdom"). Since the times of Homer it commonly referred to an expert in his profession or craft. Charioteers, sculptors, or military experts could be referred to as sophoi in their occupations. The word has gradually come to connote general wisdom and especially wisdom in human affairs such as politics, ethics, and household management. This was the meaning ascribed to the Greek Seven Sages of 7th and 6th century BC (such as Solon and Thales), and it was the meaning that appears in the histories of Herodotus.
The word σοφός gives rise to the verb σοφίζω (sophizo), the passive voice of which means "to become or be wise", or "to be clever or skilled". From the verb is derived the noun σοφιστής (sophistes), which originally meant "a master of one's craft" and later "a prudent man" or "wise man". The word for "sophist" in various languages comes from sophistes.
The word "sophist" could be combined with other Greek words to form compounds. Examples include meteorosophist, which roughly translates to "expert in celestial phenomena"; gymnosophist (or "naked sophist", a word used to refer to Indian philosophers), deipnosophist or "dinner sophist" (as in the title of Athenaeus's Deipnosophistae), and iatrosophist, a type of physician in the later Roman period.
History
In the second half of the 5th century BC, particularly in Athens, "sophist" came to denote a class of mostly itinerant intellectuals who taught courses in various subjects, speculated about the nature of language and culture, and employed rhetoric to achieve their purposes, generally to persuade or convince others. "Sophists did, however, have one important thing in common: whatever else they did or did not claim to know, they characteristically had a great understanding of what words would entertain or impress or persuade an audience." Sophists went to Athens to teach because the city was flourishing at the time. It was good employment for those good at debate, which was a speciality of the first sophists, and they received the fame and fortune they were seeking. Protagoras is generally regarded as the first of these professional sophists. Others include Gorgias, Prodicus, Hippias, Thrasymachus, Lycophron, Callicles, Antiphon, and Cratylus. A few sophists claimed that they could find the answers to all questions. Most of these sophists are known today primarily through the writings of their opponents (particularly Plato and Aristotle), which makes it difficult to assemble an unbiased view of their practices and teachings. In some cases, such as Gorgias, original rhetorical works are extant, allowing the author to be judged on his own terms, but in most cases, knowledge about what individual sophists wrote or said comes from fragmen |
https://en.wikipedia.org/wiki/Numeral | A numeral is a figure, symbol, or group of figures or symbols denoting a number. It may refer to:
Numeral system used in mathematics
Numeral (linguistics), a part of speech denoting numbers (e.g. one and first in English)
Numerical digit, the glyphs used to represent numerals
See also
Numerology, belief in a divine relationship between numbers and coinciding events |
https://en.wikipedia.org/wiki/Adrien-Marie%20Legendre | Adrien-Marie Legendre (; ; 18 September 1752 – 9 January 1833) was a French mathematician who made numerous contributions to mathematics. Well-known and important concepts such as the Legendre polynomials and Legendre transformation are named after him. He is also known for his contributions to the method of least squares, and was the first to officially publish on it, though Carl Friedrich Gauss had discovered it before him.
Life
Adrien-Marie Legendre was born in Paris on 18 September 1752 to a wealthy family. He received his education at the Collège Mazarin in Paris, and defended his thesis in physics and mathematics in 1770. He taught at the École Militaire in Paris from 1775 to 1780 and at the École Normale from 1795. At the same time, he was associated with the Bureau des Longitudes. In 1782, the Berlin Academy awarded Legendre a prize for his treatise on projectiles in resistant media. This treatise also brought him to the attention of Lagrange.
The Académie des sciences made Legendre an adjoint member in 1783 and an associate in 1785. In 1789, he was elected a Fellow of the Royal Society.
He assisted with the Anglo-French Survey (1784–1790) to calculate the precise distance between the Paris Observatory and the Royal Greenwich Observatory by means of trigonometry. To this end in 1787 he visited Dover and London together with Dominique, comte de Cassini and Pierre Méchain. The three also visited William Herschel, the discoverer of the planet Uranus.
Legendre lost his private fortune in 1793 during the French Revolution. That year, he also married Marguerite-Claudine Couhin, who helped him put his affairs in order. In 1795, Legendre became one of six members of the mathematics section of the reconstituted Académie des Sciences, renamed the Institut National des Sciences et des Arts. Later, in 1803, Napoleon reorganized the Institut National, and Legendre became a member of the Geometry section. From 1799 to 1812, Legendre served as mathematics examiner for graduating artillery students at the École Militaire and from 1799 to 1815 he served as permanent mathematics examiner for the École Polytechnique. In 1824, Legendre's pension from the École Militaire was stopped because he refused to vote for the government candidate at the Institut National. His pension was partially reinstated with the change in government in 1828. In 1831, he was made an officer of the Légion d'Honneur.
Legendre died in Paris on 9 January 1833, after a long and painful illness, and Legendre's widow carefully preserved his belongings to memorialize him. Upon her death in 1856, she was buried next to her husband in the village of Auteuil, where the couple had lived, and left their last country house to the village. Legendre's name is one of the 72 names inscribed on the Eiffel Tower.
Mathematical work
Abel's work on elliptic functions was built on Legendre's, and some of Gauss' work in statistics and number theory completed that of Legendre. He developed, and firs |
https://en.wikipedia.org/wiki/Alternating%20group | In mathematics, an alternating group is the group of even permutations of a finite set. The alternating group on a set of n elements is called the alternating group of degree n, or the alternating group on n letters and denoted by An or Alt(n).
Basic properties
For n > 1, the group An is the commutator subgroup of the symmetric group Sn with index 2 and therefore has n!/2 elements. It is the kernel of the signature group homomorphism explained under symmetric group.
The group An is abelian if and only if n ≤ 3 and simple if and only if n = 3 or n ≥ 5. A5 is the smallest non-abelian simple group, having order 60, and the smallest non-solvable group.
The group A4 has the Klein four-group V as a proper normal subgroup, namely the identity and the double transpositions {(), (12)(34), (13)(24), (14)(23)}, that is the kernel of the surjection of A4 onto A3 ≅ Z3. We have the exact sequence V → A4 → A3 = Z3. In Galois theory, this map, or rather the corresponding map S4 → S3, corresponds to associating the Lagrange resolvent cubic to a quartic, which allows the quartic polynomial to be solved by radicals, as established by Lodovico Ferrari.
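A brute-force check of the order count |An| = n!/2, in a short Python sketch (the helper names are ours), classifying permutations by inversion parity:

```python
import math
from itertools import permutations

def is_even(p):
    """A permutation is even iff its number of inversions is even."""
    inversions = sum(
        1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j]
    )
    return inversions % 2 == 0

for n in range(2, 7):
    a_n = [p for p in permutations(range(n)) if is_even(p)]
    print(n, len(a_n), math.factorial(n) // 2)  # the two counts agree
```

For n = 5 this prints 60, the order of the smallest non-abelian simple group mentioned above.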
Conjugacy classes
As in the symmetric group, any two elements of An that are conjugate by an element of An must have the same cycle shape. The converse is not necessarily true, however. If the cycle shape consists only of cycles of odd length with no two cycles the same length, where cycles of length one are included in the cycle type, then there are exactly two conjugacy classes for this cycle shape.
Examples:
The two permutations (123) and (132) are not conjugate in A3, although they have the same cycle shape, and are therefore conjugate in S3.
The permutation (123)(45678) is not conjugate to its inverse (132)(48765) in A8, although the two permutations have the same cycle shape, so they are conjugate in S8.
Relation with symmetric group
See Symmetric group.
As finite symmetric groups are the groups of all permutations of a set with finite elements, and the alternating groups are groups of even permutations, alternating groups are subgroups of finite symmetric groups.
Generators and relations
For n ≥ 3, An is generated by 3-cycles, since 3-cycles can be obtained by combining pairs of transpositions. This generating set is often used to prove that An is simple for n ≥ 5.
Automorphism group
For n > 3, except for n = 6, the automorphism group of An is the symmetric group Sn, with inner automorphism group An and outer automorphism group Z2; the outer automorphism comes from conjugation by an odd permutation.
For n = 1 and 2, the automorphism group is trivial. For n = 3 the automorphism group is Z2, with trivial inner automorphism group and outer automorphism group Z2.
The outer automorphism group of A6 is the Klein four-group V = Z2 × Z2, and is related to the outer automorphism of S6. The extra outer automorphism in A6 swaps the 3-cycles (like (123)) with elements of shape 3² (like (123)(456)).
Exceptional isomorphisms
There are some exceptional isomorphisms between some of the small alternating groups and small groups of Lie type, pa |
https://en.wikipedia.org/wiki/Parity%20of%20a%20permutation | In mathematics, when X is a finite set with at least two elements, the permutations of X (i.e. the bijective functions from X to X) fall into two classes of equal size: the even permutations and the odd permutations. If any total ordering of X is fixed, the parity (oddness or evenness) of a permutation σ of X can be defined as the parity of the number of inversions for σ, i.e., of pairs of elements x, y of X such that x < y and σ(x) > σ(y).
The sign, signature, or signum of a permutation σ is denoted sgn(σ) and defined as +1 if σ is even and −1 if σ is odd. The signature defines the alternating character of the symmetric group Sn. Another notation for the sign of a permutation is given by the more general Levi-Civita symbol (εσ), which is defined for all maps from X to X, and has value zero for non-bijective maps.
The sign of a permutation can be explicitly expressed as
sgn(σ) = (−1)^N(σ),
where N(σ) is the number of inversions in σ.
Alternatively, the sign of a permutation σ can be defined from its decomposition into the product of transpositions as
sgn(σ) = (−1)^m,
where m is the number of transpositions in the decomposition. Although such a decomposition is not unique, the parity of the number of transpositions in all decompositions is the same, implying that the sign of a permutation is well-defined.
Example
Consider the permutation σ of the set {1, 2, 3, 4, 5} defined by σ(1) = 3, σ(2) = 4, σ(3) = 5, σ(4) = 2, and σ(5) = 1. In one-line notation, this permutation is denoted 34521. It can be obtained from the identity permutation 12345 by three transpositions: first exchange the numbers 2 and 4, then exchange 3 and 5, and finally exchange 1 and 3. This shows that the given permutation σ is odd. Following the method of the cycle notation article, this could be written, composing from right to left, as
σ = (1 3)(3 5)(2 4).
There are many other ways of writing σ as a composition of transpositions, for instance
σ = (2 4)(1 3)(3 5) (the same three transpositions composed in a different order),
but it is impossible to write it as a product of an even number of transpositions.
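A short Python check of this example (helper names are ours): the sign is computed from the inversion count, and the permutation is rebuilt from the three exchanges described above:

```python
def sign_by_inversions(perm):
    """sgn(sigma) = (-1)^N(sigma), where N counts out-of-order pairs."""
    n = len(perm)
    inversions = sum(
        1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j]
    )
    return (-1) ** inversions

def apply_value_swaps(n, swaps):
    """Start from the identity and exchange the listed values in order."""
    perm = list(range(1, n + 1))
    for a, b in swaps:
        i, j = perm.index(a), perm.index(b)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

sigma = [3, 4, 5, 2, 1]                                # one-line notation 34521
print(sign_by_inversions(sigma))                       # -1: sigma is odd
print(apply_value_swaps(5, [(2, 4), (3, 5), (1, 3)]))  # [3, 4, 5, 2, 1]
```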
Properties
The identity permutation is an even permutation. An even permutation can be obtained as the composition of an even number and only an even number of exchanges (called transpositions) of two elements, while an odd permutation can be obtained by (only) an odd number of transpositions.
The following rules follow directly from the corresponding rules about addition of integers:
the composition of two even permutations is even
the composition of two odd permutations is even
the composition of an odd and an even permutation is odd
From these it follows that
the inverse of every even permutation is even
the inverse of every odd permutation is odd
Considering the symmetric group Sn of all permutations of the set {1, ..., n}, we can conclude that the map
sgn: Sn → {−1, 1}
that assigns to every permutation its signature is a group homomorphism.
Furthermore, we see that the even permutations form a subgroup of Sn. This is the alternating group on n letters, denoted by An. It is the kernel of the homomorphism sgn. The odd permutations cannot form a subgroup, since the composite of two |
https://en.wikipedia.org/wiki/Multivariate%20random%20variable | In probability and statistics, a multivariate random variable or random vector is a list or vector of mathematical variables each of whose value is unknown, either because the value has not yet occurred or because there is imperfect knowledge of its value. The individual variables in a random vector are grouped together because they are all part of a single mathematical system — often they represent different properties of an individual statistical unit. For example, while a given person has a specific age, height and weight, the representation of these features of an unspecified person from within a group would be a random vector. Normally each element of a random vector is a real number.
Random vectors are often used as the underlying implementation of various types of aggregate random variables, e.g. a random matrix, random tree, random sequence, stochastic process, etc.
More formally, a multivariate random variable is a column vector X = (X₁, ..., Xₙ)ᵀ (or its transpose, which is a row vector) whose components are scalar-valued random variables on the same probability space as each other, (Ω, F, P), where Ω is the sample space, F is the sigma-algebra (the collection of all events), and P is the probability measure (a function returning each event's probability).
Probability distribution
Every random vector gives rise to a probability measure on R^n with the Borel algebra as the underlying sigma-algebra. This measure is also known as the joint probability distribution, the joint distribution, or the multivariate distribution of the random vector.
The distributions of each of the component random variables Xᵢ are called marginal distributions. The conditional probability distribution of Xᵢ given Xⱼ is the probability distribution of Xᵢ when Xⱼ is known to be a particular value.
The cumulative distribution function F_X : R^n → [0, 1] of a random vector X = (X₁, ..., Xₙ)ᵀ is defined as
F_X(x) = P(X₁ ≤ x₁, ..., Xₙ ≤ xₙ),
where x = (x₁, ..., xₙ)ᵀ.
Operations on random vectors
Random vectors can be subjected to the same kinds of algebraic operations as can non-random vectors: addition, subtraction, multiplication by a scalar, and the taking of inner products.
Affine transformations
Similarly, a new random vector Y can be defined by applying an affine transformation to a random vector X:
Y = AX + b, where A is an n × n matrix and b is an n × 1 column vector.
If A is an invertible matrix and X has a probability density function f_X, then the probability density of Y is
f_Y(y) = f_X(A⁻¹(y − b)) / |det A|.
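A simulation sketch assuming NumPy: under Y = AX + b the mean and covariance transform as E[Y] = A E[X] + b and Cov[Y] = A Cov[X] Aᵀ, which the sample statistics below approximate.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 3))   # rows are draws of a standard normal vector
A = np.array([[2.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.5, 3.0]])
b = np.array([1.0, -2.0, 0.5])

Y = X @ A.T + b                     # affine transformation Y = AX + b, row-wise

print(Y.mean(axis=0))               # close to A @ 0 + b = b
print(np.cov(Y, rowvar=False))      # close to A @ I @ A.T = A A^T
```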
Invertible mappings
More generally we can study invertible mappings of random vectors.
Let g be a one-to-one mapping from an open subset D of R^n onto a subset R of R^n, let g have continuous partial derivatives in D, and let the Jacobian determinant of g be zero at no point of D. Assume that the real random vector X has a probability density function f_X(x) and satisfies P(X ∈ D) = 1. Then the random vector Y = g(X) is of probability density
f_Y(y) = f_X(g⁻¹(y)) |det ∂g⁻¹(y)/∂y| 1_{y ∈ R},
where 1 denotes the indicator function and the set R denotes the support of Y.
Expected value
The expected value or mean of a random vector is a fixed vector whose elements are the expected values of the respective random variables.
|
https://en.wikipedia.org/wiki/Domain%20of%20a%20function | In mathematics, the domain of a function is the set of inputs accepted by the function. It is sometimes denoted by dom(f) or dom f, where f is the function. In layman's terms, the domain of a function can generally be thought of as "what x can be".
More precisely, given a function f: X → Y, the domain of f is X. In modern mathematical language, the domain is part of the definition of a function rather than a property of it.
In the special case that X and Y are both sets of real numbers, the function f can be graphed in the Cartesian coordinate system. In this case, the domain is represented on the x-axis of the graph, as the projection of the graph of the function onto the x-axis.
For a function f: X → Y, the set Y is called the codomain, and the set of values attained by the function (which is a subset of Y) is called its range or image.
Any function can be restricted to a subset of its domain. The restriction of f to A, where A ⊆ X, is written as f|_A : A → Y.
Natural domain
If a real function f is given by a formula, it may be not defined for some values of the variable. In this case, it is a partial function, and the set of real numbers on which the formula can be evaluated to a real number is called the natural domain or domain of definition of f. In many contexts, a partial function is called simply a function, and its natural domain is called simply its domain.
Examples
The function f defined by f(x) = 1/x cannot be evaluated at 0. Therefore, the natural domain of f is the set of real numbers excluding 0, which can be denoted by R ∖ {0} or {x ∈ R : x ≠ 0}.
The piecewise function f defined by f(x) = 1/x for x ≠ 0 and f(0) = 0 has as its natural domain the set of real numbers.
The square root function f(x) = √x has as its natural domain the set of non-negative real numbers, which can be denoted by R≥0, the interval [0, ∞), or {x ∈ R : x ≥ 0}.
The tangent function, denoted tan, has as its natural domain the set of all real numbers which are not of the form π/2 + kπ for some integer k, which can be written as {x ∈ R : x ≠ π/2 + kπ, k ∈ Z}.
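As an illustration (our own, not from the article), a Python function whose natural domain is smaller than the reals, with inputs outside the domain of definition rejected:

```python
import math

def f(x: float) -> float:
    """f(x) = sqrt(x) / (x - 1); natural domain: x >= 0 and x != 1."""
    if x < 0 or x == 1:
        raise ValueError(f"{x} is outside the natural domain of f")
    return math.sqrt(x) / (x - 1)

print(f(4.0))  # 0.666..., since sqrt(4)/(4 - 1) = 2/3
# f(-1.0) and f(1.0) both raise ValueError: the formula cannot be evaluated there.
```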
Other uses
The term domain is also commonly used in a different sense in mathematical analysis: a domain is a non-empty connected open set in a topological space. In particular, in real and complex analysis, a domain is a non-empty connected open subset of the real coordinate space or the complex coordinate space
Sometimes such a domain is used as the domain of a function, although functions may be defined on more general sets. The two concepts are sometimes conflated as in, for example, the study of partial differential equations: in that case, a domain is the open connected subset of where a problem is posed, making it both an analysis-style domain and also the domain of the unknown function(s) sought.
Set theoretical notions
For example, it is sometimes convenient in set theory to permit the domain of a function to be a proper class X, in which case there is formally no such thing as a triple (X, Y, G). With such a definition, functions do not have a domain, although some authors still use it informally after introducing a function in the form f: X → Y.
See also
Argument of a function
Attribu |
https://en.wikipedia.org/wiki/Codomain | In mathematics, the codomain or set of destination of a function is the set into which all of the output of the function is constrained to fall. It is the set Y in the notation f: X → Y. The term range is sometimes ambiguously used to refer to either the codomain or image of a function.
A codomain is part of a function f if f is defined as a triple (X, Y, G) where X is called the domain of f, Y its codomain, and G its graph. The set of all elements of the form f(x), where x ranges over the elements of the domain X, is called the image of f. The image of a function is a subset of its codomain so it might not coincide with it. Namely, a function that is not surjective has elements y in its codomain for which the equation f(x) = y does not have a solution.
A codomain is not part of a function f if f is defined as just a graph. For example in set theory it is desirable to permit the domain of a function to be a proper class X, in which case there is formally no such thing as a triple (X, Y, G). With such a definition functions do not have a codomain, although some authors still use it informally after introducing a function in the form f: X → Y.
Examples
For a function
f: R → R
defined by
f: x ↦ x²,
or equivalently
f(x) = x²,
the codomain of f is R, but f does not map to any negative number.
Thus the image of f is the set R≥0; i.e., the interval [0, ∞).
An alternative function g is defined thus:
g: R → R≥0, g: x ↦ x².
While f and g map a given x to the same number, they are not, in this view, the same function because they have different codomains. A third function h can be defined to demonstrate why:
h: x ↦ √x.
The domain of h cannot be R but can be defined to be R≥0:
h: R≥0 → R, h: x ↦ √x.
The compositions are denoted
h ∘ f and h ∘ g.
On inspection, h ∘ f is not useful. It is true, unless defined otherwise, that the image of f is not known; it is only known that it is a subset of R. For this reason, it is possible that h, when composed with f, might receive an argument for which no output is defined – negative numbers are not elements of the domain of h, which is the square root function.
Function composition therefore is a useful notion only when the codomain of the function on the right side of a composition (not its image, which is a consequence of the function and could be unknown at the level of the composition) is a subset of the domain of the function on the left side.
The codomain affects whether a function is a surjection, in that the function is surjective if and only if its codomain equals its image. In the example, g is a surjection while f is not. The codomain does not affect whether a function is an injection.
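On finite sets the distinction can be checked mechanically; a small Python sketch (helper names are ours) that decides surjectivity by comparing the image against a chosen codomain:

```python
def image(f, domain):
    """The set of values actually attained by f on the domain."""
    return {f(x) for x in domain}

def is_surjective(f, domain, codomain):
    # Surjective iff the codomain equals the image.
    return image(f, domain) == set(codomain)

square = lambda x: x * x
domain = {-2, -1, 0, 1, 2}

print(image(square, domain))                        # {0, 1, 4}
print(is_surjective(square, domain, {0, 1, 4}))     # True: codomain == image
print(is_surjective(square, domain, {0, 1, 2, 4}))  # False: 2 is never attained
```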
A second example of the difference between codomain and image is demonstrated by the linear transformations between two vector spaces – in particular, all the linear transformations from R² to itself, which can be represented by the 2 × 2 matrices with real coefficients. Each matrix represents a map with the domain R² and codomain R². However, the image is uncertain. Some transformations may have image equal to the whole codomain (in this case the matrices with rank 2) but many do not, instea |
https://en.wikipedia.org/wiki/SMP | SMP may refer to:
Organisations
Scale Model Products, 1950s, acquired by Aluminum Model Toys
School Mathematics Project, UK developer of mathematics textbooks
Sekolah Menengah Pertama, "junior high school" in Indonesia
Shanghai Municipal Police, until 1943
Sipah-e-Muhammad Pakistan, Pakistani group banned as terrorist
Post-nominal letters of Roman Catholic order Sisters of Mary of the Presentation
Standard Motor Products (NYSE: SMP), US automotive product company
Suomen Maaseudun Puolue, the Finnish Rural Party, 1959–2003
Science and technology
Shape-memory polymer, smart materials
Signal Message Processor, for the Multifunctional Information Distribution System
Silyl modified polymers, used in adhesives and sealants
Simulation Model Portability, SMP2, European space mission simulator standard
Slow-moving proteinase, the enzyme Cathepsin E
Socialist millionaire problem in cryptography
Sorbitan monopalmitate, a food additive
SOTA Mapping Project, a website for radio amateurs
Stable marriage problem in mathematics
Stable massive particle in physics, e.g the MoEDAL experiment
Surface-mount package, for electronic components
Computing
Serial Management Protocol for Serial attached SCSI (SAS)
System Modification Program, IBM mainframe software
SMP/E (System Modification Program/Extended), IBM mainframe software
Supplementary Multilingual Plane, Unicode characters for historical scripts
SMP (computer algebra system)
Symmetric multiprocessing
Security Manager Protocol used in Bluetooth Low Energy
SimpleX Messaging Protocol, a privacy focused messaging protocol
Entertainment
SMP (band)
Survival Multiplayer, a common Minecraft server gamemode
Dream SMP, a Minecraft server colloquially referred to as "the SMP"
Other uses
Securities Markets Program of the European Central Bank
Statutory Maternity Pay in the UK
Sau Mau Ping station, Hong Kong
Scalp micropigmentation
SHOKUGAN MODELING PROJECT, a Japanese plastic model kit series released by Bandai
Single-member plurality voting
Sydney Motorsport Park, a motorsport facility located in Australia
SMP Racing, a Russian auto racing team |
https://en.wikipedia.org/wiki/Multivariate%20normal%20distribution | In probability theory and statistics, the multivariate normal distribution, multivariate Gaussian distribution, or joint normal distribution is a generalization of the one-dimensional (univariate) normal distribution to higher dimensions. One definition is that a random vector is said to be k-variate normally distributed if every linear combination of its k components has a univariate normal distribution. Its importance derives mainly from the multivariate central limit theorem. The multivariate normal distribution is often used to describe, at least approximately, any set of (possibly) correlated real-valued random variables each of which clusters around a mean value.
Definitions
Notation and parametrization
The multivariate normal distribution of a k-dimensional random vector X = (X₁, ..., X_k)ᵀ can be written in the following notation:
X ~ N(μ, Σ),
or to make it explicitly known that X is k-dimensional,
X ~ N_k(μ, Σ),
with k-dimensional mean vector
μ = E[X] = (E[X₁], E[X₂], ..., E[X_k])ᵀ,
and k × k covariance matrix
Σ_{i,j} = E[(X_i − μ_i)(X_j − μ_j)] = Cov[X_i, X_j],
such that 1 ≤ i ≤ k and 1 ≤ j ≤ k. The inverse of the covariance matrix is called the precision matrix, denoted by Q = Σ⁻¹.
Standard normal random vector
A real random vector X = (X₁, ..., X_k)ᵀ is called a standard normal random vector if all of its components X_i are independent and each is a zero-mean unit-variance normally distributed random variable, i.e. if X_i ~ N(0, 1) for all i = 1, ..., k.
Centered normal random vector
A real random vector X = (X₁, ..., X_k)ᵀ is called a centered normal random vector if there exists a deterministic k × ℓ matrix A such that AZ has the same distribution as X, where Z is a standard normal random vector with ℓ components.
Normal random vector
A real random vector X = (X₁, ..., X_k)ᵀ is called a normal random vector if there exists a random ℓ-vector Z, which is a standard normal random vector, a k-vector μ, and a k × ℓ matrix A, such that X = AZ + μ.
Formally:
X ~ N(μ, Σ) if and only if X = AZ + μ for some k × ℓ matrix A and standard normal random vector Z.
Here the covariance matrix is Σ = AAᵀ.
In the degenerate case where the covariance matrix is singular, the corresponding distribution has no density; see the section below for details. This case arises frequently in statistics; for example, in the distribution of the vector of residuals in the ordinary least squares regression. The X_i are in general not independent; they can be seen as the result of applying the matrix A to a collection of independent Gaussian variables Z.
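A sketch of this construction assuming NumPy: a Cholesky factor of Σ supplies a matrix A with AAᵀ = Σ, so X = AZ + μ has the required distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([1.0, -1.0])
sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])     # positive definite, so Cholesky applies

A = np.linalg.cholesky(sigma)      # A @ A.T == sigma
Z = rng.normal(size=(200_000, 2))  # rows are standard normal random vectors
X = Z @ A.T + mu                   # rows are N(mu, sigma) random vectors

print(X.mean(axis=0))              # close to mu
print(np.cov(X, rowvar=False))     # close to sigma
```

The plain Cholesky factorization fails for singular Σ, consistent with the degenerate case discussed above.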
Equivalent definitions
The following definitions are equivalent to the definition given above. A random vector has a multivariate normal distribution if it satisfies one of the following equivalent conditions.
Every linear combination of its components is normally distributed. That is, for any constant vector a ∈ R^k, the random variable Y = aᵀX has a univariate normal distribution, where a univariate normal distribution with zero variance is a point mass on its mean.
There is a k-vector μ and a symmetric, positive semidefinite k × k matrix Σ, such that the characteristic function of X is
φ_X(u) = exp(i uᵀμ − ½ uᵀΣu).
The spherical normal distribution can be characterised as the unique distribution where components are independent in any orthogonal coordinate system.
Density function
Non-degenerate case
The multivariate normal distribu |
https://en.wikipedia.org/wiki/Differential%20calculus | In mathematics, differential calculus is a subfield of calculus that studies the rates at which quantities change. It is one of the two traditional divisions of calculus, the other being integral calculus—the study of the area beneath a curve.
The primary objects of study in differential calculus are the derivative of a function, related notions such as the differential, and their applications. The derivative of a function at a chosen input value describes the rate of change of the function near that input value. The process of finding a derivative is called differentiation. Geometrically, the derivative at a point is the slope of the tangent line to the graph of the function at that point, provided that the derivative exists and is defined at that point. For a real-valued function of a single real variable, the derivative of a function at a point generally determines the best linear approximation to the function at that point.
Differential calculus and integral calculus are connected by the fundamental theorem of calculus, which states that differentiation is the reverse process to integration.
Differentiation has applications in nearly all quantitative disciplines. In physics, the derivative of the displacement of a moving body with respect to time is the velocity of the body, and the derivative of the velocity with respect to time is acceleration. The derivative of the momentum of a body with respect to time equals the force applied to the body; rearranging this derivative statement leads to the famous equation associated with Newton's second law of motion. The reaction rate of a chemical reaction is a derivative. In operations research, derivatives determine the most efficient ways to transport materials and design factories.
Derivatives are frequently used to find the maxima and minima of a function. Equations involving derivatives are called differential equations and are fundamental in describing natural phenomena. Derivatives and their generalizations appear in many fields of mathematics, such as complex analysis, functional analysis, differential geometry, measure theory, and abstract algebra.
Derivative
The derivative of f(x) at the point x = a is the slope of the tangent to (a, f(a)). In order to gain an intuition for this, one must first be familiar with finding the slope of a linear equation, written in the form y = mx + b. The slope of an equation is its steepness. It can be found by picking any two points and dividing the change in y by the change in x, meaning that slope = (change in y)/(change in x). For example, the graph of y = −2x has a slope of −2.
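As a concrete check of that recipe (our own illustration, assuming Python), picking two points and dividing the change in y by the change in x; the last lines preview why graphs of varying steepness defeat it:

```python
def slope_from_points(f, x1, x2):
    """(change in y) / (change in x) between two points on the graph of f."""
    return (f(x2) - f(x1)) / (x2 - x1)

linear = lambda x: -2 * x
print(slope_from_points(linear, 0.0, 1.0))   # -2.0
print(slope_from_points(linear, -3.0, 7.0))  # -2.0: the same slope everywhere

square = lambda x: x * x
print(slope_from_points(square, 0.0, 1.0))   # 1.0
print(slope_from_points(square, 1.0, 2.0))   # 3.0: steepness varies with position
```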
For brevity, (change in y)/(change in x) is often written as Δy/Δx, with Δ being the Greek letter delta, meaning 'change in'. The slope of a linear equation is constant, meaning that the steepness is the same everywhere. However, many graphs, such as that of y = x², vary in their steepness. This means that you can no longer pick any two arbitrary points and compute the slope. Instead, the slope of the graph can be computed by considering the tangent line—a line that 'jus |
https://en.wikipedia.org/wiki/Conformal%20map | In mathematics, a conformal map is a function that locally preserves angles, but not necessarily lengths.
More formally, let U and V be open subsets of R^n. A function f: U → V is called conformal (or angle-preserving) at a point u₀ ∈ U if it preserves angles between directed curves through u₀, as well as preserving orientation. Conformal maps preserve both angles and the shapes of infinitesimally small figures, but not necessarily their size or curvature.
The conformal property may be described in terms of the Jacobian derivative matrix of a coordinate transformation. The transformation is conformal whenever the Jacobian at each point is a positive scalar times a rotation matrix (orthogonal with determinant one). Some authors define conformality to include orientation-reversing mappings whose Jacobians can be written as any scalar times any orthogonal matrix.
For mappings in two dimensions, the (orientation-preserving) conformal mappings are precisely the locally invertible complex analytic functions. In three and higher dimensions, Liouville's theorem sharply limits the conformal mappings to a few types.
The notion of conformality generalizes in a natural way to maps between Riemannian or semi-Riemannian manifolds.
In two dimensions
If U is an open subset of the complex plane C, then a function f: U → C is conformal if and only if it is holomorphic and its derivative is everywhere non-zero on U. If f is antiholomorphic (conjugate to a holomorphic function), it preserves angles but reverses their orientation.
In the literature, there is another definition of conformal: a mapping f which is one-to-one and holomorphic on an open set in the plane. The open mapping theorem forces the inverse function (defined on the image of f) to be holomorphic. Thus, under this definition, a map is conformal if and only if it is biholomorphic. The two definitions for conformal maps are not equivalent. Being one-to-one and holomorphic implies having a non-zero derivative. However, the exponential function is a holomorphic function with a nonzero derivative, but is not one-to-one since it is periodic.
The Riemann mapping theorem, one of the profound results of complex analysis, states that any non-empty open simply connected proper subset of C admits a bijective conformal map to the open unit disk in C. Informally, this means that any blob can be transformed into a perfect circle by some conformal map.
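A numerical sketch (our own illustration) of angle preservation for the holomorphic map f(z) = z², conformal wherever its derivative 2z is non-zero:

```python
import cmath

def angle_between(u, v):
    """Angle from direction u to direction v, both given as complex numbers."""
    return cmath.phase(v / u)

f = lambda z: z * z
z0 = 1.0 + 1.0j
h = 1e-6
d1, d2 = 1.0, cmath.exp(1j * 0.7)  # two directions through z0, 0.7 rad apart

# A tiny step along each direction is mapped approximately to f'(z0) * direction.
w1 = (f(z0 + h * d1) - f(z0)) / h
w2 = (f(z0 + h * d2) - f(z0)) / h

print(angle_between(d1, d2))       # 0.7
print(angle_between(w1, w2))       # ~0.7: the angle between the curves survives
```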
Global conformal maps on the Riemann sphere
A map of the Riemann sphere onto itself is conformal if and only if it is a Möbius transformation.
The complex conjugate of a Möbius transformation preserves angles, but reverses the orientation. For example, circle inversions.
Conformality with respect to three types of angles
In plane geometry there are three types of angles that may be preserved in a conformal map. Each is hosted by its own real algebra, ordinary complex numbers, split-complex numbers, and dual numbers. The conformal maps are described by linear fractional transformations in eac |
https://en.wikipedia.org/wiki/Astronomy | Astronomy is a natural science that studies celestial objects and phenomena. It uses mathematics, physics, and chemistry in order to explain their origin and evolution. Objects of interest include planets, moons, stars, nebulae, galaxies, meteoroids, asteroids, and comets. Relevant phenomena include supernova explosions, gamma ray bursts, quasars, blazars, pulsars, and cosmic microwave background radiation. More generally, astronomy studies everything that originates beyond Earth's atmosphere. Cosmology is a branch of astronomy that studies the universe as a whole.
Astronomy is one of the oldest natural sciences. The early civilizations in recorded history made methodical observations of the night sky. These include the Egyptians, Babylonians, Greeks, Indians, Chinese, Maya, and many ancient indigenous peoples of the Americas. In the past, astronomy included disciplines as diverse as astrometry, celestial navigation, observational astronomy, and the making of calendars.
Professional astronomy is split into observational and theoretical branches. Observational astronomy is focused on acquiring data from observations of astronomical objects. This data is then analyzed using basic principles of physics. Theoretical astronomy is oriented toward the development of computer or analytical models to describe astronomical objects and phenomena. These two fields complement each other. Theoretical astronomy seeks to explain observational results and observations are used to confirm theoretical results.
Astronomy is one of the few sciences in which amateurs play an active role. This is especially true for the discovery and observation of transient events. Amateur astronomers have helped with many important discoveries, such as finding new comets.
Etymology
Astronomy (from the Greek ἀστρονομία from ἄστρον astron, "star" and -νομία -nomia from νόμος nomos, "law" or "culture") means "law of the stars" (or "culture of the stars" depending on the translation). Astronomy should not be confused with astrology, the belief system which claims that human affairs are correlated with the positions of celestial objects. Although the two fields share a common origin, they are now entirely distinct.
Use of terms "astronomy" and "astrophysics"
"Astronomy" and "astrophysics" are synonyms. Based on strict dictionary definitions, "astronomy" refers to "the study of objects and matter outside the Earth's atmosphere and of their physical and chemical properties," while "astrophysics" refers to the branch of astronomy dealing with "the behavior, physical properties, and dynamic processes of celestial objects and phenomena". In some cases, as in the introduction of the introductory textbook The Physical Universe by Frank Shu, "astronomy" may be used to describe the qualitative study of the subject, whereas "astrophysics" is used to describe the physics-oriented version of the subject. However, since most modern astronomical research deals with subjects related to physics, |
https://en.wikipedia.org/wiki/Convergence%20of%20random%20variables | In probability theory, there exist several different notions of convergence of random variables. The convergence of sequences of random variables to some limit random variable is an important concept in probability theory, and its applications to statistics and stochastic processes. The same concepts are known in more general mathematics as stochastic convergence and they formalize the idea that a sequence of essentially random or unpredictable events can sometimes be expected to settle down into a behavior that is essentially unchanging when items far enough into the sequence are studied. The different possible notions of convergence relate to how such a behavior can be characterized: two readily understood behaviors are that the sequence eventually takes a constant value, and that values in the sequence continue to change but can be described by an unchanging probability distribution.
Background
"Stochastic convergence" formalizes the idea that a sequence of essentially random or unpredictable events can sometimes be expected to settle into a pattern. The pattern may for instance be
Convergence in the classical sense to a fixed value, perhaps itself coming from a random event
An increasing similarity of outcomes to what a purely deterministic function would produce
An increasing preference towards a certain outcome
An increasing "aversion" against straying far away from a certain outcome
That the probability distribution describing the next outcome may grow increasingly similar to a certain distribution
Some less obvious, more theoretical patterns could be
That the series formed by calculating the expected value of the outcome's distance from a particular value may converge to 0
That the variance of the random variable describing the next event grows smaller and smaller.
These other types of patterns that may arise are reflected in the different types of stochastic convergence that have been studied.
While the above discussion has related to the convergence of a single series to a limiting value, the notion of the convergence of two series towards each other is also important, but this is easily handled by studying the sequence defined as either the difference or the ratio of the two series.
For example, if the average of n independent random variables Y_i, i = 1, ..., n, all having the same finite mean and variance, is given by
X̄_n = (1/n)(Y_1 + ⋯ + Y_n),
then as n tends to infinity, X̄_n converges in probability (see below) to the common mean, μ, of the random variables Y_i. This result is known as the weak law of large numbers. Other forms of convergence are important in other useful theorems, including the central limit theorem.
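A quick simulation of the weak law (assuming NumPy; the exponential distribution is our own illustrative choice), showing the sample average settling near μ = 2:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 2.0

for n in [10, 1_000, 100_000]:
    y = rng.exponential(scale=mu, size=n)  # i.i.d. Y_i with mean mu, finite variance
    print(n, y.mean())                     # the sample average approaches mu = 2
```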
Throughout the following, we assume that (Xn) is a sequence of random variables, and X is a random variable, and all of them are defined on the same probability space (Ω, F, P).
Convergence in distribution
With this mode of convergence, we increasingly expect to see the next outcome in a sequence of random experiments becoming better and bette |
https://en.wikipedia.org/wiki/Strong%20convergence | In mathematics, strong convergence may refer to:
The strong convergence of random variables of a probability distribution.
The norm-convergence of a sequence in a Hilbert space (as opposed to weak convergence).
The convergence of operators in the strong operator topology. |
https://en.wikipedia.org/wiki/Weak%20convergence | In mathematics, weak convergence may refer to:
Weak convergence of random variables of a probability distribution
Weak convergence of measures, of a sequence of probability measures
Weak convergence (Hilbert space) of a sequence in a Hilbert space
more generally, convergence in weak topology in a Banach space or a topological vector space |
https://en.wikipedia.org/wiki/Extreme%20value%20theory | Extreme value theory or extreme value analysis (EVA) is a branch of statistics dealing with the extreme deviations from the median of probability distributions. It seeks to assess, from a given ordered sample of a given random variable, the probability of events that are more extreme than any previously observed. Extreme value analysis is widely used in many disciplines, such as structural engineering, finance, economics, earth sciences, traffic prediction, and geological engineering. For example, EVA might be used in the field of hydrology to estimate the probability of an unusually large flooding event, such as the 100-year flood. Similarly, for the design of a breakwater, a coastal engineer would seek to estimate the 50-year wave and design the structure accordingly.
Data analysis
Two main approaches exist for practical extreme value analysis.
The first method relies on deriving block maxima (minima) series as a preliminary step. In many situations it is customary and convenient to extract the annual maxima (minima), generating an "Annual Maxima Series" (AMS).
The second method relies on extracting, from a continuous record, the peak values reached for any period during which values exceed a certain threshold (falls below a certain threshold). This method is generally referred to as the "Peak Over Threshold" method (POT).
For AMS data, the analysis may partly rely on the results of the Fisher–Tippett–Gnedenko theorem, leading to the generalized extreme value distribution being selected for fitting. However, in practice, various procedures are applied to select between a wider range of distributions. The theorem here relates to the limiting distributions for the minimum or the maximum of a very large collection of independent random variables from the same distribution. Given that the number of relevant random events within a year may be rather limited, it is unsurprising that analyses of observed AMS data often lead to distributions other than the generalized extreme value distribution (GEVD) being selected.
For POT data, the analysis may involve fitting two distributions: one for the number of events in a time period considered and a second for the size of the exceedances.
A common assumption for the first is the Poisson distribution, with the generalized Pareto distribution being used for the exceedances.
A tail-fitting can be based on the Pickands–Balkema–de Haan theorem.
Novak reserves the term "POT method" to the case where the threshold is non-random, and distinguishes it from the case where one deals with exceedances of a random threshold.
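A sketch of the block-maxima workflow described above, assuming NumPy and SciPy's genextreme (the simulated daily record is our own illustration):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)

# 50 "years" of 365 daily observations; the annual maxima form the AMS.
daily = rng.gumbel(loc=10.0, scale=2.0, size=(50, 365))
annual_maxima = daily.max(axis=1)

# Fit the generalized extreme value distribution to the block maxima.
shape, loc, scale = genextreme.fit(annual_maxima)

# Estimated 100-year return level: the level exceeded once per 100 blocks on average.
print(genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale))
```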
Applications
Applications of extreme value theory include predicting the probability distribution of:
Extreme floods; the size of freak waves
Tornado outbreaks
Maximum sizes of ecological populations
Side effects of drugs (e.g., ximelagatran)
The magnitudes of large insurance losses
Equity risks; day-to-day market risk
Mutational events during evolution
Large wildfires
Environmenta |
https://en.wikipedia.org/wiki/Haar%20wavelet | In mathematics, the Haar wavelet is a sequence of rescaled "square-shaped" functions which together form a wavelet family or basis. Wavelet analysis is similar to Fourier analysis in that it allows a target function over an interval to be represented in terms of an orthonormal basis. The Haar sequence is now recognised as the first known wavelet basis and is extensively used as a teaching example.
The Haar sequence was proposed in 1909 by Alfréd Haar.
Haar used these functions to give an example of an orthonormal system for the space of square-integrable functions on the unit interval [0, 1]. The study of wavelets, and even the term "wavelet", did not come until much later. As a special case of the Daubechies wavelet, the Haar wavelet is also known as Db1.
The Haar wavelet is also the simplest possible wavelet. The technical disadvantage of the Haar wavelet is that it is not continuous, and therefore not differentiable. This property can, however, be an advantage for the analysis of signals with sudden transitions (discrete signals), such as monitoring of tool failure in machines.
The Haar wavelet's mother wavelet function ψ(t) can be described as
ψ(t) = 1 for 0 ≤ t < 1/2, ψ(t) = −1 for 1/2 ≤ t < 1, and ψ(t) = 0 otherwise.
Its scaling function φ(t) can be described as
φ(t) = 1 for 0 ≤ t < 1, and φ(t) = 0 otherwise.
Haar functions and Haar system
For every pair n, k of integers in Z, the Haar function ψ_{n,k} is defined on the real line by the formula
ψ_{n,k}(t) = 2^{n/2} ψ(2^n t − k), t ∈ R.
This function is supported on the right-open interval I_{n,k} = [k 2^{−n}, (k + 1) 2^{−n}), i.e., it vanishes outside that interval. It has integral 0 and norm 1 in the Hilbert space L²(R),
∫ ψ_{n,k}(t) dt = 0, ∫ ψ_{n,k}(t)² dt = 1.
The Haar functions are pairwise orthogonal,
∫ ψ_{n₁,k₁}(t) ψ_{n₂,k₂}(t) dt = δ_{n₁n₂} δ_{k₁k₂},
where δ_{ij} represents the Kronecker delta. Here is the reason for orthogonality: when the two supporting intervals I_{n₁,k₁} and I_{n₂,k₂} are not equal, then they are either disjoint, or else the smaller of the two supports, say I_{n₁,k₁}, is contained in the lower or in the upper half of the other interval, on which the other function remains constant. It follows in this case that the product of these two Haar functions is a multiple of the first Haar function, hence the product has integral 0.
The Haar system on the real line is the set of functions
{ψ_{n,k}(t) : n, k ∈ Z}.
It is complete in L²(R): the Haar system on the line is an orthonormal basis in L²(R).
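A minimal sketch (our own illustration) of one level of the discrete Haar transform built from these two functions: scaled pairwise sums follow the scaling function, scaled pairwise differences the mother wavelet, and the factor 1/√2 preserves the norm.

```python
import math

def haar_step(signal):
    """One level of the discrete Haar transform (even-length input)."""
    s = 1 / math.sqrt(2)
    averages = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    details = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return averages, details

avg, det = haar_step([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
print(avg)  # smooth part of the signal
print(det)  # detail part: zero exactly where the signal is locally constant
```

The sudden-transition behaviour noted above shows up here: a jump between neighbouring samples produces a large detail coefficient.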
Haar wavelet properties
The Haar wavelet has several notable properties.
Haar system on the unit interval and related systems
In this section, the discussion is restricted to the unit interval [0, 1] and to the Haar functions that are supported on [0, 1]. The system of functions considered by Haar in 1910,
called the Haar system on [0, 1] in this article, consists of the subset of Haar wavelets defined as
{t ↦ ψ_{n,k}(t) : n ∈ N ∪ {0}, 0 ≤ k < 2^n},
with the addition of the constant function 1 on [0, 1].
In Hilbert space terms, this Haar system on [0, 1] is a complete orthonormal system, i.e., an orthonormal basis, for the space L2([0, 1]) of square integrable functions on the unit interval.
The Haar system on [0, 1] —with the constant function 1 as first element, followed with the Haar functions ordered according to the lexicographic ordering of couples (n, k) — is further |