https://en.wikipedia.org/wiki/Leibniz%20operator
In abstract algebraic logic, a branch of mathematical logic, the Leibniz operator is a tool used to classify deductive systems, which have a precise technical definition and capture a large number of logics. The Leibniz operator was introduced by Wim Blok and Don Pigozzi, two of the founders of the field, as a means to abstract the well-known Lindenbaum–Tarski process, which leads to the association of Boolean algebras with classical propositional calculus, and to make it applicable to as wide a variety of sentential logics as possible. It is an operator that assigns to a given theory of a given sentential logic, perceived as a term algebra with a consequence operation on its universe, the largest congruence on the algebra that is compatible with the theory.
Formulation
In this article, we introduce the Leibniz operator in the special case of classical propositional calculus, then we abstract it to the general notion applied to an arbitrary sentential logic and, finally, we summarize some of the most important consequences of its use in the theory of abstract algebraic logic.
Let S denote the classical propositional calculus. According to the classical Lindenbaum–Tarski process, given a theory T of S, let ≡_T denote the binary relation on the set of formulas of S defined by
φ ≡_T ψ if and only if φ ↔ ψ ∈ T,
where ↔ denotes the usual classical propositional equivalence connective. Then ≡_T turns out to be a congruence on the formula algebra. Furthermore, the quotient of the formula algebra by ≡_T is a Boolean algebra, and every Boolean algebra may be formed in this way.
Thus, the variety of Boolean algebras, which is, in algebraic logic terminology, the equivalent algebraic semantics (algebraic counterpart) of classical propositional calculus, is the class of all algebras formed by taking appropriate quotients of term algebras by those special kinds of congruences.
Notice that the condition φ ↔ ψ ∈ T that defines ≡_T is equivalent to the condition:
for every formula χ(x): χ(φ) ∈ T if and only if χ(ψ) ∈ T.
Passing now to an arbitrary sentential logic S, given a theory T of S, the Leibniz congruence associated with T is denoted by Ω(T) and is defined, for all formulas φ, ψ, by
φ Ω(T) ψ if and only if, for every formula χ(x, y_0, …, y_{n−1}) containing a variable x and possibly other variables in the list y_0, …, y_{n−1}, and all formulas δ_0, …, δ_{n−1} forming a list of the same length as that of y_0, …, y_{n−1}, we have that
χ(φ, δ_0, …, δ_{n−1}) ∈ T if and only if χ(ψ, δ_0, …, δ_{n−1}) ∈ T.
It turns out that this binary relation is a congruence relation on the formula algebra and, in fact, may alternatively be characterized as the largest congruence on the formula algebra that is compatible with the theory T, in the sense that if φ Ω(T) ψ and φ ∈ T, then we must also have ψ ∈ T. It is this congruence that plays, in the context of an arbitrary sentential logic, the same role as the congruence used in the traditional Lindenbaum–Tarski process described above.
It is not, however, the case that for arbitrary sentential logics the quotients of the term algebras by these Leibniz congruences over different theories yield all algebras in the class that forms the natural algebraic cou
https://en.wikipedia.org/wiki/Banach%20function%20algebra
In functional analysis, a Banach function algebra on a compact Hausdorff space X is a unital subalgebra, A, of the commutative C*-algebra C(X) of all continuous, complex-valued functions on X, together with a norm on A that makes it a Banach algebra.
A function algebra A is said to vanish at a point p if f(p) = 0 for all f ∈ A. A function algebra separates points if for each distinct pair of points x, y ∈ X, there is a function f ∈ A such that f(x) ≠ f(y).
For every x ∈ X define ε_x(f) = f(x) for f ∈ A. Then ε_x is a homomorphism (character) on A, which is non-zero if A does not vanish at x.
Theorem: A Banach function algebra is semisimple (that is, its Jacobson radical is equal to zero), and each commutative unital semisimple Banach algebra is isomorphic (via the Gelfand transform) to a Banach function algebra on its character space (the space of algebra homomorphisms from A into the complex numbers, given the relative weak* topology).
If the norm on A is the uniform norm (or sup-norm) on X, then A is called a uniform algebra. Uniform algebras are an important special case of Banach function algebras.
https://en.wikipedia.org/wiki/UPML
UPML may refer to:
Ukrainian Physics and Mathematics Lyceum, a high school in Kyiv, Ukraine.
Uniaxial Perfectly Matched Layer, numerical truncation methodology.
https://en.wikipedia.org/wiki/Levene%27s%20test
In statistics, Levene's test is an inferential statistic used to assess the equality of variances for a variable calculated for two or more groups. Some common statistical procedures assume that variances of the populations from which different samples are drawn are equal. Levene's test assesses this assumption. It tests the null hypothesis that the population variances are equal (called homogeneity of variance or homoscedasticity). If the resulting p-value of Levene's test is less than some significance level (typically 0.05), the obtained differences in sample variances are unlikely to have occurred based on random sampling from a population with equal variances. Thus, the null hypothesis of equal variances is rejected and it is concluded that there is a difference between the variances in the population.
Some of the procedures typically assuming homoscedasticity, for which one can use Levene's tests, include analysis of variance and t-tests.
Levene's test is sometimes used before a comparison of means, informing the decision on whether to use a pooled t-test or the Welch's t-test. However, it was shown that such a two-step procedure may markedly inflate the type 1 error obtained with the t-tests and thus should not be done in the first place. Instead, the choice of pooled or Welch's test should be made a priori based on the study design.
Levene's test may also be used as a main test for answering a stand-alone question of whether two sub-samples in a given population have equal or different variances.
Levene's test was developed by and named after American statistician and geneticist Howard Levene.
Definition
Levene's test is equivalent to a 1-way between-groups analysis of variance (ANOVA) with the dependent variable being the absolute value of the difference between a score and the mean of the group to which the score belongs (shown below as Z_ij). The test statistic, W, is equivalent to the F statistic that would be produced by such an ANOVA, and is defined as follows:
W = ((N − k) / (k − 1)) · (Σ_{i=1..k} N_i (Z̄_i· − Z̄··)²) / (Σ_{i=1..k} Σ_{j=1..N_i} (Z_ij − Z̄_i·)²),
where
k is the number of different groups to which the sampled cases belong,
N_i is the number of cases in the i-th group,
N is the total number of cases in all groups,
Y_ij is the value of the measured variable for the j-th case from the i-th group,
Z_ij = |Y_ij − Ȳ_i·|, where Ȳ_i· is the mean of the i-th group, or Z_ij = |Y_ij − Ỹ_i·|, where Ỹ_i· is the median of the i-th group. (Both definitions are in use, though the second one is, strictly speaking, the Brown–Forsythe test – see below for comparison.)
Z̄_i· is the mean of the Z_ij for group i,
Z̄·· is the mean of all Z_ij.
The test statistic W is approximately F-distributed with k − 1 and N − k degrees of freedom, and hence the significance of the outcome is obtained by testing W against F(α; k − 1, N − k), a quantile of the F-distribution with k − 1 and N − k degrees of freedom, where α is the chosen level of significance (usually 0.05 or 0.01).
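The definitions above can be turned into a direct computation (a minimal sketch; `levene_w` is an illustrative helper name, not a standard API):

```python
import numpy as np

def levene_w(*groups, center=np.mean):
    """Levene's test statistic W; center=np.median gives the
    Brown-Forsythe variant."""
    k = len(groups)                           # number of groups
    N = sum(len(g) for g in groups)           # total number of cases
    # Z_ij = |Y_ij - center of group i|
    Z = [np.abs(np.asarray(g, dtype=float) - center(g)) for g in groups]
    Zbar_i = [z.mean() for z in Z]            # per-group means of Z
    Zbar = np.concatenate(Z).mean()           # grand mean of Z
    num = sum(len(z) * (zb - Zbar) ** 2 for z, zb in zip(Z, Zbar_i))
    den = sum(((z - zb) ** 2).sum() for z, zb in zip(Z, Zbar_i))
    return (N - k) / (k - 1) * num / den

a = [4.9, 5.1, 5.0, 4.8, 5.2]
b = [4.0, 6.1, 5.3, 3.8, 6.8]   # visibly more spread out
w = levene_w(a, b)
```

In practice one would use a tested implementation such as `scipy.stats.levene`, which also supports the median-centered (Brown–Forsythe) variant via its `center` argument.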
Comparison with the Brown–Forsythe test
The Brown–Forsythe test uses the median instead of the mean in computing the spread within each group (Z_ij = |Y_ij − Ỹ_i·| vs. Z_ij = |Y_ij − Ȳ_i·|, above). Although the optimal choice depends on the underlying distribution, the definition based on the median is recomm
https://en.wikipedia.org/wiki/Cartan%20subgroup
In the theory of algebraic groups, a Cartan subgroup of a connected linear algebraic group G over a (not necessarily algebraically closed) field k is the centralizer of a maximal torus. Cartan subgroups are smooth (equivalently, reduced), connected and nilpotent. If k is algebraically closed, they are all conjugate to each other.
Notice that in the context of algebraic groups a torus is an algebraic group T such that the base extension to k̄ (where k̄ is the algebraic closure of k) is isomorphic to the product of a finite number of copies of the multiplicative group G_m. Maximal such subgroups have in the theory of algebraic groups a role that is similar to that of maximal tori in the theory of Lie groups.
If G is reductive (in particular, if it is semi-simple), then a torus is maximal if and only if it is its own centraliser, and thus the Cartan subgroups of G are precisely the maximal tori.
Example
The general linear groups GL_n are reductive. The diagonal subgroup is clearly a torus (indeed a split torus, since it is a product of n copies of the multiplicative group already before any base extension), and it can be shown to be maximal. Since GL_n is reductive, the diagonal subgroup is a Cartan subgroup.
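As a toy check of the example (plain Python; `commutes` is a hypothetical helper), one can verify that a 2×2 matrix commuting with a diagonal matrix with distinct diagonal entries must itself be diagonal, which is the reason the diagonal torus is its own centralizer:

```python
def matmul(A, B):
    # 2x2 matrix product over the integers
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

D = [[1, 0], [0, 2]]   # a diagonal matrix with distinct entries

def commutes(X):
    DX, XD = matmul(D, X), matmul(X, D)
    return all(DX[i][j] == XD[i][j] for i in range(2) for j in range(2))

# diagonal matrices commute with D ...
assert commutes([[3, 0], [0, 7]])
# ... but any nonzero off-diagonal entry breaks commutativity,
# since (DX - XD) has off-diagonal entries (d1 - d2)*x12 and (d2 - d1)*x21
assert not commutes([[3, 1], [0, 7]])
```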
See also
Borel subgroup
Algebraic group
Algebraic torus
https://en.wikipedia.org/wiki/Frequency%20%28statistics%29
In statistics, the frequency or absolute frequency of an event is the number of times the observation occurred or was recorded in an experiment or study. These frequencies are often depicted graphically or in tabular form.
Types
The cumulative frequency is the total of the absolute frequencies of all events at or below a certain point in an ordered list of events.
The relative frequency (or empirical probability) of an event i is the absolute frequency n_i normalized by the total number of events N:
f_i = n_i / N.
The values of f_i for all events i can be plotted to produce a frequency distribution.
In the case when n_i = 0 for certain i, pseudocounts can be added.
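As a minimal sketch of these definitions (variable names are illustrative):

```python
from collections import Counter

data = ['a', 'b', 'a', 'c', 'a', 'b']
abs_freq = Counter(data)                 # absolute frequencies n_i
N = sum(abs_freq.values())               # total number of events
rel_freq = {k: n / N for k, n in abs_freq.items()}   # f_i = n_i / N

# cumulative frequency over an ordered list of events
cum = 0
cum_freq = {}
for k in sorted(abs_freq):
    cum += abs_freq[k]
    cum_freq[k] = cum
```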
Depicting frequency distributions
A frequency distribution shows a summarized grouping of data divided into mutually exclusive classes and the number of occurrences in each class. It is a way of organizing unorganized data, notably to show results of an election, income of people in a certain region, sales of a product within a certain period, student loan amounts of graduates, etc. Some of the graphs that can be used with frequency distributions are histograms, line charts, bar charts and pie charts. Frequency distributions are used for both qualitative and quantitative data.
Construction
Decide the number of classes. Too many classes or too few might not reveal the basic shape of the data set, and it will be difficult to interpret such a frequency distribution. The ideal number of classes k may be determined or estimated by the formula k = 1 + 3.322 log₁₀(n), or by the square-root choice k = √n, where n is the total number of observations in the data. (The latter will be much too large for large data sets such as population statistics.) However, these formulas are not a hard rule, and the resulting number of classes determined by formula may not always be exactly suitable for the data being dealt with.
Calculate the range of the data (Range = Max – Min) by finding the minimum and maximum data values. Range will be used to determine the class interval or class width.
Decide the width of the classes, denoted by h and obtained by h = range / number of classes (assuming the class intervals are the same for all classes).
Generally the class interval or class width is the same for all classes. The classes all taken together must cover at least the distance from the lowest value (minimum) in the data to the highest (maximum) value. Equal class intervals are preferred in frequency distribution, while unequal class intervals (for example logarithmic intervals) may be necessary in certain situations to produce a good spread of observations between the classes and avoid a large number of empty, or almost empty classes.
Decide the individual class limits and select a suitable starting point of the first class which is arbitrary; it may be less than or equal to the minimum value. Usually it is started before the minimum value in such a way that the midpoint (the average of lower and upper class limits of the first class) is properly placed.
Take an obser
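The first steps above (number of classes, range, width, limits) can be sketched as follows (a hypothetical `make_classes` helper, assuming the 1 + 3.322·log₁₀(n) rule and equal class widths starting at the minimum):

```python
import math

def make_classes(data):
    n = len(data)
    # assumed Sturges-type rule for the number of classes
    k = max(1, round(1 + 3.322 * math.log10(n)))
    lo, hi = min(data), max(data)        # range = hi - lo
    h = (hi - lo) / k                    # class width
    # class limits: [lo, lo+h), [lo+h, lo+2h), ...
    return [(lo + i * h, lo + (i + 1) * h) for i in range(k)]
```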
https://en.wikipedia.org/wiki/Presheaf%20%28category%20theory%29
In category theory, a branch of mathematics, a presheaf on a category C is a functor F: C^op → Set. If C is the poset of open sets in a topological space, interpreted as a category, then one recovers the usual notion of presheaf on a topological space.
A morphism of presheaves is defined to be a natural transformation of functors. This makes the collection of all presheaves on C into a category, and is an example of a functor category. It is often written as Ĉ = Set^(C^op). A functor into Ĉ is sometimes called a profunctor.
A presheaf that is naturally isomorphic to the contravariant hom-functor Hom(–, A) for some object A of C is called a representable presheaf.
Some authors refer to a functor F: C^op → V as a V-valued presheaf.
Examples
A simplicial set is a Set-valued presheaf on the simplex category Δ.
Properties
When C is a small category, the functor category Ĉ = Set^(C^op) is cartesian closed.
The poset of subobjects of P forms a Heyting algebra, whenever P is an object of Set^(C^op) for small C.
For any morphism f: X → Y of Set^(C^op), the pullback functor of subobjects f*: Sub(Y) → Sub(X) has a right adjoint, denoted ∀_f, and a left adjoint, ∃_f. These are the universal and existential quantifiers.
A locally small category C embeds fully and faithfully into the category Ĉ of set-valued presheaves via the Yoneda embedding, which to every object A of C associates the hom functor C(–, A).
The category Ĉ admits small limits and small colimits. See limit and colimit of presheaves for further discussion.
The density theorem states that every presheaf is a colimit of representable presheaves; in fact, Ĉ is the colimit completion of C (see § Universal property below).
Universal property
The construction C ↦ Ĉ is called the colimit completion of C because of the following universal property: for every category D that admits small colimits and every functor η: C → D, there is a colimit-preserving functor η̃: Ĉ → D, unique up to isomorphism, such that η = η̃ ∘ y, where y: C → Ĉ is the Yoneda embedding.
Proof: Given a presheaf F, by the density theorem we can write F = colim y(U_i), where the U_i are objects in C. Then let η̃(F) = colim η(U_i), which exists by assumption. Since the assignment F ↦ η̃(F) is functorial, this determines the functor η̃: Ĉ → D. Succinctly, η̃ is the left Kan extension of η along y; hence, the name "Yoneda extension". To see that η̃ commutes with small colimits, we show that η̃ is a left-adjoint (to some functor). Define Hom(η, –): D → Ĉ to be the functor given by: for each object M in D and each object U in C,
Hom(η, M)(U) = Hom_D(η(U), M).
Then, for each object M in D, since Hom(η̃(F), M) = lim Hom(η(U_i), M) = lim Hom(η, M)(U_i) = Hom(F, Hom(η, M)) by the Yoneda lemma, we have:
Hom(η̃(F), M) ≅ Hom(F, Hom(η, M)),
which is to say η̃ is a left-adjoint to Hom(η, –).
The proposition yields several corollaries. For example, the proposition implies that the construction is functorial: i.e., each functor η: C → D determines the functor η̃: Ĉ → D.
Variants
A presheaf of spaces on an ∞-category C is a contravariant functor from C to the ∞-category of spaces (for example, the nerve of the category of CW-complexes). It is an ∞-category version of a presheaf of sets, as a "set" is replaced by a "space". The notion is used, among other things, in the ∞-category formulation of Yoneda's lemma, which says that the Yoneda embedding of C into its ∞-category of presheaves of spaces is fully faithful (here C can be just a simplicial set).
See also
Topos
Category of elements
Simplicial presheaf (this notion is obtained by replacing "set" with "simplicial set")
Presheaf with transfers
https://en.wikipedia.org/wiki/Central%20product
In mathematics, especially in the field of group theory, the central product is one way of producing a group from two smaller groups. The central product is similar to the direct product, but in the central product two isomorphic central subgroups of the smaller groups are merged into a single central subgroup of the product. Central products are an important construction and can be used for instance to classify extraspecial groups.
Definition
There are several related but distinct notions of central product. Similarly to the direct product, there are both internal and external characterizations, and additionally there are variations on how strictly the intersection of the factors is controlled.
A group G is an internal central product of two subgroups H, K if
G is generated by H and K.
Every element of H commutes with every element of K.
Sometimes the stricter requirement that H ∩ K is exactly equal to the center of G is imposed. The subgroups H and K are then called central factors of G.
The external central product is constructed from two groups H and K, two central subgroups H₁ ≤ Z(H) and K₁ ≤ Z(K), and a group isomorphism θ: H₁ → K₁. The external central product is the quotient of the direct product H × K by the normal subgroup
Z = { (h, θ(h)⁻¹) : h ∈ H₁ }.
Sometimes the stricter requirement that H₁ = Z(H) and K₁ = Z(K) is imposed.
An internal central product is isomorphic to an external central product with H₁ = K₁ = H ∩ K and θ the identity. An external central product is an internal central product of the images of H × 1 and 1 × K in the quotient group (H × K)/Z.
Note that the external central product is not in general determined by its factors H and K alone. The isomorphism type of the central product will depend on the isomorphism θ. It is, however, well defined in some notable situations, for example when H and K are both finite extraspecial groups with H₁ = Z(H) and K₁ = Z(K).
Examples
The Pauli group is the central product of the cyclic group C₄ and the dihedral group D₄.
Every extraspecial group is a central product of extraspecial groups of order p³.
The layer of a finite group, that is, the subgroup generated by all subnormal quasisimple subgroups, is a central product of quasisimple groups in the sense of Gorenstein.
Applications
The representation theory of central products is very similar to the representation theory of direct products, and so is well understood.
Central products occur in many structural lemmas, such as one used in George Glauberman's result that finite groups admitting a Klein four-group of fixed-point-free automorphisms are solvable.
In certain contexts involving a tensor product of Lie modules (and other related structures), the automorphism group contains a central product of the automorphism groups of each factor.
https://en.wikipedia.org/wiki/Elementary%20abelian%20group
In mathematics, specifically in group theory, an elementary abelian group is an abelian group in which all elements other than the identity have the same order. This common order must be a prime number, and the elementary abelian groups in which the common order is p are a particular kind of p-group. A group for which p = 2 (that is, an elementary abelian 2-group) is sometimes called a Boolean group.
Every elementary abelian p-group is a vector space over the prime field with p elements, and conversely every such vector space is an elementary abelian group.
By the classification of finitely generated abelian groups, or by the fact that every vector space has a basis, every finite elementary abelian group must be of the form (Z/pZ)n for n a non-negative integer (sometimes called the group's rank). Here, Z/pZ denotes the cyclic group of order p (or equivalently the integers mod p), and the superscript notation means the n-fold direct product of groups.
In general, a (possibly infinite) elementary abelian p-group is a direct sum of cyclic groups of order p. (Note that in the finite case the direct product and direct sum coincide, but this is not so in the infinite case.)
In the rest of this article, all groups are assumed finite.
Examples and properties
The elementary abelian group (Z/2Z)² has four elements: {(0,0), (0,1), (1,0), (1,1)}. Addition is performed componentwise, taking the result modulo 2. For instance, (1,0) + (1,1) = (0,1). This is in fact the Klein four-group.
In the group generated by the symmetric difference on a (not necessarily finite) set, every element has order 2. Any such group is necessarily abelian because, since every element is its own inverse, xy = (xy)−1 = y−1x−1 = yx. Such a group (also called a Boolean group), generalizes the Klein four-group example to an arbitrary number of components.
(Z/pZ)^n is generated by n elements, and n is the least possible number of generators. In particular, the set {e₁, ..., e_n}, where e_i has a 1 in the i-th component and 0 elsewhere, is a minimal generating set.
Every elementary abelian group has a fairly simple finite presentation.
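The componentwise arithmetic in the examples above can be sketched in a few lines (plain Python; names are illustrative):

```python
from itertools import product

p, n = 2, 2
elements = list(product(range(p), repeat=n))   # the group (Z/pZ)^n

def add(u, v):
    # componentwise addition modulo p
    return tuple((a + b) % p for a, b in zip(u, v))

identity = (0,) * n
# collect the order of every element: each non-identity element has order p
orders = set()
for g in elements:
    k, acc = 1, g
    while acc != identity:
        acc = add(acc, g)
        k += 1
    orders.add(k)
```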
Vector space structure
Suppose V ≅ (Z/pZ)^n is an elementary abelian group. Since Z/pZ ≅ F_p, the finite field of p elements, we have V = (Z/pZ)^n ≅ F_p^n, hence V can be considered as an n-dimensional vector space over the field F_p. Note that an elementary abelian group does not in general have a distinguished basis: a choice of isomorphism V ≅ (Z/pZ)^n corresponds to a choice of basis.
To the observant reader, it may appear that F_p^n has more structure than the group V, in particular that it has scalar multiplication in addition to (vector/group) addition. However, V as an abelian group has a unique Z-module structure, where the action of Z corresponds to repeated addition, and this Z-module structure is consistent with the F_p scalar multiplication. That is, c·g = g + g + ... + g (c times), where c in F_p (considered as an integer with 0 ≤ c < p) gives V a natural F_p-module structure.
Automorphism group
As a ve
https://en.wikipedia.org/wiki/David%20Webb%20%28Hong%20Kong%20activist%29
David Michael Webb (born 29 August 1965) is an activist investor, share market analyst and retired investment banker based in Hong Kong.
Early life
Webb graduated in Mathematics from Exeter College, Oxford in 1986. From 1981 to 1986 he was also an author of books and games for early home computers, particularly the ZX Spectrum. He authored the Pac Man type game Spookyman and went on to create the acclaimed 3D Vector graphics game Starion on the Spectrum.
After graduation he became an investment banker in London. He moved to Hong Kong in 1991. He was a director in the corporate finance department of Barclays de Zoete Wedd (Asia) Limited (later Barclays Capital Asia Limited), the Hong Kong subsidiary of investment bank Barclays, until 31 March 1994, when he moved to become an in-house adviser to Wheelock and Company Limited. He retired from Wheelock on 31 March 1998 at the age of 32 and in the same year founded Webb-site.com, a non-profit platform to advocate better corporate and economic governance in Hong Kong.
Webb was appointed a Deputy Chairman of the Hong Kong Securities and Futures Commission's Takeover and Mergers Panel on 1 April 2013, having commenced serving as member on 1 April 2001.
Activism
Webb has been referred to as the "'Long Hair' of the financial markets" (in an allusion to Leung Kwok-hung), but his activism is not purely restricted to the finance sector. He uses his eponymous webb-site.com as his official mouthpiece on all matters commercial and political.
In 2003, he launched "Project Poll", in which he purchased 10 shares in each of the 33 constituents of the Hang Seng Index, registering them in 5 names (himself, his wife and 3 BVI companies he owned) and used company law to demand poll voting (1 share, 1 vote) rather than a show of hands in all their shareholder meetings. This eventually led to a change of Hong Kong Listing Rules to require poll voting in all companies from 2009 onwards. Also in 2003, he launched Project VAMPIRE (Vote Against Mandate for Placings, Issues by Rights Excepted), to oppose resolutions that allow for massive issues of shares for cash without offering them to existing shareholders, negating pre-emption rights.
In December 2005, he advocated widening the electorate of the functional constituencies, arguing that professionals such as bankers and stockbrokers should get to elect their own representatives. He observed then that only accountants, lawyers, doctors and teachers were able to exercise that right; stockbrokers were represented by convicted fraudster Chim Pui-chung, whilst the banking seat had only been contested once in 20 years. During the 2014 Hong Kong protests, he said that the economic impact of the protests was minor compared to the large economic benefits of a more dynamic economy that would come from democracy, ending collusion between the Government and the tycoons who currently elect the Chief Executive.
Hong Kong Stock Exchange
Webb argues that there is inhere
https://en.wikipedia.org/wiki/Joachim%20Nitsche
Joachim A. Nitsche (September 2, 1926, Nossen – January 12, 1996) was a German mathematician and professor of mathematics in Freiburg, known for his important contributions to the mathematical and numerical analysis of partial differential equations. The duality argument for estimating the error of the finite element method and a scheme for the weak enforcement of Dirichlet boundary conditions for Poisson's equation bear his name.
Biography
Education
Nitsche graduated from school at Bischofswerda in 1946. Starting in summer 1947, he studied mathematics at the University of Göttingen, where he received his Diplom (under the supervision of Franz Rellich) after only six semesters. In 1951, he received his doctorate (Dr. rer. nat.) at the Technical University of Berlin-Charlottenburg (nowadays TU Berlin). After only two years, he received his Habilitation at the Free University of Berlin.
Marriage and children
In 1952, Nitsche married Gisela Lange, with whom he had three children.
Professional career
From 1955 to 1957, Nitsche held a teaching position at the Free University of Berlin, which he left for a position at IBM in Böblingen. He became professor at the Albert Ludwigs University of Freiburg in 1958 and received the chair for applied mathematics there in 1962. He remained in this position until he became emeritus in 1991.
Works
Contributions
Quasi-optimal error estimates for the finite element method
Point-wise error estimates for the finite element method
Publications
Praktische Mathematik, BI Hochschulskripten 812*, Bibliographisches Institut, Mannheim, Zurich, 1968.
https://en.wikipedia.org/wiki/Projective%20orthogonal%20group
In projective geometry and linear algebra, the projective orthogonal group PO is the induced action of the orthogonal group of a quadratic space V = (V,Q) on the associated projective space P(V). Explicitly, the projective orthogonal group is the quotient group
PO(V) = O(V)/ZO(V) = O(V)/{±I}
where O(V) is the orthogonal group of (V, Q) and ZO(V) = {±I} is the subgroup of all orthogonal scalar transformations of V – these consist of the identity and reflection through the origin. These scalars are quotiented out because they act trivially on the projective space and they form the kernel of the action, and the notation "Z" is because the scalar transformations are the center of the orthogonal group.
The projective special orthogonal group, PSO, is defined analogously, as the induced action of the special orthogonal group on the associated projective space. Explicitly:
PSO(V) = SO(V)/ZSO(V)
where SO(V) is the special orthogonal group over V and ZSO(V) is the subgroup of orthogonal scalar transformations with unit determinant. Here ZSO is the center of SO, and is trivial in odd dimension, while it equals {±1} in even dimension – this odd/even distinction occurs throughout the structure of the orthogonal groups. By analogy with GL/SL and GO/SO, the projective orthogonal group is also sometimes called the projective general orthogonal group and denoted PGO.
Like the orthogonal group, the projective orthogonal group can be defined over any field and with varied quadratic forms, though, as with the ordinary orthogonal group, the main emphasis is on the real positive definite projective orthogonal group; other fields are elaborated in generalizations, below. Except when mentioned otherwise, in the sequel PO and PSO will refer to the real positive definite groups.
Like the spin groups and pin groups, which are covers rather than quotients of the (special) orthogonal groups, the projective (special) orthogonal groups are of interest for (projective) geometric analogs of Euclidean geometry, as related Lie groups, and in representation theory.
More intrinsically, the (real positive definite) projective orthogonal group PO can be defined as the isometries of elliptic space (in the sense of elliptic geometry), while PSO can be defined as the orientation-preserving isometries of elliptic space (when the space is orientable; otherwise PSO = PO).
Structure
Odd and even dimensions
The structure of PO differs significantly between odd and even dimension, fundamentally because in even dimension, reflection through the origin is orientation-preserving, while in odd dimension it is orientation-reversing (det(−I) = (−1)^{2k} = +1, but det(−I) = (−1)^{2k+1} = −1). This is seen in the fact that each odd-dimensional real projective space is orientable, while each even-dimensional real projective space of positive dimension is non-orientable. At a more abstract level, the Lie algebras of odd- and even-dimensional projective orthogonal groups form two different families:
Thus, O(2k+1) = SO(2k+1) × {±I},
while O(2k) ≠ SO(2k) × {±I} and is in
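The determinant computation behind this odd/even split can be checked numerically (a minimal sketch, assuming numpy):

```python
import numpy as np

# reflection through the origin is -I; its determinant is (-1)^n,
# so it preserves orientation exactly in even dimensions
for n in range(1, 7):
    d = np.linalg.det(-np.eye(n))
    assert round(d) == (-1) ** n
```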
https://en.wikipedia.org/wiki/Eigen
Eigen may refer to:
Eigen (C++ library), computer programming library for matrix and linear algebra operations
Eigen, Schwyz, settlement in the municipality of Alpthal in the canton of Schwyz, Switzerland
Eigen, Thurgau, locality in the municipality of Lengwil in the canton of Thurgau, Switzerland
Manfred Eigen (1927–2019), German biophysicist
Saint Eigen, female Christian saint
See also
Eigenvalue, eigenvector and eigenspace in mathematics and physics
Eigenclass, synonym to metaclass in the Ruby programming language
Eigenbehaviour, with its connection to eigenform and eigenvalue in cybernetics (relevant authors being Heinz von Foerster, Luis Rocha and Louis Kauffman)
https://en.wikipedia.org/wiki/Bernstein%20inequality
In mathematics, Bernstein inequality, named after Sergei Natanovich Bernstein, may refer to:
Bernstein's inequality (mathematical analysis)
Bernstein inequalities (probability theory)
https://en.wikipedia.org/wiki/Complex%20affine%20space
Affine geometry, broadly speaking, is the study of the geometrical properties of lines, planes, and their higher dimensional analogs, in which a notion of "parallel" is retained, but no metrical notions of distance or angle are. Affine spaces differ from linear spaces (that is, vector spaces) in that they do not have a distinguished choice of origin. So, in the words of Marcel Berger, "An affine space is nothing more than a vector space whose origin we try to forget about, by adding translations to the linear maps." Accordingly, a complex affine space, that is an affine space over the complex numbers, is like a complex vector space, but without a distinguished point to serve as the origin.
Affine geometry is one of the two main branches of classical algebraic geometry, the other being projective geometry. A complex affine space can be obtained from a complex projective space by fixing a hyperplane, which can be thought of as a hyperplane of ideal points "at infinity" of the affine space. To illustrate the difference (over the real numbers), a parabola in the affine plane intersects the line at infinity, whereas an ellipse does not. However, any two conic sections are projectively equivalent. So a parabola and ellipse are the same when thought of projectively, but different when regarded as affine objects. Somewhat less intuitively, over the complex numbers, an ellipse intersects the line at infinity in a pair of points while a parabola intersects the line at infinity in a single point. So, for a slightly different reason, an ellipse and parabola are inequivalent over the complex affine plane but remain equivalent over the (complex) projective plane.
Any complex vector space is an affine space: all one needs to do is forget the origin (and possibly any additional structure such as an inner product). For example, the complex n-space can be regarded as a complex affine space, when one is interested only in its affine properties (as opposed to its linear or metrical properties, for example). Since any two affine spaces of the same dimension are isomorphic, in some situations it is appropriate to identify them with C^n, with the understanding that only affinely-invariant notions are ultimately meaningful. This usage is very common in modern algebraic geometry.
Affine structure
There are several equivalent ways to specify the affine structure of an n-dimensional complex affine space A. The simplest involves an auxiliary space V, called the difference space, which is a vector space over the complex numbers. Then an affine space is a set A together with a simple and transitive action of V on A. (That is, A is a V-torsor.)
Another way is to define a notion of affine combination, satisfying certain axioms. An affine combination of points x₁, …, xₖ is expressed as a sum of the form
a₁x₁ + a₂x₂ + ⋯ + aₖxₖ,
where the scalars aᵢ are complex numbers that sum to unity.
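The origin-independence of affine combinations is easy to verify numerically. The following Python sketch (NumPy-based; the helper name `affine_combination` is illustrative, not from the source) checks that translating every point by the same vector translates the combination by that vector, so no choice of origin is needed:

```python
import numpy as np

def affine_combination(points, weights):
    """Affine combination sum_i a_i * x_i; the coefficients must sum to unity."""
    weights = np.asarray(weights, dtype=complex)
    assert np.isclose(weights.sum(), 1), "coefficients must sum to unity"
    return sum(w * np.asarray(p, dtype=complex) for w, p in zip(weights, points))

# Shifting every point by a fixed translation t shifts the combination by t,
# so the result does not depend on a choice of origin.
pts = [np.array([1 + 2j, 0j]), np.array([3j, 1 + 0j]), np.array([2 + 0j, 5j])]
w = [0.5, 0.25, 0.25]
t = np.array([7 - 1j, 2 + 2j])
lhs = affine_combination([p + t for p in pts], w)
rhs = affine_combination(pts, w) + t
```

When the coefficients do not sum to one, the same expression would depend on the origin, which is why the constraint appears in the axioms.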
The difference space can be identified with the set of "formal differences" , modulo the relation that formal
|
https://en.wikipedia.org/wiki/Pentagonal%20tiling
|
In geometry, a pentagonal tiling is a tiling of the plane where each individual piece is in the shape of a pentagon.
A regular pentagonal tiling on the Euclidean plane is impossible because the internal angle of a regular pentagon, 108°, is not a divisor of 360°, the angle measure of a whole turn. However, regular pentagons can tile the hyperbolic plane with four or more pentagons around each vertex, and the sphere with three pentagons around each vertex; the latter produces a tiling that is topologically equivalent to the dodecahedron.
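The divisor argument is easy to check directly. A small Python sketch (illustrative only) computes which regular n-gons have an interior angle dividing a whole turn:

```python
def interior_angle(n):
    # interior angle of a regular n-gon, in degrees
    return 180.0 * (n - 2) / n

# A regular n-gon admits an edge-to-edge tiling of the Euclidean plane
# exactly when its interior angle divides 360 degrees (a whole turn).
euclidean_tilers = [n for n in range(3, 13)
                    if (360.0 / interior_angle(n)).is_integer()]
```

Only the triangle, square and hexagon qualify; the pentagon's 108° gives 360/108 = 10/3 corners per vertex, which is not an integer.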
Monohedral convex pentagonal tilings
Fifteen types of convex pentagons are known to tile the plane monohedrally (i.e. with one type of tile). The most recent one was discovered in 2015. This list has been shown to be complete by Rao (2017), a result still subject to peer review. Only eight of the convex types tile edge-to-edge, a result that was also obtained independently.
Michaël Rao of the École normale supérieure de Lyon claimed in May 2017 to have found the proof that there are in fact no convex pentagons that tile beyond these 15 types. As of 11 July 2017, the first half of Rao's proof had been independently verified (computer code available) by Thomas Hales, a professor of mathematics at the University of Pittsburgh. As of December 2017, the proof was not yet fully peer-reviewed.
Each enumerated tiling family contains pentagons that belong to no other type; however, some individual pentagons may belong to multiple types. In addition, some of the pentagons in the known tiling types also permit alternative tiling patterns beyond the standard tiling exhibited by all members of its type.
The sides of length a, b, c, d, e are directly clockwise from the angles at vertices A, B, C, D, E respectively. (Thus,
A, B, C, D, E are opposite to d, e, a, b, c respectively.)
Many of these monohedral tile types have degrees of freedom. These freedoms include variations of internal angles and edge lengths. In the limit, edges may have lengths that approach zero or angles that approach 180°. Types 1, 2, 4, 5, 6, 7, 8, 9, and 13 allow parametric possibilities with nonconvex prototiles.
Periodic tilings are characterised by their wallpaper group symmetry, for example p2 (2222) is defined by four 2-fold gyration points. This nomenclature is used in the diagrams below, where the tiles are also colored by their k-isohedral positions within the symmetry.
A primitive unit is a section of the tiling that generates the whole tiling using only translations, and is as small as possible.
Reinhardt (1918)
found the first five types of pentagonal tile. All five can create isohedral tilings, meaning that the symmetries of the tiling can take any tile to any other tile (more formally, the automorphism group acts transitively on the tiles).
B. Grünbaum and G. C. Shephard have shown that there are exactly twenty-four distinct "types" of isohedral tilings of the plane by pentagons according to their classification scheme. All use Reinhardt's tiles, usually with ad
|
https://en.wikipedia.org/wiki/Poincar%C3%A9%20model
|
Poincaré model can refer to:
Poincaré disk model, a model of n-dimensional hyperbolic geometry
Poincaré half-plane model, a model of two-dimensional hyperbolic geometry
|
https://en.wikipedia.org/wiki/Pitchfork%20bifurcation
|
In bifurcation theory, a field within mathematics, a pitchfork bifurcation is a particular type of local bifurcation where the system transitions from one fixed point to three fixed points. Pitchfork bifurcations, like Hopf bifurcations, have two types – supercritical and subcritical.
In continuous dynamical systems described by ODEs—i.e. flows—pitchfork bifurcations occur generically in systems with symmetry.
Supercritical case
The normal form of the supercritical pitchfork bifurcation is
dx/dt = rx − x³.
For r < 0, there is one stable equilibrium at x = 0. For r > 0 there is an unstable equilibrium at x = 0, and two stable equilibria at x = ±√r.
Subcritical case
The normal form for the subcritical case is
dx/dt = rx + x³.
In this case, for r < 0 the equilibrium at x = 0 is stable, and there are two unstable equilibria at x = ±√(−r). For r > 0 the equilibrium at x = 0 is unstable.
Formal definition
An ODE
dx/dt = f(x, r),
described by a one-parameter function f(x, r) with r ∈ ℝ satisfying:
−f(x, r) = f(−x, r) (f is an odd function),
∂f/∂x(0, r₀) = 0, ∂²f/∂x²(0, r₀) = 0, ∂³f/∂x³(0, r₀) ≠ 0,
∂f/∂r(0, r₀) = 0, ∂²f/(∂r ∂x)(0, r₀) ≠ 0,
has a pitchfork bifurcation at (x, r) = (0, r₀). The form of the pitchfork is given
by the sign of the third derivative:
∂³f/∂x³(0, r₀) < 0: supercritical; ∂³f/∂x³(0, r₀) > 0: subcritical.
Note that subcritical and supercritical describe the stability of the outer lines of the pitchfork (dashed or solid, respectively) and are not dependent on which direction the pitchfork faces. For example, the negative of the first ODE above, dx/dt = x³ − rx, faces the same direction as the first picture but reverses the stability.
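The stability claims for the supercritical normal form dx/dt = rx − x³ can be checked numerically. A minimal Python sketch (the helper names are illustrative), which locates the equilibria and tests the sign of f′ at each:

```python
import numpy as np

def f(x, r):
    # supercritical pitchfork normal form: dx/dt = r*x - x**3
    return r * x - x**3

def equilibria(r):
    # fixed points solve r*x - x**3 = 0
    return [0.0] if r <= 0 else [0.0, np.sqrt(r), -np.sqrt(r)]

def is_stable(x, r, h=1e-6):
    # an equilibrium is stable when f'(x) < 0 (central-difference estimate)
    return (f(x + h, r) - f(x - h, r)) / (2 * h) < 0
```

For r = 4 this reproduces the picture above: x = 0 loses stability and the two outer branches x = ±2 are stable.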
See also
Bifurcation theory
Bifurcation diagram
References
Steven Strogatz, Non-linear Dynamics and Chaos: With applications to Physics, Biology, Chemistry and Engineering, Perseus Books, 2000.
S. Wiggins, Introduction to Applied Nonlinear Dynamical Systems and Chaos, Springer-Verlag, 1990.
Bifurcation theory
|
https://en.wikipedia.org/wiki/Centre%20%28geometry%29
|
In geometry, a centre (British English) or center (American English) of an object is a point in some sense in the middle of the object. According to the specific definition of centre taken into consideration, an object might have no centre. If geometry is regarded as the study of isometry groups, then a centre is a fixed point of all the isometries that move the object onto itself.
Circles, spheres, and segments
The centre of a circle is the point equidistant from the points on the edge. Similarly the centre of a sphere is the point equidistant from the points on the surface, and the centre of a line segment is the midpoint of the two ends.
Symmetric objects
For objects with several symmetries, the centre of symmetry is the point left unchanged by the symmetric actions. So the centre of a square, rectangle, rhombus or parallelogram is where the diagonals intersect; this is (among other properties) the fixed point of rotational symmetries. Similarly the centre of an ellipse or a hyperbola is where the axes intersect.
Triangles
Several special points of a triangle are often described as triangle centres:
the circumcentre, which is the centre of the circle that passes through all three vertices;
the centroid or centre of mass, the point on which the triangle would balance if it had uniform density;
the incentre, the centre of the circle that is internally tangent to all three sides of the triangle;
the orthocentre, the intersection of the triangle's three altitudes; and
the nine-point centre, the centre of the circle that passes through nine key points of the triangle.
For an equilateral triangle, these are the same point, which lies at the intersection of the three axes of symmetry of the triangle, one third of the distance from its base to its apex.
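The coincidence for an equilateral triangle can be confirmed with a short computation. A Python sketch (NumPy-based; the helper names are illustrative) for three of the centres:

```python
import numpy as np

def centroid(A, B, C):
    # centre of mass of the three vertices
    return (A + B + C) / 3

def circumcentre(A, B, C):
    # solve |P - A|^2 = |P - B|^2 = |P - C|^2, a 2x2 linear system
    M = 2 * np.array([B - A, C - A])
    rhs = np.array([B @ B - A @ A, C @ C - A @ A])
    return np.linalg.solve(M, rhs)

def incentre(A, B, C):
    # weighted average of the vertices by the opposite side lengths
    a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)
    return (a * A + b * B + c * C) / (a + b + c)

# For an equilateral triangle the centres coincide.
A, B, C = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
```

For a scalene triangle the three functions return three distinct points, which is why the strict definition below is needed to classify them uniformly.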
A strict definition of a triangle centre is a point whose trilinear coordinates are f(a,b,c) : f(b,c,a) : f(c,a,b) where f is a function of the lengths of the three sides of the triangle, a, b, c such that:
f is homogeneous in a, b, c; i.e., f(ta,tb,tc) = tʰf(a,b,c) for some real power h; thus the position of a centre is independent of scale.
f is symmetric in its last two arguments; i.e., f(a,b,c)= f(a,c,b); thus position of a centre in a mirror-image triangle is the mirror-image of its position in the original triangle.
This strict definition excludes pairs of bicentric points such as the Brocard points (which are interchanged by a mirror-image reflection). As of 2020, the Encyclopedia of Triangle Centers lists over 39,000 different triangle centres.
Tangential polygons and cyclic polygons
A tangential polygon has each of its sides tangent to a particular circle, called the incircle or inscribed circle. The centre of the incircle, called the incentre, can be considered a centre of the polygon.
A cyclic polygon has each of its vertices on a particular circle, called the circumcircle or circumscribed circle. The centre of the circumcircle, called the circumcentre, can be considered a cen
|
https://en.wikipedia.org/wiki/Tetrahedroid
|
In algebraic geometry, a tetrahedroid (or tétraédroïde) is a special kind of Kummer surface studied by , with the property that the intersections with the faces of a fixed tetrahedron are given by two conics intersecting in four nodes. Tetrahedroids generalize Fresnel's wave surface.
References
Algebraic surfaces
Complex surfaces
|
https://en.wikipedia.org/wiki/Tridyakis%20icosahedron
|
In geometry, the tridyakis icosahedron is the dual polyhedron of the nonconvex uniform polyhedron, icositruncated dodecadodecahedron. It has 44 vertices, 180 edges, and 120 scalene triangular faces.
Proportions
The triangles have one angle of , one of and one of . The dihedral angle equals . Part of each triangle lies within the solid, hence is invisible in solid models.
See also
Catalan solid Duals to convex uniform polyhedra
Uniform polyhedra
List of uniform polyhedra
References
Photo on page 96, Dorman Luke construction and stellation pattern on page 97.
Dual uniform polyhedra
|
https://en.wikipedia.org/wiki/Milnor%27s%20sphere
|
In mathematics, specifically differential and algebraic topology, during the mid-1950s John Milnor was trying to understand the structure of (n − 1)-connected manifolds of dimension 2n (since n-connected 2n-manifolds are homeomorphic to spheres, this is the first non-trivial case after that) and found an example of a space which is homotopy equivalent to a sphere but not obviously diffeomorphic to it. He did this by looking at real vector bundles over a sphere and studying the properties of the associated disk bundle. It turns out that the boundary of this bundle is homotopy equivalent to a sphere, but in certain cases it is not diffeomorphic to one. This lack of diffeomorphism comes from studying a hypothetical cobordism between this boundary and a sphere, and showing that this hypothetical cobordism would violate certain properties of the Hirzebruch signature theorem.
See also
Exotic sphere
Oriented cobordism
References
Differential topology
Algebraic topology
Topology
|
https://en.wikipedia.org/wiki/Professor%20of%20Mathematics%20%28Glasgow%29
|
The Chair of Mathematics in the University of Glasgow in Scotland was established in 1691. Previously, under James VI's Nova Erectio, the teaching of Mathematics had been the responsibility of the Regents.
List of Mathematics Professors
George Sinclair MA (1691-1696)
Robert Sinclair MA MD (1699)
Robert Simson MA MD (1711)
Rev Prof James Williamson FRSE MA DD (1761)
James Millar MA (1796)
James Thomson MA LLD (1832)
Hugh Blackburn MA (1849)
William Jack MA LLD (1879)
George Alexander Gibson MA LLD (1909)
Thomas Murray MacRobert MA DSc LLD (1927)
Robert Alexander Rankin MA PhD DSc FRSE (1954-1982)
Robert Winston Keith Odoni BSc PhD FRSE (1989-2001)
Peter Kropholler (2003-2013)
Michael Wemyss (2016-)
References
Who, What and Where: The History and Constitution of the University of Glasgow. Compiled by Michael Moss, Moira Rankin and Lesley Richmond.
https://www.universitystory.gla.ac.uk/biography/?id=WH1773&type=P
https://www.maths.gla.ac.uk/~mwemyss/
See also
List of Professorships at the University of Glasgow
Mathematics
Glasgow
1691 establishments in Scotland
Mathematics education in the United Kingdom
|
https://en.wikipedia.org/wiki/Spectral%20asymmetry
|
In mathematics and physics, the spectral asymmetry is the asymmetry in the distribution of the spectrum of eigenvalues of an operator. In mathematics, the spectral asymmetry arises in the study of elliptic operators on compact manifolds, and is given a deep meaning by the Atiyah-Singer index theorem. In physics, it has numerous applications, typically resulting in a fractional charge due to the asymmetry of the spectrum of a Dirac operator. For example, the vacuum expectation value of the baryon number is given by the spectral asymmetry of the Hamiltonian operator. The spectral asymmetry of the confined quark fields is an important property of the chiral bag model. For fermions, it is known as the Witten index, and can be understood as describing the Casimir effect for fermions.
Definition
Given an operator with eigenvalues ωₙ, an equal number of which are positive and negative, the spectral asymmetry may be defined as the suitably regulated sum
η = (1/2) ∑ₙ sign(ωₙ),
where sign is the sign function. Other regulators, such as the zeta function regulator, may be used.
The need for both a positive and negative spectrum in the definition is why the spectral asymmetry usually occurs in the study of Dirac operators.
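For a finite (truncated) spectrum the regulated sum can be evaluated directly. The Python sketch below is illustrative only: it uses an exponential damping factor as a stand-in regulator, with the limit s → 0 understood:

```python
import numpy as np

def spectral_asymmetry(eigenvalues, s=1e-3):
    # eta = (1/2) * sum_n sign(w_n) * exp(-s * |w_n|); the exponential
    # factor regulates the (formally divergent) sum, with s -> 0 at the end
    w = np.asarray(eigenvalues, dtype=float)
    return 0.5 * np.sum(np.sign(w) * np.exp(-s * np.abs(w)))

balanced = [-3.0, -2.0, -1.0, 1.0, 2.0, 3.0]   # symmetric spectrum: eta = 0
lopsided = [-2.0, -1.0, 1.0, 2.0, 3.0]         # one unpaired positive mode
```

A spectrum symmetric about zero contributes nothing; each unpaired mode shifts η by ±1/2, which is the origin of the fractional charges mentioned above.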
Example
As an example, consider an operator with a spectrum
where n is an integer, ranging over all positive and negative values. One may show in a straightforward manner that in this case obeys for any integer , and that for we have . The graph of is therefore a periodic sawtooth curve.
Discussion
Related to the spectral asymmetry is the vacuum expectation value of the energy associated with the operator, the Casimir energy, which is given by
E = (1/2) ∑ₙ |ωₙ|.
This sum is formally divergent, and the divergences must be accounted for and removed using standard regularization techniques.
References
MF Atiyah, VK Patodi and IM Singer, Spectral asymmetry and Riemannian geometry I, Proc. Camb. Phil. Soc., 77 (1975), 43-69.
Linas Vepstas, A.D. Jackson, A.S. Goldhaber, Two-phase models of baryons and the chiral Casimir effect, Physics Letters B140 (1984) p. 280-284.
Linas Vepstas, A.D. Jackson, Justifying the Chiral Bag, Physics Reports, 187 (1990) p. 109-143.
Spectral theory
Asymmetry
|
https://en.wikipedia.org/wiki/Binomial%20number
|
In mathematics, specifically in number theory, a binomial number is an integer which can be obtained by evaluating a homogeneous polynomial containing two terms. It is a generalization of a Cunningham number.
Definition
A binomial number is an integer obtained by evaluating a homogeneous polynomial containing two terms, also called a binomial. The form of this binomial is xⁿ ± yⁿ, with integers x > y ≥ 1 and n ≥ 2. However, since xⁿ − yⁿ is always divisible by x − y, when studying the numbers generated from the version with the negative sign, they are usually divided by x − y first. Binomial numbers formed this way form Lucas sequences. Specifically:
Uₙ(x, y) = (xⁿ − yⁿ)/(x − y) and Vₙ(x, y) = xⁿ + yⁿ.
Binomial numbers are a generalization of Cunningham numbers, and it will be seen that the Cunningham numbers are binomial numbers where y = 1. Other subsets of the binomial numbers are the Mersenne numbers and the repunits.
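A short Python sketch (function names are illustrative) showing the special cases named above, with the algebraic factor x − y divided out for the "minus" form:

```python
def binomial_number(x, y, n, sign=-1):
    # x**n - y**n (sign = -1) or x**n + y**n (sign = +1)
    return x**n + sign * y**n

def lucas_term(x, y, n):
    # the "minus" numbers with the algebraic factor x - y divided out:
    # (x**n - y**n) // (x - y), always an integer for x > y >= 1
    return (x**n - y**n) // (x - y)

mersenne = binomial_number(2, 1, 7)   # 2**7 - 1 = 127, a Mersenne number (y = 1)
repunit = lucas_term(10, 1, 5)        # (10**5 - 1) // 9 = 11111, a repunit
```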
Factorization
The main reason for studying these numbers is to obtain their factorizations. Aside from algebraic factors, which are obtained by factoring the underlying polynomial (binomial) that was used to define the number, such as difference of two squares and sum of two cubes, there are other prime factors (called primitive prime factors, because for a given they do not factorize with ) which occur seemingly at random, and it is these which the number theorist is looking for.
Some binomial numbers' underlying binomials have Aurifeuillian factorizations, which can assist in finding prime factors. Cyclotomic polynomials are also helpful in finding factorizations.
The amount of work required in searching for a factor is considerably reduced by applying Legendre's theorem. This theorem states that all factors of a binomial number are of the form if is even or if it is odd.
Observation
Some people write "binomial number" when they mean binomial coefficient, but this usage is not standard and is deprecated.
See also
Cunningham project
Notes
References
External links
Binomial Number at MathWorld
Number theory
|
https://en.wikipedia.org/wiki/Durbin%E2%80%93Watson%20statistic
|
In statistics, the Durbin–Watson statistic is a test statistic used to detect the presence of autocorrelation at lag 1 in the residuals (prediction errors) from a regression analysis. It is named after James Durbin and Geoffrey Watson. The small sample distribution of this ratio was derived by John von Neumann (von Neumann, 1941). Durbin and Watson (1950, 1951) applied this statistic to the residuals from least squares regressions, and developed bounds tests for the null hypothesis that the errors are serially uncorrelated against the alternative that they follow a first order autoregressive process. Note that the distribution of this test statistic does not depend on the estimated regression coefficients or on the variance of the errors.
A similar assessment can be also carried out with the Breusch–Godfrey test and the Ljung–Box test.
Computing and interpreting the Durbin–Watson statistic
If e_t is the residual associated with the observation at time t, the Durbin–Watson test statistic is
d = ∑_{t=2}^{T} (e_t − e_{t−1})² / ∑_{t=1}^{T} e_t²,
where T is the number of observations. For large T, d is approximately equal to 2(1 − r), where r is the sample autocorrelation of the residuals at lag 1. d = 2 therefore indicates no autocorrelation. The value of d always lies between 0 and 4. If the Durbin–Watson statistic is substantially less than 2, there is evidence of positive serial correlation. As a rough rule of thumb, if Durbin–Watson is less than 1.0, there may be cause for alarm. Small values of d indicate successive error terms are positively correlated. If d > 2, successive error terms are negatively correlated. In regressions, this can imply an underestimation of the level of statistical significance.
To test for positive autocorrelation at significance α, the test statistic d is compared to lower and upper critical values (d_{L,α} and d_{U,α}):
If d < d_{L,α}, there is statistical evidence that the error terms are positively autocorrelated.
If d > d_{U,α}, there is no statistical evidence that the error terms are positively autocorrelated.
If d_{L,α} < d < d_{U,α}, the test is inconclusive.
Positive serial correlation is serial correlation in which a positive error for one observation increases the chances of a positive error for another observation.
To test for negative autocorrelation at significance α, the test statistic 4 − d is compared to lower and upper critical values (d_{L,α} and d_{U,α}):
If (4 − d) < d_{L,α}, there is statistical evidence that the error terms are negatively autocorrelated.
If (4 − d) > d_{U,α}, there is no statistical evidence that the error terms are negatively autocorrelated.
If d_{L,α} < (4 − d) < d_{U,α}, the test is inconclusive.
Negative serial correlation implies that a positive error for one observation increases the chance of a negative error for another observation and a negative error for one observation increases the chances of a positive error for another.
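The statistic itself is a one-line computation. A NumPy sketch (illustrative; statsmodels ships a comparable `durbin_watson` helper) with two extreme residual patterns:

```python
import numpy as np

def durbin_watson(residuals):
    # d = sum_{t=2}^{T} (e_t - e_{t-1})**2 / sum_{t=1}^{T} e_t**2;
    # for large T, d is roughly 2 * (1 - r), with r the lag-1 autocorrelation
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

persistent = [1.0, 1.0, 1.0, 1.0, -1.0, -1.0, -1.0, -1.0]  # positively correlated
alternating = [1.0, -1.0] * 4                               # negatively correlated
```

Slowly varying residuals push d toward 0 and sign-alternating residuals push it toward 4, matching the interpretation above.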
The critical values, d_{L,α} and d_{U,α}, vary by level of significance α and the degrees of freedom in the regression equation. Their derivation is complex; statisticians typically obtain them from the appendices of statistical texts.
If the design matrix of the regression is known, exact critical values for
|
https://en.wikipedia.org/wiki/List%20of%20Sheffield%20Wednesday%20F.C.%20players
|
This is a list of footballers who have played for Sheffield Wednesday F.C. in
competitive fixtures. Appearance and goal statistics are for all competitions.
For current players see Current squad.
References
Sheffield Wednesday
Players
Association football player non-biographical articles
|
https://en.wikipedia.org/wiki/NSMB%20%28mathematics%29
|
NSMB is a computer system for solving Navier–Stokes equations using the finite volume method. It supports meshes built of several blocks (multi-blocks) and supports parallelisation. The name stands for "Navier–Stokes multi-block". It was developed by a consortium of European scientific institutions and companies, between 1992 and 2003.
References
Numerical software
|
https://en.wikipedia.org/wiki/Hermite%27s%20identity
|
In mathematics, Hermite's identity, named after Charles Hermite, gives the value of a summation involving the floor function. It states that for every real number x and for every positive integer n the following identity holds:
⌊nx⌋ = ⌊x⌋ + ⌊x + 1/n⌋ + ⌊x + 2/n⌋ + ⋯ + ⌊x + (n − 1)/n⌋.
Proofs
Proof by algebraic manipulation
Split x into its integer part and fractional part, x = ⌊x⌋ + {x}. There is exactly one k′ ∈ {1, …, n} with
⌊x⌋ = ⌊x + (k′ − 1)/n⌋ and ⌊x + k′/n⌋ = ⌊x⌋ + 1.
By subtracting the same integer ⌊x⌋ from inside the floor operations on the left and right sides, this pair of equalities may be rewritten as
0 = ⌊{x} + (k′ − 1)/n⌋ and ⌊{x} + k′/n⌋ = 1.
Therefore,
1 − k′/n ≤ {x} < 1 − (k′ − 1)/n,
and multiplying both sides by n gives
n − k′ ≤ n{x} < n − k′ + 1, so that ⌊n{x}⌋ = n − k′.
Now if the summation from Hermite's identity is split into two parts at index k′, it becomes
∑_{k=0}^{n−1} ⌊x + k/n⌋ = ∑_{k=0}^{k′−1} ⌊x⌋ + ∑_{k=k′}^{n−1} (⌊x⌋ + 1) = n⌊x⌋ + (n − k′) = n⌊x⌋ + ⌊n{x}⌋ = ⌊n⌊x⌋ + n{x}⌋ = ⌊nx⌋.
Proof using functions
Consider the function
f(x) = ⌊nx⌋ − (⌊x⌋ + ⌊x + 1/n⌋ + ⋯ + ⌊x + (n − 1)/n⌋).
Then the identity is clearly equivalent to the statement f(x) = 0 for all real x. But then we find,
f(x + 1/n) = ⌊nx + 1⌋ − (⌊x + 1/n⌋ + ⌊x + 2/n⌋ + ⋯ + ⌊x + 1⌋) = f(x),
where in the last equality we use the fact that ⌊x + p⌋ = ⌊x⌋ + p for all integers p. But then f has period 1/n. It then suffices to prove that f(x) = 0 for all 0 ≤ x < 1/n. But in this case, the integer part of each summand in f(x) is equal to 0. We deduce that the function f is indeed 0 for all real inputs x.
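Both proofs are easy to sanity-check numerically. A Python sketch (illustrative; exact rational arithmetic is used to avoid floating-point boundary issues at points where the floor jumps):

```python
from fractions import Fraction
import math

def hermite_lhs(x, n):
    # floor(n * x)
    return math.floor(n * x)

def hermite_rhs(x, n):
    # floor(x) + floor(x + 1/n) + ... + floor(x + (n-1)/n)
    return sum(math.floor(x + Fraction(k, n)) for k in range(n))
```

Passing `Fraction` values for x keeps every comparison exact, so the check is reliable even at rational points such as x = k/n.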
References
Mathematical identities
Articles containing proofs
|
https://en.wikipedia.org/wiki/Complex%20logarithm
|
In mathematics, a complex logarithm is a generalization of the natural logarithm to nonzero complex numbers. The term refers to one of the following, which are strongly related:
A complex logarithm of a nonzero complex number z, defined to be any complex number w for which e^w = z. Such a number w is denoted by log z. If z is given in polar form as z = re^{iθ}, where r and θ are real numbers with r > 0, then ln r + iθ is one logarithm of z, and all the complex logarithms of z are exactly the numbers of the form ln r + i(θ + 2πk) for integers k. These logarithms are equally spaced along a vertical line in the complex plane.
A complex-valued function log : U → ℂ, defined on some subset U of the set of nonzero complex numbers, satisfying e^{log z} = z for all z in U. Such complex logarithm functions are analogous to the real logarithm function ln : (0, ∞) → ℝ, which is the inverse of the real exponential function and hence satisfies e^{ln x} = x for all positive real numbers x. Complex logarithm functions can be constructed by explicit formulas involving real-valued functions, by integration of 1/z, or by the process of analytic continuation.
There is no continuous complex logarithm function defined on all of the nonzero complex numbers. Ways of dealing with this include branches, the associated Riemann surface, and partial inverses of the complex exponential function. The principal value defines a particular complex logarithm function that is continuous except along the negative real axis; on the complex plane with the negative real numbers and 0 removed, it is the analytic continuation of the (real) natural logarithm.
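Python's cmath module computes the principal value directly. The sketch below (illustrative) recovers ln r + iθ from a sample point and checks that every shift by 2πi is again a logarithm of the same number:

```python
import cmath
import math

z = -1 + 1j
w = cmath.log(z)      # principal value: ln|z| + i*Arg(z), with Arg(z) in (-pi, pi]

# the full set of logarithms of z is w + 2*pi*i*k for integers k,
# equally spaced along a vertical line in the complex plane
others = [w + 2j * math.pi * k for k in (-2, -1, 1, 2)]
```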
Problems with inverting the complex exponential function
For a function to have an inverse, it must map distinct values to distinct values; that is, it must be injective. But the complex exponential function is not injective, because e^{w + 2πik} = e^w for any complex number w and integer k, since adding iθ to w has the effect of rotating e^w counterclockwise by θ radians. So the points
…, w − 2πi, w, w + 2πi, w + 4πi, …,
equally spaced along a vertical line, are all mapped to the same number by the exponential function. This means that the exponential function does not have an inverse function in the standard sense. There are two solutions to this problem.
One is to restrict the domain of the exponential function to a region that does not contain any two numbers differing by an integer multiple of 2πi: this leads naturally to the definition of branches of log z, which are certain functions that single out one logarithm of each number in their domains. This is analogous to the definition of arcsin x on [−1, 1] as the inverse of the restriction of sin θ to the interval [−π/2, π/2]: there are infinitely many real numbers θ with sin θ = x, but one arbitrarily chooses the one in [−π/2, π/2].
Another way to resolve the indeterminacy is to view the logarithm as a function whose domain is not a region in the complex plane, but a Riemann surface that covers the punctured complex plane in an infinite-to-1 way.
Branches have the advantage that they can be evaluated at complex numbers. On the other hand, the function on the Riemann surface is elegant in that it packages together all branch
|
https://en.wikipedia.org/wiki/P2-irreducible%20manifold
|
{{DISPLAYTITLE:P2-irreducible manifold}}
In mathematics, a P2-irreducible manifold is a 3-manifold that is irreducible and contains no two-sided RP2 (real projective plane). An orientable manifold is P2-irreducible if and only if it is irreducible. Every non-orientable P2-irreducible manifold is a Haken manifold.
References
3-manifolds
|
https://en.wikipedia.org/wiki/Bob%20Vaughan
|
Robert Charles "Bob" Vaughan FRS (born 24 March 1945) is a British mathematician, working in the field of analytic number theory.
Life
Since 1999 he has been Professor at Pennsylvania State University, and since 1990 a Fellow of the Royal Society. He did his PhD at the University of London under the supervision of Theodor Estermann. He supervised Trevor Wooley's PhD.
Awards
In 2012, he became a fellow of the American Mathematical Society.
See also
Vaughan's identity
Writings
References
External links
Robert C. Vaughan's Home page (includes CV and list of publications)
1945 births
Living people
Fellows of the Royal Society
Alumni of the University of London
Fellows of the American Mathematical Society
|
https://en.wikipedia.org/wiki/Theodor%20Estermann
|
Theodor Estermann (5 February 1902 – 29 November 1991) was a German-born British mathematician, working in the field of analytic number theory. The Estermann measure, a measure of the central symmetry of a convex set in the Euclidean plane, is named after him.
He was born in Neubrandenburg, Germany, "to keen Zionists who named him in honour of Herzl." His doctorate, completed in 1925, was supervised by Hans Rademacher. He spent most of his career at University College London, eventually as a professor. Heini Halberstam, Klaus Roth and Robert Charles Vaughan were Ph.D. students of his.
Though Estermann left Germany in 1929, before the Nazis seized power in 1933, some historians count him among the early emigrants who fled Nazi Germany.
The physicist Immanuel Estermann was the brother of Theodor Estermann.
References
External links
LMS obituary
1902 births
1991 deaths
Academics of University College London
20th-century German mathematicians
People from Neubrandenburg
Jewish scientists
Jewish emigrants from Nazi Germany to the United Kingdom
|
https://en.wikipedia.org/wiki/Heini%20Halberstam
|
Heini Halberstam (11 September 1926 – 25 January 2014) was a Czech-born British mathematician, working in the field of analytic number theory. He is remembered in part for the Elliott–Halberstam conjecture from 1968.
Life and career
Halberstam was born in Most, Czechoslovakia and died in Champaign, Illinois, US. His father died when he was very young. After Adolf Hitler's annexation of the Sudetenland, he and his mother moved to Prague. At the age of twelve, as the Nazi occupation progressed, he was one of the 669 children saved by Sir Nicholas Winton, who organized the Kindertransport, a train that allowed those children to leave Nazi-occupied territory. He was sent to England, where he lived during World War II.
He obtained his PhD in 1952, from University College, London, under the supervision of Theodor Estermann. From 1962 until 1964, Halberstam was Erasmus Smith's Professor of Mathematics at Trinity College Dublin; from 1964 until 1980, he was a Professor of Mathematics at the University of Nottingham. In 1980, he took up a position at the University of Illinois Urbana-Champaign (UIUC) and became an Emeritus Professor at UIUC in 1996. In 2012, he became a fellow of the American Mathematical Society.
He is known also for books, Sequences with Klaus Roth on additive number theory, and with H. E. Richert on sieve theory.
References
1926 births
2014 deaths
20th-century English mathematicians
21st-century English mathematicians
Czech Jews
Number theorists
Academics of the University of Nottingham
Fellows of the American Mathematical Society
Kindertransport refugees
University of Illinois Urbana-Champaign faculty
Alumni of University College London
Donegall Lecturers of Mathematics at Trinity College Dublin
People from Most (city)
University of Illinois faculty
English emigrants to the United States
Czechoslovak emigrants to England
|
https://en.wikipedia.org/wiki/Guido%20Stampacchia
|
Guido Stampacchia (26 March 1922 – 27 April 1978) was an Italian mathematician, known for his work on the theory of variational inequalities, the calculus of variations and the theory of elliptic partial differential equations.
Life and academic career
Stampacchia was born in Naples, Italy, to Emanuele Stampacchia and Giulia Campagnano. He obtained his high school certification from the Liceo-Ginnasio Giambattista Vico in Naples in classical subjects, although he showed a stronger aptitude for mathematics and physics.
In 1940 he was admitted to the Scuola Normale Superiore di Pisa for undergraduate studies in pure mathematics. He was drafted in March 1943 but nevertheless managed to take examinations during the summer before joining the resistance movement against the Germans in the defence of Rome in September. He was discharged in June 1945.
In 1944 he won a scholarship to the University of Naples which allowed him to continue his studies. In the 1945–1946 academic year he declined a specialization at the Scuola Normale in the Faculty of Sciences in favour of an assistant position at the Istituto Universitario Navale. In 1949 he was appointed as assistant with tenure to the chair of mathematical analysis, and in 1951 he obtained his "Libera docenza".
In 1952 he won a national competition for the chair at the University of Palermo. He was nominated Professor on Probation at the University of Genoa later the same year and was promoted to full Professor in 1955.
He married fellow student Sara Naldini in October 1948. Children Mauro, Renata, Giulia, and Franca were born in 1949, 1951, 1955 and 1956 respectively.
Stampacchia was active in research and teaching throughout his career. He made key contributions to a number of fields, including the calculus of variations, variational inequalities and differential equations. In 1967 Stampacchia was elected President of the Unione Matematica Italiana. It was about this time that his research efforts shifted toward the emerging field of variational inequalities, which he modelled after boundary value problems for partial differential equations. He was also director of the Istituto per le Applicazioni del Calcolo of Consiglio Nazionale delle Ricerche from December 1968 to 1974.
Stampacchia accepted the position of Professor of Mathematical Analysis at the University of Rome in 1968, returned to the University of Pisa in 1970 and subsequently held the chair of Higher Analysis at the Scuola Normale Superiore. In early 1978 he suffered a serious heart attack and died of cardiac arrest on 27 April that year while he was in Paris as a Visiting Professor.
The Stampacchia Medal, an international prize awarded every three years for contributions to the Calculus of Variations, was established in 2003.
Selected works
.
with Sergio Campanato, Sulle maggiorazioni in Lp nella teoria delle equazioni ellittiche, Bollettino dell'Unione Matematica Italiana, Bologna, Zanichelli, 1965.
with Jaurès Cecconi, Lezioni di analisi
|
https://en.wikipedia.org/wiki/Category%20of%20elements
|
In category theory, a branch of mathematics, the category of elements of a presheaf is a category associated to that presheaf whose objects are the elements of sets in the presheaf.
The category of elements of a simplicial set is fundamental in simplicial homotopy theory, a branch of algebraic topology. More generally, the category of elements plays a key role in the proof that every weighted colimit can be expressed as an ordinary colimit, which is in turn necessary for the basic results in theory of pointwise left Kan extensions, and the characterization of the presheaf category as the free cocompletion of a category.
Definition
Let be a category and let be a set-valued functor. The category of elements of (also denoted ) is the category whose:
Objects are pairs where and .
Morphisms are arrows of such that .
An equivalent definition is that the category of elements of is the comma category , where is a singleton (a set with one element).
The category of elements of is naturally equipped with a projection functor that sends an object to , and an arrow to its underlying arrow in .
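As a concrete illustration, the definition can be modelled directly on a finite category. The Python sketch below uses a hypothetical toy category with two objects A, B and one non-identity arrow f : A → B, together with made-up functor data, and enumerates the objects and arrows of the category of elements:

```python
# A toy finite category and a covariant Set-valued functor F on it.
# All names and data here are hypothetical, chosen only for illustration.
objects = ["A", "B"]
arrows = {"id_A": ("A", "A"), "id_B": ("B", "B"), "f": ("A", "B")}
F_obj = {"A": {1, 2}, "B": {"x"}}                 # F on objects
F_arr = {"id_A": lambda a: a,                     # F on arrows
         "id_B": lambda b: b,
         "f": lambda a: "x"}

# Category of elements: objects are pairs (c, x) with x in F(c); an arrow
# lying over g : c -> d sends (c, x) to (d, F(g)(x)).
el_objects = [(c, x) for c in objects for x in sorted(F_obj[c], key=str)]
el_arrows = [(g, (c, x), (d, F_arr[g](x)))
             for g, (c, d) in arrows.items()
             for x in F_obj[c]]

def project(pair):
    """The projection functor: forget the element component."""
    return pair[0]
```

Here the category of elements has three objects, (A, 1), (A, 2) and (B, "x"), and five arrows (two identities over A, one over B, and two arrows over f).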
As a functor from presheaves to small categories
For small , this construction can be extended into a functor from to , the category of small categories. Using the Yoneda lemma one can show that , where is the Yoneda embedding. This isomorphism is natural in and thus the functor is naturally isomorphic to .
See also
Grothendieck construction
References
External links
Representable functors
|
https://en.wikipedia.org/wiki/Stepwise%20regression
|
In statistics, stepwise regression is a method of fitting regression models in which the choice of predictive variables is carried out by an automatic procedure. In each step, a variable is considered for addition to or subtraction from the set of explanatory variables based on some prespecified criterion. Usually, this takes the form of a forward, backward, or combined sequence of F-tests or t-tests.
The frequent practice of fitting the final selected model followed by reporting estimates and confidence intervals without adjusting them to take the model building process into account has led to calls to stop using stepwise model building altogether or to at least make sure model uncertainty is correctly reflected.
Alternatives include other model selection techniques, such as adjusted R2, Akaike information criterion, Bayesian information criterion, Mallows's Cp, PRESS, or false discovery rate.
Main approaches
The main approaches for stepwise regression are:
Forward selection, which involves starting with no variables in the model, testing the addition of each variable using a chosen model fit criterion, adding the variable (if any) whose inclusion gives the most statistically significant improvement of the fit, and repeating this process until none improves the model to a statistically significant extent.
Backward elimination, which involves starting with all candidate variables, testing the deletion of each variable using a chosen model fit criterion, deleting the variable (if any) whose loss gives the most statistically insignificant deterioration of the model fit, and repeating this process until no further variables can be deleted without a statistically significant loss of fit.
Bidirectional elimination, a combination of the above, testing at each step for variables to be included or excluded.
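The forward-selection loop described above can be sketched in a few lines. The following Python sketch uses a partial F statistic with a fixed entry threshold as the fit criterion; the threshold value, the synthetic data, and the use of plain least squares are illustrative choices, not the only possibility:

```python
import numpy as np

def rss(X, y, cols):
    """Residual sum of squares of an OLS fit on an intercept plus cols."""
    A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ beta
    return float(r @ r)

def forward_selection(X, y, f_to_enter=4.0):
    """Forward selection: at each step add the candidate with the largest
    partial F statistic; stop once no candidate exceeds the threshold."""
    n, k = X.shape
    selected, remaining = [], list(range(k))
    while remaining:
        base = rss(X, y, selected)
        scores = []
        for c in remaining:
            full = rss(X, y, selected + [c])
            df2 = n - len(selected) - 2        # residual df of larger model
            scores.append(((base - full) / (full / df2), c))
        best_f, best_c = max(scores)
        if best_f < f_to_enter:
            break
        selected.append(best_c)
        remaining.remove(best_c)
    return selected

# Usage sketch on synthetic data: y depends on columns 2 and 4 only.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 2] - 2.0 * X[:, 4] + rng.normal(scale=0.5, size=200)
selected = forward_selection(X, y)      # columns 2 and 4 should enter
```

Backward elimination is the mirror image: start from all columns and repeatedly drop the variable whose removal raises the RSS least, while the corresponding F statistic stays below a deletion threshold.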
Alternatives
A widely used algorithm was first proposed by Efroymson (1960). This is an automatic procedure for statistical model selection in cases where there is a large number of potential explanatory variables, and no underlying theory on which to base the model selection. The procedure is used primarily in regression analysis, though the basic approach is applicable in many forms of model selection. This is a variation on forward selection. At each stage in the process, after a new variable is added, a test is made to check if some variables can be deleted without appreciably increasing the residual sum of squares (RSS). The procedure terminates when the measure is (locally) maximized, or when the available improvement falls below some critical value.
One of the main issues with stepwise regression is that it searches a large space of possible models. Hence it is prone to overfitting the data. In other words, stepwise regression will often fit much better in sample than it does on new out-of-sample data. Extreme cases have been noted where models have achieved statistical significance working on random numbers. This problem can be mi
|
https://en.wikipedia.org/wiki/Dana%20Randall
|
Dana Randall is an American computer scientist. She works as the ADVANCE Professor of Computing and adjunct professor of mathematics at the Georgia Institute of Technology. She is also an External Professor of the Santa Fe Institute. Previously she was executive director of the Georgia Tech Institute of Data Engineering and Science (IDEaS), which she co-founded, and director of the Algorithms and Randomness Center. Her research interests include combinatorics, computational aspects of statistical mechanics, Monte Carlo simulation of Markov chains, and randomized algorithms.
Education
Randall was born in Queens, New York. She graduated from New York City's Stuyvesant High School in 1984. She received her A.B. in Mathematics from Harvard University in 1988 and her Ph.D. in computer science from the University of California, Berkeley in 1994 under the supervision of Alistair Sinclair.
Her sister is theoretical physicist Lisa Randall.
Research
Her primary research interest is analyzing algorithms for counting problems (e.g. counting matchings in a graph) using Markov chains. One of her important contributions to this area is a decomposition theorem for analyzing Markov chains.
Accolades
In 2012 she became a fellow of the American Mathematical Society.
She delivered her Arnold Ross Lecture on October 29, 2009, an honor previously conferred on Barry Mazur, Elwyn Berlekamp, Ken Ribet, Manjul Bhargava, David Kelly and Paul Sally.
Publications
Clustering in interfering models of binary mixtures
References
External links
Dana Randall's website
Arnold Ross Lectures details at the AMS
Year of birth missing (living people)
Living people
Georgia Tech faculty
Stuyvesant High School alumni
Harvard University alumni
University of California, Berkeley alumni
Theoretical computer scientists
American women computer scientists
American computer scientists
20th-century American scientists
20th-century American women scientists
21st-century American scientists
21st-century American women scientists
American women academics
|
https://en.wikipedia.org/wiki/H%C3%B6lder%27s%20theorem
|
In mathematics, Hölder's theorem states that the gamma function does not satisfy any algebraic differential equation whose coefficients are rational functions. This result was first proved by Otto Hölder in 1887; several alternative proofs have subsequently been found.
The theorem also generalizes to the -gamma function.
Statement of the theorem
For every there is no non-zero polynomial such that
where is the gamma function.
For example, define by
Then the equation
is called an algebraic differential equation, which, in this case, has the solutions and — the Bessel functions of the first and second kind respectively. Hence, we say that and are differentially algebraic (also algebraically transcendental). Most of the familiar special functions of mathematical physics are differentially algebraic. All algebraic combinations of differentially algebraic functions are differentially algebraic. Furthermore, all compositions of differentially algebraic functions are differentially algebraic. Hölder’s Theorem simply states that the gamma function, , is not differentially algebraic and is therefore transcendentally transcendental.
Proof
Let and assume that a non-zero polynomial exists such that
As a non-zero polynomial in can never give rise to the zero function on any non-empty open domain of (by the fundamental theorem of algebra), we may suppose, without loss of generality, that contains a monomial term having a non-zero power of one of the indeterminates .
Assume also that has the lowest possible overall degree with respect to the lexicographic ordering . For example,
because the highest power of in any monomial term of the first polynomial is smaller than that of the second polynomial.
Next, observe that for all we have:
If we define a second polynomial by the transformation
then we obtain the following algebraic differential equation for :
Furthermore, if is the highest-degree monomial term in , then the highest-degree monomial term in is
Consequently, the polynomial
has a smaller overall degree than , and as it clearly gives rise to an algebraic differential equation for , it must be the zero polynomial by the minimality assumption on . Hence, defining by
we get
Now, let in to obtain
A change of variables then yields
and an application of mathematical induction (along with a change of variables at each induction step) to the earlier expression
reveals that
This is possible only if is divisible by , which contradicts the minimality assumption on . Therefore, no such exists, and so is not differentially algebraic. Q.E.D.
References
Gamma and related functions
Theorems in analysis
|
https://en.wikipedia.org/wiki/Pr%C3%BCfer%20group
|
In mathematics, specifically in group theory, the Prüfer p-group or the p-quasicyclic group or p∞-group, Z(p∞), for a prime number p is the unique p-group in which every element has p different p-th roots.
The Prüfer p-groups are countable abelian groups that are important in the classification of infinite abelian groups: they (along with the group of rational numbers) form the smallest building blocks of all divisible groups.
The groups are named after Heinz Prüfer, a German mathematician of the early 20th century.
Constructions of Z(p∞)
The Prüfer p-group may be identified with the subgroup of the circle group, U(1), consisting of all p^n-th roots of unity as n ranges over all non-negative integers:
The group operation here is the multiplication of complex numbers.
There is a presentation
Here, the group operation in Z(p∞) is written as multiplication.
Alternatively and equivalently, the Prüfer p-group may be defined as the Sylow p-subgroup of the quotient group Q/Z, consisting of those elements whose order is a power of p:
(where Z[1/p] denotes the group of all rational numbers whose denominator is a power of p, using addition of rational numbers as group operation).
For each natural number n, consider the quotient group Z/p^nZ and the embedding Z/p^nZ → Z/p^(n+1)Z induced by multiplication by p. The direct limit of this system is Z(p∞):
If we perform the direct limit in the category of topological groups, then we need to impose a topology on each of the , and take the final topology on . If we wish for to be Hausdorff, we must impose the discrete topology on each of the , so that also carries the discrete topology.
We can also write
where Qp denotes the additive group of p-adic numbers and Zp is the subgroup of p-adic integers.
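The defining property that every element has exactly p distinct p-th roots is easy to verify numerically in the additive model Z[1/p]/Z, where a "p-th root" of x is a preimage of x under multiplication by p. A small illustrative Python check, here with p = 3 and one sample element:

```python
from fractions import Fraction

# Z(p^infty) realized additively as Z[1/p]/Z: elements are fractions
# m/p^n taken modulo 1. Illustrative values: p = 3, x = 2/9.
p = 3
x = Fraction(2, 9)

# The p preimages of x under multiplication by p, reduced modulo 1.
roots = [((x + k) / p) % 1 for k in range(p)]

assert all((p * r) % 1 == x for r in roots)   # each is a p-th root of x
assert len(set(roots)) == p                   # and they are all distinct
```

For x = 2/9 the three roots are 2/27, 11/27 and 20/27, matching the picture of Z(p∞) as a union of ever-finer cyclic groups.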
Properties
The complete list of subgroups of the Prüfer p-group Z(p∞) = Z[1/p]/Z is:
(Here is a cyclic subgroup of Z(p∞) with p^n elements; it contains precisely those elements of Z(p∞) whose order divides p^n and corresponds to the set of p^n-th roots of unity.) The Prüfer p-groups are the only infinite groups whose subgroups are totally ordered by inclusion. This sequence of inclusions expresses the Prüfer p-group as the direct limit of its finite subgroups. As there is no maximal subgroup of a Prüfer p-group, it is its own Frattini subgroup.
Given this list of subgroups, it is clear that the Prüfer p-groups are indecomposable (cannot be written as a direct sum of proper subgroups). More is true: the Prüfer p-groups are subdirectly irreducible. An abelian group is subdirectly irreducible if and only if it is isomorphic to a finite cyclic p-group or to a Prüfer group.
The Prüfer p-group is the unique infinite p-group that is locally cyclic (every finite set of elements generates a cyclic group). As seen above, all proper subgroups of Z(p∞) are finite. The Prüfer p-groups are the only infinite abelian groups with this property.
The Prüfer p-groups are divisible. They play an important role in the classi
|
https://en.wikipedia.org/wiki/Ruth%E2%80%93Aaron%20pair
|
In mathematics, a Ruth–Aaron pair consists of two consecutive integers (e.g., 714 and 715) for which the sums of the prime factors of each integer are equal:
714 = 2 × 3 × 7 × 17,
715 = 5 × 11 × 13,
and
2 + 3 + 7 + 17 = 5 + 11 + 13 = 29.
There are different variations in the definition, depending on how many times to count primes that appear multiple times in a factorization.
The name was given by Carl Pomerance for Babe Ruth and Hank Aaron, as Ruth's career regular-season home run total was 714, a record which Aaron eclipsed on April 8, 1974, when he hit his 715th career home run. Pomerance was a mathematician at the University of Georgia at the time Aaron (a member of the nearby Atlanta Braves) broke Ruth's record, and the student of one of Pomerance's colleagues noticed that the sums of the prime factors of 714 and 715 were equal.
Examples
If only distinct prime factors are counted, the first few Ruth–Aaron pairs are:
(5, 6), (24, 25), (49, 50), (77, 78), (104, 105), (153, 154), (369, 370), (492, 493), (714, 715), (1682, 1683), (2107, 2108)
(The lesser of each pair is listed in ).
Counting repeated prime factors (e.g., 8 = 2×2×2 and 9 = 3×3 with 2+2+2 = 3+3), the first few Ruth–Aaron pairs are:
(5, 6), (8, 9), (15, 16), (77, 78), (125, 126), (714, 715), (948, 949), (1330, 1331)
(The lesser of each pair is listed in ).
The intersection of the two lists begins:
(5, 6), (77, 78), (714, 715), (5405, 5406).
(The lesser of each pair is listed in ).
Any Ruth–Aaron pair of square-free integers belongs to both lists with the same sum of prime factors. The intersection also contains pairs that are not square-free, for example (7129199, 7129200) = (7 × 11^2 × 19 × 443, 2^4 × 3 × 5^2 × 13 × 457). Here 7 + 11 + 19 + 443 = 2 + 3 + 5 + 13 + 457 = 480, and also 7 + 11 + 11 + 19 + 443 = 2 + 2 + 2 + 2 + 3 + 5 + 5 + 13 + 457 = 491.
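Both conventions are straightforward to check by machine. The following Python sketch (trial division is adequate at this scale) reproduces the two lists above:

```python
def prime_factors(n):
    """Prime factorization of n by trial division, as (prime, exponent)
    pairs; fine for the small numbers considered here."""
    factors, d = [], 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        if e:
            factors.append((d, e))
        d += 1
    if n > 1:
        factors.append((n, 1))
    return factors

def spf_distinct(n):
    """Sum of the distinct prime factors of n."""
    return sum(p for p, _ in prime_factors(n))

def spf_repeated(n):
    """Sum of the prime factors of n counted with multiplicity."""
    return sum(p * e for p, e in prime_factors(n))

def ruth_aaron(limit, spf):
    """Smaller members of Ruth–Aaron pairs below limit, for the chosen
    sum-of-prime-factors convention."""
    return [n for n in range(2, limit) if spf(n) == spf(n + 1)]

pairs_distinct = ruth_aaron(800, spf_distinct)  # 5, 24, 49, ..., 714
pairs_repeated = ruth_aaron(800, spf_repeated)  # 5, 8, 15, ..., 714
```

Swapping in `spf_distinct` or `spf_repeated` is all that separates the two variants of the definition.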
Density
Ruth–Aaron pairs are sparse (that is, they have density 0). This was conjectured by Nelson et al. in 1974 and proven in 1978 by Paul Erdős and Pomerance.
Ruth–Aaron triplets
Ruth–Aaron triplets (overlapping Ruth–Aaron pairs) also exist. The first and possibly the second when counting distinct prime factors:
89460294 = 2 × 3 × 7 × 11 × 23 × 8419,
89460295 = 5 × 4201 × 4259,
89460296 = 2 × 2 × 2 × 31 × 43 × 8389,
and 2 + 3 + 7 + 11 + 23 + 8419 = 5 + 4201 + 4259 = 2 + 31 + 43 + 8389 = 8465.
151165960539 = 3 × 11 × 11 × 83 × 2081 × 2411,
151165960540 = 2 × 2 × 5 × 7 × 293 × 1193 × 3089,
151165960541 = 23 × 29 × 157 × 359 × 4021,
and 3 + 11 + 83 + 2081 + 2411 = 2 + 5 + 7 + 293 + 1193 + 3089 = 23 + 29 + 157 + 359 + 4021 = 4589.
The first two Ruth–Aaron triplets when counting repeated prime factors:
417162 = 2 × 3 × 251 × 277,
417163 = 17 × 53 × 463,
417164 = 2 × 2 × 11 × 19 × 499,
and 2 + 3 + 251 + 277 = 17 + 53 + 463 = 2 + 2 + 11 + 19 + 499 = 533.
6913943284 = 2 × 2 × 37 × 89 × 101 × 5197,
6913943285 = 5 × 283 × 1259 × 3881,
6913943286 = 2 × 3 × 167 × 2549 × 2707,
and 2 + 2 + 37 + 89 + 101 + 5197 = 5 + 283 + 1259 + 3881 = 2 + 3 + 167 + 2549 + 2707 = 5428.
onl
|
https://en.wikipedia.org/wiki/Hunt%20process
|
In probability theory, a Hunt process is a strong Markov process which is quasi-left continuous with respect to the minimum completed admissible filtration .
It is named after Gilbert Hunt.
See also
Markov process
Markov chain
Shift of finite type
References
Markov processes
|
https://en.wikipedia.org/wiki/Critical%20point%20%28set%20theory%29
|
In set theory, the critical point of an elementary embedding of a transitive class into another transitive class is the smallest ordinal which is not mapped to itself.
Suppose that is an elementary embedding where and are transitive classes and is definable in by a formula of set theory with parameters from . Then must take ordinals to ordinals and must be strictly increasing. Also . If for all and , then is said to be the critical point of .
If is V, then (the critical point of ) is always a measurable cardinal, i.e. an uncountable cardinal number κ such that there exists a -complete, non-principal ultrafilter over . Specifically, one may take the filter to be . Generally, there will be many other <κ-complete, non-principal ultrafilters over . However, might be different from the ultrapower(s) arising from such filter(s).
If and are the same and is the identity function on , then is called "trivial". If the transitive class is an inner model of ZFC and has no critical point, i.e. every ordinal maps to itself, then is trivial.
References
Large cardinals
|
https://en.wikipedia.org/wiki/Curl
|
Curl or CURL may refer to:
Science and technology
Curl (mathematics), a vector operator that shows a vector field's rate of rotation
Curl (programming language), an object-oriented programming language designed for interactive Web content
cURL, a program and application library for transferring data with URLs
Antonov An-26, an aircraft, NATO reporting name CURL
Sports and weight training
Curl (association football), spin on the ball that makes it swerve when kicked
Curl, in the sport of curling, the curved path a stone makes on the ice or the act of playing; see Glossary of curling
Biceps curl, a weight training exercise
Leg curl, a weight training exercise
Wrist curl, a weight training exercise
Other uses
Curl (Japanese snack), a brand of corn puffs
Curl or ringlet, a lock of hair that grows in a curved, rather than straight, direction
Consortium of University Research Libraries, an association of UK academic and research libraries
Executive curl, the ring above a naval officer's gold lace or braid rank insignia
People with the surname
Kamren Curl (born 1999), American football player
Martina Gangle Curl (1906–1994), American artist and activist
Robert Curl (1933–2022), Nobel Laureate and emeritus professor of chemistry at Rice University
Rod Curl (born 1943), American professional golfer
Phil Curls (1942–2007), American politician
See also
Curling (disambiguation)
Overlap (disambiguation)
Spiral
|
https://en.wikipedia.org/wiki/Jacobi%20eigenvalue%20algorithm
|
In numerical linear algebra, the Jacobi eigenvalue algorithm is an iterative method for the calculation of the eigenvalues and eigenvectors of a real symmetric matrix (a process known as diagonalization). It is named after Carl Gustav Jacob Jacobi, who first proposed the method in 1846; it only became widely used in the 1950s with the advent of computers.
Description
Let be a symmetric matrix, and be a Givens rotation matrix. Then:
is symmetric and similar to .
Furthermore, has entries:
where and .
Since is orthogonal, and have the same Frobenius norm (the square root of the sum of squares of all components); however, we can choose such that , in which case has a larger sum of squares on the diagonal:
Set this equal to 0, and rearrange:
if
In order to optimize this effect, Sij should be the off-diagonal element with the largest absolute value, called the pivot.
The Jacobi eigenvalue method repeatedly performs rotations until the matrix becomes almost diagonal. Then the elements in the diagonal are approximations of the (real) eigenvalues of S.
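A minimal, unoptimized sketch of the method in Python/NumPy. The pivot search here is the naive scan over all off-diagonal entries, and each rotation is applied as a full matrix product; a production implementation would use the cheaper bookkeeping discussed under Cost below:

```python
import numpy as np

def jacobi_eigen(S, tol=1e-10, max_rot=10_000):
    """Classical Jacobi method: repeatedly apply the Givens rotation that
    zeroes the off-diagonal pivot of largest absolute value."""
    A = np.array(S, dtype=float)
    n = A.shape[0]
    V = np.eye(n)                       # accumulates the eigenvectors
    for _ in range(max_rot):
        off = np.abs(A - np.diag(np.diag(A)))
        k, l = np.unravel_index(np.argmax(off), off.shape)
        if off[k, l] < tol:             # matrix is (almost) diagonal
            break
        # Choose the angle so that the rotated matrix has entry (k, l) = 0:
        # tan(2*theta) = 2*A[k, l] / (A[l, l] - A[k, k]).
        theta = 0.5 * np.arctan2(2 * A[k, l], A[l, l] - A[k, k])
        c, s = np.cos(theta), np.sin(theta)
        G = np.eye(n)
        G[k, k] = G[l, l] = c
        G[k, l], G[l, k] = s, -s
        A = G.T @ A @ G                 # similar to A, more nearly diagonal
        V = V @ G
    return np.diag(A), V                # approximate eigenvalues, vectors

# Demo on a random symmetric matrix.
rng = np.random.default_rng(1)
M = rng.normal(size=(5, 5))
S = (M + M.T) / 2
w, V = jacobi_eigen(S)
```

The returned diagonal agrees with `np.linalg.eigvalsh` up to ordering, and the accumulated product of rotations V is orthogonal with S V ≈ V diag(w).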
Convergence
If is a pivot element, then by definition for . Let denote the sum of squares of all off-diagonal entries of . Since has exactly off-diagonal elements, we have or . Now . This implies
or ;
that is, the sequence of Jacobi rotations converges at least linearly by a factor to a diagonal matrix.
A number of Jacobi rotations is called a sweep; let denote the result. The previous estimate yields
;
that is, the sequence of sweeps converges at least linearly with a factor ≈ .
However the following result of Schönhage yields locally quadratic convergence. To this end let S have m distinct eigenvalues with multiplicities and let d > 0 be the smallest distance of two different eigenvalues. Let us call a number of
Jacobi rotations a Schönhage-sweep. If denotes the result then
.
Thus convergence becomes quadratic as soon as
Cost
Each Jacobi rotation can be done in O(n) steps when the pivot element p is known. However the search for p requires inspection of all N ≈ n2 off-diagonal elements. We can reduce this to O(n) complexity too if we introduce an additional index array with the property that is the index of the largest element in row i, (i = 1, ..., n − 1) of the current S. Then the indices of the pivot (k, l) must be one of the pairs . Also the updating of the index array can be done in O(n) average-case complexity: First, the maximum entry in the updated rows k and l can be found in O(n) steps. In the other rows i, only the entries in columns k and l change. Looping over these rows, if is neither k nor l, it suffices to compare the old maximum at to the new entries and update if necessary. If should be equal to k or l and the corresponding entry decreased during the update, the maximum over row i has to be found from scratch in O(n) complexity. However, this will happen on average only once per rotation. Thus, each rotation
|
https://en.wikipedia.org/wiki/Quasi-bialgebra
|
In mathematics, quasi-bialgebras are a generalization of bialgebras: they were first defined by the Ukrainian mathematician Vladimir Drinfeld in 1990. A quasi-bialgebra differs from a bialgebra by having coassociativity replaced by an invertible element which controls the non-coassociativity. One of their key properties is that the corresponding category of modules forms a tensor category.
Definition
A quasi-bialgebra is an algebra over a field equipped with morphisms of algebras
along with invertible elements , and such that the following identities hold:
Where and are called the comultiplication and counit, and are called the right and left unit constraints (resp.), and is sometimes called the Drinfeld associator. This definition is constructed so that the category is a tensor category under the usual vector space tensor product, and in fact this can be taken as the definition instead of the list of identities above. Since many of the quasi-bialgebras that appear "in nature" have trivial unit constraints, the definition is sometimes given with this assumed. Note that a bialgebra is just a quasi-bialgebra with trivial unit and associativity constraints: and .
Braided quasi-bialgebras
A braided quasi-bialgebra (also called a quasi-triangular quasi-bialgebra) is a quasi-bialgebra whose corresponding tensor category is braided. Equivalently, by analogy with braided bialgebras, we can construct a notion of a universal R-matrix which controls the non-cocommutativity of a quasi-bialgebra. The definition is the same as in the braided bialgebra case except for additional complications in the formulas caused by adding in the associator.
Proposition: A quasi-bialgebra is braided if it has a universal R-matrix, i.e. an invertible element such that the following 3 identities hold:
Where, for every , is the monomial with in the th spot, where any omitted numbers correspond to the identity in that spot. Finally we extend this by linearity to all of .
Again, similar to the braided bialgebra case, this universal R-matrix satisfies (a non-associative version of) the Yang–Baxter equation:
Twisting
Given a quasi-bialgebra, further quasi-bialgebras can be generated by twisting (from now on we will assume ) .
If is a quasi-bialgebra and is an invertible element such that , set
Then, the set is also a quasi-bialgebra obtained by twisting by F, which is called a twist or gauge transformation. If was a braided quasi-bialgebra with universal R-matrix , then so is with universal R-matrix (using the notation from the above section). However, the twist of a bialgebra is in general only a quasi-bialgebra. Twistings fulfill many expected properties. For example, twisting by and then is equivalent to twisting by , and twisting by then recovers the original quasi-bialgebra.
Twistings have the important property that they induce categorical equivalences on the tensor category of modules:
Theorem: Let , be quasi-bialgebras, let
|
https://en.wikipedia.org/wiki/Quasi-Hopf%20algebra
|
A quasi-Hopf algebra is a generalization of a Hopf algebra, which was defined by the Russian mathematician Vladimir Drinfeld in 1989.
A quasi-Hopf algebra is a quasi-bialgebra for which there exist and a bijective antihomomorphism S (antipode) of such that
for all and where
and
where the expansions for the quantities and are given by
and
As for a quasi-bialgebra, the property of being quasi-Hopf is preserved under twisting.
Usage
Quasi-Hopf algebras form the basis of the study of Drinfeld twists and the representations in terms of F-matrices associated with finite-dimensional irreducible representations of quantum affine algebras. F-matrices can be used to factorize the corresponding R-matrix. This leads to applications in statistical mechanics, as quantum affine algebras and their representations give rise to solutions of the Yang–Baxter equation, a solvability condition for various statistical models, allowing characteristics of the model to be deduced from its corresponding quantum affine algebra. The study of F-matrices has been applied to models such as the Heisenberg XXZ model in the framework of the algebraic Bethe ansatz. It provides a framework for solving two-dimensional integrable models by using the quantum inverse scattering method.
See also
Quasitriangular Hopf algebra
Quasi-triangular quasi-Hopf algebra
Ribbon Hopf algebra
References
Vladimir Drinfeld, "Quasi-Hopf algebras", Leningrad Math J. 1 (1989), 1419-1457
J. M. Maillet and J. Sanchez de Santos, Drinfeld Twists and Algebraic Bethe Ansatz, Amer. Math. Soc. Transl. (2) Vol. 201, 2000
Coalgebras
|
https://en.wikipedia.org/wiki/Proxy%20%28statistics%29
|
In statistics, a proxy or proxy variable is a variable that is not in itself directly relevant, but that serves in place of an unobservable or immeasurable variable. In order for a variable to be a good proxy, it must have a close correlation, not necessarily linear, with the variable of interest. This correlation might be either positive or negative.
A proxy variable must relate to the unobserved variable, must correlate with the disturbance, and must not correlate with the regressors once the disturbance is controlled for.
Examples
In social sciences, proxy measurements are often required to stand in for variables that cannot be directly measured. This process of standing in is also known as operationalization. Per-capita gross domestic product (GDP) is often used as a proxy for measures of standard of living or quality of life. Montgomery et al. examine several proxies used, and point out limitations with each, stating "In poor countries, no single empirical measure can be expected to display all of the facets of the concept of income. Our judgment is that consumption per adult is the best measure among those collected in cross-sectional surveys."
Likewise, country of origin or birthplace might be used as a proxy for race, or vice versa.
Frost lists several examples of proxy variables: widths of tree rings, a proxy for historical environmental conditions; per-capita GDP, a proxy for quality of life; body mass index (BMI), a proxy for true body fat percentage; years of education and/or GPA, a proxy for cognitive ability; satellite images of ocean surface color, a proxy for the depth to which light penetrates the ocean over large areas; and changes in height over a fixed time, a proxy for hormone levels in blood.
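The requirement of close correlation is easy to illustrate by simulation. In the Python sketch below, the "unobservable" quantity, the proxy relationship, and every coefficient are made up purely for the demonstration:

```python
import numpy as np

# Hypothetical setup: an unobservable quantity (say, true body fat
# percentage, standardized) and an imperfect but closely related
# measurable stand-in (say, BMI).
rng = np.random.default_rng(42)
n = 10_000

truth = rng.normal(size=n)                           # unobservable variable
proxy = 0.9 * truth + rng.normal(scale=0.3, size=n)  # measurable proxy

# A good proxy correlates strongly with the variable of interest
# (population correlation here is 0.9 / sqrt(0.9) ~ 0.95).
r = np.corrcoef(truth, proxy)[0, 1]
```

Lowering the signal coefficient or raising the noise scale weakens the correlation and, with it, the proxy's usefulness.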
See also
Instrumental variable
Latent variable
Operationalization
Proxy (climate)
References
Statistical analysis
|
https://en.wikipedia.org/wiki/Segal%27s%20conjecture
|
Segal's Burnside ring conjecture, or, more briefly, the Segal conjecture, is a theorem in homotopy theory, a branch of mathematics. The theorem relates the Burnside ring of a finite group G to the stable cohomotopy of the classifying space BG. The conjecture was made in the mid-1970s by Graeme Segal and proved in 1984 by Gunnar Carlsson. The statement is still commonly referred to as the Segal conjecture, even though it now has the status of a theorem.
Statement of the theorem
The Segal conjecture has several different formulations, not all of which are equivalent. Here is a weak form: there exists, for every finite group G, an isomorphism
Here, lim denotes the inverse limit, S* denotes the stable cohomotopy ring, B denotes the classifying space, the superscript k denotes the k-skeleton, and the subscript + denotes the addition of a disjoint basepoint. On the right-hand side, the hat denotes the completion of the Burnside ring with respect to its augmentation ideal.
The Burnside ring
The Burnside ring of a finite group G is constructed from the category of finite G-sets as a Grothendieck group. More precisely, let M(G) be the commutative monoid of isomorphism classes of finite G-sets, with addition the disjoint union of G-sets and identity element the empty set (which is a G-set in a unique way). Then A(G), the Grothendieck group of M(G), is an abelian group. It is in fact a free abelian group with basis elements represented by the G-sets G/H, where H varies over the subgroups of G. (Note that H is not assumed here to be a normal subgroup of G: although G/H is then not a group in general, it is still a G-set.) The ring structure on A(G) is induced by the direct product of G-sets; the multiplicative identity is the (isomorphism class of any) one-point set, which becomes a G-set in a unique way.
The Burnside ring is the analogue of the representation ring in the category of finite sets, as opposed to the category of finite-dimensional vector spaces over a field (see motivation below). It has proven to be an important tool in the representation theory of finite groups.
The classifying space
For any topological group G admitting the structure of a CW-complex, one may consider the category of principal G-bundles. One can define a functor from the category of CW-complexes to the category of sets by assigning to each CW-complex X the set of principal G-bundles on X. This functor descends to a functor on the homotopy category of CW-complexes, and it is natural to ask whether the functor so obtained is representable. The answer is affirmative, and the representing object is called the classifying space of the group G and typically denoted BG. If we restrict our attention to the homotopy category of CW-complexes, then BG is unique. Any CW-complex that is homotopy equivalent to BG is called a model for BG.
For example, if G is the group of order 2, then a model for BG is infinite-dimensional real projective space. It can be shown that if G is finit
|
https://en.wikipedia.org/wiki/Planisphaerium
|
The Planisphaerium is a work by Ptolemy. The title can be translated as "celestial plane" or "star chart". In this work Ptolemy explored the mathematics of mapping figures inscribed in the celestial sphere onto a plane by what is now known as stereographic projection. This method of projection preserves the properties of circles.
Publication
Originally written in Ancient Greek, the Planisphaerium was one of many scientific works which survived from antiquity in Arabic translation; the oldest known translation, into Arabic, was made by an unknown scholar as part of the Translation Movement in Baghdad. One reason why the Planisphaerium attracted interest was that stereographic projection was the mathematical basis of the plane astrolabe, an instrument which was widely used in the medieval Islamic world. In the 12th century the work was translated from Arabic into Latin by Herman of Carinthia, who also translated commentaries by Maslamah Ibn Ahmad al-Majriti.
Planisphere
The word planisphere (Latin planisphaerium) was originally used in the second century by Ptolemy to describe the representation of a spherical Earth by a map drawn in the plane.
Editions and translations
References
External links
"Ptolemy on Astrolabes"
Ancient Greek mathematical works
Astronomy books
Works by Ptolemy
|
https://en.wikipedia.org/wiki/Castle%20Death
|
Castle Death is the seventh book in the Lone Wolf book series created by Joe Dever.
Gameplay
Lone Wolf books rely on a combination of thought and luck. Certain statistics such as combat skill and endurance attributes are determined randomly before play (reading). The player is then allowed to choose which Magnakai disciplines or skills he or she possesses. This number depends directly on how many books in the series have been completed ("Magnakai rank"). With each additional book completed, the player chooses one additional Magnakai discipline.
Plot
In his quest to attain Kai Grand Master status, Lone Wolf must seek out and find 7 Lorestones. After obtaining the Lorestone of Varetta in the previous book and absorbing its wisdom and power, the location of the next Lorestone is revealed as the remote township of Herdos. Here, Lone Wolf is directed by friendly Elder Magi to search within the accursed fortress of Kazan-Oud, otherwise known as "Castle Death".
External links
Gamebooks - Lone Wolf
Gamebooks - Castle Death
Project Aon - Castle Death
Lone Wolf (gamebooks)
1986 fiction books
Berkley Books books
|
https://en.wikipedia.org/wiki/The%20Jungle%20of%20Horrors
|
The Jungle of Horrors is the eighth book in the award-winning Lone Wolf book series created by Joe Dever.
Gameplay
Lone Wolf books rely on a combination of thought and luck. Certain statistics such as combat skill and endurance attributes are determined randomly before play (reading). The player is then allowed to choose which Magnakai disciplines or skills he or she possesses. This number depends directly on how many books in the series have been completed ("Magnakai rank"). With each additional book completed, the player chooses one additional Magnakai discipline.
This book provides the player (reader) with a companion/guide named Paido, who accompanies the player through much of the book, in contrast to most of the other books which feature only solo adventures.
Plot
After surviving the perils of Castle Death and being tutored by the Elder Magi, Lone Wolf and the reader now seek out the third Lorestone. The location of this Lorestone is thought to be hidden in a temple deep within a jungle-swamp known as the Danarg. Over the years, this fetid swamp has become the home for any number of evil creatures who seek to protect the jungle and its treasures. To make matters worse, news is delivered that the Darklords have united behind a new leader, and may soon again bring war to Magnamund, increasing Lone Wolf's sense of urgency.
External links
Gamebooks - Lone Wolf
Gamebooks - The Jungle of Horrors
Project Aon - The Jungle of Horrors
1987 fiction books
Lone Wolf (gamebooks)
Berkley Books books
|
https://en.wikipedia.org/wiki/List%20of%20triangle%20inequalities
|
In geometry, triangle inequalities are inequalities involving the parameters of triangles, that hold for every triangle, or for every triangle meeting certain conditions. The inequalities give an ordering of two different values: they are of the form "less than", "less than or equal to", "greater than", or "greater than or equal to". The parameters in a triangle inequality can be the side lengths, the semiperimeter, the angle measures, the values of trigonometric functions of those angles, the area of the triangle, the medians of the sides, the altitudes, the lengths of the internal angle bisectors from each angle to the opposite side, the perpendicular bisectors of the sides, the distance from an arbitrary point to another point, the inradius, the exradii, the circumradius, and/or other quantities.
Unless otherwise specified, this article deals with triangles in the Euclidean plane.
Main parameters and notation
The parameters most commonly appearing in triangle inequalities are:
the side lengths a, b, and c;
the semiperimeter s = (a + b + c) / 2 (half the perimeter p);
the angle measures A, B, and C of the angles of the vertices opposite the respective sides a, b, and c (with the vertices denoted with the same symbols as their angle measures);
the values of trigonometric functions of the angles;
the area T of the triangle;
the medians ma, mb, and mc of the sides (each being the length of the line segment from the midpoint of the side to the opposite vertex);
the altitudes ha, hb, and hc (each being the length of a segment perpendicular to one side and reaching from that side (or possibly the extension of that side) to the opposite vertex);
the lengths of the internal angle bisectors ta, tb, and tc (each being a segment from a vertex to the opposite side and bisecting the vertex's angle);
the perpendicular bisectors pa, pb, and pc of the sides (each being the length of a segment perpendicular to one side at its midpoint and reaching to one of the other sides);
the lengths of line segments with an endpoint at an arbitrary point P in the plane (for example, the length of the segment from P to vertex A is denoted PA or AP);
the inradius r (radius of the circle inscribed in the triangle, tangent to all three sides), the exradii ra, rb, and rc (each being the radius of an excircle tangent to side a, b, or c respectively and tangent to the extensions of the other two sides), and the circumradius R (radius of the circle circumscribed around the triangle and passing through all three vertices).
Side lengths
The basic triangle inequality is

a < b + c, b < c + a, c < a + b,

or equivalently

max(a, b, c) < s.

In addition,

3/2 ≤ a/(b + c) + b/(c + a) + c/(a + b) < 2,

where the value of the right side is the lowest possible bound, approached asymptotically as certain classes of triangles approach the degenerate case of zero area. The left inequality, which holds for all positive a, b, c, is Nesbitt's inequality.
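These bounds are easy to sanity-check numerically. The sketch below (the function name nesbitt is ad hoc) verifies the standard form of Nesbitt's inequality, 3/2 ≤ a/(b + c) + b/(c + a) + c/(a + b), for arbitrary positive values, and the upper bound 2 when a, b, c also satisfy the triangle inequality:

```python
import random

def nesbitt(a, b, c):
    return a / (b + c) + b / (c + a) + c / (a + b)

random.seed(1)
for _ in range(1000):
    a, b, c = (random.uniform(0.01, 10.0) for _ in range(3))
    s = nesbitt(a, b, c)
    assert s >= 1.5 - 1e-12                     # Nesbitt: holds for all positive a, b, c
    if a + b > c and b + c > a and c + a > b:   # a genuine triangle
        assert s < 2                            # the triangle-only upper bound

# the bound 2 is approached by "needle" triangles, e.g. a = b = 1, c -> 0
val = nesbitt(1.0, 1.0, 1e-3)
assert 1.99 < val < 2
```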
We have
If angle C is obtuse (greater than 90°) then

a^2 + b^2 < c^2;

if C is acute (less than 90°) then

a^2 + b^2 > c^2.

The in-between case of equality when C is a right angle is the Pythagorean theorem.
|
https://en.wikipedia.org/wiki/The%20Prisoners%20of%20Time
|
The Prisoners of Time is the eleventh book in the Lone Wolf book series created by Joe Dever.
Gameplay
Lone Wolf books rely on a combination of thought and luck. Certain statistics such as combat skill and endurance attributes are determined randomly before play (reading). The player is then allowed to choose which Magnakai disciplines or skills he or she possesses. This number depends directly on how many books in the series have been completed ("Magnakai rank"). With each additional book completed, the player chooses one additional Magnakai discipline.
Like several of the previous books in the series, this book again suffers from relative linearity. Furthermore, there is a set of difficult battles near the end of the book which can make completing the book frustrating, particularly for those who are not carrying characters over from previous books (and thus do not have the advantage of additional Magnakai disciplines or ranks).
Plot
Although Lone Wolf is successful in rescuing one of the captive Lorestones from Torgar, both he and the remaining two Lorestones are blasted through a dimensional portal (Shadow Gate) by Darklord Gnaag. After plummeting through the Shadow Gate, Lone Wolf finds himself trapped on the Daziarn Plane and must join strange allies and face old enemies if he hopes to make his way back from the Daziarn in time to save his homeland from destruction at the hands of the Darklords and their powerful new armies.
External links
Gamebooks - Lone Wolf
Gamebooks - The Prisoners of Time
Project Aon - The Prisoners of Time
1987 fiction books
Lone Wolf (gamebooks)
Berkley Books books
|
https://en.wikipedia.org/wiki/Weyl%27s%20inequality
|
In linear algebra, Weyl's inequality is a theorem about the changes to eigenvalues of an Hermitian matrix that is perturbed. It can be used to estimate the eigenvalues of a perturbed Hermitian matrix.
Weyl's inequality about perturbation
Let M = N + R, where M, N, and R are n×n Hermitian matrices, with their respective eigenvalues ordered as follows:

M: μ1 ≥ μ2 ≥ ... ≥ μn,
N: ν1 ≥ ν2 ≥ ... ≥ νn,
R: ρ1 ≥ ρ2 ≥ ... ≥ ρn.

Then the following inequalities hold:

νi + ρn ≤ μi ≤ νi + ρ1, for i = 1, ..., n,

and, more generally,

μ(i+j−1) ≤ νi + ρj whenever i + j − 1 ≤ n, and μ(i+j−n) ≥ νi + ρj whenever i + j − n ≥ 1.

In particular, if R is positive definite then plugging ρn > 0 into the above inequalities leads to

μi > νi for every i.
Note that these eigenvalues can be ordered, because they are real (as eigenvalues of Hermitian matrices).
Weyl's inequality between eigenvalues and singular values
Let A have singular values σ1 ≥ σ2 ≥ ... ≥ σn and eigenvalues ordered so that |λ1| ≥ |λ2| ≥ ... ≥ |λn|. Then

|λ1 λ2 ⋯ λk| ≤ σ1 σ2 ⋯ σk

for k = 1, ..., n, with equality for k = n.
Applications
Estimating perturbations of the spectrum
Assume that R = M − N is small in the sense that its spectral norm satisfies ||R|| ≤ ε for some small ε > 0. Then it follows that all the eigenvalues of R are bounded in absolute value by ε. Applying Weyl's inequality, it follows that the spectra of the Hermitian matrices M and N are close in the sense that

|μi − νi| ≤ ε for all i = 1, ..., n.
Note, however, that this eigenvalue perturbation bound is generally false for non-Hermitian matrices (or more accurately, for non-normal matrices). For a counterexample, let ε > 0 be arbitrarily small, and consider the matrices

M = [[0, 1/ε], [0, 0]], N = [[0, 1/ε], [ε, 0]],

whose eigenvalues 0, 0 and 1, −1, respectively, do not satisfy |μi − νi| ≤ ||M − N|| = ε.
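Both behaviours — the Hermitian perturbation bound and its failure for non-normal matrices — can be checked numerically. A sketch using NumPy (the matrix size, seed, and tolerances are arbitrary choices of this illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Random Hermitian N and a Hermitian perturbation R scaled so ||R|| = eps.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
N = (A + A.conj().T) / 2
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
R = (B + B.conj().T) / 2
eps = 1e-3
R *= eps / np.linalg.norm(R, 2)     # spectral norm of R is now exactly eps
M = N + R

mu = np.sort(np.linalg.eigvalsh(M))[::-1]
nu = np.sort(np.linalg.eigvalsh(N))[::-1]
assert np.all(np.abs(mu - nu) <= eps + 1e-12)   # |mu_i - nu_i| <= ||R||

# The same bound fails badly for non-normal matrices:
e = 1e-6
M2 = np.array([[0, 1 / e], [0, 0]])
N2 = np.array([[0, 1 / e], [e, 0]])   # ||M2 - N2|| = e, but eigenvalues are ±1
gap = np.max(np.abs(np.sort(np.linalg.eigvals(M2)) - np.sort(np.linalg.eigvals(N2))))
assert gap > 0.5                      # eigenvalues moved by about 1, not by e
```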
Weyl's inequality for singular values
Let M be an m × n matrix with m ≥ n. Its singular values are the positive eigenvalues of the (m + n) × (m + n) Hermitian augmented matrix

[[0, M], [M*, 0]].
Therefore, Weyl's eigenvalue perturbation inequality for Hermitian matrices extends naturally to perturbation of singular values. This result gives the bound for the perturbation in the singular values of a matrix M due to an additive perturbation Δ:

|σk(M + Δ) − σk(M)| ≤ σ1(Δ),

where we note that the largest singular value σ1(Δ) coincides with the spectral norm ||Δ||.
Notes
References
Matrix Theory, Joel N. Franklin, (Dover Publications, 1993)
"Das asymptotische Verteilungsgesetz der Eigenwerte linearer partieller Differentialgleichungen", H. Weyl, Math. Ann., 71 (1912), 441–479
Diophantine approximation
Inequalities
Linear algebra
|
https://en.wikipedia.org/wiki/Monge%E2%80%93Amp%C3%A8re%20equation
|
In mathematics, a (real) Monge–Ampère equation is a nonlinear second-order partial differential equation of a special kind. A second-order equation for the unknown function u of two variables x, y is of Monge–Ampère type if it is linear in the determinant of the Hessian matrix of u and in the second-order partial derivatives of u. The independent variables (x, y) vary over a given domain D of R2. The term also applies to analogous equations with n independent variables. The most complete results so far have been obtained when the equation is elliptic.
Monge–Ampère equations frequently arise in differential geometry, for example, in the Weyl and Minkowski problems in differential geometry of surfaces. They were first studied by Gaspard Monge in 1784 and later by André-Marie Ampère in 1820. Important results in the theory of Monge–Ampère equations have been obtained by Sergei Bernstein, Aleksei Pogorelov, Charles Fefferman, and Louis Nirenberg. More recently, Alessio Figalli and Luis Caffarelli were recognized for their work on the regularity of the Monge–Ampère equation, with the former winning the Fields Medal in 2018 and the latter the Abel Prize in 2023.
Description
Given two independent variables x and y, and one dependent variable u, the general Monge–Ampère equation is of the form

L[u] = A(uxx uyy − uxy^2) + B uxx + 2C uxy + D uyy + E = 0,

where A, B, C, D, and E are functions depending on the first-order variables x, y, u, ux, and uy only.
Rellich's theorem
Let Ω be a bounded domain in R2, and suppose that, on Ω, A, B, C, D, and E are continuous functions of x and y only. Consider the Dirichlet problem to find u so that
If
then the Dirichlet problem has at most two solutions.
Ellipticity results
Suppose now that x is a variable with values in a domain in Rn, and that f(x, u, Du) is a positive function. Then the Monge–Ampère equation

det D^2u = f(x, u, Du)
is a nonlinear elliptic partial differential equation (in the sense that its linearization is elliptic), provided one confines attention to convex solutions.
Accordingly, the operator L satisfies versions of the maximum principle, and in particular solutions to the Dirichlet problem are unique, provided they exist.
Applications
Monge–Ampère equations arise naturally in several problems in Riemannian geometry, conformal geometry, and CR geometry. One of the simplest of these applications is to the problem of prescribed Gauss curvature. Suppose that a real-valued function K is specified on a domain Ω in Rn, the problem of prescribed Gauss curvature seeks to identify a hypersurface of Rn+1 as a graph z = u(x) over x ∈ Ω so that at each point of the surface the Gauss curvature is given by K(x). The resulting partial differential equation is

det D^2u = K(x)(1 + |Du|^2)^((n + 2)/2).
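For a graph z = u(x, y) in R3 (so n = 2), the prescribed-curvature equation amounts to K = det D^2u / (1 + |Du|^2)^2, which can be cross-checked against the classical Gauss curvature K = (LN − M^2)/(EG − F^2) computed from the first and second fundamental forms. A pure-Python sketch for a test surface with exact derivatives (all names ad hoc):

```python
import math

# Test surface: u(x, y) = (x**2 + 2*y**2) / 2, with exact derivatives.
def du(x, y):   return (x, 2 * y)                  # gradient
def d2u(x, y):  return ((1.0, 0.0), (0.0, 2.0))    # Hessian (constant here)

def gauss_curvature_fundamental_forms(x, y):
    # K = (LN - M^2)/(EG - F^2) for the graph parametrisation (x, y, u(x, y)).
    ux, uy = du(x, y)
    (uxx, uxy), (_, uyy) = d2u(x, y)
    E, F, G = 1 + ux * ux, ux * uy, 1 + uy * uy
    W = math.sqrt(1 + ux * ux + uy * uy)
    L, Mm, Nn = uxx / W, uxy / W, uyy / W          # second fundamental form
    return (L * Nn - Mm * Mm) / (E * G - F * F)

def monge_ampere_rhs(x, y):
    # Solving det D^2 u = K (1 + |Du|^2)^((n+2)/2) for K, with n = 2.
    ux, uy = du(x, y)
    (uxx, uxy), (_, uyy) = d2u(x, y)
    det = uxx * uyy - uxy * uxy
    return det / (1 + ux * ux + uy * uy) ** 2

for (x, y) in [(0.0, 0.0), (0.3, -0.1), (1.2, 0.7)]:
    assert abs(gauss_curvature_fundamental_forms(x, y) - monge_ampere_rhs(x, y)) < 1e-12
```

The agreement is exact because EG − F^2 = 1 + |Du|^2 for a graph, so (LN − M^2)/(EG − F^2) collapses to det D^2u/(1 + |Du|^2)^2.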
The Monge–Ampère equations are related to the Monge–Kantorovich optimal mass transportation problem, when the "cost functional" therein is given by the Euclidean distance.
See also
List of nonlinear partial differential equations
Complex Monge–Ampère equation
References
Additional references
External links
Partial diff
|
https://en.wikipedia.org/wiki/Random%20permutation%20statistics
|
The statistics of random permutations, such as the cycle structure of a random permutation are of fundamental importance in the analysis of algorithms, especially of sorting algorithms, which operate on random permutations. Suppose, for example, that we are using quickselect (a cousin of quicksort) to select a random element of a random permutation. Quickselect will perform a partial sort on the array, as it partitions the array according to the pivot. Hence a permutation will be less disordered after quickselect has been performed. The amount of disorder that remains may be analysed with generating functions. These generating functions depend in a fundamental way on the generating functions of random permutation statistics. Hence it is of vital importance to compute these generating functions.
The article on random permutations contains an introduction to random permutations.
The fundamental relation
Permutations are sets of labelled cycles. Using the labelled case of the Flajolet–Sedgewick fundamental theorem and writing P for the class of permutations and Z for the singleton class, we have

SET(CYC(Z)) = P.

Translating into exponential generating functions (EGFs), we have

exp(log(1/(1 − z))) = 1/(1 − z),

where we have used the fact that the EGF of the combinatorial species of permutations (there are n! permutations of n elements) is

Σ(n ≥ 0) (n!/n!) z^n = 1/(1 − z).

This one equation allows one to derive a large number of permutation statistics. Firstly, by dropping terms from SET, i.e. exp, we may constrain the number of cycles that a permutation contains, e.g. by restricting the EGF to (1/2)(log(1/(1 − z)))^2 we obtain permutations containing exactly two cycles. Secondly, note that the EGF of labelled cycles, i.e. of CYC(Z), is

log(1/(1 − z)) = Σ(k ≥ 1) z^k/k,

because there are k!/k labelled cycles on k elements. This means that by dropping terms from this generating function, we may constrain the size of the cycles that occur in a permutation and obtain an EGF of the permutations containing only cycles of a given size.
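For instance, restricting cycle sizes to {1, 2} gives the EGF exp(z + z^2/2) of the involutions. The sketch below (pure Python; all names ad hoc) expands that EGF with exact rational arithmetic and compares the resulting counts with brute-force enumeration:

```python
import itertools
from fractions import Fraction
from math import factorial

N = 8  # truncation order of the power series

def series_mul(f, g):
    h = [Fraction(0)] * N
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            if i + j < N:
                h[i + j] += fi * gj
    return h

def series_exp(f):
    # exp(f) = sum over k of f^k / k!, assuming f has zero constant term
    assert f[0] == 0
    result = [Fraction(0)] * N
    result[0] = Fraction(1)
    power = result[:]               # f^0 = 1
    for k in range(1, N):
        power = series_mul(power, f)
        result = [r + p / factorial(k) for r, p in zip(result, power)]
    return result

# EGF with cycle sizes restricted to {1, 2}: exp(z + z^2/2)
f = [Fraction(0)] * N
f[1] = Fraction(1)
f[2] = Fraction(1, 2)
egf = series_exp(f)
counts = [int(egf[n] * factorial(n)) for n in range(N)]   # n! [z^n] exp(z + z^2/2)

# Brute force: permutations all of whose cycles have length <= 2, i.e. involutions
def brute(n):
    return sum(1 for p in itertools.permutations(range(n))
               if all(p[p[i]] == i for i in range(n)))

assert counts == [brute(n) for n in range(N)]
```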
Instead of removing and selecting cycles, one can also put different weights on different size cycles. If b(k) is a weight function that depends only on the size k of the cycle and for brevity we write

b(σ) = Σ b(|c|), the sum over the cycles c of σ,

defining the value of b for a permutation σ to be the sum of its values on the cycles, then we may mark cycles of length k with u^b(k) and obtain a two-variable generating function

g(z, u) = Σ(n) (z^n/n!) Σ(σ ∈ Sn) u^b(σ) = exp( Σ(k ≥ 1) u^b(k) z^k/k ).

This is a "mixed" generating function: it is an exponential generating function in z and an ordinary generating function in the secondary parameter u. Differentiating and evaluating at u = 1, we have

(∂/∂u) g(z, u) at u = 1: (1/(1 − z)) Σ(k ≥ 1) b(k) z^k/k.

This is the generating function of the expectation of b. In other words, the coefficient of z^n in this power series is the expected value of b on permutations in Sn, given that each permutation is chosen with the same probability 1/n!.
This article uses the coefficient extraction operator [zn], documented on the page for formal power series.
Number of permutations that are involutions
An involution is a permutation σ so that σ^2 = 1 under permutation composition. It follows that σ may only contain cycles of length one or two.
|
https://en.wikipedia.org/wiki/Midy%27s%20theorem
|
In mathematics, Midy's theorem, named after French mathematician E. Midy, is a statement about the decimal expansion of fractions a/p where p is a prime and a/p has a repeating decimal expansion with an even period. If the period of the decimal representation of a/p is 2n, so that

a/p = 0.a1a2...an a(n+1)...a(2n) (repeating),

then the digits in the second half of the repeating decimal period are the 9s complement of the corresponding digits in its first half. In other words,

ai + a(i+n) = 9, for i = 1, ..., n.

For example,

1/7 = 0.142857142857... and 142 + 857 = 999.
Extended Midy's theorem
Let h be the number of digits of the period of the decimal expansion of a/p (where p is again a prime). If k is any divisor of h, then Midy's theorem can be generalised as follows. The extended Midy's theorem states that if the repeating portion of the decimal expansion of a/p is divided into k-digit numbers, then their sum is a multiple of 10^k − 1.
For example,

1/19 = 0.052631578947368421052631578947368421...

has a period of 18. Dividing the repeating portion into 6-digit numbers and summing them gives

052631 + 578947 + 368421 = 999999.

Similarly, dividing the repeating portion into 3-digit numbers and summing them gives

052 + 631 + 578 + 947 + 368 + 421 = 2997 = 3 × 999.
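Both the original and the extended statement are easy to verify computationally. A minimal sketch (the helper repeating_digits is ad hoc) that generates the period digits by long division:

```python
def repeating_digits(a, p, base=10):
    """Digits of the repeating period of a/p (p prime, not dividing the base)."""
    digits, r = [], a % p
    while True:
        r *= base
        digits.append(r // p)
        r %= p
        if r == a % p:
            return digits

# Midy's theorem for 1/7: the two halves of the period are 9s complements
d = repeating_digits(1, 7)
assert d == [1, 4, 2, 8, 5, 7]
n = len(d) // 2
assert all(d[i] + d[i + n] == 9 for i in range(n))

# Extended Midy for 1/19 (period 18): k-digit blocks sum to a multiple of 10^k - 1
d = repeating_digits(1, 19)
assert len(d) == 18
for k in (3, 6, 9):
    blocks = [int("".join(map(str, d[i:i + k]))) for i in range(0, 18, k)]
    assert sum(blocks) % (10 ** k - 1) == 0
```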
Midy's theorem in other bases
Midy's theorem and its extension do not depend on special properties of the decimal expansion, but work equally well in any base b, provided we replace 10^k − 1 with b^k − 1 and carry out addition in base b.
For example, in octal
In duodecimal (using inverted two and three for ten and eleven, respectively)
Proof of Midy's theorem
Short proofs of Midy's theorem can be given using results from group theory. However, it is also possible to prove Midy's theorem using elementary algebra and modular arithmetic:
Let p be a prime and a/p be a fraction between 0 and 1. Suppose the expansion of a/p in base b has a period of ℓ, so

a/p = [0.a1a2...aℓ a1a2...aℓ ...] in base b, that is, a/p = N/(b^ℓ − 1),

where N is the integer whose expansion in base b is the string a1a2...aℓ.
Note that b^ℓ − 1 is a multiple of p because (b^ℓ − 1)a/p is an integer. Also b^n − 1 is not a multiple of p for any value of n less than ℓ, because otherwise the repeating period of a/p in base b would be less than ℓ.
Now suppose that ℓ = hk. Then b^ℓ − 1 is a multiple of b^k − 1. (To see this, substitute x for b^k; then b^ℓ = x^h and x − 1 is a factor of x^h − 1.) Say b^ℓ − 1 = m(b^k − 1), so

a/p = N/(m(b^k − 1)).

But b^ℓ − 1 is a multiple of p; b^k − 1 is not a multiple of p (because k is less than ℓ); and p is a prime; so m must be a multiple of p and

N/(b^k − 1) = a(m/p)

is an integer. In other words,

N ≡ 0 (mod b^k − 1).
Now split the string a1a2...aℓ into h equal parts of length k, and let these represent the integers N0 ... N(h−1) in base b, so that

N = N0 b^((h−1)k) + N1 b^((h−2)k) + ... + N(h−1).
To prove Midy's extended theorem in base b we must show that the sum of the h integers Ni is a multiple of b^k − 1.
Since b^k is congruent to 1 modulo b^k − 1, any power of b^k will also be congruent to 1 modulo b^k − 1. So

N ≡ N0 + N1 + ... + N(h−1) (mod b^k − 1),

and since N ≡ 0 (mod b^k − 1), the sum N0 + N1 + ... + N(h−1) is a multiple of b^k − 1, which proves Midy's extended theorem in base b.
To prove the original Midy's theorem, take the special case where h = 2. Note that N0 and N1 are both represented by strings of k digits in base b so both satisfy

0 ≤ Ni ≤ b^k − 1.
N0 and N1 cannot both equal 0 (otherwise a/p = 0) and cannot both equal b^k − 1 (otherwise a/p = 1), so N0 + N1 = b^k − 1, which is Midy's theorem.
|
https://en.wikipedia.org/wiki/Inverse%20scattering%20transform
|
In mathematics, the inverse scattering transform is a method for solving some non-linear partial differential equations. The method is a non-linear analogue, and in some sense generalization, of the Fourier transform, which itself is applied to solve many linear partial differential equations. The name "inverse scattering method" comes from the key idea of recovering the time evolution of a potential from the time evolution of its scattering data: inverse scattering refers to the problem of recovering a potential from its scattering matrix, as opposed to the direct scattering problem of finding the scattering matrix from the potential.
The inverse scattering transform may be applied to many of the so-called exactly solvable models, that is to say completely integrable infinite dimensional systems.
Overview
The inverse scattering transform was first introduced by Gardner, Greene, Kruskal, and Miura for the Korteweg–de Vries equation, and soon extended to the nonlinear Schrödinger equation, the sine-Gordon equation, and the Toda lattice equation. It was later used to solve many other equations, such as the Kadomtsev–Petviashvili equation, the Ishimori equation, the Dym equation, and so on. A further family of examples is provided by the Bogomolny equations (for a given gauge group and oriented Riemannian 3-fold), the solutions of which are magnetic monopoles.
A characteristic of solutions obtained by the inverse scattering method is the existence of solitons, solutions resembling both particles and waves, which have no analogue for linear partial differential equations. The term "soliton" arises from non-linear optics.
The inverse scattering problem can be written as a Riemann–Hilbert factorization problem, at least in the case of equations of one space dimension. This formulation can be generalized to differential operators of order greater than 2 and also to periodic potentials.
In higher space dimensions one has instead a "nonlocal" Riemann–Hilbert factorization problem (with convolution instead of multiplication) or a d-bar problem.
Example: the Korteweg–de Vries equation
The Korteweg–de Vries equation is a nonlinear, dispersive, evolution partial differential equation for a function u of two real variables, one space variable x and one time variable t:

u_t − 6u u_x + u_xxx = 0,

with u_t and u_x denoting partial derivatives with respect to t and x, respectively.
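Assuming the normalization u_t − 6u u_x + u_xxx = 0, the one-soliton solution u(x, t) = −2 sech^2(x − 4t) can be checked by finite differences; the residual should vanish up to discretization error. (The sign convention and this particular soliton are assumptions of the sketch, not taken from the text above.)

```python
import math

def u(x, t):
    # one-soliton solution with speed c = 4 for u_t - 6 u u_x + u_xxx = 0
    return -2.0 / math.cosh(x - 4 * t) ** 2

h = 1e-3  # step for central finite differences

def d_t(x, t):
    return (u(x, t + h) - u(x, t - h)) / (2 * h)

def d_x(x, t):
    return (u(x + h, t) - u(x - h, t)) / (2 * h)

def d_xxx(x, t):
    return (u(x + 2 * h, t) - 2 * u(x + h, t)
            + 2 * u(x - h, t) - u(x - 2 * h, t)) / (2 * h ** 3)

for x in (-1.5, -0.2, 0.0, 0.8, 2.0):
    for t in (0.0, 0.1, 0.5):
        residual = d_t(x, t) - 6 * u(x, t) * d_x(x, t) + d_xxx(x, t)
        assert abs(residual) < 1e-3   # zero up to O(h^2) discretization error
```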
To solve the initial value problem for this equation where u(x, 0) is a known function of x, one associates to this equation the Schrödinger eigenvalue equation

ψ_xx + (λ − u)ψ = 0,

where ψ is an unknown function of t and x and u is the solution of the Korteweg–de Vries equation that is unknown except at t = 0. The constant λ is an eigenvalue.

From the Schrödinger equation we obtain

u = ψ_xx/ψ + λ.
Substituting this into the Korteweg–de Vries equation and integrating gives the equation
where C and D are constants.
Method of solution
Step 1. Determine the nonlinear partial differential equation. This is usually accomplished by analyzing the physics of the situation being studied.
|
https://en.wikipedia.org/wiki/Binomial%20regression
|
In statistics, binomial regression is a regression analysis technique in which the response (often referred to as Y) has a binomial distribution: it is the number of successes in a series of n independent Bernoulli trials, where each trial has probability of success p. In binomial regression, the probability of a success is related to explanatory variables: the corresponding concept in ordinary regression is to relate the mean value of the unobserved response to explanatory variables.
Binomial regression is closely related to binary regression: a binary regression can be considered a binomial regression with n = 1, or a regression on ungrouped binary data, while a binomial regression can be considered a regression on grouped binary data (see comparison). Binomial regression models are essentially the same as binary choice models, one type of discrete choice model: the primary difference is in the theoretical motivation (see comparison). In machine learning, binomial regression is considered a special case of probabilistic classification, and thus a generalization of binary classification.
Example application
In one published example of an application of binomial regression, the details were as follows. The observed outcome variable was whether or not a fault occurred in an industrial process. There were two explanatory variables: the first was a simple two-case factor representing whether or not a modified version of the process was used and the second was an ordinary quantitative variable measuring the purity of the material being supplied for the process.
Specification of model
The response variable Y is assumed to be binomially distributed conditional on the explanatory variables X. The number of trials n is known, and the probability of success for each trial p is specified as a function θ(X). This implies that the conditional expectation and conditional variance of the observed fraction of successes, Y/n, are

E[Y/n | X] = θ(X) and Var(Y/n | X) = θ(X)(1 − θ(X))/n.
The goal of binomial regression is to estimate the function θ(X). Typically the statistician assumes θ(X) = m(β′X), for a known function m, and estimates β. Common choices for m include the logistic function.
The data are often fitted as a generalised linear model where the predicted values μ are the probabilities that any individual event will result in a success. The likelihood of the predictions is then given by

L = Π(i = 1..n) [ 1{yi = 1} μi + 1{yi = 0} (1 − μi) ],
where 1A is the indicator function which takes on the value one when the event A occurs, and zero otherwise: in this formulation, for any given observation yi, only one of the two terms inside the product contributes, according to whether yi=0 or 1. The likelihood function is more fully specified by defining the formal parameters μi as parameterised functions of the explanatory variables: this defines the likelihood in terms of a much reduced number of parameters. Fitting of the model is usually achieved by employing the method of maximum likelihood to determine these parameters. In practice, the use of a formulation as a generalised line
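As a minimal worked example (not from the source; the data and all names are ad hoc), consider an intercept-only logistic model fitted by Newton's method to grouped data with 7 successes in 10 trials; the maximum-likelihood estimate of the success probability is simply the sample proportion:

```python
import math

# Grouped binary data: 7 successes in n = 10 Bernoulli trials,
# intercept-only logistic model: theta = m(beta) = 1/(1 + exp(-beta)).
successes, n = 7, 10

def loglik(beta):
    theta = 1 / (1 + math.exp(-beta))
    return successes * math.log(theta) + (n - successes) * math.log(1 - theta)

# Newton's method on the score d/dbeta loglik = successes - n*theta
beta = 0.0
for _ in range(50):
    theta = 1 / (1 + math.exp(-beta))
    score = successes - n * theta
    info = n * theta * (1 - theta)   # Fisher information of the logistic model
    beta += score / info

theta_hat = 1 / (1 + math.exp(-beta))
assert abs(theta_hat - successes / n) < 1e-12   # MLE is the sample proportion
assert abs(beta - math.log(0.7 / 0.3)) < 1e-10  # beta_hat = logit(0.7)
```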
|
https://en.wikipedia.org/wiki/Renate%20Loll
|
Renate Loll (born 19 June 1962, Aachen) is a German physicist. She is a Professor in Theoretical Physics at the Institute for Mathematics, Astrophysics and Particle Physics of the Radboud University in Nijmegen, Netherlands. She previously worked at the Institute for Theoretical Physics of Utrecht University. She received her Ph.D. from Imperial College, London, in 1989. In 2001 she joined the permanent staff of the ITP, after spending several years at the Max Planck Institute for Gravitational Physics in Golm, Germany. With Jan Ambjørn and Polish physicist Jerzy Jurkiewicz she helped develop a new approach to nonperturbative quantization of gravity, that of Causal Dynamical Triangulations.
She has been a member of the Royal Netherlands Academy of Arts and Sciences since 2015.
References
External links
Prof Loll's website
1962 births
20th-century German physicists
20th-century German women scientists
21st-century German physicists
21st-century German women scientists
Alumni of Imperial College London
German women physicists
Living people
Members of the Royal Netherlands Academy of Arts and Sciences
People from Aachen
Academic staff of Radboud University Nijmegen
University of Freiburg alumni
University of Potsdam alumni
Academic staff of Utrecht University
20th-century German women
21st-century German women
|
https://en.wikipedia.org/wiki/Squared%20triangular%20number
|
In number theory, the sum of the first n cubes is the square of the nth triangular number. That is,

1^3 + 2^3 + 3^3 + ... + n^3 = (1 + 2 + 3 + ... + n)^2.

The same equation may be written more compactly using the mathematical notation for summation:

Σ(k = 1..n) k^3 = ( Σ(k = 1..n) k )^2.
This identity is sometimes called Nicomachus's theorem, after Nicomachus of Gerasa (c. 60 – c. 120 CE).
History
Nicomachus, at the end of Chapter 20 of his Introduction to Arithmetic, pointed out that if one writes a list of the odd numbers, the first is the cube of 1, the sum of the next two is the cube of 2, the sum of the next three is the cube of 3, and so on. He does not go further than this, but from this it follows that the sum of the first n cubes equals the sum of the first n(n + 1)/2 odd numbers, that is, the odd numbers from 1 to n(n + 1) − 1. The average of these numbers is obviously n(n + 1)/2, and there are n(n + 1)/2 of them, so their sum is (n(n + 1)/2)^2.
Many early mathematicians have studied and provided proofs of Nicomachus's theorem. claims that "every student of number theory surely must have marveled at this miraculous fact". finds references to the identity not only in the works of Nicomachus in what is now Jordan in the first century CE, but also in those of Aryabhata in India in the fifth century, and in those of Al-Karaji circa 1000 in Persia. mentions several additional early mathematical works on this formula, by Al-Qabisi (tenth century Arabia), Gersonides (circa 1300 France), and Nilakantha Somayaji (circa 1500 India); he reproduces Nilakantha's visual proof.
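Both the identity and Nicomachus's odd-number grouping can be checked directly:

```python
# Check the identity and Nicomachus's odd-number grouping for n up to 60.
for n in range(1, 61):
    cubes = sum(k ** 3 for k in range(1, n + 1))
    triangular = n * (n + 1) // 2
    assert cubes == triangular ** 2
    # the odd numbers 1, 3, 5, ... grouped as 1 | 3+5 | 7+9+11 | ...
    odds = iter(range(1, n * (n + 1), 2))
    for m in range(1, n + 1):
        assert sum(next(odds) for _ in range(m)) == m ** 3

total = sum(k ** 3 for k in range(1, 11))
assert total == 3025   # = 55**2, and 55 is the 10th triangular number
```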
Numeric values; geometric and probabilistic interpretation
The sequence of squared triangular numbers is

0, 1, 9, 36, 100, 225, 441, 784, 1296, 2025, 3025, ...
These numbers can be viewed as figurate numbers, a four-dimensional hyperpyramidal generalization of the triangular numbers and square pyramidal numbers.
As observes, these numbers also count the number of rectangles with horizontal and vertical sides formed in an n × n grid. For instance, the points of a 4 × 4 grid (or a square made up of three smaller squares on a side) can form 36 different rectangles. The number of squares in a square grid is similarly counted by the square pyramidal numbers.
The identity also admits a natural probabilistic interpretation as follows. Let i, j, k, and l be four integer numbers independently and uniformly chosen at random between 1 and n. Then, the probability that l is the largest of the four numbers (allowing ties, i.e. that i, j, and k are all at most l) equals the probability that j is at least as large as i and that l is at least as large as k. That is, P[i ≤ l, j ≤ l, k ≤ l] = P[i ≤ j and k ≤ l]. For any particular value of l, the combinations of i, j, and k that make l largest form a cube, so (adding the size of this cube over all choices of l) the number of combinations of i, j, k, l for which l is largest is a sum of cubes, the left hand side of the Nicomachus identity. The sets of pairs (i, j) with i ≤ j and of pairs (k, l) with k ≤ l form isosceles right triangles, and the set counted by the right hand side of the equation of probabilities is the Cartesian product of these two triangles, so its size is the square of a triangular number, the right hand side of the Nicomachus identity. The probabilities themselves are respectively
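The counting argument can be verified by brute force for a small n (here n = 6, so the triangular number is 21 and both sides count 441 quadruples):

```python
from itertools import product

n = 6
quadruples = list(product(range(1, n + 1), repeat=4))

# left side: l is at least as large as each of i, j, k
left = sum(1 for i, j, k, l in quadruples if i <= l and j <= l and k <= l)

# right side: j is at least as large as i, and l is at least as large as k
right = sum(1 for i, j, k, l in quadruples if i <= j and k <= l)

triangular = n * (n + 1) // 2
assert left == sum(l ** 3 for l in range(1, n + 1))   # a sum of cubes
assert right == triangular ** 2                       # square of a triangular number
assert left == right == 441
```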
|
https://en.wikipedia.org/wiki/Pseudolikelihood
|
In statistical theory, a pseudolikelihood is an approximation to the joint probability distribution of a collection of random variables. The practical use of this is that it can provide an approximation to the likelihood function of a set of observed data which may either provide a computationally simpler problem for estimation, or may provide a way of obtaining explicit estimates of model parameters.
The pseudolikelihood approach was introduced by Julian Besag in the context of analysing data having spatial dependence.
Definition
Given a set of random variables X = (X1, X2, ..., Xn), the pseudolikelihood of X = x = (x1, x2, ..., xn) is

L(θ) = Π(i = 1..n) Pr(Xi = xi | X−i = x−i; θ)

in the discrete case and

L(θ) = Π(i = 1..n) p(xi | x−i; θ)

in the continuous one.

Here X is a vector of variables, x is a vector of values, p is the conditional density, and θ is the vector of parameters we are to estimate. The expression X = x above means that each variable Xi in the vector X has a corresponding value xi in the vector x, and x−i means that the ith coordinate has been omitted. The expression Pr(X = x; θ) is the probability that the vector of variables X has values equal to the vector x. This probability of course depends on the unknown parameter θ. Because situations can often be described using state variables ranging over a set of possible values, the expression Pr(X = x; θ) can therefore represent the probability of a certain state among all possible states allowed by the state variables.
The pseudo-log-likelihood is a similar measure derived from the above expression, namely (in the discrete case)

log L(θ) = Σ(i = 1..n) log Pr(Xi = xi | X−i = x−i; θ).
One use of the pseudolikelihood measure is as an approximation for inference about a Markov or Bayesian network, as the pseudolikelihood of an assignment to may often be computed more efficiently than the likelihood, particularly when the latter may require marginalization over a large number of variables.
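A toy illustration (not from the source; the Ising-chain model and all names are ad hoc): compute the pseudolikelihood of an assignment from a fully known joint distribution, taking each conditional by marginalizing the joint. When the variables are independent it coincides with the true likelihood:

```python
from itertools import product
from math import exp

def joint(J):
    """Joint distribution of a 3-spin Ising chain s1-s2-s3, s_i in {-1, +1},
    with p(s) proportional to exp(J*(s1*s2 + s2*s3))."""
    weights = {s: exp(J * (s[0] * s[1] + s[1] * s[2]))
               for s in product((-1, 1), repeat=3)}
    Z = sum(weights.values())
    return {s: w / Z for s, w in weights.items()}

def pseudolikelihood(p, x):
    """Product over i of p(x_i | x_{-i}), each conditional from the joint p."""
    pl = 1.0
    for i in range(len(x)):
        numer = p[x]
        denom = sum(p[x[:i] + (v,) + x[i + 1:]] for v in (-1, 1))
        pl *= numer / denom
    return pl

x = (1, 1, -1)

# With J = 0 the spins are independent, so the pseudolikelihood
# equals the true likelihood 1/8.
p0 = joint(0.0)
assert abs(pseudolikelihood(p0, x) - p0[x]) < 1e-12
assert abs(p0[x] - 1 / 8) < 1e-12

# With coupling, the two quantities generally differ.
p1 = joint(0.5)
print(pseudolikelihood(p1, x), p1[x])
```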
Properties
Use of the pseudolikelihood in place of the true likelihood function in a maximum likelihood analysis can lead to good estimates, but a straightforward application of the usual likelihood techniques to derive information about estimation uncertainty, or for significance testing, would in general be incorrect.
References
Statistical inference
|
https://en.wikipedia.org/wiki/Perron%27s%20formula
|
In mathematics, and more particularly in analytic number theory, Perron's formula is a formula due to Oskar Perron to calculate the sum of an arithmetic function, by means of an inverse Mellin transform.
Statement
Let {a(n)} be an arithmetic function, and let

g(s) = Σ(n = 1..∞) a(n)/n^s

be the corresponding Dirichlet series. Presume the Dirichlet series to be uniformly convergent for ℜ(s) > σ. Then Perron's formula is

A(x) = Σ′(n ≤ x) a(n) = (1/(2πi)) ∫(c − i∞ .. c + i∞) g(s) (x^s/s) ds.
Here, the prime on the summation indicates that the last term of the sum must be multiplied by 1/2 when x is an integer. The integral is not a convergent Lebesgue integral; it is understood as the Cauchy principal value. The formula requires that c > 0, c > σ, and x > 0.
Proof
An easy sketch of the proof comes from taking Abel's sum formula

g(s) = Σ(n = 1..∞) a(n)/n^s = s ∫(1..∞) A(x) x^(−(s + 1)) dx.

This is nothing but a Laplace transform under the variable change x = e^t. Inverting it one gets Perron's formula.
Examples
Because of its general relationship to Dirichlet series, the formula is commonly applied to many number-theoretic sums. Thus, for example, one has the famous integral representation for the Riemann zeta function:

ζ(s) = s ∫(1..∞) ⌊x⌋ x^(−(s + 1)) dx,

and a similar formula for Dirichlet L-functions:

L(s, χ) = s ∫(1..∞) A(x) x^(−(s + 1)) dx,

where

A(x) = Σ(n ≤ x) χ(n)

and χ(n) is a Dirichlet character. Other examples appear in the articles on the Mertens function and the von Mangoldt function.
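The zeta representation — in the standard form ζ(s) = s ∫(1..∞) ⌊x⌋ x^(−(s + 1)) dx — can be checked numerically: on each interval [n, n + 1) the integrand has constant ⌊x⌋, so the truncated integral reduces to an exact sum (a sketch; the truncation point is an arbitrary choice):

```python
import math

# On [n, n+1) we have floor(x) = n, and s * int_n^(n+1) x^(-s-1) dx
# = n^(-s) - (n+1)^(-s), so the truncated integral is an exact finite sum.
def zeta_via_integral(s, N=100000):
    return sum(n * (n ** -s - (n + 1) ** -s) for n in range(1, N + 1))

assert abs(zeta_via_integral(2) - math.pi ** 2 / 6) < 1e-4    # zeta(2)
assert abs(zeta_via_integral(3) - 1.2020569031595943) < 1e-6  # zeta(3)
```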
Generalizations
Perron's formula is just a special case of the Mellin discrete convolution
where
and
the Mellin transform. The Perron formula is just the special case of the test function for the Heaviside step function.
References
Page 243 of
Theorems in analytic number theory
Calculus
Integral transforms
Summability methods
|
https://en.wikipedia.org/wiki/Burnside%20ring
|
In mathematics, the Burnside ring of a finite group is an algebraic construction that encodes the different ways the group can act on finite sets. The ideas were introduced by William Burnside at the end of the nineteenth century. The algebraic ring structure is a more recent development, due to Solomon (1967).
Formal definition
Given a finite group G, the generators of its Burnside ring Ω(G) are the formal sums of isomorphism classes of finite G-sets. For the ring structure, addition is given by disjoint union of G-sets and multiplication by their Cartesian product.
The Burnside ring is a free Z-module, whose generators are the (isomorphism classes of) orbit types of G.
If G acts on a finite set X, then one can write (disjoint union), where each Xi is a single G-orbit. Choosing any element xi in Xi creates an isomorphism G/Gi → Xi, where Gi is the stabilizer (isotropy) subgroup of G at xi. A different choice of representative yi in Xi gives a conjugate subgroup to Gi as stabilizer. This shows that the generators of Ω(G) as a Z-module are the orbits G/H as H ranges over conjugacy classes of subgroups of G.
In other words, a typical element of Ω(G) is
where ai in Z and G1, G2, ..., GN are representatives of the conjugacy classes of subgroups of G.
Marks
Much as character theory simplifies working with group representations, marks simplify working with permutation representations and the Burnside ring.
If G acts on X, and H ≤ G (H is a subgroup of G), then the mark of H on X is the number of elements of X that are fixed by every element of H: , where
If H and K are conjugate subgroups, then mX(H) = mX(K) for any finite G-set X; indeed, if K = gHg−1 then XK = g · XH.
It is also easy to see that for each H ≤ G, the map Ω(G) → Z : X ↦ mX(H) is a homomorphism. This means that to know the marks of G, it is sufficient to evaluate them on the generators of Ω(G), viz. the orbits G/H.
For each pair of subgroups H,K ≤ G define
This is mX(H) for X = G/K. The condition HgK = gK is equivalent to g−1Hg ≤ K, so if H is not conjugate to a subgroup of K then m(K, H) = 0.
To record all possible marks, one forms a table, Burnside's Table of Marks, as follows: Let G1 (= trivial subgroup), G2, ..., GN = G be representatives of the N conjugacy classes of subgroups of G, ordered in such a way that whenever Gi is conjugate to a subgroup of Gj, then i ≤ j. Now define the N × N table (square matrix) whose (i, j)th entry is m(Gi, Gj). This matrix is lower triangular, and the elements on the diagonal are non-zero so it is invertible.
It follows that if X is a G-set, and u its row vector of marks, so ui = mX(Gi), then X decomposes as a disjoint union of ai copies of the orbit of type Gi, where the vector a satisfies,
aM = u,
where M is the matrix of the table of marks. This theorem is due to .
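The whole computation can be carried out mechanically for a small group. The brute-force sketch below (suitable only for tiny groups) builds the table of marks of S3, then recovers the orbit decomposition of its natural action on three points by back-substituting in aM = u:

```python
from itertools import permutations, combinations

def compose(p, q):                      # (p*q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def closure(gens):
    """Subgroup generated by a set of permutations (as tuples)."""
    elems = set(gens)
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return elems
        elems |= new

G = set(permutations(range(3)))         # S3

# every subgroup of S3 is generated by at most two elements
subgroups = {frozenset(closure(set(gens)))
             for r in (1, 2) for gens in combinations(G, r)}

def conjugates(H):
    return {frozenset(compose(compose(g, h), inverse(g)) for h in H) for g in G}

classes, seen = [], set()               # class representatives, by increasing order
for H in sorted(subgroups, key=len):
    if H not in seen:
        classes.append(H)
        seen |= conjugates(H)

def cosets(K):
    return {frozenset(compose(g, k) for k in K) for g in G}

def mark(K, H):                          # fixed points of H acting on G/K
    return sum(all(frozenset(compose(h, g) for g in c) == c for h in H)
               for c in cosets(K))

M = [[mark(K, H) for H in classes] for K in classes]
print(M)        # Burnside's table of marks for S3: lower triangular

# natural action of S3 on {0,1,2}: row of marks u, then solve aM = u
u = [sum(all(h[x] == x for h in H) for x in range(3)) for H in classes]
a = [0] * len(classes)
for j in reversed(range(len(classes))):  # back-substitution (M lower triangular)
    s = u[j] - sum(a[i] * M[i][j] for i in range(j + 1, len(classes)))
    a[j] = s // M[j][j]
print(a)        # one orbit of type G/C2: the natural action is transitive
```

The multiplicity vector comes out as a single orbit of type G/C2, as expected: the natural action is transitive with point stabilizers of order 2.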
Examples
The table of marks for the cyclic group of order 6:
The table of marks for the symmetric group S3:
The dots in the two tables are all zeros, merely emphasizing the fact that the tables are lower triangular.
|
https://en.wikipedia.org/wiki/Set%20theory%20of%20the%20real%20line
|
Set theory of the real line is an area of mathematics concerned with the application of set theory to aspects of the real numbers.
For example, one knows that all countable sets of reals are null, i.e. have Lebesgue measure 0; one might therefore ask the least possible size of a set
which is not Lebesgue null. This invariant is called the uniformity of the ideal of null sets, denoted . There are many such invariants associated with this and other ideals, e.g. the ideal of meagre sets, plus more which do not have a characterisation in terms of ideals. If the continuum hypothesis (CH) holds, then all such invariants are equal to , the least uncountable cardinal. For example, we know is uncountable, but being the size of some set of reals under CH it can be at most .
On the other hand, if one assumes Martin's Axiom (MA) all common invariants are "big", that is equal to , the cardinality of the continuum. Martin's Axiom is consistent with . In fact one should view Martin's Axiom as a forcing axiom that negates the need to do specific forcings of a certain class (those satisfying the ccc), since the consistency of MA with large continuum is proved by doing all such forcings (up to a certain size shown to be sufficient). Each invariant can be made large by some ccc forcing, thus each is big given MA.
If one restricts to specific forcings, some invariants will become big while others remain small. Analysing these effects is the major work of the area, seeking to determine which inequalities between invariants are provable and which are inconsistent with ZFC. The inequalities among the ideals of measure (null sets) and category (meagre sets) are captured in Cichoń's diagram. Seventeen models (forcing constructions) were produced during the 1980s, starting with work of Arnold Miller, to demonstrate that no other inequalities are provable. These are analysed in detail in the book by Tomek Bartoszynski and Haim Judah, two of the eminent workers in the field.
One curious result is that if you can cover the real line with meagre sets (where ) then ; conversely if you can cover the real line with null sets then the least non-meagre set has size at least ; both of these results follow from the existence of a decomposition of as the union of a meagre set and a null set.
One of the last great unsolved problems of the area was the consistency of
proved in 1998 by Saharon Shelah.
See also
Cichoń's diagram
Cardinal invariant
References
Bartoszynski, Tomek & Judah, Haim, Set Theory: On the Structure of the Real Line, A. K. Peters Ltd. (1995).
Miller, Arnold, Some properties of measure and category, Transactions of the American Mathematical Society, 266(1):93–114 (1981).
Set theory
|
https://en.wikipedia.org/wiki/Molina%2C%20Chile
|
Molina is a Chilean city and commune in Curicó Province, Maule Region. Molina is named after Chilean Jesuit Juan Ignacio Molina.
Demographics
According to the 2002 census of the National Statistics Institute, Molina spans an area of and has 38,521 inhabitants (19,392 men and 19,129 women). Of these, 28,232 (73.3%) lived in urban areas and 10,289 (26.7%) in rural areas. The population grew by 8% (2,847 persons) between the 1992 and 2002 censuses.
Administration
As a commune, Molina is a third-level administrative division of Chile administered by a municipal council, headed by an alcalde who is directly elected every four years. The 2008-2012 alcalde is Mirtha Segura Ovalle (UDI).
Within the electoral divisions of Chile, Molina is represented in the Chamber of Deputies by Roberto León (PDC) and Celso Morales (UDI) as part of the 36th electoral district, together with Curicó, Teno, Romeral, Sagrada Familia, Hualañé, Licantén, Vichuquén and Rauco. The commune is represented in the Senate by Juan Antonio Coloma Correa (UDI) and Andrés Zaldívar Larraín (PDC) as part of the 10th senatorial constituency (Maule-North).
Notable people
Laureano Ladrón de Guevara (1889–1968), painter, printmaker and muralist
References
External links
Municipality of Molina
Communes of Chile
Populated places in Curicó Province
|
https://en.wikipedia.org/wiki/Transitively%20normal%20subgroup
|
In mathematics, in the field of group theory, a subgroup of a group is said to be transitively normal in the group if every normal subgroup of the subgroup is also normal in the whole group. In symbols, is a transitively normal subgroup of if for every normal in , we have that is normal in .
An alternate way to characterize these subgroups is: every normal-subgroup-preserving automorphism of the whole group must restrict to a normal-subgroup-preserving automorphism of the subgroup.
Here are some facts about transitively normal subgroups:
Every normal subgroup of a transitively normal subgroup is normal.
Every direct factor, or more generally, every central factor is transitively normal. Thus, every central subgroup is transitively normal.
A transitively normal subgroup of a transitively normal subgroup is transitively normal.
A transitively normal subgroup is normal.
References
See also
Normal subgroup
Subgroup properties
|
https://en.wikipedia.org/wiki/Central%20subgroup
|
In mathematics, in the field of group theory, a subgroup of a group is termed central if it lies inside the center of the group.
Given a group , the center of , denoted as , is defined as the set of those elements of the group which commute with every element of the group. The center is a characteristic subgroup. A subgroup of is termed central if .
Central subgroups have the following properties:
They are abelian groups (because, in particular, all elements of the center must commute with each other).
They are normal subgroups. They are central factors, and are hence transitively normal subgroups.
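A small computational illustration (brute force, for a tiny group): the center of the dihedral group D4, realized here as permutations of the square's corners, together with checks that this central subgroup is abelian and normal:

```python
def compose(p, q):                      # (p*q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def closure(gens):
    """Subgroup generated by a set of permutations (as tuples)."""
    elems = set(gens)
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return elems
        elems |= new

# D4 on the square's corners 0..3: generated by the rotation (0 1 2 3)
# and the diagonal reflection (0 2).
rot, ref = (1, 2, 3, 0), (2, 1, 0, 3)
D4 = closure({rot, ref})

center = {z for z in D4 if all(compose(z, g) == compose(g, z) for g in D4)}
print(len(D4), len(center))             # 8 2: center = {id, rotation by 180}

# the center is abelian ...
assert all(compose(a, b) == compose(b, a) for a in center for b in center)
# ... and, like every central subgroup, normal: g Z g^-1 = Z
assert all({compose(compose(g, z), inverse(g)) for z in center} == center
           for g in D4)
```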
References
Subgroup properties
|
https://en.wikipedia.org/wiki/C-function
|
In mathematics, c-function may refer to:
Smooth function
Harish-Chandra's c-function in the theory of Lie groups
List of C functions for the programming language C
|
https://en.wikipedia.org/wiki/A-group
|
In mathematics, in the area of abstract algebra known as group theory, an A-group is a type of group that is similar to abelian groups. The groups were first studied in the 1940s by Philip Hall, and are still studied today. A great deal is known about their structure.
Definition
An A-group is a finite group with the property that all of its Sylow subgroups are abelian.
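The definition can be tested by brute force for small permutation groups. In the sketch below the Sylow-subgroup search only tries subgroups generated by at most two elements of p-power order, which happens to suffice for S3 and S4 but is not a general Sylow algorithm; since all Sylow p-subgroups are conjugate, checking one of them is enough:

```python
from itertools import permutations, product

def compose(p, q):                      # (p*q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def closure(gens):
    """Subgroup generated by a set of permutations (as tuples)."""
    elems = set(gens)
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return elems
        elems |= new

def elem_order(p):
    ident = tuple(range(len(p)))
    q, k = p, 1
    while q != ident:
        q, k = compose(q, p), k + 1
    return k

def is_p_power(m, p):
    while m % p == 0:
        m //= p
    return m == 1

def sylow_order(n, p):                  # largest power of p dividing n
    s = 1
    while n % (s * p) == 0:
        s *= p
    return s

def is_A_group(G, primes):
    """True iff one (hence every) Sylow p-subgroup is abelian, for each p."""
    for p in primes:
        target = sylow_order(len(G), p)
        pelems = [g for g in G if is_p_power(elem_order(g), p)]
        sylow = next(H for a, b in product(pelems, repeat=2)
                     if len(H := closure({a, b})) == target)
        if not all(compose(a, b) == compose(b, a) for a in sylow for b in sylow):
            return False
    return True

S3 = set(permutations(range(3)))
S4 = set(permutations(range(4)))
print(is_A_group(S3, (2, 3)))   # True: its Sylow subgroups are C2 and C3
print(is_A_group(S4, (2, 3)))   # False: its Sylow 2-subgroup is dihedral of order 8
```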
History
The term A-group was probably first used in , where attention was restricted to soluble A-groups. Hall's presentation was rather brief without proofs, but his remarks were soon expanded with proofs in . The representation theory of A-groups was studied in . Carter then published an important relationship between Carter subgroups and Hall's work in . The work of Hall, Taunt, and Carter was presented in textbook form in . The focus on soluble A-groups broadened, with the classification of finite simple A-groups in which allowed generalizing Taunt's work to finite groups in . Interest in A-groups also broadened due to an important relationship to varieties of groups discussed in . Modern interest in A-groups was renewed when new enumeration techniques enabled tight asymptotic bounds on the number of distinct isomorphism classes of A-groups in .
Properties
The following can be said about A-groups:
Every subgroup, quotient group, and direct product of A-groups is again an A-group.
Every finite abelian group is an A-group.
A finite nilpotent group is an A-group if and only if it is abelian.
The symmetric group on three points is an A-group that is not abelian.
Every group of cube-free order is an A-group.
The derived length of an A-group can be arbitrarily large, but no larger than the number of distinct prime divisors of the order, stated in , and presented in textbook form as .
The lower nilpotent series coincides with the derived series .
A soluble A-group has a unique maximal abelian normal subgroup .
The Fitting subgroup of a solvable A-group is equal to the direct product of the centers of the terms of the derived series, first stated in , then proven in , and presented in textbook form in .
A non-abelian finite simple group is an A-group if and only if it is isomorphic to the first Janko group or to PSL(2,q) where q > 3 and either q = 2n or q ≡ 3,5 mod 8, as shown in .
All the groups in the variety generated by a finite group are finitely approximable if and only if that group is an A-group, as shown in .
Like Z-groups, whose Sylow subgroups are cyclic, A-groups can be easier to study than general finite groups because of the restrictions on the local structure. For instance, a more precise enumeration of soluble A-groups was found after an enumeration of soluble groups with fixed, but arbitrary Sylow subgroups . A more leisurely exposition is given in .
References
, especially Kap. VI, §14, p751–760
Properties of groups
Finite groups
|
https://en.wikipedia.org/wiki/M-group
|
In mathematics, especially in the field of group theory, the term M-group may refer to a few distinct concepts:
monomial group, in character theory, a group whose complex irreducible characters are all monomial
Iwasawa group or modular group, in the study of subgroup lattices, a group whose subgroup lattice is modular
virtually polycyclic group, in infinite group theory, a group with a polycyclic subgroup of finite index
|
https://en.wikipedia.org/wiki/HN%20group
|
In mathematics, in the field of group theory, a HN group or hypernormalizing group is a group with the property that the hypernormalizer of any subnormal subgroup is the whole group.
For finite groups, this is equivalent to the condition that the normalizer of any subnormal subgroup be subnormal.
Some facts about HN groups:
Subgroups of solvable HN groups are solvable HN groups.
Metanilpotent A-groups are HN groups.
References
Finite Soluble Hypernormalizing Groups by Alan R. Camina in Journal of Algebra Vol 8 (362–375), 1968
Properties of groups
|
https://en.wikipedia.org/wiki/Component%20%28group%20theory%29
|
In mathematics, in the field of group theory, a component of a finite group is a quasisimple subnormal subgroup. Any two distinct components commute. The product of all the components is the layer of the group.
For finite abelian (or nilpotent) groups, p-component is used in a different sense to mean the Sylow p-subgroup, so the abelian group is the product of its p-components for primes p. These are not components in the sense above, as abelian groups are not quasisimple.
A quasisimple subgroup of a finite group is called a standard component if its centralizer has even order, it is normal in the centralizer of every involution centralizing it, and it commutes with none of its conjugates. This concept is used in the classification of finite simple groups, for instance, by showing that under mild restrictions on the standard component one of the following always holds:
a standard component is normal (so a component as above),
the whole group has a nontrivial solvable normal subgroup,
the subgroup generated by the conjugates of the standard component is on a short list,
or the standard component is a previously unknown quasisimple group .
References
Group theory
Subgroup properties
|
https://en.wikipedia.org/wiki/Perfect%20core
|
In mathematics, in the field of group theory, the perfect core (or perfect radical) of a group is its largest perfect subgroup. Its existence is guaranteed by the fact that the subgroup generated by a family of perfect subgroups is again a perfect subgroup. The perfect core is also the point where the transfinite derived series stabilizes for any group.
A group whose perfect core is trivial is termed a hypoabelian group. Every solvable group is hypoabelian, and so is every free group. More generally, every residually solvable group is hypoabelian.
The quotient of a group G by its perfect core is hypoabelian, and is called the hypoabelianization of G.
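For a finite group the transfinite derived series is just the iterated derived series, so the perfect core can be computed directly. A brute-force sketch for S4, whose derived series S4 > A4 > V4 > 1 stabilizes at the trivial group, showing that the perfect core of S4 is trivial (S4 is solvable, hence hypoabelian):

```python
from itertools import permutations

def compose(p, q):                      # (p*q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def closure(gens, n):
    """Subgroup of Sym(n) generated by gens (always contains the identity)."""
    elems = set(gens) | {tuple(range(n))}
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return elems
        elems |= new

def derived(G, n):
    """Derived subgroup: closure of all commutators a b a^-1 b^-1."""
    return closure({compose(compose(a, b), compose(inverse(a), inverse(b)))
                    for a in G for b in G}, n)

G = set(permutations(range(4)))         # S4
sizes = [len(G)]
while True:
    G = derived(G, 4)
    if len(G) == sizes[-1]:             # series has stabilized
        break
    sizes.append(len(G))
print(sizes)    # [24, 12, 4, 1]: stabilizes at the trivial group
```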
References
Functional subgroups
Group theory
Solvable groups
|
https://en.wikipedia.org/wiki/Imperfect%20group
|
In mathematics, in the area of algebra known as group theory, an imperfect group is a group with no nontrivial perfect quotients. Some of their basic properties were established in . The study of imperfect groups apparently began in .
The class of imperfect groups is closed under extension and quotient groups, but not under subgroups. If G is a group, N, M are normal subgroups with G/N and G/M imperfect, then G/(N∩M) is imperfect, showing that the class of imperfect groups is a formation. The (restricted or unrestricted) direct product of imperfect groups is imperfect.
Every solvable group is imperfect. Finite symmetric groups are also imperfect. The general linear groups PGL(2,q) are imperfect for q an odd prime power. For any group H, the wreath product H wr Sym2 of H with the symmetric group on two points is imperfect. In particular, every group can be embedded as a two-step subnormal subgroup of an imperfect group of roughly the same cardinality (2|H|2).
References
Properties of groups
|
https://en.wikipedia.org/wiki/Locally%20finite%20group
|
In mathematics, in the field of group theory, a locally finite group is a type of group that can be studied in ways analogous to a finite group. Sylow subgroups, Carter subgroups, and abelian subgroups of locally finite groups have been studied. The concept is credited to work in the 1930s by Russian mathematician Sergei Chernikov.
Definition and first consequences
A locally finite group is a group for which every finitely generated subgroup is finite.
Since the cyclic subgroups of a locally finite group are finitely generated, hence finite, every element has finite order, and so the group is periodic.
Examples and non-examples
Examples:
Every finite group is locally finite
Every infinite direct sum of finite groups is locally finite (although the direct product may not be).
Omega-categorical groups
The Prüfer groups are locally finite abelian groups
Every Hamiltonian group is locally finite
Every periodic solvable group is locally finite .
Every subgroup of a locally finite group is locally finite. (Proof. Let G be a locally finite group and S a subgroup. Every finitely generated subgroup of S is a (finitely generated) subgroup of G.)
Hall's universal group is a countable locally finite group containing each countable locally finite group as a subgroup.
Every group has a unique maximal normal locally finite subgroup
Every periodic subgroup of the general linear group over the complex numbers is locally finite. Since all locally finite groups are periodic, this means that for linear groups and periodic groups the conditions are identical.
Non-examples:
No group with an element of infinite order is a locally finite group
No nontrivial free group is locally finite
A Tarski monster group is periodic, but not locally finite.
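The Prüfer group example above can be checked concretely: modelling the Prüfer 2-group as the fractions with 2-power denominator in Q/Z, any finitely generated subgroup is finite (indeed cyclic, generated by 1/2^k for the largest denominator 2^k that occurs). A brute-force sketch:

```python
from fractions import Fraction

def generated_subgroup(gens):
    """Closure of gens under addition mod 1 in Q/Z (brute force;
    terminates because every generator has finite order)."""
    elems = {Fraction(0)}
    frontier = set(gens)
    while frontier:
        nxt = set()
        for a in frontier:
            for b in elems | frontier:
                s = (a + b) % 1             # group operation in Q/Z
                if s not in elems and s not in frontier:
                    nxt.add(s)
        elems |= frontier
        frontier = nxt
    return elems

# two elements of the Prüfer 2-group; largest denominator is 8
gens = [Fraction(1, 4), Fraction(3, 8)]
H = generated_subgroup(gens)
print(len(H))   # 8: the cyclic group generated by 1/8
```

Since 3/8 already generates the cyclic group of order 8 containing 1/4, the finitely generated subgroup is finite, as local finiteness demands.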
Properties
The class of locally finite groups is closed under subgroups, quotients, and extensions .
Locally finite groups satisfy a weaker form of Sylow's theorems. If a locally finite group has a finite p-subgroup contained in no other p-subgroups, then all maximal p-subgroups are finite and conjugate. If there are finitely many conjugates, then the number of conjugates is congruent to 1 modulo p. In fact, if every countable subgroup of a locally finite group has only countably many maximal p-subgroups, then every maximal p-subgroup of the group is conjugate .
The class of locally finite groups behaves somewhat similarly to the class of finite groups. Much of the 1960s theory of formations and Fitting classes, as well as the older 19th century and 1930s theory of Sylow subgroups has an analogue in the theory of locally finite groups .
Similarly to the Burnside problem, mathematicians have wondered whether every infinite group contains an infinite abelian subgroup. While this need not be true in general, a result of Philip Hall and others is that every infinite locally finite group contains an infinite abelian group. The proof of this fact in infinite group theory relies upon the Feit–Thompson theorem.
|
https://en.wikipedia.org/wiki/Metanilpotent%20group
|
In mathematics, in the field of group theory, a metanilpotent group is a group that is nilpotent by nilpotent. In other words, it has a normal nilpotent subgroup such that the quotient group is also nilpotent.
In symbols, is metanilpotent if there is a normal subgroup such that both and are nilpotent.
The following are clear:
Every metanilpotent group is a solvable group.
Every subgroup and every quotient of a metanilpotent group is metanilpotent.
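A concrete instance: S3 is metanilpotent, since its derived subgroup A3 is abelian (hence nilpotent) and the quotient S3/A3 has order 2 (hence nilpotent). A brute-force check with permutations:

```python
from itertools import permutations

def compose(p, q):                      # (p*q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def closure(gens, n):
    """Subgroup of Sym(n) generated by gens (always contains the identity)."""
    elems = set(gens) | {tuple(range(n))}
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return elems
        elems |= new

S3 = set(permutations(range(3)))
# N = derived subgroup of S3: closure of all commutators a b a^-1 b^-1
N = closure({compose(compose(a, b), compose(inverse(a), inverse(b)))
             for a in S3 for b in S3}, 3)

assert len(N) == 3                                        # N = A3, cyclic
assert all(compose(a, b) == compose(b, a) for a in N for b in N)  # abelian
assert all(frozenset(compose(compose(g, x), inverse(g)) for x in N)
           == frozenset(N) for g in S3)                   # N is normal
print(len(S3) // len(N))   # 2: the quotient is C2, which is nilpotent
```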
References
J.C. Lennox, D.J.S. Robinson, The Theory of Infinite Soluble Groups, Oxford University Press, 2004, . P.27.
D.J.S. Robinson, A Course in the Theory of Groups, GTM 80, Springer Verlag, 1996, . P.150.
Solvable groups
Properties of groups
|
https://en.wikipedia.org/wiki/Noga%20Alon
|
Noga Alon (; born 1956) is an Israeli mathematician and a professor of mathematics at Princeton University noted for his contributions to combinatorics and theoretical computer science, having authored hundreds of papers.
Education and career
Alon was born in 1956 in Haifa, where he graduated from the Hebrew Reali School in 1974. He graduated summa cum laude from the Technion – Israel Institute of Technology in 1979, earned a master's degree in mathematics in 1980 from Tel Aviv University, and received his Ph.D. in Mathematics at the Hebrew University of Jerusalem in 1983 with the dissertation Extremal Problems in Combinatorics supervised by Micha Perles.
After postdoctoral research at the Massachusetts Institute of Technology he returned to Tel Aviv University as a senior lecturer in 1985, obtained a permanent position as an associate professor there in 1986, and was promoted to full professor in 1988. He was head of the School of Mathematical Science from 1999 to 2001, and was given the Florence and Ted Baumritter Combinatorics and Computer Science Chair, before retiring as professor emeritus and moving to Princeton University in 2018.
He was editor-in-chief of the journal Random Structures and Algorithms beginning in 2008.
Research
Alon has published more than five hundred research papers, mostly in combinatorics and in theoretical computer science, and one book, on the probabilistic method. He has also published under the pseudonym "A. Nilli", based on the name of his daughter Nilli Alon.
His research contributions include the combinatorial Nullstellensatz, an algebraic tool with many applications in combinatorics; color-coding, a technique for fixed-parameter tractability of pattern-matching algorithms in graphs; and the Alon–Boppana bound in spectral graph theory.
Selected works
Book
The Probabilistic Method, with Joel Spencer, Wiley, 1992. 2nd ed., 2000; 3rd ed., 2008; 4th ed., 2016.
Research articles
Previously in the ACM Symposium on Theory of Computing (STOC), 1996.
Awards
Alon has received a number of awards, including the following:
1989 – Erdős Prize;
2000 – George Pólya Prize in Applied Combinatorics of the Society for Industrial and Applied Mathematics
2001 – Michael Bruno Memorial Award of the Israel Institute for Advanced Studies;
2005 – Gödel Prize, with Yossi Matias and Mario Szegedy, for their paper "The space complexity of approximating the frequency moments" on streaming algorithms
2008 – Israel Prize, for mathematics.
2011 – EMET Prize, with Saharon Shelah, for mathematics.
2019 – Paris Kanellakis Award, with Phillip Gibbons, Yossi Matias and Mario Szegedy, "for foundational work on streaming algorithms and their application to large scale data analytics"
2021 – Leroy P. Steele Prize for Mathematical Exposition, with Joel Spencer, for The Probabilistic Method
2022 – Shaw Prize in Mathematical Sciences, with Ehud Hrushovski, "for their remarkable contributions to discrete mathematics and model theor
|
https://en.wikipedia.org/wiki/Periodic%20point
|
In mathematics, in the study of iterated functions and dynamical systems, a periodic point of a function is a point which the system returns to after a certain number of function iterations or a certain amount of time.
Iterated functions
Given a mapping from a set into itself,
a point in is called a periodic point if there exists an > 0 such that
where is the th iterate of . The smallest positive integer satisfying the above is called the prime period or least period of the point . If every point in is a periodic point with the same period , then is called periodic with period (this is not to be confused with the notion of a periodic function).
If there exist distinct and such that
then is called a preperiodic point. All periodic points are preperiodic.
If is a diffeomorphism of a differentiable manifold, so that the derivative is defined, then one says that a periodic point is hyperbolic if
that it is attractive if
and it is repelling if
If the dimension of the stable manifold of a periodic point or fixed point is zero, the point is called a source; if the dimension of its unstable manifold is zero, it is called a sink; and if both the stable and unstable manifold have nonzero dimension, it is called a saddle or saddle point.
Examples
A period-one point is called a fixed point.
The logistic map
exhibits periodicity for various values of the parameter . For between 0 and 1, 0 is the sole periodic point, with period 1 (giving the sequence which attracts all orbits). For between 1 and 3, the value 0 is still periodic but is not attracting, while the value is an attracting periodic point of period 1. With greater than 3 but less than there is a pair of period-2 points which together form an attracting sequence, as well as the non-attracting period-1 points 0 and . As the value of the parameter rises toward 4, there arise groups of periodic points with any positive integer for the period; for some values of one of these repeating sequences is attracting while for others none of them are (with almost all orbits being chaotic).
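The attracting period-2 orbit described above can be located numerically by plain iteration; the parameter value, starting point and tolerances below are ad-hoc choices for this sketch:

```python
# Logistic map f(x) = r x (1 - x) at r = 3.2, where a period-2 orbit attracts.
r = 3.2
f = lambda x: r * x * (1 - x)

x = 0.4
for _ in range(1000):            # discard the transient
    x = f(x)

assert abs(f(f(x)) - x) < 1e-9   # x returns after two iterations ...
assert abs(f(x) - x) > 0.1       # ... but not after one: prime period 2
print(sorted((x, f(x))))         # the two cycle points, roughly 0.513 and 0.799
```

The cycle points agree with the exact values (r + 1 ± sqrt((r − 3)(r + 1)))/(2r) obtained by solving f(f(x)) = x.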
Dynamical system
Given a real global dynamical system with the phase space and the evolution function,
a point in is called periodic with period if
The smallest positive with this property is called prime period of the point .
Properties
Given a periodic point with period , then for all in
Given a periodic point then all points on the orbit through are periodic with the same prime period.
See also
Limit cycle
Limit set
Stable set
Sharkovsky's theorem
Stationary point
Periodic points of complex quadratic mappings
Limit sets
|
https://en.wikipedia.org/wiki/Conjugacy%20problem
|
In abstract algebra, the conjugacy problem for a group G with a given presentation is the decision problem of determining, given two words x and y in G, whether or not they represent conjugate elements of G. That is, the problem is to determine whether there exists an element z of G such that
The conjugacy problem is also known as the transformation problem.
The conjugacy problem was identified by Max Dehn in 1911 as one of the fundamental decision problems in group theory; the other two being the word problem and the isomorphism problem. The conjugacy problem contains the word problem as a special case: if x and y are words, deciding if they are the same word is equivalent to deciding if is the identity, which is the same as deciding whether it is conjugate to the identity. In 1912 Dehn gave an algorithm that solves both the word and conjugacy problems for the fundamental groups of closed orientable two-dimensional manifolds of genus greater than or equal to 2 (the genus 0 and genus 1 cases being trivial).
It is known that the conjugacy problem is undecidable for many classes of groups.
Classes of group presentations for which it is known to be soluble include:
free groups (no defining relators)
one-relator groups with torsion
braid groups
knot groups
finitely presented conjugacy separable groups
finitely generated abelian groups (relators include all commutators)
Gromov-hyperbolic groups
biautomatic groups
CAT(0) groups
Fundamental groups of geometrizable 3-manifolds
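For the first class on the list, free groups, the solution can be sketched concretely: two words are conjugate if and only if their cyclic reductions are cyclic rotations of each other. In the sketch below a generator is a lowercase letter and its inverse the corresponding uppercase letter (a convention chosen just for this illustration):

```python
def reduce_word(w):
    """Free reduction: repeatedly cancel adjacent inverse pairs x X."""
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return ''.join(out)

def cyclic_reduce(w):
    """Trim inverse first/last letters until the word is cyclically reduced."""
    w = reduce_word(w)
    while len(w) >= 2 and w[0] == w[-1].swapcase():
        w = w[1:-1]
    return w

def conjugate_in_free_group(u, v):
    u, v = cyclic_reduce(u), cyclic_reduce(v)
    if len(u) != len(v):
        return False
    return any(v == u[i:] + u[:i] for i in range(max(len(u), 1)))

print(conjugate_in_free_group('abA', 'b'))   # True: b = a^-1 (a b a^-1) a
print(conjugate_in_free_group('ab', 'ba'))   # True: cyclic rotation
print(conjugate_in_free_group('a', 'b'))     # False
```

This also solves the word problem as a special case: a word equals the identity exactly when it is conjugate to the empty word.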
References
Group theory
|
https://en.wikipedia.org/wiki/Totally%20disconnected%20group
|
In mathematics, a totally disconnected group is a topological group that is totally disconnected. Such topological groups are necessarily Hausdorff.
Interest centres on locally compact totally disconnected groups (variously referred to as groups of td-type, locally profinite groups, or t.d. groups). The compact case has been heavily studied – these are the profinite groups – but for a long time not much was known about the general case. A theorem of van Dantzig from the 1930s, stating that every such group contains a compact open subgroup, was all that was known. Then groundbreaking work by George Willis in 1994 opened up the field by showing that every locally compact totally disconnected group contains a so-called tidy subgroup and a special function on its automorphisms, the scale function, giving a quantifiable parameter for the local structure. Advances on the global structure of totally disconnected groups were obtained in 2011 by Caprace and Monod, with notably a classification of characteristically simple groups and of Noetherian groups.
Locally compact case
In a locally compact, totally disconnected group, every neighbourhood of the identity contains a compact open subgroup. Conversely, if a group is such that the identity has a neighbourhood basis consisting of compact open subgroups, then it is locally compact and totally disconnected.
Tidy subgroups
Let G be a locally compact, totally disconnected group, U a compact open subgroup of G and a continuous automorphism of G.
Define:
U is said to be tidy for if and only if and and are closed.
The scale function
The index of in is shown to be finite and independent of the U which is tidy for . Define the scale function as this index. Restriction to inner automorphisms gives a function on G with interesting properties. These are in particular:
Define the function on G by
,
where is the inner automorphism of on G.
Properties
is continuous.
, whenever x in G is a compact element.
for every non-negative integer .
The modular function on G is given by .
Calculations and applications
The scale function was used to prove a conjecture by Hofmann and Mukherjea and has been explicitly calculated for p-adic Lie groups and linear groups over local skew fields by Helge Glöckner.
Notes
References
G. A. Willis, The structure of totally disconnected, locally compact groups, Mathematische Annalen 300, 341–363 (1994)
Topological groups
|
https://en.wikipedia.org/wiki/Unknotting%20problem
|
In mathematics, the unknotting problem is the problem of algorithmically recognizing the unknot, given some representation of a knot, e.g., a knot diagram. There are several types of unknotting algorithms. A major unresolved challenge is to determine if the problem admits a polynomial time algorithm; that is, whether the problem lies in the complexity class P.
Computational complexity
First steps toward determining the computational complexity were undertaken in proving that the problem is
in larger complexity classes, which contain the class P. By using normal surfaces to describe the Seifert surfaces of a given knot, showed that the unknotting problem is in the complexity class NP. claimed the weaker result that unknotting is in AM ∩ co-AM; however, later they retracted this claim. In 2011, Greg Kuperberg proved that (assuming the generalized Riemann hypothesis) the unknotting problem is in co-NP, and in 2016, Marc Lackenby provided an unconditional proof of co-NP membership.
The unknotting problem has the same computational complexity as testing whether an embedding of an undirected graph in Euclidean space is linkless.
Unknotting algorithms
Several algorithms solving the unknotting problem are based on Haken's theory of normal surfaces:
Haken's algorithm uses the theory of normal surfaces to find a disk whose boundary is the knot. Haken originally used this algorithm to show that unknotting is decidable, but did not analyze its complexity in more detail.
Hass, Lagarias, and Pippenger showed that the set of all normal surfaces may be represented by the integer points in a polyhedral cone and that a surface witnessing the unknottedness of a curve (if it exists) can always be found on one of the extreme rays of this cone. Therefore, vertex enumeration methods can be used to list all of the extreme rays and test whether any of them corresponds to a bounding disk of the knot. Hass, Lagarias, and Pippenger used this method to show that the unknotting problem is in NP; later researchers such as refined their analysis, showing that this algorithm can be useful (though not polynomial time), with its complexity being a low-order singly-exponential function of the number of crossings.
The algorithm of Birman and Hirsch uses braid foliations, a somewhat different type of structure than a normal surface. However, to analyze its behavior they return to normal surface theory.
Other approaches include:
The number of Reidemeister moves needed to change an unknot diagram to the standard unknot diagram is at most polynomial in the number of crossings. Therefore, a brute force search for all sequences of Reidemeister moves can detect unknottedness in exponential time.
Similarly, any two triangulations of the same knot complement may be connected by a sequence of Pachner moves of length at most doubly exponential in the number of crossings. Therefore, it is possible to determine whether a knot is the unknot by testing all sequences of Pachner moves of this length, starting from a triangulation of the complement of the given knot.
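The brute-force idea behind both approaches — bounded search over a move graph — can be sketched abstractly. In the toy sketch below, "diagrams" are just labels and the move relation is a hand-made graph standing in for actual Reidemeister or Pachner moves; `is_unknot`, `moves`, and the diagram names are illustrative assumptions, not a real knot-theoretic implementation:

```python
from collections import deque

# Since an unknot diagram can be reduced to the trivial diagram in
# polynomially many Reidemeister moves, a bounded breadth-first search
# over diagrams decides unknottedness in exponential time.
def is_unknot(start, moves, bound, trivial="unknot"):
    """BFS over at most `bound` moves from `start` in a toy move graph."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        diagram, depth = frontier.popleft()
        if diagram == trivial:
            return True
        if depth < bound:
            for nxt in moves.get(diagram, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
    return False

# Toy move graph: one diagram must pass through a larger diagram before
# it can simplify; a trefoil-like component never reaches the unknot.
moves = {
    "twisted": ["bigger"], "bigger": ["unknot"],
    "trefoil": ["trefoil2"], "trefoil2": ["trefoil"],
}
print(is_unknot("twisted", moves, bound=3))  # True
print(is_unknot("trefoil", moves, bound=3))  # False
```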
|
https://en.wikipedia.org/wiki/Arnold%20Walfisz
|
Arnold Walfisz (2 July 1892 – 29 May 1962) was a Jewish-Polish mathematician working in analytic number theory.
Life
After the Abitur in Warsaw (Poland), Arnold Walfisz studied (1909−14 and 1918−21) in Germany at Munich, Berlin, Heidelberg and Göttingen. Edmund Landau was his doctoral-thesis supervisor at the University of Göttingen. Walfisz lived in Wiesbaden from 1922 through 1927, then he returned to Warsaw, worked at an insurance company and at the mathematical institute of the university (habilitation in 1930). In 1935, together with , he founded the mathematical journal Acta Arithmetica. In 1936, Walfisz became professor at the University of Tbilisi in the nation of Georgia (at the time a part of the Soviet Union). He wrote approximately 100 mathematical articles and three books.
Work
By using a theorem by Carl Ludwig Siegel providing an upper bound for the real zeros (see Siegel zero) of Dirichlet L-functions formed with real non-principal characters, Walfisz obtained the Siegel–Walfisz theorem, from which the prime number theorem for arithmetic progressions can be deduced.
By using estimates on exponential sums due to I. M. Vinogradov and , Walfisz obtained the currently best O-estimates for the remainder terms of the summatory functions of both the sum-of-divisors function and the Euler function (in: "Weylsche Exponentialsummen in der neueren Zahlentheorie", see below).
Works
Pell's equation (in Russian), Tbilisi, 1952
Gitterpunkte in mehrdimensionalen Kugeln [Lattice points in multi-dimensional spheres], Państwowe Wydawnictwo Naukowe, Monografie Matematyczne, vol. 33. Warszawa, 1957, online
Weylsche Exponentialsummen in der neueren Zahlentheorie [Weyl exponential sums in the newer number theory], VEB Deutscher Verlag der Wissenschaften, Berlin, 1963.
Further reading
Ekkehard Krätzel, Christoph Lamm: Von Wiesbaden nach Tiflis – Die wechselvolle Lebensgeschichte des Zahlentheoretikers Arnold Walfisz [From Wiesbaden to Tiflis: The eventful life story of number-theorist Arnold Walfisz] (German), Mitteilungen der Deutschen Mathematiker-Vereinigung, Band 21 (1), 2013
Sophie Goetzel-Leviathan (née Walfisz): Der Krieg von Innen (German), Paul Lazarus Stiftung, Wiesbaden, 2011
References
External links
Short biography and list of works
Biography from Acta Arithmetica, 1964
Description of his work from Acta Arithmetica, 1964
1892 births
1962 deaths
20th-century Polish Jews
Number theorists
20th-century Polish mathematicians
Scientists from Warsaw
Academic staff of Tbilisi State University
Polish emigrants to the Soviet Union
|
https://en.wikipedia.org/wiki/Acta%20Arithmetica
|
Acta Arithmetica is a scientific journal of mathematics publishing papers on number theory. It was established in 1935 by Salomon Lubelski and Arnold Walfisz. The journal is published by the Institute of Mathematics of the Polish Academy of Sciences.
References
External links
Online archives (Library of Science, Issues: 1935–2000)
1935 establishments in Poland
Mathematics journals
Academic journals established in 1935
Polish Academy of Sciences academic journals
Multilingual journals
Biweekly journals
Academic journals associated with learned and professional societies
|
https://en.wikipedia.org/wiki/Poinsot%27s%20spirals
|
In mathematics, Poinsot's spirals are two spirals represented by the polar equations
r = a csch(nθ)  and  r = a sech(nθ),
where csch is the hyperbolic cosecant, and sech is the hyperbolic secant. They are named after the French mathematician Louis Poinsot.
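Assuming the standard parametrizations r = a·csch(nθ) and r = a·sech(nθ), a short script can sample points on either spiral by converting polar to Cartesian coordinates (the function name and sampling choices are illustrative):

```python
import math

def poinsot_points(a, n, thetas, kind="csch"):
    """Sample (x, y) points on a Poinsot spiral r = a*csch(n*theta)
    or r = a*sech(n*theta), converting polar to Cartesian."""
    pts = []
    for t in thetas:
        if kind == "csch":
            if t == 0:
                continue  # csch is undefined at theta = 0
            r = a / math.sinh(n * t)
        else:
            r = a / math.cosh(n * t)
        pts.append((r * math.cos(t), r * math.sin(t)))
    return pts

# The sech-type spiral passes through (a, 0) at theta = 0, and its
# radius shrinks as |theta| grows.
pts = poinsot_points(2.0, 1.0, [0.0, 1.0, 2.0], kind="sech")
print(pts[0])  # (2.0, 0.0)
```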
Examples of the two types of Poinsot's spirals
See also
References
Spirals
|
https://en.wikipedia.org/wiki/Normal%20cone
|
In algebraic geometry, the normal cone of a subscheme of a scheme is a scheme analogous to the normal bundle or tubular neighborhood in differential geometry.
Definition
The normal cone C_X Y of an embedding i: X → Y, defined by some sheaf of ideals I, is defined as the relative Spec
C_X Y = Spec_X(⊕_{n≥0} Iⁿ/Iⁿ⁺¹).
When the embedding i is regular the normal cone is the normal bundle, the vector bundle on X corresponding to the dual of the sheaf I/I².
If X is a point, then the normal cone and the normal bundle to it are also called the tangent cone and the tangent space (Zariski tangent space) to the point. When Y = Spec R is affine, the definition means that the normal cone to X = Spec R/I is the Spec of the associated graded ring of R with respect to I.
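For a hypersurface, the tangent cone at the origin is cut out by the lowest-degree homogeneous part of the defining polynomial, which can be extracted mechanically. A minimal sketch, representing a polynomial as a dict from exponent tuples to coefficients (an illustrative encoding, not a standard library API):

```python
def tangent_cone(poly):
    """poly: dict mapping exponent tuples to coefficients, e.g.
    {(0, 2): 1, (3, 0): -1, (2, 0): -1} encodes y^2 - x^3 - x^2.
    Returns the lowest-degree homogeneous part, which cuts out the
    tangent cone at the origin of the hypersurface V(poly)."""
    mindeg = min(sum(e) for e in poly)
    return {e: c for e, c in poly.items() if sum(e) == mindeg}

# Nodal cubic y^2 - x^3 - x^2: the tangent cone at the origin is the
# pair of lines y^2 - x^2 = 0, while the Zariski tangent space is
# the whole plane.
f = {(0, 2): 1, (3, 0): -1, (2, 0): -1}
print(tangent_cone(f))  # {(0, 2): 1, (2, 0): -1}
```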
If Y is the product X × X and the embedding i is the diagonal embedding, then the normal bundle to X in Y is the tangent bundle to X.
The normal cone (or rather its projective cousin) appears as a result of blow-up. Precisely, let
π: Bl_X Y = Proj_Y(⊕_{n≥0} Iⁿ) → Y
be the blow-up of Y along X. Then, by definition, the exceptional divisor is the pre-image E := π⁻¹(X), which is the projective cone of C_X Y. Thus,
E = P(C_X Y).
The global sections of the normal bundle classify embedded infinitesimal deformations of X in Y; there is a natural bijection between the set of closed subschemes of Y × Spec D, flat over the ring D of dual numbers and having X as the special fiber, and H⁰(X, N_{X/Y}).
Properties
Compositions of regular embeddings
If X ↪ Y and Y ↪ Z are regular embeddings, then the composite X ↪ Z is a regular embedding and there is a natural exact sequence of vector bundles on X:
0 → N_{X/Y} → N_{X/Z} → N_{Y/Z}|_X → 0.
If Y₁, …, Y_r ↪ X are regular embeddings of codimensions c₁, …, c_r and if W := Y₁ ∩ ⋯ ∩ Y_r ↪ X is a regular embedding of codimension c₁ + ⋯ + c_r, then
N_W X = ⊕_i (N_{Y_i} X)|_W.
In particular, if X → S is a smooth morphism, then the normal bundle to the diagonal embedding X ↪ X ×_S ⋯ ×_S X (r-fold) is the direct sum of r − 1 copies of the relative tangent bundle T_{X/S}.
If i: X ↪ Y is a closed immersion and if g: Y′ → Y is a flat morphism such that X′ = g⁻¹(X), then
C_{X′} Y′ = C_X Y ×_X X′.
If Y → S is a smooth morphism and X ↪ Y is a regular embedding, then there is a natural exact sequence of vector bundles on X:
0 → T_{X/S} → T_{Y/S}|_X → N_{X/Y} → 0
(which is a special case of an exact sequence for cotangent sheaves.)
Cartesian square
For a Cartesian square of schemes with X′ = X ×_Y Y′ and vertical map Y′ → Y, there is a closed embedding of normal cones C_{X′} Y′ ↪ C_X Y ×_X X′.
Dimension of components
Let X be a scheme of finite type over a field and W ⊂ X a closed subscheme. If X is of pure dimension r; i.e., every irreducible component has dimension r, then C_W X is also of pure dimension r. (This can be seen as a consequence of the deformation to the normal cone.) This property is a key to an application in intersection theory: given a pair of closed subschemes V, W in some ambient space, while the scheme-theoretic intersection V ∩ W has irreducible components of various dimensions, depending delicately on the positions of V and W, the normal cone to V ∩ W is of pure dimension.
Examples
Let X ⊂ Y be an effective Cartier divisor. Then the normal bundle to it (or equivalently the normal cone to it) is
O_X(X) := O_Y(X)|_X.
Non-regular Embedding
Consider the non-regular embedding
then, we can compute the normal cone by first observing
If we make the auxiliary variables an
|
https://en.wikipedia.org/wiki/LVP
|
LVP may refer to:
Science, mathematics, and computing
Laser voltage prober, a tool for analysing integrated circuits
Left ventricular pressure, blood pressure in the heart
Large volume parenterals, a type of injectable pharmaceutical product
Lithium vanadium phosphate battery, a proposed type of lithium ion battery
Low voltage programming, a mode for programming certain microcontrollers
LView Pro, a bitmap graphics editor for Microsoft Windows
Political parties
Liberal Vannin Party, a political party on the Isle of Man founded in 2006
Lithuanian Peasants Party (1990–2001), a former party
Latvian Unity Party (1992–2001), a former party
Other
LView, image editing software
Lakshmi Vilas Palace, Vadodara, Gujarat, India, a palace
An abbreviation for works by Dutch composer Leopold van der Pals
An abbreviation for actress and television personality Lisa Vanderpump
Limited Validity Passport, a type of Australian passport
Learner Variability Project, an education research translation initiative of Digital Promise that focuses on whole learner education practices
Luxury vinyl plank, vinyl composition tile with a wood-like appearance
|
https://en.wikipedia.org/wiki/Coarse%20structure
|
In the mathematical fields of geometry and topology, a coarse structure on a set X is a collection of subsets of the cartesian product X × X with certain properties which allow the large-scale structure of metric spaces and topological spaces to be defined.
The concern of traditional geometry and topology is with the small-scale structure of the space: properties such as the continuity of a function depend on whether the inverse images of small open sets, or neighborhoods, are themselves open. Large-scale properties of a space—such as boundedness, or the degrees of freedom of the space—do not depend on such features. Coarse geometry and coarse topology provide tools for measuring the large-scale properties of a space, and just as a metric or a topology contains information on the small-scale structure of a space, a coarse structure contains information on its large-scale properties.
Properly, a coarse structure is not the large-scale analog of a topological structure, but of a uniform structure.
Definition
A coarse structure on a set X is a collection 𝓔 of subsets of X × X (therefore falling under the more general categorization of binary relations on X) called controlled sets, and so that 𝓔 possesses the identity relation, is closed under taking subsets, inverses, and finite unions, and is closed under composition of relations. Explicitly:
Identity/diagonal:
The diagonal Δ = {(x, x) : x ∈ X} is a member of 𝓔—the identity relation.
Closed under taking subsets:
If E ∈ 𝓔 and F ⊆ E, then F ∈ 𝓔.
Closed under taking inverses:
If E ∈ 𝓔, then the inverse (or transpose) E⁻¹ = {(y, x) : (x, y) ∈ E} is a member of 𝓔—the inverse relation.
Closed under taking unions:
If E, F ∈ 𝓔, then their union E ∪ F is a member of 𝓔.
Closed under composition:
If E, F ∈ 𝓔, then their product E ∘ F = {(x, y) : there exists z ∈ X such that (x, z) ∈ E and (z, y) ∈ F} is a member of 𝓔—the composition of relations.
A set X endowed with a coarse structure 𝓔 is a coarse space.
For a subset K of X, the set E[K] is defined as {x ∈ X : (x, k) ∈ E for some k ∈ K}. We define the section of E by x to be the set E[{x}], also denoted E_x. The symbol E^y denotes the set E⁻¹[{y}]. These are forms of projections.
A subset B of X is said to be a bounded set if B × B is a controlled set.
Intuition
The controlled sets are "small" sets, or "negligible sets": a set such that is controlled is negligible, while a function such that its graph is controlled is "close" to the identity. In the bounded coarse structure, these sets are the bounded sets, and the functions are the ones that are a finite distance from the identity in the uniform metric.
Coarse maps
Given a set S and a coarse structure X, we say that the maps f: S → X and g: S → X are close if {(f(s), g(s)) : s ∈ S} is a controlled set.
For coarse structures X and Y, we say that f: X → Y is a coarse map if for each bounded set B of Y the set f⁻¹(B) is bounded in X and for each controlled set E of X the set (f × f)(E) is controlled in Y. X and Y are said to be coarsely equivalent if there exist coarse maps f: X → Y and g: Y → X such that f ∘ g is close to id_Y and g ∘ f is close to id_X.
Examples
The bounded coarse structure on a metric space (X, d) is the collection of all subsets E of X × X such that sup{d(x, y) : (x, y) ∈ E} is finite. With this structure, the integer lattice Zⁿ is coarsely equivalent to n-dimensional Euclidean space.
A space X where X × X is controlled is called a bounded space. Such a space is coarsely equivalent to a point.
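The coarse equivalence of the integer lattice with Euclidean space can be made concrete in one dimension: the inclusion Z → R and a rounding map R → Z are coarse maps whose composites stay uniformly close to the identities. A numerical sanity check (function names are illustrative):

```python
import math
import random

# In the bounded coarse structure on the reals, inclusion and rounding
# form a coarse equivalence: each composite stays within a uniformly
# bounded distance of the identity map.
def include(n):       # Z -> R
    return float(n)

def round_to_int(x):  # R -> Z
    return int(math.floor(x + 0.5))

random.seed(0)
xs = [random.uniform(-100, 100) for _ in range(1000)]

# round ∘ include is exactly the identity on Z.
assert all(round_to_int(include(n)) == n for n in range(-50, 51))

# include ∘ round moves each real by at most 1/2, so its graph is a
# controlled set for the bounded coarse structure.
gap = max(abs(include(round_to_int(x)) - x) for x in xs)
print(gap <= 0.5)  # True
```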
|
https://en.wikipedia.org/wiki/Hua%27s%20lemma
|
In mathematics, Hua's lemma, named for Hua Loo-keng, is an estimate for exponential sums.
It states that if P is an integral-valued polynomial of degree k, ε is a positive real number, and f a real function defined by
f(α) = Σ_{x=1}^{N} exp(2πi P(x) α),
then
∫₀¹ |f(α)|^λ dα ≪_{P,ε} N^{μ(λ)+ε},
where the point (λ, μ(λ)) lies on a polygonal line with vertices (2^ν, 2^ν − ν), ν = 1, ..., k.
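By orthogonality of the exponentials, the even moment ∫₀¹ |f(α)|^{2m} dα equals the number of solutions of P(x₁) + ⋯ + P(x_m) = P(y₁) + ⋯ + P(y_m) with 1 ≤ x_i, y_i ≤ N; Hua's lemma concerns the case 2m = 2^ν. This counting identity can be checked directly in a tiny case (a brute-force sketch, not an efficient method):

```python
from itertools import product

# For f(alpha) = sum_{x=1}^{N} exp(2*pi*i*P(x)*alpha), orthogonality gives
#   integral_0^1 |f(alpha)|^(2m) d(alpha)
#     = #{ P(x_1)+...+P(x_m) = P(y_1)+...+P(y_m), 1 <= x_i, y_i <= N }.
# Hua's lemma is the case m = 2^(nu-1), with bound N^(2^nu - nu + eps).
def moment_by_counting(P, N, m):
    count = 0
    for xs in product(range(1, N + 1), repeat=m):
        for ys in product(range(1, N + 1), repeat=m):
            if sum(map(P, xs)) == sum(map(P, ys)):
                count += 1
    return count

# With P(x) = x^2 and m = 1 (nu = 1): x^2 = y^2 forces x = y over the
# positive integers, so the moment equals N, matching N^(2 - 1 + eps).
print(moment_by_counting(lambda x: x * x, 20, 1))  # 20
```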
References
Lemmas
Analytic number theory
|
https://en.wikipedia.org/wiki/Quasi-triangular%20quasi-Hopf%20algebra
|
A quasi-triangular quasi-Hopf algebra is a specialized form of a quasi-Hopf algebra defined by the Ukrainian mathematician Vladimir Drinfeld in 1989. It is also a generalized form of a quasi-triangular Hopf algebra.
A quasi-triangular quasi-Hopf algebra is a set 𝓗_R = (𝓗, R) where 𝓗 = (A, Δ, ε, Φ) is a quasi-Hopf algebra and R ∈ A ⊗ A, known as the R-matrix, is an invertible element such that
R Δ(a) = σ ∘ Δ(a) R
for all a ∈ A, where σ: A ⊗ A → A ⊗ A is the switch map given by x ⊗ y ↦ y ⊗ x, and where R satisfies the two hexagon relations expressing (Δ ⊗ id)R and (id ⊗ Δ)R in terms of R₁₂, R₁₃, R₂₃ and the associator Φ. Here R_{ij} denotes R acting in the i-th and j-th tensor factors, and Φ_{abc} denotes Φ with its tensor factors placed in positions a, b and c.
The quasi-Hopf algebra becomes triangular if in addition R₂₁R₁₂ = 1.
The twisting of 𝓗_R by F ∈ A ⊗ A is the same as for a quasi-Hopf algebra, with the additional definition of the twisted R-matrix
R_F = F₂₁ R F⁻¹.
A quasi-triangular (resp. triangular) quasi-Hopf algebra with Φ = 1 is a quasi-triangular (resp. triangular) Hopf algebra, as the latter two conditions in the definition reduce to the conditions of quasi-triangularity of a Hopf algebra.
Similarly to the twisting properties of the quasi-Hopf algebra, the property of being quasi-triangular or triangular quasi-Hopf algebra is preserved by twisting.
See also
Ribbon Hopf algebra
References
Vladimir Drinfeld, "Quasi-Hopf algebras", Leningrad mathematical journal (1989), 1419–1457
J. M. Maillet and J. Sanchez de Santos, "Drinfeld Twists and Algebraic Bethe Ansatz", American Mathematical Society Translations: Series 2 Vol. 201, 2000
Coalgebras
|
https://en.wikipedia.org/wiki/Alison%20Wong
|
Alison Wong (born 1960) is a New Zealand poet and novelist of Chinese heritage. Her background in mathematics comes across in her poetry, not as a subject, but in the careful relation of words to white space and in their precision. She has a half-Chinese son with New Zealand poet Linzy Forbes. She now lives in Geelong.
Career and awards
Wong's first novel As the Earth Turns Silver was published in late June 2009 by Penguin NZ and won the fiction award at the 2010 New Zealand Post Book Awards, and was shortlisted for the Australian Prime Minister's Literary Awards.
Wong has received various other awards for her fiction and poetry including the 2002 Robert Burns Fellowship at the University of Otago, a Reader's Digest - New Zealand Society of Authors Fellowship at the Stout Research Centre and a NZ Founders Society Research Award. She has been a finalist in several poetry competitions and received grants from Creative NZ and the Willi Fels Memorial Trust.
Her first poetry collection, Cup, was released in February 2006 by Steele Roberts.
It was shortlisted for a poetry prize in the Montana Book awards.
In 2003 she was a guest writer at the Auckland Writers and Readers Festival and the Wordstruck! Festival in Dunedin, as well as a speaker for the Stout Research Centre Chinese New Zealand Seminar Series. In 2001 together with Linzy Forbes, she received a Porirua City Council Civic Honour Award for co-founding and running Poetry Cafe.
References
External links
Alison Wong Biography
1960 births
Living people
New Zealand women poets
New Zealand women novelists
New Zealand people of Chinese descent
21st-century New Zealand novelists
21st-century New Zealand women writers
|
https://en.wikipedia.org/wiki/Dalcahue
|
Dalcahue is a port city and a commune in Chiloé Province, on Chiloé Island, Los Lagos Region, Chile.
Demographics
According to the 2002 census by the National Statistics Institute, the Dalcahue commune spans an area of and had 10,693 inhabitants; of these, 4,933 (46.1%) lived in urban areas and 5,760 (53.9%) in rural areas. At that time, there were 5,420 men and 5,273 women. The population grew by 37.7% (2,931 persons) between the 1992 and 2002 censuses.
Administration
As a commune, Dalcahue is a third-level administrative division of Chile administered by a municipal council, headed by an alcalde who is directly elected every four years. The 2008-2012 alcalde is Alfredo Hurtado Alvarez (PDC).
Within the electoral divisions of Chile, Dalcahue is represented in the Chamber of Deputies by Gabriel Ascencio (PDC) and Alejandro Santana (RN) as part of the 58th electoral district, together with Castro, Ancud, Quemchi, Curaco de Vélez, Quinchao, Puqueldón, Chonchi, Queilén, Quellón, Chaitén, Hualaihué, Futaleufú and Palena. The commune is represented in the Senate by Camilo Escalona Medina (PS) and Carlos Kuschel Silva (RN) as part of the 17th senatorial constituency (Los Lagos Region).
Transportation
The city is served by the Mocopulli Airport, which connects Chiloé Island with the rest of Chile.
Gallery
See also
Quíquel
References
External links
Municipality of Dalcahue
Communes of Chile
Populated places in Chiloé
Populated places in Chiloé Province
|
https://en.wikipedia.org/wiki/Ascendant%20subgroup
|
In mathematics, in the field of group theory, a subgroup of a group is said to be ascendant if there is an ascending series starting from the subgroup and ending at the group, such that every term in the series is a normal subgroup of its successor.
The series may be infinite. If the series is finite, then the subgroup is subnormal. Here are some properties of ascendant subgroups:
Every subnormal subgroup is ascendant; every ascendant subgroup is serial.
In a finite group, the properties of being ascendant and subnormal are equivalent.
An arbitrary intersection of ascendant subgroups is ascendant.
Given any subgroup, there is a minimal ascendant subgroup containing it.
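In a finite group the equivalence with subnormality can be seen concretely. The sketch below, with hand-rolled permutation helpers as an illustrative encoding, exhibits a subgroup of the dihedral group D4 that is subnormal — hence ascendant — via a two-step series, yet not normal in the whole group:

```python
def compose(p, q):
    """(p ∘ q)(i) = p[q[i]] for permutations given as tuples."""
    return tuple(p[i] for i in q)

def inverse(p):
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def generate(gens):
    """Closure of a finite set of permutations under composition."""
    elems, frontier = set(gens), set(gens)
    while frontier:
        new = {compose(a, b) for a in elems for b in frontier}
        new |= {compose(b, a) for a in elems for b in frontier}
        frontier = new - elems
        elems |= frontier
    return elems

def is_normal(H, G):
    return all(compose(compose(g, h), inverse(g)) in H for g in G for h in H)

# Dihedral group D4 acting on the square's vertices 0..3.
r = (1, 2, 3, 0)                 # rotation
s = (0, 3, 2, 1)                 # reflection
G = generate({r, s})             # |D4| = 8
K = generate({s, compose(r, r)}) # Klein four-subgroup {e, r^2, s, r^2 s}
H = generate({s})                # order-2 subgroup <s>

# <s> ⊲ K ⊲ D4 is a finite ascending (subnormal) series, yet <s> is not
# itself normal in D4 — being ascendant is weaker than being normal.
print(len(G), is_normal(H, K), is_normal(K, G), is_normal(H, G))
# 8 True True False
```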
See also
Descendant subgroup
References
Subgroup properties
|
https://en.wikipedia.org/wiki/Temporal%20Process%20Language
|
In theoretical computer science, Temporal Process Language (TPL) is a process calculus which extends Robin Milner's CCS with the notion of multi-party synchronization, which allows multiple processes to synchronize on a global 'clock'. This clock measures time, though not concretely, but rather as an abstract signal which defines when the entire process can step onward.
Informal definition
TPL is a conservative extension of CCS, with the addition of a special action called σ representing the passage of time by a process - the ticking of an abstract clock. As in CCS, TPL features action prefix, and it can be described as being patient, that is to say a process will idly accept the ticking of the clock without changing state.
Key to the use of abstract time is the timeout operator ⌊E⌋(F), which presents two processes: one to behave as if the clock ticks, and one to behave as if it cannot. Informally:
⌊E⌋(F) becomes F when the clock ticks, provided process E does not prevent the clock from ticking.
⌊E⌋(F) becomes E' by performing the action a, provided E can perform action a to become E'.
In TPL, there are two ways to prevent the clock from ticking. First is via the presence of the ω operator: a process marked with ω prevents the clock from ticking until its initial action occurs. It can be said that the action a is insistent, i.e. it insists on acting before the clock can tick again.
The second way in which ticking can be prevented is via the concept of maximal progress, which states that silent actions (i.e. τ actions) always take precedence over and thus suppress σ actions. Thus if two parallel processes are capable of synchronizing at a given instant, it is not possible for the clock to tick.
Thus a simple way of viewing multi-party synchronization is that a group of composed processes will allow time to pass provided none of them prevent it, i.e. the system agrees that it is time to move on.
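The maximal-progress idea can be caricatured in a few lines. The sketch below is not TPL's formal semantics: processes are plain lists of pending action names, "~" marks a co-name, and "!" is an ad-hoc marker for insistent actions — all illustrative assumptions:

```python
# Toy model of TPL's maximal-progress rule: a parallel composition of
# sequential processes, each a list of pending actions.  An action and
# its co-name (prefixed "~") can synchronize into a silent tau step;
# the abstract clock sigma may tick only when no tau step is possible
# and no process insists (here: actions marked "!" are insistent).
def can_sync(procs):
    heads = [p[0] for p in procs if p]
    names = {h.lstrip("!") for h in heads}
    return any(n.startswith("~") and n[1:] in names for n in names)

def clock_can_tick(procs):
    heads = [p[0] for p in procs if p]
    no_insistent = not any(h.startswith("!") for h in heads)
    return no_insistent and not can_sync(procs)

# a.P | ~a.Q : a tau synchronization is available, so sigma is suppressed.
print(clock_can_tick([["a", "b"], ["~a"]]))  # False
# a.P | c.Q : no synchronization and nothing insistent, so the clock ticks.
print(clock_can_tick([["a"], ["c"]]))        # True
# !a.P alone: the insistent action blocks the clock.
print(clock_can_tick([["!a"]]))              # False
```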
Formal definition
Syntax
Let a be a non-silent action name, α be any action name (including τ, the silent action) and X be a process label used for recursion.
References
Matthew Hennessy and Tim Regan : A Process Algebra for Timed Systems. Information and Computation, 1995.
Process calculi
|
https://en.wikipedia.org/wiki/Characteristically%20simple%20group
|
In mathematics, in the field of group theory, a group is said to be characteristically simple if it has no proper nontrivial characteristic subgroups. Characteristically simple groups are sometimes also termed elementary groups. Characteristically simple is a weaker condition than being a simple group: since every characteristic subgroup is normal, a simple group, having no proper nontrivial normal subgroups, has no proper nontrivial characteristic subgroups either.
A finite group is characteristically simple if and only if it is the direct product of isomorphic simple groups. In particular, a finite solvable group is characteristically simple if and only if it is an elementary abelian group. This does not hold in general for infinite groups; for example, the rational numbers form a characteristically simple group that is not a direct product of simple groups.
A minimal normal subgroup of a group G is a nontrivial normal subgroup N of G such that the only proper subgroup of N that is normal in G is the trivial subgroup. Every minimal normal subgroup of a group is characteristically simple. This follows from the fact that a characteristic subgroup of a normal subgroup is normal.
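A brute-force check illustrates the finite case: the Klein four-group (Z/2)² is a direct product of two copies of the simple group Z/2, and enumerating its automorphisms shows that no proper nontrivial subgroup is preserved by all of them:

```python
from itertools import permutations

# The Klein four-group V = (Z/2)^2 is characteristically simple: it is a
# direct product of isomorphic simple groups, and no proper nontrivial
# subgroup is fixed by every automorphism.
V = [(0, 0), (0, 1), (1, 0), (1, 1)]
add = lambda u, v: ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)

def automorphisms():
    nonzero = V[1:]
    for perm in permutations(nonzero):
        f = dict(zip(nonzero, perm))
        f[(0, 0)] = (0, 0)
        if all(f[add(u, v)] == add(f[u], f[v]) for u in V for v in V):
            yield f

autos = list(automorphisms())
print(len(autos))  # 6, the order of GL(2, F2)

# The proper nontrivial subgroups are the three order-2 subgroups
# {0, v} for v != 0; each is moved by some automorphism.
for v in V[1:]:
    subgroup = {(0, 0), v}
    assert any({f[x] for x in subgroup} != subgroup for f in autos)
print("no proper nontrivial characteristic subgroups")
```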
References
Properties of groups
|
https://en.wikipedia.org/wiki/Strictly%20simple%20group
|
In mathematics, in the field of group theory, a group G is said to be strictly simple if it has no proper nontrivial ascendant subgroups. That is, G is a strictly simple group if the only ascendant subgroups of G are the trivial subgroup and G itself (the whole group).
In the finite case, a group is strictly simple if and only if it is simple. However, in the infinite case, strictly simple is a stronger property than simple.
See also
Serial subgroup
Absolutely simple group
References
Simple Group Encyclopedia of Mathematics, retrieved 1 January 2012
Properties of groups
|
https://en.wikipedia.org/wiki/Absolutely%20simple%20group
|
In mathematics, in the field of group theory, a group G is said to be absolutely simple if it has no proper nontrivial serial subgroups. That is, G is an absolutely simple group if the only serial subgroups of G are the trivial subgroup and G itself (the whole group).
In the finite case, a group is absolutely simple if and only if it is simple. However, in the infinite case, absolutely simple is a stronger property than simple. The property of being strictly simple is somewhere in between.
See also
Ascendant subgroup
Strictly simple group
References
Properties of groups
|
https://en.wikipedia.org/wiki/Supersolvable%20group
|
In mathematics, a group is supersolvable (or supersoluble) if it has an invariant normal series where all the factors are cyclic groups. Supersolvability is stronger than the notion of solvability.
Definition
Let G be a group. G is supersolvable if there exists a normal series
{1} = H₀ ⊲ H₁ ⊲ ⋯ ⊲ H_s = G
such that each quotient group H_{i+1}/H_i is cyclic and each H_i is normal in G.
By contrast, for a solvable group the definition requires each quotient to be abelian. In another direction, a polycyclic group must have a subnormal series with each quotient cyclic, but there is no requirement that each H_i be normal in G. As every finite solvable group is polycyclic, this can be seen as one of the key differences between the definitions. For a concrete example, the alternating group on four points, A₄, is solvable but not supersolvable.
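A minimal illustration of the definition, with hand-rolled permutation helpers as an ad-hoc encoding: the symmetric group S3 is supersolvable, witnessed by the series 1 ⊲ A3 ⊲ S3, whose terms are normal in S3 and whose quotients have prime order, hence are cyclic:

```python
def compose(p, q):
    """(p ∘ q)(i) = p[q[i]] for permutations given as tuples."""
    return tuple(p[i] for i in q)

def inverse(p):
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def is_normal(H, G):
    return all(compose(compose(g, h), inverse(g)) in H for g in G for h in H)

# S3 on {0, 1, 2}: the series 1 ⊲ A3 ⊲ S3 witnesses supersolvability —
# every term is normal in the whole group and every quotient is cyclic
# (of orders 3 and 2 respectively).
e = (0, 1, 2)
r = (1, 2, 0)  # a 3-cycle
S3 = [(0, 1, 2), (1, 2, 0), (2, 0, 1), (1, 0, 2), (0, 2, 1), (2, 1, 0)]
A3 = {e, r, compose(r, r)}
triv = {e}

print(is_normal(A3, S3), is_normal(triv, S3))    # True True
# Quotient orders 3 and 2 are prime, hence cyclic.
print(len(A3) // len(triv), len(S3) // len(A3))  # 3 2
```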
Basic Properties
Some facts about supersolvable groups:
Supersolvable groups are always polycyclic, and hence solvable.
Every finitely generated nilpotent group is supersolvable.
Every metacyclic group is supersolvable.
The commutator subgroup of a supersolvable group is nilpotent.
Subgroups and quotient groups of supersolvable groups are supersolvable.
A finite supersolvable group has an invariant normal series with each factor cyclic of prime order.
In fact, the primes can be chosen in a nice order: For every prime p, and for π the set of primes greater than p, a finite supersolvable group has a unique Hall π-subgroup. Such groups are sometimes called ordered Sylow tower groups.
Every group of square-free order, and every group with cyclic Sylow subgroups (a Z-group), is supersolvable.
Every irreducible complex representation of a finite supersolvable group is monomial, that is, induced from a linear character of a subgroup. In other words, every finite supersolvable group is a monomial group.
Every maximal subgroup in a supersolvable group has prime index.
A finite group is supersolvable if and only if every maximal subgroup has prime index.
A finite group is supersolvable if and only if every maximal chain of subgroups has the same length. This is important to those interested in the lattice of subgroups of a group, and is sometimes called the Jordan–Dedekind chain condition.
By Baum's theorem, every supersolvable finite group has a DFT algorithm running in time O(n log n).
References
Schenkman, Eugene. Group Theory. Krieger, 1975.
Schmidt, Roland. Subgroup Lattices of Groups. de Gruyter, 1994.
Keith Conrad, SUBGROUP SERIES II, Section 4 , http://www.math.uconn.edu/~kconrad/blurbs/grouptheory/subgpseries2.pdf
Solvable groups
|
https://en.wikipedia.org/wiki/FC-group
|
In mathematics, in the field of group theory, an FC-group is a group in which every conjugacy class of elements has finite cardinality.
The following are some facts about FC-groups:
Every finite group is an FC-group.
Every abelian group is an FC-group.
The following property is stronger than the property of being FC: every subgroup has finite index in its normal closure.
Notes
References
. Reprint of Prentice-Hall edition, 1964.
Infinite group theory
Properties of groups
|