| source | text |
|---|---|
https://en.wikipedia.org/wiki/Exceptional%20object
|
Many branches of mathematics study objects of a given type and prove a classification theorem. A common theme is that the classification results in a number of series of objects and a finite number of exceptions — often with desirable properties — that do not fit into any series. These are known as exceptional objects. In many cases, these exceptional objects play a further and important role in the subject. Furthermore, the exceptional objects in one branch of mathematics often relate to the exceptional objects in others.
A related phenomenon is exceptional isomorphism, when two series are in general different, but agree for some small values. For example, spin groups in low dimensions are isomorphic to other classical Lie groups.
Regular polytopes
The prototypical examples of exceptional objects arise in the classification of regular polytopes: in two dimensions, there is a series of regular n-gons for n ≥ 3. In every dimension above 2, one can find analogues of the cube, tetrahedron and octahedron. In three dimensions, one finds two more regular polyhedra — the dodecahedron (12-hedron) and the icosahedron (20-hedron) — making five Platonic solids. In four dimensions, a total of six regular polytopes exist, including the 120-cell, the 600-cell and the 24-cell. There are no other regular polytopes, as the only regular polytopes in higher dimensions are those of the hypercube, simplex, and orthoplex series. In all dimensions combined, there are therefore three series and five exceptional polytopes.
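The counts in the classification above can be tabulated in a small snippet (purely an illustrative restatement of the text, not part of the article):

```python
# Number of convex regular polytopes in each dimension, per the classification.
# In dimension 2 there is one regular n-gon for every n >= 3 (infinitely many).
convex_regular_polytopes = {3: 5, 4: 6}   # Platonic solids; six 4-polytopes
for dim in range(5, 11):
    convex_regular_polytopes[dim] = 3     # only simplex, hypercube, orthoplex remain

# Exceptional polytopes: those outside the simplex/hypercube/orthoplex series.
exceptional = {dim: count - 3 for dim, count in convex_regular_polytopes.items()}

assert exceptional[3] == 2          # dodecahedron, icosahedron
assert exceptional[4] == 3          # 24-cell, 120-cell, 600-cell
assert sum(exceptional.values()) == 5
```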
Moreover, the pattern is similar if non-convex polytopes are included: in two dimensions, there is a regular star polygon for every rational number p/q > 2. In three dimensions, there are four Kepler–Poinsot polyhedra, and in four dimensions, ten Schläfli–Hess polychora; in higher dimensions, there are no non-convex regular figures.
These can be generalized to tessellations of other spaces, especially uniform tessellations, notably tilings of Euclidean space (honeycombs), which have exceptional objects, and tilings of hyperbolic space. There are various exceptional objects in dimensions below 6, but in dimensions 6 and above the only regular polyhedra/tilings/hyperbolic tilings are the simplex, hypercube, cross-polytope, and hypercube lattice.
Schwarz triangles
Related to tilings and the regular polyhedra, there are exceptional Schwarz triangles (triangles that tile the sphere, or more generally the Euclidean plane or hyperbolic plane, via their triangle group of reflections in their edges), particularly the Möbius triangles. On the sphere, there are 3 Möbius triangles (and 1 one-parameter family), corresponding to the 3 exceptional Platonic solid groups, while in the Euclidean plane there are 3 Möbius triangles, corresponding to the 3 special triangles: 60-60-60 (equilateral), 45-45-90 (isosceles right), and 30-60-90. There are additional exceptional Schwarz triangles on the sphere and in the Euclidean plane. By contrast, in the hyperbolic plane, there is a 3-parameter family of Schwarz triangles.
|
https://en.wikipedia.org/wiki/Deane%20Montgomery
|
Deane Montgomery (September 2, 1909 – March 15, 1992) was an American mathematician specializing in topology who was one of the contributors to the final resolution of Hilbert's fifth problem in the 1950s. He served as president of the American Mathematical Society from 1961 to 1962.
Born in the small town of Weaver, Minnesota, he received his B.S. from Hamline University in St. Paul, Minnesota, and his master's and Ph.D. from the University of Iowa in 1933; his dissertation advisor was Edward Chittenden.
In 1941 Montgomery was awarded a Guggenheim Fellowship. In 1988, he was awarded the American Mathematical Society Leroy P. Steele Prize for Lifetime Achievement.
He was a member of the United States National Academy of Sciences, the American Philosophical Society, and of the American Academy of Arts and Sciences.
Publications
with Leo Zippin:
Deane Montgomery and Leo Zippin, Topological Transformation Groups, Interscience Publishers, 1955.
with Hans Samelson and C. T. Yang:
with C. T. Yang:
References
External links
Interview with Montgomery about his experience at Princeton
A biography of Montgomery
A Tribute to Deane Montgomery, by Ronald Fintushel
20th-century American mathematicians
Members of the United States National Academy of Sciences
Topologists
Institute for Advanced Study faculty
University of Iowa alumni
1909 births
1992 deaths
Presidents of the American Mathematical Society
Hamline University alumni
Mathematicians from Minnesota
Presidents of the International Mathematical Union
Members of the American Philosophical Society
|
https://en.wikipedia.org/wiki/Meinhard%20E.%20Mayer
|
Meinhard Edwin Mayer (March 18, 1929 – December 11, 2011) was a Romanian–born American Professor Emeritus of Physics and Mathematics at the University of California, Irvine, which he joined in 1966.
Biography
He was born on March 18, 1929, in Cernăuți. He experienced both the Soviet occupation of Northern Bukovina and, as a Jew, deportation to the Transnistria Governorate. He received his Ph.D. from the University of Bucharest in 1957, where he taught until 1961.
He then taught at Brandeis University and Indiana University before moving to the University of California, Irvine (UCI) in 1966, where he taught until his retirement. He also took sabbaticals to various institutes, including the Institut des Hautes Etudes Scientifiques and MIT.
He had a deep interest in music, and in Yiddish language and literature.
He died in Newport Beach, California, on December 11, 2011. He was survived by his wife Ruth, his children Elma Mayer and Niels Mayer, and his grandchildren Jonathan Mayer, Juniper Woodbury, and Moss Woodbury.
Research
His research interests ranged from geometric methods in gauge theory to the application of wavelets to turbulence. He was an early contributor (1958) to the theory of vector bosons (the W and Z bosons) and electroweak unification, which later became part of the Standard Model, and an early advocate of the use of fiber bundles in gauge theory.
He was a co-author (with Gerald Jay Sussman and Jack Wisdom) of Structure and Interpretation of Classical Mechanics, MIT Press, Cambridge, MA, 2001
Notes
References
Lie Groupoids versus Principal Bundles in Gauge Theories, in Proceedings of the International Conference on Differential-Geometric Methods in Physics, L.-L. Chau and W. Nahm, Eds., Plenum Press, 1990.
From Poisson Groupoids to Quantum Groupoids, and Back, in Proceedings of the XIX International Conference on Differential-Geometric Methods in Physics, R. Cianci and U. Bruzzo, Eds. Rapallo, 1990; 12 pages, Springer Verlag, Heidelberg, 1991.
Wavelet Transforms and Atmospheric Turbulence, with Carl A. Friehe and Lonnie H. Hudgins, Physical Review Letters, 71, 3279-3282 (November 15, 1993)
External links
Obituary in Physics Today
Web Page (somewhat obsolete)
Faculty Profile
Paul Celan Article
An article about the 1908 Yiddish Language Conference (Yiddish and English)
QuickTime version (with sound) of a talk On Yiddish and German Poets from Czernowitz at the 2008 La Jolla Yiddish Conference
Slides (without sound) of the talk On Yiddish and German Poets from Czernowitz at the 2008 La Jolla Yiddish Conference
American physicists
20th-century American mathematicians
21st-century American mathematicians
Romanian mathematicians
University of California, Irvine faculty
University of Bucharest alumni
American people of Romanian-Jewish descent
Romanian emigrants to the United States
Fellows of the American Physical Society
Survivors of World War II deportations to Transnistria
Deaths from cancer in California
Deaths from esophageal cancer
|
https://en.wikipedia.org/wiki/Pan%20Chengdong
|
Pan Chengdong (26 May 1934 – 27 December 1997) was a Chinese mathematician who made numerous contributions to number theory, including progress on Goldbach's conjecture. He was vice president of Shandong University and served as its president from 1986 to 1997.
Born in Suzhou, Jiangsu Province on 26 May 1934, he entered the Department of Mathematics and Mechanics of Peking University in 1952 and obtained a postgraduate degree in 1961 advised by Min Sihe, a student of Edward Charles Titchmarsh. He then went to work at the Department of Mathematics of Shandong University.
He was elected an academician of the Chinese Academy of Sciences in 1991.
Previously, Wang Yuan had made progress toward Goldbach's conjecture. His result was that every sufficiently large even number is the sum of two numbers, one a product of at most two primes and the other a product of at most three primes; this case is denoted by (2,3). In 1962, Pan Chengdong also made progress toward Goldbach's conjecture by proving the (1,5) case independently, and the (1,4) case the following year with N.B. Barban and Wang Yuan.
External links
References
1934 births
1997 deaths
20th-century Chinese mathematicians
Educators from Suzhou
Mathematicians from Jiangsu
Members of the Chinese Academy of Sciences
Peking University alumni
Presidents of Shandong University
Scientists from Suzhou
|
https://en.wikipedia.org/wiki/Fulton%E2%80%93Hansen%20connectedness%20theorem
|
In mathematics, the Fulton–Hansen connectedness theorem is a result from intersection theory in algebraic geometry, for the case of subvarieties of projective space with codimension large enough to make the intersection have components of dimension at least 1. It is named after William Fulton and Johan Hansen, who proved it in 1979.
The formal statement is that if V and W are irreducible algebraic subvarieties of a projective space P, all over an algebraically closed field, and if
dim V + dim W > dim P
in terms of the dimension of an algebraic variety, then the intersection U of V and W is connected.
More generally, the theorem states that if Z is a projective variety and f: Z → P^n × P^n is any morphism such that dim f(Z) > n, then f^−1(Δ) is connected, where Δ is the diagonal in P^n × P^n. The special case of intersections is recovered by taking Z = V × W, with f the natural inclusion.
See also
Zariski's connectedness theorem
Grothendieck's connectedness theorem
Deligne's connectedness theorem
References
External links
Lecture notes (PDF) with the result as Theorem 15.3 (also attributed to Faltings)
Intersection theory
Theorems in algebraic geometry
|
https://en.wikipedia.org/wiki/Victor%20Buchstaber
|
Victor Matveevich Buchstaber (born 1 April 1943, Tashkent, Soviet Union) is a Soviet and Russian mathematician known for his work on algebraic topology, homotopy theory, and mathematical physics.
Work
Buchstaber's first research work was in cobordism theory. He calculated the differential in the Atiyah–Hirzebruch spectral sequence in K-theory and complex cobordism theory, constructed Chern–Dold characters and the universal Todd genus in cobordism, and gave an alternative effective solution of the Milnor–Hirzebruch problem. He went on to develop a theory of double-valued formal groups that led to the calculation of cobordism rings of complex manifolds having symplectic coverings and to the explicit construction of what are now known as Buchstaber manifolds. He devised filtrations in Hopf algebras and the Buchstaber spectral sequence, which were successfully applied to the calculation of stable homotopy groups of spheres.
He worked on the deformation theory for mappings to groups, which led to the solution of the Novikov problem on multiplicative subgroups in operator doubles, and to construction of the quantum group of complex cobordisms. He went on to treat problems related both with algebraic geometry and integrable systems. He is also well known for his work on sigma-functions on universal spaces of Jacobian varieties of algebraic curves that give effective solutions of important integrable systems. Buchstaber created an algebro-functional theory of symmetric products of spaces and described algebraic varieties of polysymmetric polynomials.
Academic career
Buchstaber gained his Ph.D. in 1970 under Sergei Novikov and Dr. Sci. in 1984 from Moscow State University. He is currently a professor at the Faculty of Mathematics and Mechanics, Moscow State University, and an emeritus professor at the School of Mathematics, University of Manchester. He has supervised more than 30 Ph.D. students, including Serge Ochanine, Iosif Polterovich, Taras Panov and Alexander Gaifullin.
In 1974 Buchstaber was an Invited Speaker at the International Congress of Mathematicians in Vancouver (but he did not give a lecture there). In 2004 he was elected a corresponding fellow of the Royal Society of Edinburgh. In 2006 he was elected a corresponding member of the Russian Academy of Sciences.
Works
Victor Buchstaber and Taras Panov, Toric Topology, American Mathematical Society, Providence, RI, 2015.
References
External links
Home page at Russian Academy of Sciences
Birthday tribute in Moscow Mathematical Journal
1943 births
Living people
Russian mathematicians
Topologists
Moscow State University alumni
Academic staff of Moscow State University
Corresponding Members of the Russian Academy of Sciences
Academics of the University of Manchester
Scientists from Tashkent
Fellows of the Royal Society of Edinburgh
|
https://en.wikipedia.org/wiki/University%20of%20Ulm
|
Ulm University (German: Universität Ulm) is a public university in Ulm, Baden-Württemberg, Germany. The university was founded in 1967 and focuses on natural sciences, medicine, engineering sciences, mathematics, economics and computer science. With 9,891 students (summer semester 2018), it is one of the youngest public universities in Germany. The campus of the university is located north of the city on a hill called Oberer Eselsberg, while the university hospital has additional sites across the city.
History
The university is the youngest public university in the state of Baden-Württemberg, which boasts several old, renowned universities in Heidelberg (founded in 1386), Freiburg (1457) and Tübingen (1477). The idea was to create a university with a new approach in both research and teaching. An important concept since the foundation of the university has always been to promote interdisciplinarity. In the decades following the foundation, the spectrum of subjects has steadily been extended, and the university has grown significantly.
An important step in combining the strengths of industrial and academic research was the realization of the idea of a science park around the main university campus. Research centers of companies such as Daimler, BMW, and Siemens (and, in the past, Nokia and AEG) have been established at the site, in addition to institutes of the university focusing on applied research. Among other large research projects, the university hosts four Collaborative Research Centers (German: Sonderforschungsbereiche), which are established on a competitive basis by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG). Third-party funding of research reached 67.5 million euros in 2009.
In 1998, Ulm University introduced an international master's program taught in English, the M.Sc. in Communication Technology, the first of its kind in Germany. The program has since attracted students from around the world and maintains research collaborations with many renowned universities.
It also offers other English-language programs, namely an M.Sc. and Ph.D. in molecular medicine and M.Sc. programs in advanced materials, energy science and technology, finance, biology, and advanced oncology, the last of which is a part-time program for working professionals.
In 2003, Ulm University was involved in founding a private university in Egypt, the German University in Cairo.
Since 2007, the university has been participating in the German Universities Excellence Initiative with the newly founded International Graduate School in Molecular Medicine Ulm.
Name
As Albert Einstein was born in Ulm in 1879, it was repeatedly suggested that the university be named after him. In November 2006, the senate of the university finally decided to rename the university. However, as this decision was not confirmed by the Ministry of Science, Research and Arts of the State of Baden-Württemberg, the university was not renamed after Albert Einstein.
|
https://en.wikipedia.org/wiki/Koszul%E2%80%93Tate%20resolution
|
In mathematics, a Koszul–Tate resolution or Koszul–Tate complex of the quotient ring R/M is a projective resolution of it as an R-module which also has a structure of a dg-algebra over R, where R is a commutative ring and M ⊂ R is an ideal. They were introduced by Tate (1957) as a generalization of the Koszul resolution for the quotient R/(x1, ..., xn) of R by a regular sequence of elements. The Koszul–Tate resolution was later used to calculate BRST cohomology. The differential of this complex is called the Koszul–Tate derivation or Koszul–Tate differential.
Construction
First suppose for simplicity that all rings contain the rational numbers Q. Assume we have a graded supercommutative ring X, so that
ab = (−1)^(deg a · deg b) ba,
with a differential d, with
d(ab) = d(a)b + (−1)^(deg a) a d(b),
and x ∈ X is a homogeneous cycle (dx = 0). Then we can form a new ring
Y = X[T]
of polynomials in a variable T, where the differential is extended to T by
dT=x.
(The polynomial ring is understood in the super sense, so if T has odd degree then T^2 = 0.) The result of adding the element T is to kill off the element of the homology of X represented by x, and Y is still a supercommutative ring with derivation.
A Koszul–Tate resolution of R/M can be constructed as follows. We start with the commutative ring R (graded so that all elements have degree 0). Then add new variables as above of degree 1 to kill off all elements of the ideal M in the homology. Then keep on adding more and more new variables (possibly an infinite number) to kill off all homology of positive degree. We end up with a supercommutative graded ring with derivation d whose
homology is just R/M.
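As a small worked example (an illustration assumed here, not taken from the text above): for R = Q[x]/(x^2) and M = (x), the ideal is not generated by a regular sequence, so after adjoining an odd variable T with dT = x, the cycle xT survives in degree 1 and a second, even-degree variable S with dS = xT must be adjoined:

```latex
% Koszul--Tate resolution of k = R/(x) for R = \mathbf{Q}[x]/(x^2):
% T has odd degree 1 (so T^2 = 0), S has even degree 2.
\begin{aligned}
Y &= R\langle T\rangle[S], & dT &= x, & dS &= xT,\\
d(S^m) &= m\,x\,T\,S^{m-1}, & d(T S^m) &= x\,S^m \quad (m \ge 1).
\end{aligned}
```

In each fixed positive degree the complex is a free R-module of rank one, with differential given by multiplication by x (up to a nonzero rational factor), so all positive-degree homology vanishes and H_0(Y) = R/(x) = k. No further variables are needed.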
If we are not working over a field of characteristic 0, the construction above still works, but it is usually neater to use the following variation of it. Instead of using polynomial rings X[T], one can use a "polynomial ring with divided powers" X〈T〉, which has a basis of elements
T^(i) for i ≥ 0,
where
T^(i) T^(j) = ((i + j)!/(i! j!)) T^(i+j).
Over a field of characteristic 0,
T^(i) is just T^i/i!.
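Over the rationals this multiplication rule is forced by T^(i) = T^i/i!, which a few lines of Python can confirm (an illustrative check, not part of the article):

```python
from math import comb, factorial
from fractions import Fraction

# Model the divided power T^(i) by its rational coefficient 1/i!
# (valid over Q, where T^(i) = T^i / i!).
def divided_power(i):
    return Fraction(1, factorial(i))

# Check T^(i) T^(j) = ((i + j)!/(i! j!)) T^(i+j) = C(i+j, i) T^(i+j).
for i in range(8):
    for j in range(8):
        assert divided_power(i) * divided_power(j) == comb(i + j, i) * divided_power(i + j)
```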
See also
Lie algebra cohomology
References
M. Henneaux and C. Teitelboim, Quantization of Gauge Systems, Princeton University Press, 1992
Homological algebra
Commutative algebra
|
https://en.wikipedia.org/wiki/Elkies%20trinomial%20curves
|
In number theory, the Elkies trinomial curves are certain hyperelliptic curves constructed by Noam Elkies which have the property that rational points on them correspond to trinomial polynomials giving an extension of Q with particular Galois groups.
One curve, C168, gives Galois group PSL(2,7) from a polynomial of degree seven, and the other, C1344, gives Galois group AL(8), the semidirect product of a 2-elementary group of order eight acted on by PSL(2, 7), giving a transitive permutation subgroup of the symmetric group on eight roots of order 1344.
The equation of the curve C168 is:
The curve is a plane algebraic curve model for a Galois resolvent for the trinomial polynomial equation x^7 + bx + c = 0. If there exists a point (x, y) on the (projectivized) curve, there is a corresponding pair (b, c) of rational numbers, such that the trinomial polynomial either factors or has Galois group PSL(2,7), the finite simple group of order 168. The curve has genus two, and so by Faltings's theorem there are only a finite number of rational points on it. These rational points were proven by Nils Bruin, using the computer program Kash, to be the only ones on C168, and they give only four distinct trinomial polynomials with Galois group PSL(2,7): x^7 − 7x + 3 (the Trinks polynomial), (1/11)x^7 − 14x + 32 (the Erbach–Fisher–McKay polynomial), and two further new polynomials with Galois group PSL(2,7).
On the other hand, the equation of curve C1344 is:
Once again the genus is two, and by Faltings's theorem the list of rational points is finite. It is thought that the only rational points on it correspond to the polynomials x^8 + 16x + 28, x^8 + 576x + 1008, and 19453x^8 + 19x + 2, which have Galois group AL(8), and x^8 + 324x + 567, which comes from two different rational points and has Galois group PSL(2, 7) again, this time as the Galois group of a polynomial of degree eight.
References
Galois theory
Number theory
Algebraic curves
|
https://en.wikipedia.org/wiki/NSMB
|
NSMB may refer to:
NSMB (mathematics), a Navier-Stokes finite volume solver
Nature Structural & Molecular Biology, an academic journal
New Super Mario Bros. (series), a series of 2D platform games by Nintendo consisting of new revivals of classic Mario platformers
New Super Mario Bros., the first game in the series, released in 2006 for the Nintendo DS
nsmb, a Server Message Block implementation on FreeBSD and other BSD systems including macOS
|
https://en.wikipedia.org/wiki/S2S
|
In mathematics, S2S is the monadic second order theory of the infinite complete binary tree.
S2S may also refer to:
Server-to-server, protocol exchange between servers
Site-to-site VPN
S2S Pte Ltd, a Japanese record label
Ski to Sea Race, a race in Whatcom County, Washington
Sister2Sister, Christine and Sharon Muscat, Maltese-Australian singers
Sales to Support, a type of call-center transfer from sales to technical support
In woodworking and lumber terms, S2S = surfaced two sides, S3S = surfaced three sides, and S4S = surfaced four sides
|
https://en.wikipedia.org/wiki/Dowling%20geometry
|
In combinatorial mathematics, a Dowling geometry, named after Thomas A. Dowling, is a matroid associated with a group. There is a Dowling geometry of each rank for each group. If the rank is at least 3, the Dowling geometry uniquely determines the group. Dowling geometries have a role in matroid theory as universal objects (Kahn and Kung, 1982); in that respect they are analogous to projective geometries, but based on groups instead of fields.
A Dowling lattice is the geometric lattice of flats associated with a Dowling geometry. The lattice and the geometry are mathematically equivalent: knowing either one determines the other. Dowling lattices, and by implication Dowling geometries, were introduced by Dowling (1973a,b).
A Dowling lattice or geometry of rank n of a group G is often denoted Qn(G).
The original definitions
In his first paper (1973a) Dowling defined the rank-n Dowling lattice of the multiplicative group of a finite field F. It is the set of all those subspaces of the vector space Fn that are generated by subsets of the set E that consists of vectors with at most two nonzero coordinates. The corresponding Dowling geometry is the set of 1-dimensional vector subspaces generated by the elements of E.
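For a small illustrative computation (with the conventions above; the field size q = 3 and rank n = 3 are chosen here only as an example), the points of the Dowling geometry are the 1-dimensional subspaces spanned by vectors with at most two nonzero coordinates, and there are n + |G|·n(n − 1)/2 of them, where G = F* has order q − 1:

```python
from itertools import product

q = 3  # work over GF(3); its multiplicative group G has order q - 1 = 2
n = 3  # rank of the Dowling geometry

def canonical(v):
    # Canonical representative of the 1-dim subspace spanned by v:
    # scale so that the first nonzero coordinate equals 1.
    lead = next(x for x in v if x != 0)
    inv = pow(lead, q - 2, q)  # multiplicative inverse in GF(q), q prime
    return tuple((x * inv) % q for x in v)

# E: nonzero vectors with at most two nonzero coordinates
points = {canonical(v) for v in product(range(q), repeat=n)
          if any(v) and sum(x != 0 for x in v) <= 2}

# The rank-n Dowling geometry of G = GF(q)* has n + |G| * C(n, 2) points.
assert len(points) == n + (q - 1) * n * (n - 1) // 2  # 3 + 2 * 3 = 9
```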
In his second paper (1973b) Dowling gave an intrinsic definition of the rank-n Dowling lattice of any finite group G. Let S be the set {1,...,n}. A G-labelled set (T, α) is a set T together with a function α: T → G. Two G-labelled sets, (T, α) and (T, β), are equivalent if there is a group element, g, such that β = gα.
An equivalence class is denoted [T, α].
A partial G-partition of S is a set γ = {[B1,α1], ..., [Bk,αk]} of equivalence classes of G-labelled sets such that B1, ..., Bk are nonempty subsets of S that are pairwise disjoint. (k may equal 0.)
A partial G-partition γ is said to be ≤ another one, γ*, if
every block of the second is a union of blocks of the first, and
for each Bi contained in B*j, αi is equivalent to the restriction of α*j to domain Bi .
This gives a partial ordering of the set of all partial G-partitions of S. The resulting partially ordered set is the Dowling lattice Qn(G).
The definitions are valid even if F or G is infinite, though Dowling mentioned only finite fields and groups.
Graphical definitions
A graphical definition was then given by Doubilet, Rota, and Stanley (1972). We give the slightly simpler (but essentially equivalent) graphical definition of Zaslavsky (1991), expressed in terms of gain graphs.
Take n vertices, and between each pair of vertices, v and w, take a set of |G| parallel edges labelled by each of the elements of the group G. The labels are oriented, in that, if the label in the direction from v to w is the group element g, then the label of the same edge in the opposite direction, from w to v, is g−1. The label of an edge therefore depends on the direction of the edge; such labels are called gains. Also add to each vertex a loop whose gain is any value other than 1 (1 is the identity element of the group).
|
https://en.wikipedia.org/wiki/Halpin%E2%80%93Tsai%20model
|
The Halpin–Tsai model is a mathematical model for predicting the elasticity of a composite material based on the geometry and orientation of the filler and the elastic properties of the filler and matrix. The model is based on the self-consistent field method, although it is often considered to be empirical.
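In their most common form (reproduced here as a sketch; the symbols E_f, E_m, phi, and zeta follow the usual conventions for the filler modulus, matrix modulus, filler volume fraction, and shape parameter, and are not defined in the text above), the Halpin–Tsai equations read E_c = E_m (1 + ζηφ)/(1 − ηφ) with η = (E_f/E_m − 1)/(E_f/E_m + ζ):

```python
def halpin_tsai_modulus(E_f, E_m, phi, zeta):
    """Halpin-Tsai estimate of a composite modulus.

    E_f, E_m : filler and matrix moduli
    phi      : filler volume fraction (0..1)
    zeta     : shape/geometry parameter of the filler
    """
    eta = (E_f / E_m - 1.0) / (E_f / E_m + zeta)
    return E_m * (1.0 + zeta * eta * phi) / (1.0 - eta * phi)

# Sanity checks: with no filler the matrix modulus is recovered, and adding a
# stiffer filler raises the estimate.
assert abs(halpin_tsai_modulus(100.0, 1.0, 0.0, 2.0) - 1.0) < 1e-12
assert halpin_tsai_modulus(100.0, 1.0, 0.3, 2.0) > 1.0
```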
See also
Cadec-online.com implements the Halpin–Tsai model among others.
References
J. C. Halpin Effect of Environmental Factors on Composite Materials, US Air Force Material Laboratory, Technical Report AFML-TR-67-423, June 1969
J. C. Halpin and J. L. Kardos, "Halpin–Tsai equations: A review", Polymer Engineering and Science, 1976, v. 16, no. 5, pp. 344–352
Halpin-Tsai model on about.com
Composite materials
Continuum mechanics
Materials science
|
https://en.wikipedia.org/wiki/Daniel%20Kleitman
|
Daniel J. Kleitman (born October 4, 1934) is an American mathematician and professor of applied mathematics at MIT. His research interests include combinatorics, graph theory, genomics, and operations research.
Biography
Kleitman was born in 1934 in Brooklyn, New York, the younger of Bertha and Milton Kleitman's two sons. His father was a lawyer who after WWII became a commodities trader and investor. In 1942 the family moved to Morristown, New Jersey, and he graduated from Morristown High School in 1950.
Kleitman then attended Cornell University, from which he graduated in 1954, and received his PhD in Physics from Harvard University in 1958 under Nobel Laureates Julian Schwinger and Roy Glauber. He is the "k" in G. W. Peck, a pseudonym for a group of six mathematicians that includes Kleitman. Formerly a physics professor at Brandeis University, Kleitman was encouraged by Paul Erdős to change his field of study to mathematics. Perhaps humorously, Erdős once asked him, "Why are you only a physicist?"
Kleitman joined the applied mathematics faculty at MIT in 1966, and was promoted to professor in 1969.
Kleitman coauthored at least six papers with Erdős, giving him an Erdős number of 1.
He was a math advisor and extra for the film Good Will Hunting. Since Minnie Driver, who appeared in Good Will Hunting, also appeared in Sleepers with Kevin Bacon, Kleitman has a Bacon number of 2. Adding the two numbers results in an Erdős–Bacon number of 3, which is a tie with Bruce Reznick for the lowest number anyone has.
Personal life
On July 26, 1964 Kleitman married Sharon Ruth Alexander. They have three children.
Selected publications
See also
Kleitman–Wang algorithms
Littlewood–Offord problem
References
External links
Kleitman's homepage
(article available on Douglas West's web page, University of Illinois at Urbana–Champaign)
20th-century American mathematicians
21st-century American mathematicians
Combinatorialists
American operations researchers
Harvard University alumni
Massachusetts Institute of Technology School of Science faculty
Brandeis University faculty
1934 births
Living people
Educators from New York City
Mathematicians from New Jersey
Mathematicians from New York (state)
Morristown High School (Morristown, New Jersey) alumni
People from Morristown, New Jersey
|
https://en.wikipedia.org/wiki/Spiric%20section
|
In geometry, a spiric section, sometimes called a spiric of Perseus, is a quartic plane curve defined by equations of the form
(x^2 + y^2)^2 = dx^2 + ey^2 + f.
Equivalently, spiric sections can be defined as bicircular quartic curves that are symmetric with respect to the x and y-axes. Spiric sections are included in the family of toric sections and include the family of hippopedes and the family of Cassini ovals. The name is from σπειρα meaning torus in ancient Greek.
A spiric section is sometimes defined as the curve of intersection of a torus and a plane parallel to its rotational symmetry axis. However, this definition does not include all of the curves given by the previous definition unless imaginary planes are allowed.
Spiric sections were first described by the ancient Greek geometer Perseus in roughly 150 BC, and are assumed to be the first toric sections to be described. The name spiric is due to the ancient term spira for a torus.
Equations
Start with the usual equation for the torus:
(x^2 + y^2 + z^2 + b^2 − a^2)^2 = 4b^2(x^2 + y^2).
Interchanging y and z so that the axis of revolution is now on the xy-plane, and setting z = c to find the curve of intersection gives
(x^2 + y^2 + c^2 + b^2 − a^2)^2 = 4b^2(x^2 + c^2).
In this formula, the torus is formed by rotating a circle of radius a with its center following another circle of radius b (not necessarily larger than a; self-intersection is permitted). The parameter c is the distance from the intersecting plane to the axis of revolution. There are no spiric sections with c > b + a, since there is no intersection; the plane is too far away from the torus to intersect it.
Expanding the equation gives the form seen in the definition
(x^2 + y^2)^2 = dx^2 + ey^2 + f,
where
d = 2(a^2 + b^2 − c^2), e = 2(a^2 − b^2 − c^2), f = 4b^2c^2 − (b^2 + c^2 − a^2)^2.
In polar coordinates (x = r cos θ, y = r sin θ) this becomes
r^4 = (d cos^2 θ + e sin^2 θ) r^2 + f.
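Writing the sliced-torus equation as (x^2 + y^2)^2 = dx^2 + ey^2 + f, the coefficients work out to d = 2(a^2 + b^2 − c^2), e = 2(a^2 − b^2 − c^2), and f = 4b^2c^2 − (b^2 + c^2 − a^2)^2, which can be checked symbolically (an illustrative sketch using sympy, not part of the article):

```python
import sympy as sp

x, y, a, b, c = sp.symbols('x y a b c')

# Torus rotated so its axis lies in the xy-plane, sliced by the plane z = c:
k = b**2 + c**2 - a**2
sliced_torus = (x**2 + y**2 + k)**2 - 4*b**2*(x**2 + c**2)

# Coefficients of the quartic form (x^2 + y^2)^2 = d x^2 + e y^2 + f
d = 2*(a**2 + b**2 - c**2)
e = 2*(a**2 - b**2 - c**2)
f = 4*b**2*c**2 - k**2

quartic = (x**2 + y**2)**2 - d*x**2 - e*y**2 - f
assert sp.expand(sliced_torus - quartic) == 0  # the two forms agree identically
```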
Spiric sections on a spindle torus
Spiric sections on a spindle torus, whose planes intersect the spindle (inner part), consist of an outer and an inner curve (see picture).
Spiric sections as isoptics
Isoptics of ellipses and hyperbolas are spiric sections. (See also the external link to The Mathematics Enthusiast.)
Examples of spiric sections
Examples include the hippopede and the Cassini oval and their relatives, such as the lemniscate of Bernoulli. The Cassini oval has the remarkable property that the product of the distances to two foci is constant. For comparison, the sum is constant in ellipses, the difference is constant in hyperbolae and the ratio is constant in circles.
References
MacTutor history
2Dcurves.com description
MacTutor biography of Perseus
The Mathematics Enthusiast Number 9, article 4
Specific
Algebraic curves
Plane curves
Toric sections
|
https://en.wikipedia.org/wiki/WKB%20%28disambiguation%29
|
The WKB approximation is a method in applied mathematics for finding approximate solutions to linear differential equations with spatially varying coefficients.
WKB may also refer to:
Warracknabeal Airport (IATA: WKB), in Warracknabeal, Victoria, Australia
Well-known binary, a binary format for representing vector geometry objects on a map
See also
|
https://en.wikipedia.org/wiki/Defective%20matrix
|
In linear algebra, a defective matrix is a square matrix that does not have a complete basis of eigenvectors, and is therefore not diagonalizable. In particular, an n × n matrix is defective if and only if it does not have n linearly independent eigenvectors. A complete basis is formed by augmenting the eigenvectors with generalized eigenvectors, which are necessary for solving defective systems of ordinary differential equations and other problems.
An n × n defective matrix always has fewer than n distinct eigenvalues, since distinct eigenvalues always have linearly independent eigenvectors. In particular, a defective matrix has one or more eigenvalues λ with algebraic multiplicity m > 1 (that is, they are multiple roots of the characteristic polynomial), but fewer than m linearly independent eigenvectors associated with λ. If the algebraic multiplicity of λ exceeds its geometric multiplicity (that is, the number of linearly independent eigenvectors associated with λ), then λ is said to be a defective eigenvalue. However, every eigenvalue with algebraic multiplicity m always has m linearly independent generalized eigenvectors.
A Hermitian matrix (or the special case of a real symmetric matrix) or a unitary matrix is never defective; more generally, a normal matrix (which includes Hermitian and unitary as special cases) is never defective.
Jordan block
Any nontrivial Jordan block of size 2 × 2 or larger (that is, not completely diagonal) is defective. (A diagonal matrix is a special case of the Jordan normal form, with all Jordan blocks trivial of size 1 × 1, and is not defective.) For example, the n × n Jordan block
J = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix}
has an eigenvalue λ with algebraic multiplicity n (or greater if there are other Jordan blocks with the same eigenvalue), but only one distinct eigenvector v1 = (1, 0, …, 0)ᵀ, where J v1 = λ v1. The other canonical basis vectors v2, …, vn form a chain of generalized eigenvectors such that (J − λI) v_k = v_{k−1} for k = 2, …, n.
Any defective matrix has a nontrivial Jordan normal form, which is as close as one can come to diagonalization of such a matrix.
Example
A simple example of a defective matrix is
A = \begin{pmatrix} 3 & 1 \\ 0 & 3 \end{pmatrix},
which has a double eigenvalue of 3 but only one distinct eigenvector
v = \begin{pmatrix} 1 \\ 0 \end{pmatrix}
(and constant multiples thereof).
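A short numerical sketch (assuming NumPy is available; the matrix [[3, 1], [0, 3]] is a standard instance of a matrix with double eigenvalue 3 and a single eigenvector) shows both the defectiveness and the generalized eigenvector that completes the basis:

```python
import numpy as np

# A has the double eigenvalue 3 but only one independent eigenvector,
# so it is defective.
A = np.array([[3.0, 1.0], [0.0, 3.0]])
evals, evecs = np.linalg.eig(A)
assert np.allclose(evals, [3.0, 3.0])

# The two eigenvector columns returned are numerically parallel
# (their unit vectors agree up to sign), i.e. there is only one
# independent eigenvector direction.
u, v = evecs[:, 0], evecs[:, 1]
assert abs(abs(u @ v) - 1.0) < 1e-8

# A generalized eigenvector w solves (A - 3I) w = v1 with v1 = (1, 0);
# lstsq handles the singular coefficient matrix.
w = np.linalg.lstsq(A - 3 * np.eye(2), np.array([1.0, 0.0]), rcond=None)[0]
assert np.allclose((A - 3 * np.eye(2)) @ w, [1.0, 0.0])
print("defective: one eigenvector plus one generalized eigenvector")
```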
See also
Notes
References
Linear algebra
Matrices
|
https://en.wikipedia.org/wiki/Chronology%20of%20ancient%20Greek%20mathematicians
|
This is a chronology of ancient Greek mathematicians.
See also
References
Ancient Greek mathematicians
Greek mathematics
History of geometry
History of mathematics
Mathematics timelines
|
https://en.wikipedia.org/wiki/David%20Neft
|
David S. Neft (born January 9, 1937) is an American writer and historian who creates sports encyclopedias.
Early career
Neft was born in New York City, received a BA, MBA, and PhD (Statistics) from Columbia University, and worked as chief statistician for the polling company Louis Harris & Associates from 1963 to 1965.
Big Mac
In 1965, he was a founder of a company called Information Concepts, Inc. (ICI) and headed the first effort to compile a computerized database of baseball statistics. The task took more than three years, as Neft and a team of researchers travelled across the country to fill the gaping holes in baseball's statistical and biographical records. The resulting work was published in 1969 by the Macmillan Publishing Company. Although the official title was The Baseball Encyclopedia, the massive book was generally referred to as "Big Mac". It was a quantum leap from early baseball encyclopedias, with a breadth and depth that far exceeded anything that had come before it.
Sports Encyclopedia series
Neft left ICI in 1970, spending the next few years developing dice-based sports games for Sports Illustrated Enterprises. He returned to the sports reference field when he founded Sports Products, Inc. with partner Richard M. Cohen. The company produced a new baseball encyclopedia in 1974 called "The Sports Encyclopedia: Baseball". That same year, they published groundbreaking new encyclopedias for football (The Sports Encyclopedia: Pro Football) and basketball (The Sports Encyclopedia: Pro Basketball).
The baseball encyclopedia has been updated each spring, with the 27th edition appearing in 2007. Seventeen editions of the football encyclopedia were published (the last in 1998). The basketball encyclopedia was published until 1992, a total of five editions.
Beside this line of encyclopedias, Neft edited more than a dozen other sports books.
Pre-1933 football research
In 1978, Neft & Cohen published Pro Football: The Early Years, a startling new look at professional football before 1933. Because the National Football League hadn't kept official statistics in its first thirteen seasons, knowledge of the period was extremely limited. Neft led a team of researchers that meticulously reconstructed the statistical record using box scores, play-by-play accounts, and game stories from local newspapers. The result was a remarkable new look at the teams and players from the NFL's earliest seasons.
Gannett
Neft returned to Harris, then owned by Gannett, in 1977, serving as executive vice president. In 1985, he became Gannett's director of research, a position he held until his retirement in 2002. He is widely recognized within the newspaper industry as a market research expert.
References
Schwarz, Alan The Numbers Game: Baseball's Lifelong Fascination with Statistics, 2004
External links
Article on Neft's retirement from Gannett
1937 births
Living people
Baseball writers
American statisticians
Journalists from New York City
Col
|
https://en.wikipedia.org/wiki/General%20Achievement%20Test
|
The General Achievement Test (often abbreviated GAT) is a test of general knowledge and skills including communication, mathematics, science and technology, the arts, humanities and social sciences in the Australian state of Victoria.
Although the GAT is not a part of the graduation requirements and does not count towards a student's final VCE results or ATAR, it plays an important role in checking that a school's assessments and examinations have been marked accurately.
History
The General Achievement Test was introduced as a pilot program in 1987, designed to test the feasibility and effectiveness of a general test that assesses skills and knowledge that was not specific to any VCE subjects. After the successful pilot program, the GAT was fully implemented as a compulsory test for all Year 12 students studying for the Victorian Certificate of Education in 1992. The GAT has since then been conducted annually and remains an important part of the VCE assessment process.
From 2006 to 2007, Year 12 Western Australian students sat the GAT for a short period. This test was introduced into Western Australia as a trial to provide schools with feedback on the standard of assessment used for the new WACE courses. However, the results of the trial were inconclusive due to the test not being taken seriously by a large number of students, and a more sophisticated analysis than the initially suggested regression analysis was found to be required. Also, the renewed primacy of marks in scaling scores for WACE meant the original purpose for the GAT no longer existed. Therefore, in 2007 the Curriculum Council of Western Australia decided to discontinue the test after an independent review.
In 2007, Monash University began taking the GAT into consideration for middle band students. It was initially for Victorian students who missed out on courses because their ATAR score was just below the cut-off score. Currently, it is only considered if two students have the same ATAR, prerequisite study scores and are trying to get into the same course. Their GAT score can then be used to differentiate between one getting in and the other not.
In 2020, the GAT was rescheduled from June to October due to the COVID-19 pandemic, with masks being mandatory for all students undergoing the test.
These issues continued into 2021, with the GAT being rescheduled four separate times due to COVID lockdowns. In the lead-up, Victorian Education Minister James Merlino encouraged students in hotspot areas to receive COVID tests before sitting the GAT, uncovering 33 cases. After the exam was conducted, at least four positive cases were linked to students who attended.
Since 2022, the GAT has been split into two sections, and the total exam time was increased from 3 hours and 15 minutes to 4 hours. It also started explicitly reporting a student’s literacy and numeracy skills against the new standards in addition to its original role in quality-assuring VCE assessments, bringing
|
https://en.wikipedia.org/wiki/Space%20%28mathematics%29
|
In mathematics, a space is a set (sometimes called a universe) with some added structure.
While modern mathematics uses many types of spaces, such as Euclidean spaces, linear spaces, topological spaces, Hilbert spaces, or probability spaces, it does not define the notion of "space" itself.
A space consists of selected mathematical objects that are treated as points, and selected relationships between these points.
The nature of the points can vary widely: for example, the points can be elements of a set, functions on another space, or subspaces of another space. It is the relationships that define the nature of the space. More precisely, isomorphic spaces are considered identical, where an isomorphism between two spaces is a one-to-one correspondence between their points that preserves the relationships. For example, the relationships between the points of a three-dimensional Euclidean space are uniquely determined by Euclid's axioms, and all three-dimensional Euclidean spaces are considered identical.
Topological notions such as continuity have natural definitions in every Euclidean space.
However, topology does not distinguish straight lines from curved lines, and the relation between Euclidean and topological spaces is thus "forgetful". Relations of this kind are treated in more detail in the Section "Types of spaces".
It is not always clear whether a given mathematical object should be considered as a geometric "space", or an algebraic "structure". A general definition of "structure", proposed by Bourbaki, embraces all common types of spaces, provides a general definition of isomorphism, and justifies the transfer of properties between isomorphic structures.
History
Before the golden age of geometry
In ancient Greek mathematics, "space" was a geometric abstraction of the three-dimensional reality observed in everyday life. About 300 BC, Euclid gave axioms for the properties of space. Euclid built all of mathematics on these geometric foundations, going so far as to define numbers by comparing the lengths of line segments to the length of a chosen reference segment.
The method of coordinates (analytic geometry) was adopted by René Descartes in 1637. At that time, geometric theorems were treated as absolute objective truths knowable through intuition and reason, similar to objects of natural science; and axioms were treated as obvious implications of definitions.
Two equivalence relations between geometric figures were used: congruence and similarity. Translations, rotations and reflections transform a figure into congruent figures; homotheties — into similar figures. For example, all circles are mutually similar, but ellipses are not similar to circles. A third equivalence relation, introduced by Gaspard Monge in 1795, occurs in projective geometry: not only ellipses, but also parabolas and hyperbolas, turn into circles under appropriate projective transformations; they all are projectively equivalent figures.
The relation betwee
|
https://en.wikipedia.org/wiki/Random%20compact%20set
|
In mathematics, a random compact set is essentially a compact set-valued random variable. Random compact sets are useful in the study of attractors for random dynamical systems.
Definition
Let (M, d) be a complete separable metric space. Let 𝒦 denote the set of all compact subsets of M. The Hausdorff metric h on 𝒦 is defined by
h(K1, K2) := max { sup_{a ∈ K1} inf_{b ∈ K2} d(a, b), sup_{b ∈ K2} inf_{a ∈ K1} d(a, b) }.
(𝒦, h) is also a complete separable metric space. The corresponding open subsets generate a σ-algebra on 𝒦, the Borel sigma algebra B(𝒦) of 𝒦.
A random compact set is a measurable function X from a probability space (Ω, F, P) into (𝒦, B(𝒦)).
Put another way, a random compact set is a measurable function X : Ω → 2^M such that X(ω) is almost surely compact and
ω ↦ inf_{b ∈ X(ω)} d(x, b)
is a measurable function for every x ∈ M.
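For finite point sets the Hausdorff metric can be computed directly from its definition. The following sketch (illustrative; the two sample subsets of the plane are arbitrary choices) implements h(K1, K2) as the larger of the two one-sided sup–inf distances:

```python
import math

def hausdorff(K1, K2):
    """Hausdorff distance between two finite point sets, following
    h(K1, K2) = max( sup_{a in K1} inf_{b in K2} d(a, b),
                     sup_{b in K2} inf_{a in K1} d(a, b) )."""
    sup_inf = lambda A, B: max(min(math.dist(a, b) for b in B) for a in A)
    return max(sup_inf(K1, K2), sup_inf(K2, K1))

K1 = [(0.0, 0.0), (1.0, 0.0)]
K2 = [(0.0, 0.0), (0.0, 2.0)]
print(hausdorff(K1, K2))  # 2.0: the point (0, 2) is at distance 2 from K1
```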
Discussion
Random compact sets in this sense are also random closed sets as in Matheron (1975). Consequently, under the additional assumption that the carrier space is locally compact, their distribution is given by the probabilities
P(X ∩ K = ∅) for K ∈ 𝒦.
(The distribution of a random compact convex set is also given by the system of all inclusion probabilities P(X ⊂ K).)
For K = {x}, the probability P(x ∈ X) is obtained. Thus the covering function p_X is given by
p_X(x) = P(x ∈ X) for x ∈ M.
Of course, p_X(x) can also be interpreted as the mean of the indicator function 1_{x ∈ X}:
p_X(x) = E[1_{x ∈ X}].
The covering function takes values between 0 and 1. The set of all x ∈ M with p_X(x) > 0 is called the support of X. The set of all x ∈ M with p_X(x) = 1 is called the kernel, the set of fixed points, or essential minimum. If X1, X2, … is a sequence of i.i.d. random compact sets, then almost surely the intersection of the first n of them converges to the essential minimum as n → ∞.
References
Matheron, G. (1975) Random Sets and Integral Geometry. J.Wiley & Sons, New York.
Molchanov, I. (2005) The Theory of Random Sets. Springer, New York.
Stoyan D., and H.Stoyan (1994) Fractals, Random Shapes and Point Fields. John Wiley & Sons, Chichester, New York.
Random dynamical systems
Statistical randomness
|
https://en.wikipedia.org/wiki/Lebesgue%20spine
|
In mathematics, in the area of potential theory, a Lebesgue spine or Lebesgue thorn is a type of set used for discussing solutions to the Dirichlet problem and related problems of potential theory. The Lebesgue spine was introduced in 1912 by Henri Lebesgue to demonstrate that the Dirichlet problem does not always have a solution, particularly when the boundary has a sufficiently sharp edge protruding into the interior of the region.
Definition
A typical Lebesgue spine in \R^n, for n ≥ 3, is defined as follows
The important features of this set are that it is connected and path-connected in the Euclidean topology on \R^n, and the origin is a limit point of the set; yet the set is thin at the origin, as defined in the article Fine topology (potential theory).
Observations
The set is not closed in the Euclidean topology, since it does not contain the origin, which is one of its limit points; but it is closed in the fine topology on \R^n.
In comparison, it is not possible in \R^2 to construct such a connected set which is thin at the origin.
References
J. L. Doob, Classical Potential Theory and Its Probabilistic Counterpart, Springer-Verlag, Berlin Heidelberg New York.
L. L. Helms (1975). Introduction to Potential Theory. R. E. Krieger.
Potential theory
|
https://en.wikipedia.org/wiki/Fano%20surface
|
In algebraic geometry, a Fano surface is a surface of general type (in particular, not a Fano variety) whose points index the lines on a non-singular cubic threefold. They were first studied by Gino Fano.
Hodge diamond:
Fano surfaces are perhaps the simplest and most studied examples of irregular surfaces of general type that are not related to a product of two curves and are not a complete intersection of divisors in an Abelian variety.
The Fano surface S of a smooth cubic threefold F in P4 carries many remarkable geometric properties.
The surface S is naturally embedded into the Grassmannian of lines G(2,5) of P4. Let U be the restriction to S of the universal rank 2 bundle on G. We have the:
Tangent bundle Theorem (Fano, Clemens-Griffiths, Tyurin): The tangent bundle of S is isomorphic to U.
This is a quite interesting result because, a priori, there should be no link between these two bundles. It has many powerful applications. For example, one can recover the fact that the cotangent space of S is generated by global sections. This space of global 1-forms can be identified with the space of global sections of the tautological line bundle O(1) restricted to the cubic F and moreover:
Torelli-type Theorem: Let g' be the natural morphism from S to the Grassmannian G(2,5) defined by the cotangent sheaf of S generated by its 5-dimensional space of global sections. Let F' be the union of the lines corresponding to g'(S). The threefold F' is isomorphic to F.
Thus knowing a Fano surface S, we can recover the threefold F.
By the Tangent Bundle Theorem, we can also understand geometrically the invariants of S:
a) Recall that the second Chern number of a rank 2 vector bundle on a surface is the number of zeros of a generic section. For a Fano surface S, a 1-form w also defines a hyperplane section {w = 0} of the cubic F in P4. The zeros of a generic w on S correspond bijectively to the lines in the smooth cubic surface cut out on F by {w = 0}; therefore we recover that the second Chern number of S equals 27.
b) Let w1, w2 be two 1-forms on S. The canonical divisor K on S associated to the canonical form w1 ∧ w2 parametrizes the lines on F that cut the plane P = {w1 = w2 = 0} in P4. Using w1 and w2 such that the intersection of P and F is the union of 3 lines, one can recover the fact that K² = 45.
Let us give some details of that computation:
Through a generic point of the cubic F pass 6 lines. Let s be a point of S and let Ls be the corresponding line on the cubic F. Let Cs be the divisor on S parametrizing lines that cut the line Ls. The self-intersection of Cs is equal to the intersection number of Cs and Ct for t a generic point. The intersection of Cs and Ct is the set of lines on F that cut the disjoint lines Ls and Lt. Consider the linear span of Ls and Lt: it is a hyperplane in P4 that cuts F in a smooth cubic surface. By well-known results on cubic surfaces, the number of lines that cut two disjoint lines is 5, thus we get
|
https://en.wikipedia.org/wiki/Municipality%20of%20the%20District%20of%20Yarmouth
|
Yarmouth, officially named the Municipality of the District of Yarmouth, is a district municipality in Yarmouth County, Nova Scotia, Canada. Statistics Canada classifies the district municipality as a municipal district.
The district municipality forms the western part of Yarmouth County. It is one of three municipal units in the county, the other two being the Town of Yarmouth and the Municipality of the District of Argyle.
Demographics
In the 2021 Census of Population conducted by Statistics Canada, the Municipality of the District of Yarmouth had a population of living in of its total private dwellings, a change of from its 2016 population of . With a land area of , it had a population density of in 2021.
Education:
No certificate, diploma or degree: 35.32%
High school certificate: 18.16%
Apprenticeship or trade certificate or diploma: 13.43%
Community college, CEGEP or other non-university certificate or diploma: 20.06%
University certificate or diploma: 12.96%
Unemployment rate:
10.9%
Average house value:
$141,461
Communities
Communities include:
See also
List of municipalities in Nova Scotia
Royal eponyms in Canada
References
External links
Communities in Yarmouth County
District municipalities in Nova Scotia
|
https://en.wikipedia.org/wiki/The%20Algebra%20of%20Ice
|
The Algebra of Ice is a BBC Books original novel written by Lloyd Rose and based on the long-running British science fiction television series Doctor Who. It features the Seventh Doctor and Ace.
Synopsis
The Doctor and Ace investigate a 'crop circle' in the Kentish countryside; they are helped by a maths expert, a web-magazine publisher and the Doctor's friend, the Brigadier. However, this crop circle is made of ice and is not circular, instead being filled with square-sided shapes. It draws the Doctor and Ace into a new level of reality.
Trivia
The story makes reference to the Riemann hypothesis, featuring a sequence set in a 'world' modelled on the Riemann zeta function.
External links
The Cloister Library - The Algebra of Ice
2004 British novels
2004 science fiction novels
Past Doctor Adventures
Seventh Doctor novels
Novels by Lloyd Rose
Novels set in Kent
|
https://en.wikipedia.org/wiki/Axiom%20of%20global%20choice
|
In mathematics, specifically in class theories, the axiom of global choice is a stronger variant of the axiom of choice that applies to proper classes of sets as well as sets of sets. Informally it states that one can simultaneously choose an element from every non-empty set.
Statement
The axiom of global choice states that there is a global choice function τ, meaning a function such that for every non-empty set z, τ(z) is an element of z.
The axiom of global choice cannot be stated directly in the language of Zermelo–Fraenkel set theory (ZF) with the axiom of choice (AC), known as ZFC, as the choice function τ is a proper class and in ZFC one cannot quantify over classes. It can be stated by adding a new function symbol τ to the language of ZFC, with the property that τ is a global choice function. This is a conservative extension of ZFC: every provable statement of this extended theory that can be stated in the language of ZFC is already provable in ZFC. Alternatively, Gödel showed that given the axiom of constructibility one can write down an explicit (though somewhat complicated) choice function τ in the language of ZFC, so in some sense the axiom of constructibility implies global choice (in fact, ZFC proves that, in the language extended by the unary function symbol τ, the axiom of constructibility implies that if τ is that explicitly definable function, then τ is a global choice function; global choice then holds, with τ as a witness).
In the language of von Neumann–Bernays–Gödel set theory (NBG) and Morse–Kelley set theory, the axiom of global choice can be stated directly, and is equivalent to various other statements:
Every class of nonempty sets has a choice function.
V \ {∅} has a choice function (where V is the class of all sets).
There is a well-ordering of V.
There is a bijection between V and the class of all ordinal numbers.
In von Neumann–Bernays–Gödel set theory, global choice does not add any consequence about sets (not proper classes) beyond what could have been deduced from the ordinary axiom of choice.
Global choice is a consequence of the axiom of limitation of size.
References
Jech, Thomas, 2003. Set Theory: The Third Millennium Edition, Revised and Expanded. Springer. .
John L. Kelley; General Topology;
Axioms of set theory
Axiom of choice
|
https://en.wikipedia.org/wiki/Axiom%20of%20limitation%20of%20size
|
In set theory, the axiom of limitation of size was proposed by John von Neumann in his 1925 axiom system for sets and classes. It formalizes the limitation of size principle, which avoids the paradoxes encountered in earlier formulations of set theory by recognizing that some classes are too big to be sets. Von Neumann realized that the paradoxes are caused by permitting these big classes to be members of a class. A class that is a member of a class is a set; a class that is not a set is a proper class. Every class is a subclass of V, the class of all sets. The axiom of limitation of size says that a class is a set if and only if it is smaller than V—that is, there is no function mapping it onto V. Usually, this axiom is stated in the equivalent form: A class is a proper class if and only if there is a function that maps it onto V.
Von Neumann's axiom implies the axioms of replacement, separation, union, and global choice. It is equivalent to the combination of replacement, union, and global choice in Von Neumann–Bernays–Gödel set theory (NBG) and Morse–Kelley set theory. Later expositions of class theories—such as those of Paul Bernays, Kurt Gödel, and John L. Kelley—use replacement, union, and a choice axiom equivalent to global choice rather than von Neumann's axiom. In 1930, Ernst Zermelo defined models of set theory satisfying the axiom of limitation of size.
Abraham Fraenkel and Azriel Lévy have stated that the axiom of limitation of size does not capture all of the "limitation of size doctrine" because it does not imply the power set axiom. Michael Hallett has argued that the limitation of size doctrine does not justify the power set axiom and that "von Neumann's explicit assumption [of the smallness of power-sets] seems preferable to Zermelo's, Fraenkel's, and Lévy's obscurely hidden implicit assumption of the smallness of power-sets."
Formal statement
The usual version of the axiom of limitation of size (a class is a proper class if and only if there is a function that maps it onto V) can be expressed in the formal language of set theory.
Gödel introduced the convention that uppercase variables range over all the classes, while lowercase variables range over all the sets. This convention allows us to write
∃x φ(x) instead of ∃X (X is a set ∧ φ(X)), and
∀x φ(x) instead of ∀X (X is a set → φ(X)).
With Gödel's convention, the axiom of limitation of size can be written: for every class C, C is a proper class if and only if there is a function mapping C onto V.
Implications of the axiom
Von Neumann proved that the axiom of limitation of size implies the axiom of replacement, which can be expressed as: If F is a function and A is a set, then F(A) is a set. This is proved by contradiction. Let F be a function and A be a set. Assume that F(A) is a proper class. Then there is a function G that maps F(A) onto V. Since the composite function G ∘ F maps A onto V, the axiom of limitation of size implies that A is a proper class, which contradicts A being a set. Therefore, F(A) is a set. Since the axiom of replacement implies the axiom of separation, the axiom of limitation of size implies the a
|
https://en.wikipedia.org/wiki/David%20Cain%20%28composer%29
|
David Cain was a composer and technician for the BBC Radiophonic Workshop. He was educated at Imperial College London, where he earned a degree in mathematics. In 1963, he joined the BBC as a studio manager, specialising in radio drama. He transferred to the Radiophonic Workshop in 1967 where he composed various jingles and signature tunes as well as the complete incidental music for the BBC's radio productions of The War of the Worlds in 1967, and The Hobbit in 1968. He also produced the Workshop's 1973 adaptation of Isaac Asimov's Foundation series. He remained with the Radiophonic Workshop until 1973. His 30-second composition "Crossbeat" was used as the original theme for the Australian Broadcasting Corporation's morning radio current affairs program AM, which premiered in 1967.
See also
Neasden#BBC Radiophonic Workshop
References
David Cain bio at Ether.net BBC Radiophonic Workshop album review
1941 births
Living people
Alumni of Imperial College London
BBC Radiophonic Workshop
British electronic musicians
Musicians from Stoke-on-Trent
|
https://en.wikipedia.org/wiki/Georg%20Nees
|
Georg Nees (23 June 1926 – 3 January 2016) was a German academic who was a pioneer of computer art and generative graphics. He studied mathematics, physics and philosophy in Erlangen and Stuttgart and was scientific advisor at SEMIOSIS, an international journal of semiotics and aesthetics. In 1977, he was appointed Honorary Professor of Applied Computer Science at the University of Erlangen. Nees is one of the "3N" computer pioneers, an abbreviation that has become acknowledged for Frieder Nake, Georg Nees and A. Michael Noll, whose computer graphics were created with digital computers.
Early life and studies
Georg Nees was born in 1926 in Nuremberg, where he spent his childhood. He showed scientific curiosity and interest in art from a young age and among his favorite pastimes were viewing art postcards and looking through a microscope. He attended a school in Schwabach near Nuremberg, graduating in 1945. From 1945 to 1951, he studied mathematics and physics at the University of Erlangen then worked as an industry mathematician for the Siemens Schuckertwerk in Erlangen from 1951 to 1985. There he started to write his first programs in 1959. The company was later incorporated into the Siemens AG.
From 1964 onwards, he studied philosophy at the Technische Hochschule Stuttgart (since 1967 the University of Stuttgart), under Max Bense. He received his doctorate with his thesis on Generative Computergraphik under Max Bense in 1969. His work is considered one of the first theses on Generative Computer Graphics. In 1969, his thesis was published as a book entitled "Generative Computergraphik" and also included examples of program code and graphics produced thereby. After his retirement in 1985 Nees worked as an author and in the field of computer art.
Computer art
In February 1965, Nees showed - as works of art - the world's first computer graphics created with a digital computer. The exhibition, titled computer graphik took place at the public premises of the "Study Gallery of Stuttgart College". In 1966, he started to work on "computer-sculptures". In the catalog of the Biennale 1969 Nuremberg, Nees describes how the computer program controlled the milling machine so that instead of a workpiece, a sculpture was created. Three painted wooden sculptures and several graphics were shown at the Biennale 1969 Nuremberg. In 1970 at the 35th Venice Biennale his work was part of the special exhibition "Research and Design. Proposals for an experimental exposure" and showcased his sculptures and graphics of art and architectural design.
In 1963, Nees was instrumental in the purchase of a flatbed plotter, the Zuse Graphomat Z64 designed by Konrad Zuse, for the data center at the Schuckertwerke in Erlangen. At the exhibition Georg Nees – The Great Temptation at the ZKM Nees said: ″There it was, the great temptation for me, for once not to represent something technical with this machine but rather something ‘useless’ – geometrical patterns.″
Using the ALG
|
https://en.wikipedia.org/wiki/Partition%20topology
|
In mathematics, the partition topology is a topology that can be induced on any set X by partitioning X into disjoint subsets P; these subsets form the basis for the topology. There are two important examples which have their own names:
The odd–even topology is the topology where X = {1, 2, 3, 4, …} and P = {{2k − 1, 2k} : k = 1, 2, 3, …}. Equivalently, P = {{1, 2}, {3, 4}, {5, 6}, …}.
The deleted integer topology is defined by letting X = ⋃_{k} (k − 1, k) ⊂ ℝ and P = {(k − 1, k) : k = 1, 2, 3, …}.
The trivial partitions yield the discrete topology (each point of X is a set in P, so P = {{x} : x ∈ X}) or the indiscrete topology (the entire set X is in P, so P = {X}).
Any set X with a partition topology generated by a partition P can be viewed as a pseudometric space with a pseudometric given by: d(x, y) = 0 if x and y lie in the same element of P, and d(x, y) = 1 otherwise.
This is not a metric unless P yields the discrete topology.
The partition topology provides an important example of the independence of various separation axioms. Unless the partition is trivial, at least one set in P contains more than one point, and the elements of this set are topologically indistinguishable: the topology does not separate points. Hence X is not a Kolmogorov space, nor a T1 space, a Hausdorff space or an Urysohn space. In a partition topology the complement of every open set is also open, and therefore a set is open if and only if it is closed. Therefore, X is regular, completely regular, normal and completely normal. The quotient space X/P carries the discrete topology.
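These facts are easy to check by brute force on a finite set. The sketch below (illustrative; the set and partition are arbitrary choices made here) enumerates the open sets of a partition topology, which are exactly the unions of partition blocks, and confirms that every open set is also closed:

```python
from itertools import chain, combinations

# A 6-element set partitioned into 3 blocks (arbitrary example).
X = frozenset(range(6))
partition = [frozenset({0, 1}), frozenset({2, 3, 4}), frozenset({5})]

# Open sets are exactly the unions of partition blocks.
opens = set()
for r in range(len(partition) + 1):
    for blocks in combinations(partition, r):
        opens.add(frozenset(chain.from_iterable(blocks)))

assert frozenset() in opens and X in opens
for U in opens:
    assert (X - U) in opens  # a set is open iff it is closed
print(len(opens))  # 2**3 = 8 open sets for a 3-block partition
```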
See also
References
Topological spaces
|
https://en.wikipedia.org/wiki/Highfields%2C%20Queensland
|
Highfields is a small town in the Toowoomba Region, Queensland, Australia. In 2022 the Australian Bureau of Statistics estimated the resident population of the Highfields region was 15,478.
Geography
Highfields is situated on the Great Dividing Range, slightly north of Mount Kynoch. It is on the New England Highway. It serves as a satellite town to the city of Toowoomba, accommodating many of Toowoomba businesses' employees. The Australian Bureau of Statistics also defines a larger growth area, named Highfields, that includes the suburb and several of those surrounding.
Climate
Along with Meringandan, the climate is oceanic (Köppen: Cfb) due to elevation, a climate type usually found further south in Australia.
History
The area probably takes its name from the Highfields pastoral run, north of the township. The area was first developed in the 1860s. Initially, there were a number of sawmills in the area, harvesting the local timber. Then the construction of the railway line between Ipswich and Toowoomba (completed in 1867) brought railway workers to the district. As the timber-getters cleared the land, dairy farms were established. The first post office opened briefly in 1866 with a weekly mail service from Toowoomba. It re-opened in 1868 and changed its name in December 1877 to Koojarewon.
The Highfields School opened on 17 January 1870 in the Rising Sun Hotel under teacher Mr Larkin. The first school building was constructed in the 1880s. In 1906, the school was renamed Koojarewon.
In 1879, a Baptist Church opened in Highfields. On Sunday 22 November 1908, the church was reopened following a major reconstruction.
In 1907, the protests of residents resulted in both the school and the post office returning to the name Highfields. Another post office in the Highfields area is now the Geham Post Office.
View Glen State School opened on Highfields Road on 25 May 1914. It closed in 1924.
Coming into the 1960s, Highfields remained a rural community with, at one stage, only 9 children enrolled in the Highfields State School. However, residential subdivision started to occur in the 1960s, to a point where it is now considered a satellite town of Toowoomba. As at 2014, the school was one of the largest primary schools in the Toowoomba and Darling Downs region.
Toowoomba Christian College opened on 30 January 1979.
Mary MacKillop Catholic School opened on 26 January 2003 as a primary school. In 2015 it was renamed Mary MacKillop Catholic College to reflect its expansion to secondary schooling.
The Cabarlah Community School opened in Wirraglen Road, Highfields, in January 2006. It used the Reggio Emilia teaching philosophy. In March 2008 it was closed when the Queensland Government's Non-State Schools Accreditation Board refused to accredit the school, claiming it did not meet the requirements of the Education (Accreditation of Non-State Schools) Act 2001. Although the school appealed the decision, the Queensland Education Minister, Rod Welford, uphe
|
https://en.wikipedia.org/wiki/Particular%20point%20topology
|
In mathematics, the particular point topology (or included point topology) is a topology where a set is open if it contains a particular point of the topological space. Formally, let X be any non-empty set and p ∈ X. The collection
T = {S ⊆ X : p ∈ S} ∪ {∅}
of subsets of X is the particular point topology on X. There are a variety of cases that are individually named:
If X has two points, the particular point topology on X is the Sierpiński space.
If X is finite (with at least 3 points), the topology on X is called the finite particular point topology.
If X is countably infinite, the topology on X is called the countable particular point topology.
If X is uncountable, the topology on X is called the uncountable particular point topology.
A generalization of the particular point topology is the closed extension topology. In the case when X \ {p} has the discrete topology, the closed extension topology is the same as the particular point topology.
This topology is used to provide interesting examples and counterexamples.
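The definition is easy to check computationally in the finite case. The following Python sketch (not part of the article; the helper names are illustrative) builds the particular point topology on a small set and verifies the open-set axioms:

```python
from itertools import chain, combinations

def particular_point_topology(X, p):
    """All subsets of X that contain p, together with the empty set."""
    assert p in X
    elems = sorted(X)
    all_subsets = chain.from_iterable(
        combinations(elems, r) for r in range(len(elems) + 1))
    return {frozenset(s) for s in all_subsets if p in s or len(s) == 0}

def is_topology(X, T):
    """Verify the open-set axioms for a finite collection T of subsets of X.
    In the finite case, closure under pairwise unions/intersections suffices."""
    if frozenset() not in T or frozenset(X) not in T:
        return False
    return all(a | b in T and a & b in T for a in T for b in T)

sierpinski = particular_point_topology({0, 1}, p=1)   # the Sierpinski space
finite_ppt = particular_point_topology({1, 2, 3, 4}, p=1)
```

On a set with n points, the topology has 2^(n−1) + 1 members: the 2^(n−1) subsets containing p, plus the empty set.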
Properties
Closed sets have empty interior
Given a nonempty open set A, every x ≠ p is a limit point of A, since every open set containing x also contains p ∈ A. So the closure of any open set other than ∅ is X. No closed set other than X contains p, so the interior of every closed set other than X is ∅.
Connectedness Properties
Path and locally connected but not arc connected
For any x, y ∈ X, the function f: [0, 1] → X given by
f(0) = x, f(1) = y, and f(t) = p for t ∈ (0, 1)
is a path: every nonempty open set contains p, so its preimage under f is one of (0, 1), [0, 1), (0, 1], or [0, 1], all open in [0, 1]. However, since {p} is open, the preimage of {p} under a continuous injection from [0, 1] would be an open single point of [0, 1], which is a contradiction.
Dispersion point, example of a set with a dispersion point
p is a dispersion point for X. That is, X \ {p} is totally disconnected.
Hyperconnected but not ultraconnected
Every non-empty open set contains p, and hence X is hyperconnected. But if a and b are in X such that p, a, and b are three distinct points, then {a} and {b} are disjoint closed sets and thus X is not ultraconnected. Note that if X is the Sierpiński space then no such a and b exist and X is in fact ultraconnected.
Compactness Properties
Compact only if finite. Lindelöf only if countable.
If X is finite, it is compact; and if X is infinite, it is not compact, since the family of all open sets forms an open cover with no finite subcover.
For similar reasons, if X is countable, it is a Lindelöf space; and if X is uncountable, it is not Lindelöf.
Closure of compact not compact
The set {p} is compact. However its closure (the closure of a compact set) is the entire space X, and if X is infinite this is not compact. For similar reasons if X is uncountable then we have an example where the closure of a compact set is not a Lindelöf space.
Pseudocompact but not weakly countably compact
First there are no disjoint non-empty open sets (since all open sets contain p). Hence every continuous function to the real line must be constant, and hence bounded, proving that X is a pseudocompact space. Any set not containing p does not have a limit point thus if X if
|
https://en.wikipedia.org/wiki/E7%C2%BD
|
{{DISPLAYTITLE:E7½}}
In mathematics, the Lie algebra E7½ is a subalgebra of E8 containing E7 defined by Landsberg and Manivel in order
to fill the "hole" in a dimension formula for the exceptional series En of simple Lie algebras. This hole was observed by Cvitanovic, Deligne, Cohen and de Man. E7½ has dimension 190, and is not simple: as a representation of its subalgebra E7, it splits as E7 ⊕ (56) ⊕ ℝ, where (56) is the 56-dimensional irreducible representation of E7. This representation has an invariant symplectic form, and this symplectic form equips (56) ⊕ ℝ with the structure of a Heisenberg algebra; this Heisenberg algebra is the nilradical in E7½.
See also
Vogel plane
References
A.M. Cohen, R. de Man, "Computational evidence for Deligne's conjecture regarding exceptional Lie groups", Comptes rendus de l'Académie des Sciences, Série I 322 (1996) 427–432.
P. Deligne, "La série exceptionnelle de groupes de Lie", Comptes rendus de l'Académie des Sciences, Série I 322 (1996) 321–326.
P. Deligne, R. de Man, "La série exceptionnelle de groupes de Lie II", Comptes rendus de l'Académie des Sciences, Série I 323 (1996) 577–582.
Lie groups
|
https://en.wikipedia.org/wiki/FSU%20Young%20Scholars%20Program
|
FSU Young Scholars Program (YSP) is a six-week residential science and mathematics summer program for 40 high school students from Florida, USA, with significant potential for careers in the fields of science, technology, engineering and mathematics. The program was developed in 1983 and is currently administered by the Office of Science Teaching Activities in the College of Arts and Sciences at Florida State University (FSU).
Academic program
Each young scholar attends three courses in the fields of mathematics, science and computer programming. The courses are designed specifically for this program — they are neither high school nor college courses.
Research
Each student who attends YSP is assigned an independent research project (IRP) based on his or her interests. Students join the research teams of FSU professors, participating in scientific research for two days each week. The fields of study available include robotics, molecular biology, chemistry, geology, physics and zoology. At the conclusion of the program, students present their projects in an academic conference, documenting their findings and explaining their projects to both students and faculty.
Selection process
YSP admits students who have completed the eleventh grade in a Florida public or private high school. A few exceptionally qualified and mature tenth graders have been selected in past years, though this is quite rare.
All applicants must have completed pre-calculus and maintain at least a 3.0 unweighted GPA to be considered for acceptance. Additionally, students must have scored at the 90th percentile or better in science or mathematics on a nationally standardized exam, such as the SAT, PSAT, ACT or PLAN. Students are required to submit an application package, including high school transcripts and a letter of recommendation.
Selection is extremely competitive, as there are typically over 200 highly qualified applicants competing for only 40 positions. The majority of past participants graduated in the top ten of their respective high school classes, with over 25% of students entering their senior year ranked first in their class. The average PSAT score of past young scholars was in the 97th percentile in mathematics and the 94th percentile in critical reading nationally.
References
External links
YSP home page
Florida State University
Summer camps in Florida
Mathematics summer camps
Science education in the United States
1983 establishments in Florida
|
https://en.wikipedia.org/wiki/Gradient%20theorem
|
The gradient theorem, also known as the fundamental theorem of calculus for line integrals, says that a line integral through a gradient field can be evaluated by evaluating the original scalar field at the endpoints of the curve. The theorem is a generalization of the second fundamental theorem of calculus to any curve in a plane or space (generally n-dimensional) rather than just the real line.
For φ : U ⊆ ℝⁿ → ℝ a differentiable function and γ any continuous curve in U which starts at a point p and ends at a point q, then
∫_γ ∇φ(r) · dr = φ(q) − φ(p),
where ∇φ denotes the gradient vector field of φ.
The gradient theorem implies that line integrals through gradient fields are path-independent. In physics this theorem is one of the ways of defining a conservative force. By placing φ as potential, ∇φ is a conservative field. Work done by conservative forces does not depend on the path followed by the object, but only on the end points, as the above equation shows.
The gradient theorem also has an interesting converse: any path-independent vector field can be expressed as the gradient of a scalar field. Just like the gradient theorem itself, this converse has many striking consequences and applications in both pure and applied mathematics.
Proof
If φ is a differentiable function from some open subset U ⊆ ℝⁿ to ℝ, and r is a differentiable function from some closed interval [a, b] to U (note that r is differentiable at the interval endpoints a and b; to do this, r is defined on an interval that is larger than and includes [a, b]), then by the multivariate chain rule, the composite function φ ∘ r is differentiable on [a, b]:
d/dt φ(r(t)) = ∇φ(r(t)) · r′(t)
for all t in [a, b]. Here the · denotes the usual inner product.
Now suppose the domain D of φ contains the differentiable curve γ with endpoints p and q (oriented in the direction from p to q). If r parametrizes γ for t in [a, b] (i.e., r represents γ as a function of t), then
∫_γ ∇φ(u) · du = ∫_a^b ∇φ(r(t)) · r′(t) dt = ∫_a^b (d/dt) φ(r(t)) dt = φ(r(b)) − φ(r(a)) = φ(q) − φ(p),
where the definition of a line integral is used in the first equality, the above equation is used in the second equality, and the second fundamental theorem of calculus is used in the third equality.
Although the gradient theorem (also called the fundamental theorem of calculus for line integrals) has so far been proved for a differentiable (smooth) curve, the theorem also holds for a piecewise-smooth curve: such a curve is made by joining multiple differentiable curves, so the proof applies to each differentiable component.
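The theorem can also be checked numerically. The Python sketch below (illustrative; the scalar field φ, its hand-computed gradient, and the quarter-circle path are choices of this example, not from the article) approximates the line integral of a gradient field and compares it with the difference of endpoint values:

```python
import math

# a chosen scalar field and its gradient, computed by hand
phi = lambda x, y: x * x * y + math.sin(y)
grad_phi = lambda x, y: (2 * x * y, x * x + math.cos(y))

def line_integral(field, path, a, b, n=20000):
    """Midpoint-rule approximation of the line integral of `field` along
    r = path(t), t in [a, b], using a central difference for r'(t)."""
    dt = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * dt
        x0, y0 = path(t - 1e-6)
        x1, y1 = path(t + 1e-6)
        dx, dy = (x1 - x0) / 2e-6, (y1 - y0) / 2e-6
        fx, fy = field(*path(t))
        total += (fx * dx + fy * dy) * dt
    return total

# quarter circle from (1, 0) to (0, 1)
path = lambda t: (math.cos(t), math.sin(t))
integral = line_integral(grad_phi, path, 0.0, math.pi / 2)
endpoints = phi(0.0, 1.0) - phi(1.0, 0.0)   # the theorem says these agree
```

The numerical integral and the endpoint difference φ(q) − φ(p) agree to within discretization error, as the theorem predicts.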
Examples
Example 1
Suppose is the circular arc oriented counterclockwise from to . Using the definition of a line integral,
This result can be obtained much more simply by noticing that the function has gradient , so by the Gradient Theorem:
Example 2
For a more abstract example, suppose has endpoints , , with orientation from to . For in , let denote the Euclidean norm of . If is a real number, then
Here the final equality follows by the gradient theorem, since the function is differentiable on if .
If then this equality will still hold in most cases, but caution must
|
https://en.wikipedia.org/wiki/Daniel%20Kastler
|
Daniel Kastler (4 March 1926 – 8 July 2015) was a French theoretical physicist, working on the foundations of quantum field theory and on non-commutative geometry.
Biography
Daniel Kastler was born on March 4, 1926, in Colmar, a city of north-eastern France. He was the son of the Physics Nobel Prize laureate Alfred Kastler. In 1946 he enrolled at the École Normale Supérieure in Paris. In 1950 he moved to Germany and became lecturer at the Saarland University. In 1953, he was promoted to associate professor and obtained a doctorate in quantum chemistry. In 1957 Kastler moved to the University of Aix-Marseille and became a full professor in 1959. In 1968 he founded, together with Jean-Marie Souriau and Andrea Visconti, the Center of Theoretical Physics in Marseille. Daniel Kastler died on July 8, 2015, in Bandol, in southern France.
Daniel Kastler is known in particular for his work with Rudolf Haag on the foundation of the algebraic approach to quantum field theory. Their collaboration started at the famous Lille Conference in 1957, where both were present, and culminated in the Haag–Kastler axioms for local observables of quantum field theories. This framework uses elements of the theory of operator algebras and is therefore referred to as algebraic quantum field theory or, from the physical point of view, as local quantum physics. In other collaborations, Kastler showed the importance of C*-algebras in the foundations of quantum statistical mechanics and in abelian asymptotic systems. In the 1980s he started working on Alain Connes' non-commutative geometry, especially studying the applications in elementary particle physics. In the same period Kastler, in collaboration with Raymond Stora, developed the geometrical setting for the BRST transformations for the quantization of gauge theories.
Honors and awards
In 1984 Daniel Kastler was awarded the Prix Ampère of the French Academy of Sciences. Since 1977 he was a corresponding member of the Göttingen Academy of Sciences and since 1981 of the Austrian Academy of Sciences. Since 1995 he was a member of the German National Academy of Sciences Leopoldina.
Selected publications
See also
Axiomatic quantum field theory
Hilbert's sixth problem
Kadison–Kastler metric
Local quantum physics
Non-commutative geometry
Quantum field theory
References
Further reading
External links
.
.
Kastler's genealogy at kastler.net
20th-century French physicists
21st-century French physicists
Theoretical physicists
Members of the Austrian Academy of Sciences
Members of the German National Academy of Sciences Leopoldina
1926 births
2015 deaths
|
https://en.wikipedia.org/wiki/En%20%28Lie%20algebra%29
|
{{DISPLAYTITLE:En (Lie algebra)}}
In mathematics, especially in Lie theory, En is the Kac–Moody algebra whose Dynkin diagram is a bifurcating graph with three branches of length 1, 2 and k, with k = n − 4.
In some older books and papers, E2 and E4 are used as names for G2 and F4.
Finite-dimensional Lie algebras
The En group is similar to the An group, except that the nth node is connected to the 3rd node. So the Cartan matrix appears similar: it has −1 just above and below the diagonal, except in the last row and column, which instead have −1 in the third column and row. The determinant of the Cartan matrix for En is 9 − n.
E3 is another name for the Lie algebra A1A2 of dimension 11, with Cartan determinant 6.
E4 is another name for the Lie algebra A4 of dimension 24, with Cartan determinant 5.
E5 is another name for the Lie algebra D5 of dimension 45, with Cartan determinant 4.
E6 is the exceptional Lie algebra of dimension 78, with Cartan determinant 3.
E7 is the exceptional Lie algebra of dimension 133, with Cartan determinant 2.
E8 is the exceptional Lie algebra of dimension 248, with Cartan determinant 1.
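The determinant formula can be verified directly from the diagram description. The Python sketch below (not from the article; function names are illustrative) builds the En Cartan matrix for n = 4, ..., 8 as an A-type chain with the last node attached to the third node, and computes its exact determinant:

```python
from fractions import Fraction

def cartan_matrix_En(n):
    """Cartan matrix of En for n >= 4: an A-type chain on nodes 1..n-1,
    with the extra node n attached to node 3 (0-indexed: node n-1 to node 2)."""
    A = [[2 if i == j else 0 for j in range(n)] for i in range(n)]
    edges = [(i, i + 1) for i in range(n - 2)] + [(2, n - 1)]
    for i, j in edges:
        A[i][j] = A[j][i] = -1
    return A

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for c in range(n):
        pivot = next((r for r in range(c, n) if M[r][c] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [M[r][k] - f * M[c][k] for k in range(n)]
    return d

dets = {n: det(cartan_matrix_En(n)) for n in range(4, 9)}
```

For n = 4 and n = 5 the construction reproduces the A4 and D5 diagrams (determinants 5 and 4), and for n = 6, 7, 8 the exceptional values 3, 2, 1, matching 9 − n throughout.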
Infinite-dimensional Lie algebras
E9 is another name for the infinite-dimensional affine Lie algebra Ê8 (also written E8+ or E8(1)), the (one-node) extended E8, corresponding to the Lie algebra of type E8. E9 has a Cartan matrix with determinant 0.
E10 (or E8++ or E8(1)^ as a (two-node) over-extended E8) is an infinite-dimensional Kac–Moody algebra whose root lattice is the even Lorentzian unimodular lattice II9,1 of dimension 10. Some of its root multiplicities have been calculated; for small roots the multiplicities seem to be well behaved, but for larger roots the observed patterns break down. E10 has a Cartan matrix with determinant −1.
E11 (or E8+++ as a (three-node) very-extended E8) is a Lorentzian algebra, containing one time-like imaginary dimension, that has been conjectured to generate the symmetry "group" of M-theory.
En for n≥12 is an infinite-dimensional Kac–Moody algebra that has not been studied much.
Root lattice
The root lattice of En has determinant 9 − n, and can be constructed as the lattice of vectors in the unimodular Lorentzian lattice Zn,1 that are orthogonal to the vector (1,1,1,1,...,1|3) of norm n × 1² − 3² = n − 9.
E7½
Landsberg and Manivel extended the definition of En for integer n to include the case n = 7½. They did this in order to fill the "hole" in dimension formulae for representations of the En series which was observed by Cvitanovic, Deligne, Cohen and de Man. E7½ has dimension 190, but is not a simple Lie algebra: it contains a 57-dimensional Heisenberg algebra as its nilradical.
See also
k21, 2k1, 1k2 polytopes based on En Lie algebras.
References
Further reading
Class. Quantum Grav. 18 (2001) 4443-4460
Guersey Memorial Conference Proceedings '94
Connections between Kac-Moody algebras and M-theory, Paul P. Cook, 2006
A class of Lorentzian Kac-Moody algebras, Matthias R. Gaberdiel, David I.
|
https://en.wikipedia.org/wiki/Proportional%20hazards%20model
|
Proportional hazards models are a class of survival models in statistics. Survival models relate the time that passes before some event occurs to one or more covariates that may be associated with that quantity of time. In a proportional hazards model, the unique effect of a unit increase in a covariate is multiplicative with respect to the hazard rate. For example, taking a drug may halve one's hazard rate for a stroke occurring, or changing the material from which a manufactured component is constructed may double its hazard rate for failure. Other types of survival models, such as accelerated failure time models, do not exhibit proportional hazards: the accelerated failure time model describes a situation where the biological or mechanical life history of an event is accelerated (or decelerated).
Background
Survival models can be viewed as consisting of two parts: the underlying baseline hazard function, often denoted , describing how the risk of event per time unit changes over time at baseline levels of covariates; and the effect parameters, describing how the hazard varies in response to explanatory covariates. A typical medical example would include covariates such as treatment assignment, as well as patient characteristics such as age at start of study, gender, and the presence of other diseases at start of study, in order to reduce variability and/or control for confounding.
The proportional hazards condition states that covariates are multiplicatively related to the hazard. In the simplest case of stationary coefficients, for example, a treatment with a drug may, say, halve a subject's hazard at any given time , while the baseline hazard may vary. Note however, that this does not double the lifetime of the subject; the precise effect of the covariates on the lifetime depends on the type of . The covariate is not restricted to binary predictors; in the case of a continuous covariate , it is typically assumed that the hazard responds exponentially; each unit increase in results in proportional scaling of the hazard.
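The proportional hazards condition means the hazard ratio between two covariate values is constant in time, even when the baseline hazard itself varies. A small Python sketch (illustrative; the baseline function and coefficient are arbitrary choices, not from the article) makes this concrete:

```python
import math

def cox_hazard(t, x, baseline, beta):
    """Cox-form hazard: the baseline hazard at time t, scaled by exp(beta . x)."""
    return baseline(t) * math.exp(sum(b * xi for b, xi in zip(beta, x)))

baseline = lambda t: 0.1 + 0.05 * t     # arbitrary, time-varying baseline hazard
beta = [math.log(0.5)]                  # the covariate (e.g. treatment) halves the hazard

# hazard ratio treated vs. untreated, at several times
ratios = [
    cox_hazard(t, [1], baseline, beta) / cox_hazard(t, [0], baseline, beta)
    for t in (0.5, 1.0, 2.0, 5.0)
]
```

Every ratio equals exp(β) = 0.5 regardless of t: the baseline hazard cancels, which is exactly the proportional hazards property.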
The Cox model
Introduction
Sir David Cox observed that if the proportional hazards assumption holds (or, is assumed to hold) then it is possible to estimate the effect parameter(s), denoted below, without any consideration of the full hazard function. This approach to survival data is called application of the Cox proportional hazards model, sometimes abbreviated to Cox model or to proportional hazards model. However, Cox also noted that biological interpretation of the proportional hazards assumption can be quite tricky.
Let Xi = (Xi1, ..., Xip) be the realized values of the p covariates for subject i. The hazard function for the Cox proportional hazards model has the form
λ(t | Xi) = λ0(t) exp(β1Xi1 + ... + βpXip) = λ0(t) exp(Xi · β).
This expression gives the hazard function at time t for subject i with covariate vector (explanatory variables) Xi. Note that between subjects, the baseline hazard is identical (has no dependency on i). The only difference between subjects' hazards comes fro
|
https://en.wikipedia.org/wiki/Harish-Chandra%20isomorphism
|
In mathematics, the Harish-Chandra isomorphism, introduced by Harish-Chandra,
is an isomorphism of commutative rings constructed in the theory of Lie algebras. The isomorphism maps the center Z(U(𝔤)) of the universal enveloping algebra U(𝔤) of a reductive Lie algebra 𝔤 to the elements S(𝔥)^W of the symmetric algebra S(𝔥) of a Cartan subalgebra 𝔥 that are invariant under the Weyl group W.
Introduction and setting
Let 𝔤 be a semisimple Lie algebra and 𝔥 its Cartan subalgebra. Let λ, μ ∈ 𝔥* be two elements of the weight space (where 𝔥* is the dual of 𝔥) and assume that a set of positive roots Δ+ has been fixed. Let Mλ and Mμ be highest weight modules with highest weights λ and μ respectively.
Central characters
The 𝔤-modules Mλ and Mμ are representations of the universal enveloping algebra U(𝔤) and its center Z(U(𝔤)) acts on the modules by scalar multiplication (this follows from the fact that the modules are generated by a highest weight vector). So, for v ∈ Mλ and z ∈ Z(U(𝔤)),
z · v = χλ(z) v,
and similarly for Mμ, where the functions χλ, χμ are homomorphisms from Z(U(𝔤)) to scalars called central characters.
Statement of Harish-Chandra theorem
For any λ, μ ∈ 𝔥*, the characters χλ = χμ if and only if λ + ρ and μ + ρ are on the same orbit of the Weyl group of 𝔤, where ρ is the half-sum of the positive roots, sometimes known as the Weyl vector.
Another closely related formulation is that the Harish-Chandra homomorphism from the center Z(U(𝔤)) of the universal enveloping algebra to S(𝔥)^W (the elements of the symmetric algebra of the Cartan subalgebra fixed by the Weyl group) is an isomorphism.
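The theorem can be made concrete in the smallest case 𝔤 = sl2, where the center of U(𝔤) is generated by the Casimir element. The Python sketch below (illustrative; not from the article) checks numerically that λ and μ = −λ − 2 (i.e. λ + ρ and −(μ + ρ) on the same Weyl orbit, since ρ = 1 and W acts by sign change) give the same central character:

```python
def central_character(lam):
    """Eigenvalue of the Casimir Omega = ef + fe + h^2/2 on a highest-weight
    sl2-module of highest weight lam. On the highest weight vector v,
    e v = 0 and h v = lam v give Omega v = (h + h^2/2) v = (lam + lam^2/2) v."""
    return lam * lam / 2.0 + lam

# For sl2: rho = 1, and the Weyl group sends lam -> -lam, so lam and
# mu = -lam - 2 satisfy lam + rho = -(mu + rho): the same Weyl orbit.
checks = []
for lam in (0.0, 1.0, 2.5, -4.0):
    mu = -lam - 2.0
    checks.append(abs(central_character(lam) - central_character(mu)))
```

Conversely, weights whose ρ-shifts lie on different orbits (e.g. λ = 1 and μ = 2) have different central characters, in line with the theorem.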
Explicit isomorphism
More explicitly, the isomorphism can be constructed as the composition of two maps, one from Z(U(𝔤)) to U(𝔥) = S(𝔥) and another from S(𝔥) to itself.
The first is a projection γ : U(𝔤) → U(𝔥). For a choice of positive roots Δ+, defining
𝔫₊ = ⊕_{α ∈ Δ+} 𝔤_α,  𝔫₋ = ⊕_{α ∈ Δ+} 𝔤_{−α}
as the corresponding positive nilpotent subalgebra and negative nilpotent subalgebra respectively, due to the Poincaré–Birkhoff–Witt theorem there is a decomposition
U(𝔤) = U(𝔥) ⊕ (𝔫₋ U(𝔤) + U(𝔤) 𝔫₊).
If z is central, then in fact
z ∈ U(𝔥) ⊕ (𝔫₋ U(𝔤) ∩ U(𝔤) 𝔫₊).
The restriction of the projection to the centre is γ : Z(U(𝔤)) → U(𝔥), and γ is a homomorphism of algebras. Since U(𝔥) = S(𝔥) consists of polynomial functions on 𝔥*, it is related to the central characters by
χλ(z) = γ(z)(λ).
The second map is the twist map τ : S(𝔥) → S(𝔥). On 𝔥, viewed as a subspace of S(𝔥), it is defined by the shift H ↦ H − ρ(H) with the Weyl vector ρ.
Then τ ∘ γ is the isomorphism. The reason this twist is introduced is that λ ↦ χλ is not actually Weyl-invariant, but it can be proven that the twisted character λ ↦ χ_{λ−ρ} is.
Applications
The theorem has been used to obtain a simple Lie algebraic proof of Weyl's character formula for finite-dimensional irreducible representations. The proof has been further simplified by Victor Kac, so that only the quadratic Casimir operator is required; there is a corresponding streamlined proof of the character formula in the second edition of .
Further, it is a necessary condition for the existence of a non-zero homomorphism of some highest weight modules (a homomorphism of such modules preserves central character). A simple consequence is that for Verma modules or generalized Verma modules with highest weight , there exist only finitely many weights for which a non-zero homomorphism exists
|
https://en.wikipedia.org/wiki/Differential%20variational%20inequality
|
In mathematics, a differential variational inequality (DVI) is a dynamical system that incorporates ordinary differential equations and variational inequalities or complementarity problems.
DVIs are useful for representing models involving both dynamics and inequality constraints. Examples of such problems include mechanical impact problems, electrical circuits with ideal diodes, Coulomb friction problems for contacting bodies, and dynamic economic and related problems such as dynamic traffic networks and networks of queues (where the constraints can either be upper limits on queue length or that the queue length cannot become negative). DVIs are related to a number of other concepts including differential inclusions, projected dynamical systems, evolutionary inequalities, and parabolic variational inequalities.
Differential variational inequalities were first formally introduced by Pang and Stewart, whose definition should not be confused with the differential variational inequality used in Aubin and Cellina (1984).
Differential variational inequalities have the form: find trajectories x(t) and u(t) such that
dx/dt = f(t, x(t), u(t)),  x(t0) = x0,
⟨v − u(t), F(t, x(t), u(t))⟩ ≥ 0
for every v ∈ K and almost all t, K a closed convex set, where f and F are given functions.
Closely associated with DVIs are dynamic/differential complementarity problems: if K is a closed convex cone, then the variational inequality is equivalent to the complementarity problem
K ∋ u(t) ⊥ F(t, x(t), u(t)) ∈ K*  for almost all t,
where K* is the dual cone of K.
Examples
Mechanical Contact
Consider a rigid ball of radius R falling from a height towards a table, and let x(t) be the height of the ball's centre at time t. Assume that the forces acting on the ball are gravitation and the contact forces of the table preventing penetration. Then the differential equation describing the motion is
m (d²x/dt²) = N(t) − m g,
where m is the mass of the ball, N(t) is the contact force of the table, and g is the gravitational acceleration. Note that both x and N are a priori unknown. While the ball and the table are separated, there is no contact force. There cannot be penetration (for a rigid ball and a rigid table), so x(t) ≥ R for all t. If x(t) > R then N(t) = 0. On the other hand, if x(t) = R, then N(t) can take on any non-negative value. (We do not allow N(t) < 0 as this corresponds to some kind of adhesive.) This can be summarized by the complementarity relationship
0 ≤ x(t) − R  ⊥  N(t) ≥ 0.
In the above formulation, we can set K = [0, ∞), the set of non-negative real numbers, so that its dual cone K* is also the set of non-negative real numbers; this is a differential complementarity problem.
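A standard way to solve such a system numerically is time stepping, enforcing the complementarity condition at each step. The Python sketch below (illustrative, not from the article) simulates the gap h between ball and table with fully inelastic contact: at each step the contact impulse is the smallest one keeping h ≥ 0:

```python
def simulate(h0=1.0, g=9.81, dt=1e-3, steps=2000):
    """Time-step the gap h(t) >= 0 with an inelastic complementarity contact:
    discretely, 0 <= h is maintained while the contact impulse stays >= 0
    and vanishes whenever the ball is off the table."""
    h, v = h0, 0.0
    for _ in range(steps):
        v -= g * dt                    # gravity acts on the velocity
        if h + v * dt <= 0.0:          # taking this step would violate h >= 0...
            v = max(v, 0.0)            # ...so a contact impulse cancels the approach speed
            h = 0.0                    # ball rests on the table
        else:
            h += v * dt                # free flight: no contact force
    return h, v
```

After the fall, the ball comes to rest exactly on the table (h = 0, v = 0); before contact the trajectory is ordinary free fall.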
Ideal diodes in electrical circuits
An ideal diode is a diode that conducts electricity in the forward direction with no resistance if a forward voltage is applied, but allows no current to flow in the reverse direction. If the reverse voltage is vR(t) ≥ 0 and the forward current is iF(t) ≥ 0, then there is a complementarity relationship between the two:
0 ≤ vR(t) ⊥ iF(t) ≥ 0
for all t. If the diode is in a circuit containing a memory element, such as a capacitor or inductor, then the circuit can be represented as a differential variational inequality.
for all . If the diode is in a circuit containing a memory element, such as a capacitor or inductor, then the circuit can be represented as a differential variational inequality.
Index
The concept of the index of a DVI is important and determines many questions of existence and uniqueness of solutions to a DVI.
|
https://en.wikipedia.org/wiki/Layer%20cake%20%28disambiguation%29
|
A layer cake is a pastry made from stacked layers of cake held together by filling.
Layer Cake or layer cake may also refer to:
In mathematics, the Layer cake representation is a representation of a function in terms of an integral of 'slices' of the function's area
Layer-cake federalism, a political arrangement in which power is divided between federal and state governments in clearly defined terms
Layer Cake (novel), a 2000 novel by J. J. Connolly
Layer Cake (film), a 2004 film based on the novel
Layer Cake, Soviet Sloika design for nuclear-weapon test Joe 4
Layer Cake, digital music imprint of Dreamlab (production team)
"Layer Cake", song by Kano (rapper) inspired by the film Layer Cake
|
https://en.wikipedia.org/wiki/Longest%20element%20of%20a%20Coxeter%20group
|
In mathematics, the longest element of a Coxeter group is the unique element of maximal length in a finite Coxeter group with respect to the chosen generating set consisting of simple reflections. It is often denoted by w0.
Properties
A Coxeter group has a longest element if and only if it is finite; "only if" is because the size of the group is bounded by the number of words of length less than or equal to the maximum.
The longest element of a Coxeter group is the unique maximal element with respect to the Bruhat order.
The longest element is an involution (it has order 2: w0² = 1), by uniqueness of maximal length (the inverse of an element has the same length as the element).
For any w ∈ W, the length satisfies ℓ(w0w) = ℓ(w0) − ℓ(w).
A reduced expression for the longest element is not in general unique.
In a reduced expression for the longest element, every simple reflection must occur at least once.
If the Coxeter group is finite then the length of w0 is the number of the positive roots.
The open cell Bw0B in the Bruhat decomposition of a semisimple algebraic group G is dense in the Zariski topology; topologically, it is the top-dimensional cell of the decomposition, and represents the fundamental class.
The longest element is the central element −1 except for An (n ≥ 2), Dn for n odd, E6, and I2(p) for p odd, when it is −1 multiplied by the order 2 automorphism of the Coxeter diagram.
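The properties above can be checked directly in a small case, say the symmetric group S4, which is the Coxeter group of type A3. The Python sketch below (illustrative, not from the article) models group elements as permutations, with Coxeter length equal to the inversion count:

```python
from itertools import permutations

def length(w):
    """Coxeter length of a permutation of {0..n-1} = its number of inversions."""
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w))
               if w[i] > w[j])

n = 4
W = list(permutations(range(n)))       # the Coxeter group A3 = S4
w0 = max(W, key=length)                # the longest element
compose = lambda u, v: tuple(u[v[i]] for i in range(n))
identity = tuple(range(n))
```

Here w0 is the order-reversing permutation; its length n(n − 1)/2 = 6 equals the number of positive roots of A3, it squares to the identity, and ℓ(w0w) = ℓ(w0) − ℓ(w) holds for every w.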
See also
Coxeter element, a different distinguished element
Coxeter number
Length function
References
Coxeter groups
|
https://en.wikipedia.org/wiki/Donald%20John%20Lewis
|
Donald John Lewis (25 January 1926 – 25 February 2015), better known as D.J. Lewis, was an American mathematician specializing in number theory.
Lewis received his PhD in 1950 at the University of Michigan under the supervision of Richard Dagobert Brauer, and subsequently was an NSF fellow at the Institute for Advanced Study in Princeton (1952–1953), an NSF senior fellow (1959–1961), a senior visiting fellow at Cambridge University (1965, 1969), a visiting fellow at Oxford University (1976), and Humboldt Awardee (1980, 1983).
He chaired the Department of Mathematics at the University of Michigan (1984–1994), and served as director of the Division of Mathematical Sciences at the National Science Foundation (NSF). He was long active in the American Mathematical Society (AMS), and in 1995 received its Distinguished Public Service Award.
References
Notices of the American Mathematical Society, volume 42, number 6 (June 1995)
20th-century American mathematicians
21st-century American mathematicians
Number theorists
University of Michigan alumni
University of Michigan faculty
1926 births
2015 deaths
|
https://en.wikipedia.org/wiki/Coxeter%E2%80%93Dynkin%20diagram
|
In geometry, a Coxeter–Dynkin diagram (or Coxeter diagram, Coxeter graph) is a graph with numerically labeled edges (called branches) representing the spatial relations between a collection of mirrors (or reflecting hyperplanes). It describes a kaleidoscopic construction: each graph "node" represents a mirror (domain facet) and the label attached to a branch encodes the dihedral angle order between two mirrors (on a domain ridge), that is, the amount by which the angle between the reflective planes can be multiplied to get 180 degrees. An unlabeled branch implicitly represents order-3 (60 degrees), and each pair of nodes that is not connected by a branch at all (such as non-adjacent nodes) represents a pair of mirrors at order-2 (90 degrees).
Each diagram represents a Coxeter group, and Coxeter groups are classified by their associated diagrams.
Dynkin diagrams are closely related objects, which differ from Coxeter diagrams in two respects: firstly, branches labeled "4" or greater are directed, while Coxeter diagrams are undirected; secondly, Dynkin diagrams must satisfy an additional (crystallographic) restriction, namely that the only allowed branch labels are 2, 3, 4, and 6. Dynkin diagrams correspond to and are used to classify root systems and therefore semisimple Lie algebras.
Description
Branches of a Coxeter–Dynkin diagram are labeled with a rational number p, representing a dihedral angle of 180°/p. When p = 2 the angle is 90° and the mirrors have no interaction, so the branch can be omitted from the diagram. If a branch is unlabeled, it is assumed to have p = 3, representing an angle of 60°. Two parallel mirrors have a branch marked with "∞". In principle, n mirrors can be represented by a complete graph in which all n(n − 1)/2 branches are drawn. In practice, nearly all interesting configurations of mirrors include a number of right angles, so the corresponding branches are omitted.
Diagrams can be labeled by their graph structure. The first forms studied by Ludwig Schläfli are the orthoschemes which have linear graphs that generate regular polytopes and regular honeycombs. Plagioschemes are simplices represented by branching graphs, and cycloschemes are simplices represented by cyclic graphs.
Schläfli matrix
Every Coxeter diagram has a corresponding Schläfli matrix
(so named after Ludwig Schläfli), with matrix elements ai,j = aj,i = −2 cos(π/p), where p is the branch order between the pairs of mirrors. As a matrix of cosines, it is also called a Gramian matrix after Jørgen Pedersen Gram. All Coxeter group Schläfli matrices are symmetric because their root vectors are normalized. It is related closely to the Cartan matrix, used in the similar but directed-graph Dynkin diagrams in the limited cases of p = 2, 3, 4, and 6, which are not symmetric in general.
The determinant of the Schläfli matrix is called the Schläflian, and its sign determines whether the group is finite (positive), affine (zero), or indefinite (negative). This rule is called Schläfli's Criterion.
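Schläfli's Criterion is easy to test computationally. The Python sketch below (illustrative; function names are not from the article) builds Schläfli matrices from branch orders and checks the sign of the determinant for a finite, an affine, and a hyperbolic example:

```python
import math

def schlafli_matrix(branch_orders):
    """Schläfli matrix from a dict {(i, j): p} of branch orders between mirrors.
    Unlisted pairs mean p = 2 (orthogonal mirrors, entry 0); the diagonal is 2;
    a branch of order p contributes the entry -2*cos(pi/p)."""
    k = 1 + max(max(pair) for pair in branch_orders)
    M = [[2.0 if i == j else 0.0 for j in range(k)] for i in range(k)]
    for (i, j), p in branch_orders.items():
        M[i][j] = M[j][i] = -2.0 * math.cos(math.pi / p)
    return M

def det(M):
    """Determinant by Laplace expansion along the first row (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

finite = det(schlafli_matrix({(0, 1): 3, (1, 2): 3}))             # linear A3: positive
affine = det(schlafli_matrix({(0, 1): 3, (1, 2): 3, (0, 2): 3}))  # cyclic affine A2: zero
hyperbolic = det(schlafli_matrix({(0, 1): 7, (1, 2): 3}))         # (2,3,7) triangle: negative
```

The three determinant signs (positive, zero, negative) classify the three groups as finite, affine, and hyperbolic respectively, as the criterion states.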
The eigenval
|
https://en.wikipedia.org/wiki/Eckart%20Viehweg
|
Eckart Viehweg (born 30 December 1948 in Zwickau, died 29 January 2010) was a German mathematician. He was a professor of algebraic geometry at the University of Duisburg-Essen.
In 2003 he won the Gottfried Wilhelm Leibniz Prize with his wife, Hélène Esnault.
See also
Kawamata–Viehweg vanishing theorem
References
External links
Homepage
Book: Hélène Esnault, Eckart Viehweg: "Lectures on Vanishing Theorems" (PDF, 1.3 MB)
Book: Eckart Viehweg: "Quasi-projective Moduli for Polarized Manifolds" (PDF, 1.5 MB)
1948 births
2010 deaths
People from Zwickau
Gottfried Wilhelm Leibniz Prize winners
20th-century German mathematicians
21st-century German mathematicians
Academic staff of the University of Duisburg-Essen
|
https://en.wikipedia.org/wiki/H%C3%A9l%C3%A8ne%20Esnault
|
Hélène Esnault (born 17 July 1953) is a French and German mathematician, specializing in algebraic geometry.
Biography
Born in Paris, Esnault earned her PhD in 1976 from the University of Paris VII. She wrote her dissertation on Singularites rationnelles et groupes algebriques (Rational singularities and algebraic groups) under the direction of Lê Dũng Tráng.
She did her habilitation at the University of Bonn in 1985, and pursued her studies at the University of Duisburg-Essen. Afterwards, she was a Heisenberg scholar of the Deutsche Forschungsgemeinschaft (DFG) at the Max Planck Institute for Mathematics in Bonn.
She became the first Einstein Professor at Freie Universität Berlin in 2012, as head of the algebra and number theory research group, after working previously at the University of Duisburg-Essen, the Max-Planck-Institut für Mathematik in Bonn, and at the University of Paris VII.
In 2007 Esnault was editor-in-chief and founder of the journal Algebra & Number Theory. From 1998 to 2010 she co-edited Mathematische Annalen; she has also served as editor of Acta Mathematica Vietnamica, Astérisque, Duke Mathematical Journal, and Mathematical Research Letters.
Awards and honors
In 2001 she won the Prix Paul Doistau-Émile Blutet of the Académie des Sciences de Paris. In 2003, Esnault and Eckart Viehweg
received the Gottfried Wilhelm Leibniz Prize. In 2014 she was elected to the Academia Europaea; she is also a member of the Academy of Sciences Leopoldina and the Berlin-Brandenburg Academy of Sciences and Humanities. In 2019, she won the Cantor medal.
References
External links
Homepage
Book: Hélène Esnault, Eckart Viehweg: "Lectures on Vanishing Theorems" (PDF, 1.3 MB)
1953 births
Living people
École Normale Supérieure alumni
Gottfried Wilhelm Leibniz Prize winners
20th-century French mathematicians
21st-century French mathematicians
French women mathematicians
Academic staff of the Free University of Berlin
Members of Academia Europaea
20th-century women mathematicians
21st-century women mathematicians
Prix Paul Doistau–Émile Blutet laureates
Members of the German National Academy of Sciences Leopoldina
20th-century French women
European Research Council grantees
|
https://en.wikipedia.org/wiki/Leibniz%20formula%20for%20determinants
|
In algebra, the Leibniz formula, named in honor of Gottfried Leibniz, expresses the determinant of a square matrix in terms of permutations of the matrix elements. If A is an n × n matrix, where a_ij is the entry in the i-th row and j-th column of A, the formula is
det(A) = Σ_{σ ∈ S_n} sgn(σ) a_{1,σ(1)} a_{2,σ(2)} ⋯ a_{n,σ(n)} = Σ_{σ ∈ S_n} sgn(σ) a_{σ(1),1} a_{σ(2),2} ⋯ a_{σ(n),n}
where sgn is the sign function of permutations in the permutation group S_n, which returns +1 and −1 for even and odd permutations, respectively.
Another common notation for the formula uses the Levi-Civita symbol together with the Einstein summation notation, where it becomes
det(A) = ε_{i_1 ⋯ i_n} a_{1,i_1} ⋯ a_{n,i_n},
which may be more familiar to physicists.
Directly evaluating the Leibniz formula from the definition requires Ω(n! · n) operations in general (a number of operations asymptotically proportional to n factorial), because n! is the number of order-n permutations. This is impractically difficult for even relatively small n. Instead, the determinant can be evaluated in O(n³) operations by forming the LU decomposition A = LU (typically via Gaussian elimination or similar methods), in which case det(A) = det(L) det(U) and the determinants of the triangular matrices L and U are simply the products of their diagonal entries. (In practical applications of numerical linear algebra, however, explicit computation of the determinant is rarely required.) The determinant can also be evaluated in fewer than O(n³) operations by reducing the problem to matrix multiplication, but most such algorithms are not practical.
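A direct, and deliberately inefficient, implementation of the formula (included as an illustration, not part of the article) sums over all n! permutations, computing each permutation's sign by counting inversions:

```python
from itertools import permutations
from math import prod

def sign(perm):
    """Sign of a permutation given as a tuple of indices 0..n-1,
    computed by counting inversions."""
    n = len(perm)
    inv = sum(1 for i in range(n) for j in range(i + 1, n)
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det_leibniz(a):
    """Leibniz formula: sum over all n! permutations sigma of
    sgn(sigma) * a[0][sigma(0)] * ... * a[n-1][sigma(n-1)]."""
    n = len(a)
    return sum(sign(s) * prod(a[i][s[i]] for i in range(n))
               for s in permutations(range(n)))

m = [[2, 0, 1],
     [1, 3, 2],
     [1, 1, 4]]
print(det_leibniz(m))  # 18
```

The O(n! · n) cost is visible immediately: even n = 12 already requires summing nearly half a billion products, which is why LU-based evaluation is used in practice.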
Formal statement and proof
Theorem.
There exists exactly one function F from the n × n matrices over K to K which is alternating multilinear w.r.t. columns and such that F(I) = 1.
Proof.
Uniqueness: Let F be such a function, and let A = (a_ij) be an n × n matrix. Call A^j the j-th column of A, i.e. A^j = (a_1j, ..., a_nj)^T, so that A = (A^1, ..., A^n).
Also, let E^k denote the k-th column vector of the identity matrix.
Now one writes each of the A^j's in terms of the E^k, i.e.
A^j = Σ_{k=1}^{n} a_kj E^k.
As F is multilinear, one has
F(A) = Σ_{k_1, ..., k_n = 1}^{n} (Π_{i=1}^{n} a_{k_i, i}) F(E^{k_1}, ..., E^{k_n}).
From alternation it follows that any term with repeated indices is zero. The sum can therefore be restricted to tuples with non-repeating indices, i.e. permutations:
F(A) = Σ_{σ ∈ S_n} (Π_{i=1}^{n} a_{σ(i), i}) F(E^{σ(1)}, ..., E^{σ(n)}).
Because F is alternating, the columns E^{σ(1)}, ..., E^{σ(n)} can be swapped until they become the identity. The sign function sgn(σ) is defined to count the number of swaps necessary and account for the resulting sign change. One finally gets:
F(A) = Σ_{σ ∈ S_n} sgn(σ) Π_{i=1}^{n} a_{σ(i), i},
as F(I) is required to be equal to 1.
Therefore no function besides the function defined by the Leibniz formula can be a multilinear alternating function with F(I) = 1.
Existence: We now show that F, where F is the function defined by the Leibniz formula, has these three properties.
Multilinear: each term of the Leibniz sum contains exactly one factor from each column, so F is linear in each column separately.
Alternating:
For any σ ∈ S_n and any pair of indices i ≠ j, let σ′ = σ ∘ (i j) be the permutation equal to σ with the values at i and j switched. If two columns A^i and A^j of A are equal, the products appearing in the terms for σ and σ′ coincide, while sgn(σ′) = −sgn(σ), so the two terms cancel.
Thus if A^i = A^j with i ≠ j, then F(A) = 0.
Finally, F(I) = 1: the identity permutation is the only permutation whose product of entries of the identity matrix is nonzero, and its sign is +1.
Thus the only alternating multilinear function with F(I) = 1 is the function defined by the Leibniz formula, and it in fact also has these three properties. Hence the determinant can be defined as the only function with these three properties.
See also
Matrix
Laplace expansion
Cramer's rule
References
Determinants
Gottfried Wilhelm Leibniz
Linear algebra
|
https://en.wikipedia.org/wiki/Perron%20number
|
In mathematics, a Perron number is an algebraic integer α which is real and exceeds 1, but such that its conjugate elements are all less than α in absolute value. For example, the larger of the two roots of the irreducible polynomial is a Perron number.
Perron numbers are named after Oskar Perron; the Perron–Frobenius theorem asserts that, for a real square matrix with positive algebraic coefficients whose largest eigenvalue is greater than one, this eigenvalue is a Perron number. As a closely related case, the Perron number of a graph is defined to be the spectral radius of its adjacency matrix.
Any Pisot number or Salem number is a Perron number, as is the Mahler measure of a monic integer polynomial.
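The definition can be tested numerically. The sketch below is an illustration (not from the article), using as an assumed example the polynomial x³ − x − 1, whose real root (the plastic number, ≈ 1.3247) is a Pisot number and hence a Perron number:

```python
import numpy as np

def is_perron_root(coeffs, tol=1e-9):
    """Check whether the largest real root of a monic polynomial (given by
    its coefficient list, highest degree first) exceeds 1 and strictly
    dominates every other root in absolute value."""
    roots = np.roots(coeffs)
    real = [r.real for r in roots if abs(r.imag) < tol]
    if not real:
        return False
    alpha = max(real)
    others = [r for r in roots if abs(r - alpha) > tol]
    return alpha > 1 and all(abs(r) < alpha - tol for r in others)

# x^3 - x - 1: real root ~1.3247 (the plastic number), conjugates of
# modulus ~0.87, so the Perron condition holds
print(is_perron_root([1, 0, -1, -1]))
```

Note this only checks the root condition; being an algebraic integer (monic integer polynomial) is assumed of the input.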
References
Algebraic numbers
Graph invariants
|
https://en.wikipedia.org/wiki/Silver%20machine
|
In set theory, Silver machines are devices used for bypassing the use of fine structure in proofs of statements holding in L. They were invented by set theorist Jack Silver as a means of proving global square holds in the constructible universe.
Preliminaries
An ordinal is *definable from a class of ordinals X if and only if there is a formula and such that is the unique ordinal for which where for all we define to be the name for within .
A structure is eligible if and only if:
.
< is the ordering on On restricted to X.
is a partial function from to X, for some integer k(i).
If is an eligible structure then is defined to be as before but with all occurrences of X replaced with .
Let be two eligible structures which have the same function k. Then we say if and we have:
Silver machine
A Silver machine is an eligible structure of the form which satisfies the following conditions:
Condensation principle. If then there is an such that .
Finiteness principle. For each there is a finite set such that for any set we have
Skolem property. If is *definable from the set , then ; moreover there is an ordinal , uniformly definable from , such that .
References
Constructible universe
|
https://en.wikipedia.org/wiki/Circled%20plus
|
Circled plus (⊕) or n-ary circled plus (⨁) (in Unicode, , ) may refer to:
Direct sum, an operation from abstract algebra
Dilation (morphology), mathematical morphology
Exclusive or, a logical operation that outputs true only when inputs differ
See also
include some circled crosses, such as , and variants thereof.
Screw drives
Phillips head screw
Pozidriv head screw
JIS B 1012 head screw
. This symbol has many variants and many uses.
. One of these is
, includes the ideogram
, whose earliest form was that of the Phoenician letter Teth
, includes the letter
. The symbol used for the product operator is
⊖ (disambiguation)
|
https://en.wikipedia.org/wiki/Leibniz%20algebra
|
In mathematics, a (right) Leibniz algebra, named after Gottfried Wilhelm Leibniz, sometimes called a Loday algebra, after Jean-Louis Loday, is a module L over a commutative ring R with a bilinear product [ _ , _ ] satisfying the Leibniz identity
[[a, b], c] = [[a, c], b] + [a, [b, c]].
In other words, right multiplication by any element c is a derivation. If in addition the bracket is alternating ([a, a] = 0) then the Leibniz algebra is a Lie algebra. Indeed, in this case [a, b] = −[b, a] and the Leibniz identity is equivalent to the Jacobi identity ([a, [b, c]] + [c, [a, b]] + [b, [c, a]] = 0). Conversely any Lie algebra is obviously a Leibniz algebra.
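Since every Lie algebra is a Leibniz algebra, the identity can be spot-checked with the matrix commutator. A minimal sketch (illustrative, not from the article; the helper name is mine) verifies the right Leibniz identity [[a, b], c] = [[a, c], b] + [a, [b, c]] on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def br(x, y):
    """Commutator bracket; any Lie algebra is in particular a Leibniz algebra."""
    return x @ y - y @ x

a, b, c = (rng.standard_normal((3, 3)) for _ in range(3))

# Right Leibniz identity: right multiplication by c acts as a derivation
lhs = br(br(a, b), c)
rhs = br(br(a, c), b) + br(a, br(b, c))
print(np.allclose(lhs, rhs))  # True
```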
In this sense, Leibniz algebras can be seen as a non-commutative generalization of Lie algebras. The investigation of which theorems and properties of Lie algebras are still valid for
Leibniz algebras is a recurrent theme in the literature. For instance, it has been shown that Engel's theorem still holds for Leibniz algebras and that a weaker version of Levi-Malcev theorem also holds.
The tensor module, T(V) , of any vector space V can be turned into a Loday algebra such that
This is the free Loday algebra over V.
Leibniz algebras were discovered in 1965 by A. Bloh, who called them D-algebras. They attracted interest after Jean-Louis Loday noticed that the classical Chevalley–Eilenberg boundary map in the exterior module of a Lie algebra can be lifted to the tensor module which yields a new chain complex. In fact this complex is well-defined for any Leibniz algebra. The homology HL(L) of this chain complex is known as Leibniz homology. If L is the Lie algebra of (infinite) matrices over an associative R-algebra A then Leibniz homology
of L is the tensor algebra over the Hochschild homology of A.
A Zinbiel algebra is the Koszul dual concept to a Leibniz algebra. It has defining identity:
Notes
References
Lie algebras
Non-associative algebras
|
https://en.wikipedia.org/wiki/Institute%20of%20Applied%20Physics%20and%20Computational%20Mathematics
|
The Institute of Applied Physics and Computational Mathematics (IAPCM) was established in 1958 in Beijing in the People's Republic of China. The institution conducts research on nuclear warhead design computations for the Chinese Academy of Engineering Physics (CAEP) in Mianyang, Sichuan and focuses on applied theoretical research and on the study of fundamental theories. Its main research fields include: Theoretical physics, nuclear fusion, plasma physics, nuclear physics, atomic molecular physics, laser physics, fluid dynamics, applied mathematics, and arms control science and technology.
The Federal Bureau of Investigation has stated that IAPCM has targeted U.S. defense labs for industrial espionage.
From August 2012, the director of the institute was Li Hua.
References
External links
Physics research institutes
Mathematical institutes
Research institutes in China
|
https://en.wikipedia.org/wiki/Shlomo%20Sternberg
|
Shlomo Zvi Sternberg (born 1936) is an American mathematician known for his work in geometry, particularly symplectic geometry and Lie theory.
Education and career
Sternberg earned his PhD in 1955 from Johns Hopkins University, with a thesis entitled "Some Problems in Discrete Nonlinear Transformations in One and Two Dimensions", supervised by Aurel Wintner.
After postdoctoral work at New York University (1956–1957) and an instructorship at University of Chicago (1957–1959), Sternberg joined the Mathematics Department at Harvard University in 1959, where he was George Putnam Professor of Pure and Applied Mathematics until 2017. Since 2017, he is Emeritus Professor at the Harvard Mathematics Department.
Among other honors, Sternberg was awarded a Guggenheim fellowship in 1974 and an honorary doctorate by the University of Mannheim in 1991. He delivered the AMS Colloquium Lecture in 1990 and the Hebrew University's Albert Einstein Memorial Lecture in 2006.
Sternberg was elected member of the American Academy of Arts and Sciences in 1969, of the National Academy of Sciences in 1986, of the Spanish Royal Academy of Sciences In 1999, and of the American Philosophical Society in 2010.
Research
Sternberg's first well-known published result, based on his PhD thesis, is known as the "Sternberg linearization theorem" which asserts that a smooth map near a hyperbolic fixed point can be made linear by a smooth change of coordinates provided that certain non-resonance conditions are satisfied. He also proved generalizations of the Birkhoff canonical form theorems for volume preserving mappings in n-dimensions and symplectic mappings, all in the smooth case.
In the 1960s Sternberg became involved with Isadore Singer in the project of revisiting Élie Cartan's papers from the early 1900s on the classification of the simple transitive infinite Lie pseudogroups, and of relating Cartan's results to recent results in the theory of G-structures and supplying rigorous (by present-day standards) proofs of his main theorems. Also, together with Victor Guillemin and Daniel Quillen, he extended this classification to a larger class of pseudogroups: the primitive infinite pseudogroups. As a by-product, they also obtained the "integrability of characteristics" theorem for over-determined systems of partial differential equations.
Sternberg provided major contributions also to the topic of Lie group actions on symplectic manifolds, in particular involving various aspects of the theory of symplectic reduction. For instance, together with Bertram Kostant he showed how to use reduction techniques to give a rigorous mathematical treatment of what is known in the physics literature as the BRS quantization procedure. Together with David Kazhdan and Bertram Kostant, he showed how one can simplify the analysis of dynamical systems of Calogero type by describing them as symplectic reductions of much simpler systems. Together with Victor Guillemin he gave the first rigorous f
|
https://en.wikipedia.org/wiki/Ryll-Nardzewski%20fixed-point%20theorem
|
In functional analysis, a branch of mathematics, the Ryll-Nardzewski fixed-point theorem states that if E is a normed vector space and K is a nonempty convex subset of E that is compact under the weak topology, then every group (or equivalently: every semigroup) of affine isometries of K has at least one fixed point. (Here, a fixed point of a set of maps is a point that is fixed by each map in the set.)
This theorem was announced by Czesław Ryll-Nardzewski. Later Namioka and Asplund gave a proof based on a different approach. Ryll-Nardzewski himself gave a complete proof in the original spirit.
Applications
The Ryll-Nardzewski theorem yields the existence of a Haar measure on compact groups.
See also
Fixed-point theorems
Fixed-point theorems in infinite-dimensional spaces
Markov-Kakutani fixed-point theorem - abelian semigroup of continuous affine self-maps on compact convex set in a topological vector space has a fixed point
References
Andrzej Granas and James Dugundji, Fixed Point Theory (2003), Springer-Verlag, New York.
A proof written by J. Lurie
Fixed-point theorems
Theorems in functional analysis
|
https://en.wikipedia.org/wiki/Bangladesh%20Mathematical%20Olympiad
|
The Bangladesh Mathematical Olympiad is an annual mathematical competition arranged for school and college students to nurture their interest in and aptitude for mathematics. It has been regularly organized by the Bangladesh Math Olympiad Committee since 2001. Bangladesh Math Olympiad activities formally started in 2003. The first Math Olympiad was held at Shahjalal University of Science and Technology. Mohammad Kaykobad, Muhammad Zafar Iqbal and Munir Hasan were instrumental in its establishment.
With the endeavor of the members of the committee, the daily newspaper Prothom Alo and the Dutch Bangla Bank Limited, the committee promptly achieved its primary goal – to send a team to the International Mathematical Olympiad. Bangladeshi students have participated in the International Mathematical Olympiad since 2005.
Besides arranging Divisional and National Math Olympiads, the committee extends its cooperation to all interested groups and individuals who want to arrange a Mathematics Olympiad. The Bangladesh Math Olympiad and the selection of the Bangladeshi national team for the International Mathematical Olympiad is bounded by rules set by the Olympiad Committee. The Bangladesh Mathematical Olympiad is open for school and college students from the country. The competitions usually take place around December–January–February. In the 2014 International Mathematical Olympiad, the Bangladesh team achieved one silver, one bronze and four honorable mentions, placing the country at 53 among 101 participating countries. In the 2015 International Mathematical Olympiad, the Bangladesh team achieved one silver, four bronze and one honorable mention, finishing in 33rd place. Ahmed Zawad Chowdhury, who previously won a silver and a bronze in 2017 and 2016, helped Bangladesh win a gold medal for the first time in the 2018 International Mathematical Olympiad. He had previously missed a gold medal in 2017 by only two marks.
Format
The students are divided into four academic categories:
Primary: Class 3-5
Junior: Class 6-8
Secondary: Class 9-10
Higher Secondary: Class 11–12
Selection Round
After Bangladesh won its first gold medal at the International Mathematical Olympiad in 2018, the competition spread all over the country. The organisers held selection rounds in all 64 districts of Bangladesh in 2019. In 2020, due to the COVID-19 pandemic, the selection round was held entirely online on 29 February. Participants chosen in the selection round may attend the regional competition.
Regional Olympiad
The country is divided into 20 regions for the Regional Olympiad. In each division except Dhaka, nearly 60 students out of about 1000 participants are selected for the National Olympiad. In Dhaka, the number of participants is more than 3000, and 100–150 are selected for the National Olympiad. For all of the problems in the Regional Olympiad, only the final answers are required.
National Olympiad
In the National Olympiad, the top 71 participants are given prizes. The time given for
|
https://en.wikipedia.org/wiki/0th
|
0th or zeroth may refer to:
Mathematics, science and technology
0th or zeroth, an ordinal for the number 0
0th dimension, a topological space
0th element, of a data structure in computer science
0th law of Thermodynamics
Zeroth (software), deep learning software for mobile devices
Other uses
0th grade, another name for kindergarten
January 0 or , an alternate name for December 31
0 Avenue, a road in British Columbia straddling the Canada-US border
See also
OTH (disambiguation) (with a letter O)
Zeroth law (disambiguation)
Zeroth-order (disambiguation)
|
https://en.wikipedia.org/wiki/Vadim%20Gerasimov
|
Vadim Viktorovich Gerasimov (born 15 June 1969) is an engineer at Google. From 1994 to 2003, Vadim worked and studied at the MIT Media Lab. Vadim earned a BS/MS in applied mathematics from Moscow State University in 1992 and a Ph.D. from MIT in 2003.
At age 16 he was one of the original co-developers of the famous video game Tetris: he ported Alexey Pajitnov's original game to the IBM PC architecture and the two later added features to the game.
References
External links
Vadim Gerasimov personal webpage
Russian video game designers
Russian computer programmers
Russian inventors
Google employees
Moscow State University alumni
Living people
1969 births
|
https://en.wikipedia.org/wiki/Malcev%20algebra
|
In mathematics, a Malcev algebra (or Maltsev algebra or Moufang–Lie algebra) over a field is a nonassociative algebra that is antisymmetric, so that
xy = −yx,
and satisfies the Malcev identity
(xy)(xz) = ((xy)z)x + ((yz)x)x + ((zx)x)y.
They were first defined by Anatoly Maltsev (1955).
Malcev algebras play a role in the theory of Moufang loops that generalizes the role of Lie algebras in the theory of groups. Namely, just as the tangent space of the identity element of a Lie group forms a Lie algebra, the tangent space of the identity of a smooth Moufang loop forms a Malcev algebra. Moreover, just as a Lie group can be recovered from its Lie algebra under certain supplementary conditions, a smooth Moufang loop can be recovered from its Malcev algebra if certain supplementary conditions hold. For example, this is true for a connected, simply connected real-analytic Moufang loop.
Examples
Any Lie algebra is a Malcev algebra.
Any alternative algebra may be made into a Malcev algebra by defining the Malcev product to be xy − yx.
The 7-sphere may be given the structure of a smooth Moufang loop by identifying it with the unit octonions. The tangent space of the identity of this Moufang loop may be identified with the 7-dimensional space of imaginary octonions. The imaginary octonions form a Malcev algebra with the Malcev product xy − yx.
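The imaginary-octonion example can be checked numerically. The sketch below is an illustration (not from the article): it builds the octonions by the Cayley–Dickson construction under one assumed doubling convention, (a, b)(c, d) = (ac − d̄b, da + bc̄), takes the commutator bracket on random imaginary octonions, and verifies the Malcev identity (xy)(xz) = ((xy)z)x + ((yz)x)x + ((zx)x)y while the Jacobi identity fails (so the algebra is Malcev but not Lie):

```python
import numpy as np

rng = np.random.default_rng(1)

def conj(x):
    """Cayley-Dickson conjugation on a coefficient vector of length 2^k."""
    if len(x) == 1:
        return x
    h = len(x) // 2
    return np.concatenate([conj(x[:h]), -x[h:]])

def mul(x, y):
    """Cayley-Dickson product (a,b)(c,d) = (ac - conj(d) b, d a + b conj(c));
    length-8 vectors give (one convention for) the octonions."""
    if len(x) == 1:
        return x * y
    h = len(x) // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    return np.concatenate([mul(a, c) - mul(conj(d), b),
                           mul(d, a) + mul(b, conj(c))])

def br(x, y):
    """Malcev product: the commutator xy - yx."""
    return mul(x, y) - mul(y, x)

def imag_octonion():
    v = rng.standard_normal(8)
    v[0] = 0.0  # zero real part: an imaginary octonion
    return v

x, y, z = imag_octonion(), imag_octonion(), imag_octonion()

# Malcev identity, with juxtaposition read as the bracket
lhs = br(br(x, y), br(x, z))
rhs = (br(br(br(x, y), z), x) + br(br(br(y, z), x), x)
       + br(br(br(z, x), x), y))
print(np.allclose(lhs, rhs))

# Jacobi fails in general, so this Malcev algebra is not a Lie algebra
jac = br(br(x, y), z) + br(br(y, z), x) + br(br(z, x), y)
print(np.allclose(jac, 0))
```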
See also
Malcev-admissible algebra
Notes
References
Non-associative algebras
Lie algebras
|
https://en.wikipedia.org/wiki/2006%20Australian%20Lacrosse%20League%20season
|
Results and statistics for the Australian Lacrosse League season of 2006.
Game 15
Friday, 20 October 2006, Perth, Western Australia
Goalscorers:
WA: Nathan Rainey 4-1, Adam Sear 4-1, Alex Brown 2-1, Travis Roost 2, Jason Battaglia 1, Adam Delfs 1, Jesse Stack 0-1.
SA: Ryan Gaspari 2-1, Anson Carter 2.
Game 16
Saturday, 21 October 2006, Perth, Western Australia
Goalscorers:
WA: Alex Brown 4-1, Adam Delfs 3, Adam Sear 3, Nathan Rainey 2, Russell Brown 1-1, Jason Battaglia 1, Travis Roost 1, Jesse Stack 1, Glen Morley 0-1, James Watson-Galbraith 0-1.
SA: Anson Carter 5, Shane Gilbert 1, Brendan Twiggs 1, Nigel Wapper 1.
Game 17
Friday, 27 October 2006, Melbourne, Victoria
Goalscorers:
Vic: Ben Newman 2-1, Robbie Stark 2, Damian Arnell 1, Clinton Lander 1, Aaron Onafretchook 1, Tristan Tomasino 1, Marty Hyde 0-1.
WA: Brad Goddard 2-1, Nathan Roost 2, Adam Sear 1, Jesse Stack 1, Russell Brown 0-1, Adam Delfs 0-1, James Watson-Galbraith 0-1.
Game 18
Saturday, 28 October 2006, Melbourne, Victoria
Goalscorers:
Vic: Adam Townley 3, Aaron Onafretchook 2-1, Clinton Lander 2, Robert Chamberlain 1-5, Marty Hyde 1-1, Robbie Stark 1-1, Josh Naughton 1.
WA: Adam Sear 3, Russell Brown 1-1, Alex Brown 1, Adam Delfs 1, Brad Goddard 0-1.
Game 19
Friday, 3 November 2006, Adelaide, South Australia
Goalscorers:
SA: Anson Carter 2, Nigel Wapper 2, Ryan Gaspari 1-1, Shane Gilbert 1, Philip McConnell 0-1, knocked-in 1.
Vic: Robert Chamberlain 2-1, Adam Townley 2-1, Clinton Lander 2, Robbie Stark 1-1, Marty Hyde 1, Josh Naughton 1, Ben Newman 1, Damian Arnall 0-1, Michael Rodrigues 0-1, knocked-in 1.
Game 20
Saturday, 4 November 2006, Adelaide, South Australia
Goalscorers:
SA: Anson Carter 3, Nigel Wapper 1-1.
Vic: Robbie Stark 5-1, Josh Naughton 5, Ben Newman 2-1, Adam Townley 2, Clinton Lander 1-3, Marty Hyde 1-2, Aaron Onafretchook 1-2, Tristan Tomasino 1-1, Damian Arnall 1, Michael Rodrigues 1, Chris Welsh 1.
ALL Table 2006
Table after completion of round-robin tournament
FINAL (Game 21)
Saturday, 11 November 2006, Perth, Western Australia
Goalscorers:
Vic: Ben Newman 2-1, Robert Chamberlain 2, Robbie Stark 1-1, Marty Hyde 1, Adam Townley 1, Clinton Lander 0-2.
WA: Adam Sear 5, Nathan Roost 2-1, Alex Brown 1-2, Jason Battaglia 1, Russell Brown 1, Brad Goddard 1, Nathan Rainey 1, Jesse Stack 1, Ben Tippett 1.
All-Stars
ALL 2006 Champions: Western Australia
ALL 2006 Most Valuable Player: Robbie Stark (Vic)
ALL 2006 All-Stars: Alex Brown, Warren Brown, Gavin Leavy, Travis Roost, Adam Sear (WA), Marty Hyde, Keith Nyberg, Cameron Shepherd, Robbie Stark, Adam Townley (Vic), Anson Carter, Anthony Munro, Brendan Twiggs (SA). Coach: Travis Roost (WA). Referee: Don Lovett (Vic)
See also
Australian Lacrosse League
Lacrosse in Australia
External links
Australian Lacrosse League
Lacrosse Australia
Lacrosse South Australia
Lacrosse Victoria
Western Australian Lacrosse Association
Australian Lacrosse League
2006 in Australian sport
2006 in lacrosse
|
https://en.wikipedia.org/wiki/Christopher%20Hooley
|
Christopher Hooley (7 August 1928 – 13 December 2018) was a British mathematician, professor of mathematics at Cardiff University.
He did his PhD under the supervision of Albert Ingham. He won the Adams Prize of Cambridge University in 1973. He was elected a Fellow of the Royal Society in 1983. He was also a Founding Fellow of the Learned Society of Wales.
He showed that the Hasse principle holds for non-singular cubic forms in at least nine variables.
References
External links
1928 births
2018 deaths
20th-century British mathematicians
21st-century British mathematicians
Academics of Cardiff University
Fellows of the Learned Society of Wales
Fellows of the Royal Society
Number theorists
Alumni of Corpus Christi College, Cambridge
|
https://en.wikipedia.org/wiki/Data%20reliability
|
The term data reliability may refer to:
Reliability (statistics), the overall consistency of a measure
Data integrity, the maintenance of, and the assurance of the accuracy and consistency of, data over its entire life-cycle
|
https://en.wikipedia.org/wiki/Complex%20reflection%20group
|
In mathematics, a complex reflection group is a finite group acting on a finite-dimensional complex vector space that is generated by complex reflections: non-trivial elements that fix a complex hyperplane pointwise.
Complex reflection groups arise in the study of the invariant theory of polynomial rings. In the mid-20th century, they were completely classified in work of Shephard and Todd. Special cases include the symmetric group of permutations, the dihedral groups, and more generally all finite real reflection groups (the Coxeter groups or Weyl groups, including the symmetry groups of regular polyhedra).
Definition
A (complex) reflection r (sometimes also called pseudo reflection or unitary reflection) of a finite-dimensional complex vector space V is an element of finite order that fixes a complex hyperplane pointwise, that is, the fixed-space has codimension 1.
A (finite) complex reflection group is a finite subgroup of that is generated by reflections.
Properties
Any real reflection group becomes a complex reflection group if we extend the scalars from
R to C. In particular, all finite Coxeter groups or Weyl groups give examples of complex reflection groups.
A complex reflection group W is irreducible if the only W-invariant proper subspace of the corresponding vector space is the origin. In this case, the dimension of the vector space is called the rank of W.
The Coxeter number of an irreducible complex reflection group W of rank n is defined as h = (|R| + |A|)/n, where R denotes the set of reflections and A denotes the set of reflecting hyperplanes.
In the case of real reflection groups, this definition reduces to the usual definition of the Coxeter number for finite Coxeter systems.
Classification
Any complex reflection group is a product of irreducible complex reflection groups, acting on the sum of the corresponding vector spaces. So it is sufficient to classify the irreducible complex reflection groups.
The irreducible complex reflection groups were classified by Shephard and Todd. They proved that every irreducible complex reflection group belonged to an infinite family G(m, p, n) depending on 3 positive integer parameters (with p dividing m) or was one of 34 exceptional cases, which they numbered from 4 to 37. The group G(m, 1, n) is the generalized symmetric group; equivalently, it is the wreath product of the symmetric group Sym(n) by a cyclic group of order m. As a matrix group, its elements may be realized as monomial matrices whose nonzero elements are mth roots of unity.
The group G(m, p, n) is an index-p subgroup of G(m, 1, n). G(m, p, n) is of order mnn!/p. As matrices, it may be realized as the subset in which the product of the nonzero entries is an (m/p)th root of unity (rather than just an mth root). Algebraically, G(m, p, n) is a semidirect product of an abelian group of order mn/p by the symmetric group Sym(n); the elements of the abelian group are of the form (θa1, θa2, ..., θan), where θ is a primitive mth root of unity and Σai ≡ 0 mod p, and Sym(n) acts
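The monomial-matrix description can be made concrete by brute force. The sketch below is an illustration (not from the article): it enumerates G(m, p, n) for small parameters as monomial matrices whose nonzero entries are mth roots of unity with product an (m/p)th root of unity, and checks the order m^n n!/p:

```python
from itertools import permutations, product
import cmath
import math

def G(m, p, n):
    """Enumerate G(m, p, n) as n x n monomial matrices whose nonzero entries
    are m-th roots of unity and whose product is an (m/p)-th root of unity
    (equivalently, the exponent sum is divisible by p)."""
    omega = cmath.exp(2j * cmath.pi / m)
    mats = []
    for perm in permutations(range(n)):
        for exps in product(range(m), repeat=n):
            if sum(exps) % p:
                continue
            mat = [[omega ** exps[i] if j == perm[i] else 0
                    for j in range(n)] for i in range(n)]
            mats.append(mat)
    return mats

m, p, n = 4, 2, 2
group = G(m, p, n)
print(len(group), m**n * math.factorial(n) // p)  # both 16
```

For G(4, 2, 2) this gives 4² · 2!/2 = 16 matrices, matching the order formula stated above.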
|
https://en.wikipedia.org/wiki/Engineering%20mathematics
|
Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs both for practical, theoretical and other considerations outside their specialization, and to deal with constraints to be effective in their work.
Description
Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments.
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics.
Specialized branches include engineering optimization and engineering statistics.
Engineering mathematics in tertiary education typically consists of mathematical methods and models courses.
See also
Industrial mathematics
Control theory, a mathematical discipline concerned with engineering
Further mathematics and additional mathematics, A-level mathematics courses with similar content
Mathematical methods in electronics, signal processing and radio engineering
References
Applied mathematics
|
https://en.wikipedia.org/wiki/Spheroidal%20wave%20equation
|
In mathematics, the spheroidal wave equation is given by
(1 − t²) d²y/dt² − 2t dy/dt + (λ + γ²(1 − t²) − μ²/(1 − t²)) y = 0.
It is a generalization of the Mathieu differential equation.
If is a solution to this equation and we define , then is a prolate spheroidal wave function in the sense that it satisfies the equation
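One concrete special case can be checked numerically. The sketch below is an illustration (not from the article): setting γ = 0 and μ = 0 with λ = n(n + 1) reduces the spheroidal wave equation to Legendre's equation (1 − t²)y″ − 2ty′ + n(n + 1)y = 0, so a Legendre polynomial should make the residual vanish:

```python
import numpy as np
from numpy.polynomial import legendre

n = 4
t = np.linspace(-0.9, 0.9, 7)

# Coefficient vector selecting P_n in the Legendre basis
c = [0] * n + [1]
P = legendre.legval(t, c)
dP = legendre.legval(t, legendre.legder(c))
d2P = legendre.legval(t, legendre.legder(c, 2))

# Residual of (1 - t^2) y'' - 2 t y' + n(n+1) y at sample points
residual = (1 - t**2) * d2P - 2 * t * dP + n * (n + 1) * P
print(np.allclose(residual, 0))  # True
```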
See also
Wave equation
References
Bibliography
M. Abramowitz and I. Stegun, Handbook of Mathematical Functions (US Gov. Printing Office, Washington DC, 1964)
H. Bateman, Partial Differential Equations of Mathematical Physics (Dover Publications, New York, 1944)
Ordinary differential equations
|
https://en.wikipedia.org/wiki/Kumaraswamy%20%28disambiguation%29
|
Kumaraswamy or Kumaraswami is a given name for male South Indians. It may also refer to:
Kumaraswamy distribution, a distribution form related to probability theory and statistics
Murugan, also called Kumaraswami, the most popular Hindu deity amongst Tamils of the Tamil Nadu state in India
Kumaraswamy Layout, a residential locality in southern Bangalore, India
See also
Coomaraswamy (disambiguation)
Kumarasamy (disambiguation)
|
https://en.wikipedia.org/wiki/David%20E.%20Orton
|
David E. Orton (born 1955) is an American engineering executive and the CEO of GEO Semiconductor Inc.
Orton earned a BS in mathematics and economics at Wake Forest University, and an MS in electrical engineering from Duke University. He worked in the graphics and semiconductor industry as an engineer at Bell Laboratories from 1979 to 1983 and then at General Electric through December 1988. He joined Silicon Graphics (SGI) in 1990, and was senior vice president of visual computing and advanced systems through 1999. In 1996 SGI bought Cray Research, and Orton had to deal with merging the companies' overlapping technologies.
Orton joined ATI Technologies in April 2000 as a result of its acquisition of ArtX, where he had been president and CEO. ATI posted losses after the dot-com bubble collapsed, although the losses were reduced by June 2001.
He was named CEO of ATI in March 2004.
Though ATI's principal location was in Markham, Ontario, Canada, Orton spent a portion of his time in California where he resided.
After the announced merger of Advanced Micro Devices (AMD) with ATI on July 24, 2006, as ATI Technologies became a subsidiary of AMD, Orton became an executive vice-president of AMD, reporting to AMD CEO Hector Ruiz and COO Dirk Meyer. On July 10, 2007, AMD announced the resignation of Orton as executive vice president. One trade journalist ranked Orton at the top of a list of "CEOs that went in 2007".
From 2007 to 2009, he served as CEO of the startup DSM Solutions. On July 15, 2009, Orton became the CEO of Aptina, a privately held image sensor company located in San Jose, California. He left Aptina in September 2012. He served on the board of directors of SuVolta.
References
External links
Photograph of David E. Orton
Living people
ATI Technologies
AMD people
Silicon Graphics people
American technology chief executives
Duke University Pratt School of Engineering alumni
Wake Forest University alumni
1955 births
|
https://en.wikipedia.org/wiki/Guido%20Hoheisel
|
Guido Karl Heinrich Hoheisel (14 July 1894 – 11 October 1968) was a German mathematician and professor of mathematics at the University of Cologne.
Academic life
He received his PhD in 1920 from the University of Berlin under the supervision of Erhard Schmidt.
During World War II Hoheisel was required to teach classes simultaneously at three universities, in Cologne, Bonn, and Münster. His doctoral students include Arnold Schönhage.
Hoheisel contributed to the journal Deutsche Mathematik.
Selected results
Hoheisel is known for a result on gaps between prime numbers:
He proved that if π(x) denotes the prime-counting function, then there exists a constant θ < 1 such that
π(x + x^θ) − π(x) ~ x^θ/log(x),
as x tends to infinity, implying that if p_n denotes the n-th prime number then
p_{n+1} − p_n < p_n^θ,
for all sufficiently large n. He showed that one may take
θ = 32999/33000 = 1 - 0.000(03),
with (03) denoting periodic repetition.
Selected works
Gewöhnliche Differentialgleichungen 1926; 2nd edition 1930; 7th edition 1965
Partielle Differentialgleichungen 1928; 3rd edition 1953
Aufgabensammlung zu den gewöhnlichen und partiellen Differentialgleichungen 1933
Integralgleichungen 1936; revised and expanded 2nd edition 1963
Existenz von Eigenwerten und Vollständigkeitskriterium 1943
Integral Equations, translated by A. Mary Tropper (1968)
References
20th-century German mathematicians
1968 deaths
1894 births
|
https://en.wikipedia.org/wiki/Martin%20Huxley
|
Martin Neil Huxley FLSW (born in 1944) is a British mathematician, working in the field of analytic number theory.
He was awarded a PhD from the University of Cambridge in 1970, the year after his supervisor Harold Davenport had died. He is a professor at Cardiff University.
Huxley proved a result on gaps between prime numbers, namely that if p_n denotes the n-th prime number and if θ > 7/12, then
p_{n+1} − p_n < p_n^θ
for all sufficiently large n.
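As a purely numerical illustration, one can check that the ratio (p_{n+1} − p_n)/p_n^(7/12) stays below 1 for primes in a modest range. The theorem is asymptotic, so a finite check proves nothing; the sieve helper and the cutoff p ≥ 23 are choices made for this sketch (tiny primes such as 7 violate the raw inequality).

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

ps = primes_up_to(100_000)
theta = 7 / 12
# Largest ratio (p_{n+1} - p_n) / p_n^theta over this range, tiny primes excluded:
worst = max((q - p) / p ** theta for p, q in zip(ps, ps[1:]) if p >= 23)
print(worst < 1)  # True: the bound holds comfortably here
```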
Huxley also improved the known bound on the Dirichlet divisor problem.
In 2011, Huxley was elected a Fellow of the Learned Society of Wales.
References
External links
Living people
21st-century British mathematicians
20th-century British mathematicians
Number theorists
Academics of Cardiff University
Alumni of the University of Cambridge
1944 births
Fellows of the Learned Society of Wales
|
https://en.wikipedia.org/wiki/Isochoric
|
Isochoric may refer to:
cell-transitive, in geometry
isochoric process, a constant volume process in chemistry or thermodynamics
Isochoric model
|
https://en.wikipedia.org/wiki/Fear%20of%20crime
|
The fear of crime refers to the fear of being a victim of crime as opposed to the actual probability of being a victim of crime.
The fear of crime, along with fear of the streets and the fear of youth, is said to have been present in Western culture since "time immemorial". While fear of crime can be differentiated into public feelings, thoughts and behaviors about the personal risk of criminal victimization, distinctions can also be made between the tendency to see situations as fearful, the actual experience while in those situations, and broader expressions about the cultural and social significance of crime and symbols of crime in people's neighborhoods and in their daily, symbolic lives.
Importantly, feelings, thoughts and behaviors can have a number of functional and dysfunctional effects on individual and group life, depending on actual risk and people's subjective approaches to danger. On the negative side, they can erode public health and psychological well-being; they can alter routine activities and habits; they can contribute to some places turning into 'no-go' areas via a withdrawal from community; and they can drain community cohesion, trust and neighborhood stability. Some degree of emotional response can be healthy: psychologists have long highlighted the fact that some degree of worry can be a problem-solving activity, motivating care and precaution, underlining the distinction between low-level anxieties that motivate caution and counter-productive worries that damage well-being.
Factors influencing the fear of crime include the psychology of risk perception, circulating representations of the risk of victimization (chiefly via interpersonal communication and the mass media), public perceptions of neighborhood stability and breakdown, the influence of neighbourhood context, and broader factors where anxieties about crime express anxieties about the pace and direction of social change. There are also some wider cultural influences. For example, some have argued that modern times have left people especially sensitive to issues of safety and insecurity.
Affective aspects of fear of crime
The core aspect of fear of crime is the range of emotions that is provoked in citizens by the possibility of victimization. While people may feel angry and outraged about the extent and prospect of crime, surveys typically ask people "who they are afraid of" and "how worried they are". Underlying the answers that people give are (more often than not) two dimensions of 'fear': (a) those everyday moments of worry that transpire when one feels personally threatened; and (b) some more diffuse or 'ambient' anxiety about risk. While standard measures of worry about crime regularly show between 30% and 50% of the population of England and Wales express some kind of worry about falling victim, probing reveals that few individuals actually worry for their own safety on an everyday basis. One thus can distinguish between fear (an emotion, a feeling of alarm or dread
|
https://en.wikipedia.org/wiki/Elementary%20amenable%20group
|
In mathematics, a group is called elementary amenable if it can be built up from finite groups and abelian groups by a sequence of simple operations that result in amenable groups when applied to amenable groups. Since finite groups and abelian groups are amenable, every elementary amenable group is amenable; however, the converse is not true.
Formally, the class of elementary amenable groups is the smallest subclass of the class of all groups that satisfies the following conditions:
it contains all finite and all abelian groups
if G is in the subclass and H is isomorphic to G, then H is in the subclass
it is closed under the operations of taking subgroups, forming quotients, and forming extensions
it is closed under directed unions.
The Tits alternative implies that any amenable linear group is locally virtually solvable; hence, for linear groups, amenability and elementary amenability coincide.
References
Infinite group theory
Properties of groups
|
https://en.wikipedia.org/wiki/Riemann%20Xi%20function
|
In mathematics, the Riemann Xi function is a variant of the Riemann zeta function, and is defined so as to have a particularly simple functional equation. The function is named in honour of Bernhard Riemann.
Definition
Riemann's original lower-case "xi"-function ξ was renamed with an upper-case Ξ (Greek letter "Xi") by Edmund Landau. Landau's lower-case ξ ("xi") is defined as
ξ(s) = (1/2) s(s − 1) π^(−s/2) Γ(s/2) ζ(s)
for s ∈ ℂ. Here ζ(s) denotes the Riemann zeta function and Γ(s) is the Gamma function.
The functional equation (or reflection formula) for Landau's ξ is
ξ(1 − s) = ξ(s).
Riemann's original function, rebaptised upper-case Ξ by Landau, satisfies
Ξ(z) = ξ(1/2 + iz),
and obeys the functional equation
Ξ(−z) = Ξ(z).
Both functions are entire and purely real for real arguments.
Values
The general form for positive even integers is
ξ(2n) = (−1)^(n+1) (n!/(2n)!) B_{2n} 2^(2n−1) π^n (2n − 1),
where B_{2n} denotes the 2n-th Bernoulli number. For example:
ξ(2) = π/6,  ξ(4) = π^2/15.
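These values can be checked numerically. The sketch below evaluates ξ(s) directly from its definition, using a naive (slowly converging) partial sum for ζ(s); the function names are ours, not from the source.

```python
import math

def zeta(s, terms=200_000):
    """Naive Dirichlet series for zeta(s); converges for s > 1 (slowly near 1)."""
    return sum(k ** -s for k in range(1, terms + 1))

def xi(s):
    """Landau's lower-case xi: (1/2) s (s-1) pi^(-s/2) Gamma(s/2) zeta(s)."""
    return 0.5 * s * (s - 1) * math.pi ** (-s / 2) * math.gamma(s / 2) * zeta(s)

print(abs(xi(2) - math.pi / 6) < 1e-4)        # True
print(abs(xi(4) - math.pi ** 2 / 15) < 1e-6)  # True
```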
Series representations
The ξ function has the series expansion
(d/dz) ln ξ(−z/(1 − z)) = Σ_{n=0}^∞ λ_{n+1} z^n,
where
λ_n = (1/(n − 1)!) (d^n/ds^n)[s^(n−1) log ξ(s)] |_{s=1} = Σ_ρ [1 − (1 − 1/ρ)^n],
where the sum extends over ρ, the non-trivial zeros of the zeta function, in order of |Im(ρ)|.
This expansion plays a particularly important role in Li's criterion, which states that the Riemann hypothesis is equivalent to having λ_n > 0 for all positive n.
Hadamard product
A simple infinite product expansion is
ξ(s) = (1/2) ∏_ρ (1 − s/ρ),
where ρ ranges over the roots of ξ.
To ensure convergence in the expansion, the product should be taken over "matching pairs" of zeroes, i.e., the factors for a pair of zeroes of the form ρ and 1−ρ should be grouped together.
References
Zeta and L-functions
Bernhard Riemann
|
https://en.wikipedia.org/wiki/Karl%20Prachar
|
Karl Prachar (1924 – November 27, 1994) was an Austrian mathematician who worked in the area of analytic number theory. He is known for his much acclaimed book on the distribution of the prime numbers, Primzahlverteilung (Springer Verlag, 1957).
Prachar received his doctorate in 1947 from the University of Vienna.
References
Number theorists
1924 births
1994 deaths
20th-century Austrian mathematicians
Burials at Ottakring Cemetery
|
https://en.wikipedia.org/wiki/270%20%28number%29
|
270 (two hundred [and] seventy) is the natural number following 269 and preceding 271.
In mathematics
270 is a harmonic divisor number
270 is the fourth number that is divisible by its average integer divisor
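A harmonic divisor number is one whose positive divisors have an integer harmonic mean, a property that is easy to verify directly with exact rational arithmetic (the helper function below is illustrative):

```python
from fractions import Fraction

def harmonic_mean_of_divisors(n):
    """Exact harmonic mean of the positive divisors of n."""
    divs = [d for d in range(1, n + 1) if n % d == 0]
    return len(divs) / sum(Fraction(1, d) for d in divs)

print(harmonic_mean_of_divisors(270))  # 6: an integer, so 270 is harmonic
```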
References
Integers
|
https://en.wikipedia.org/wiki/15%20and%20290%20theorems
|
In mathematics, the 15 theorem or Conway–Schneeberger Fifteen Theorem, proved by John H. Conway and W. A. Schneeberger in 1993, states that if a positive definite quadratic form with integer matrix represents all positive integers up to 15, then it represents all positive integers. The proof was complicated, and was never published. Manjul Bhargava found a much simpler proof which was published in 2000.
Bhargava used the occasion of his receiving the 2005 SASTRA Ramanujan Prize to announce that he and Jonathan P. Hanke had cracked Conway's conjecture that a similar theorem holds for integral quadratic forms, with the constant 15 replaced by 290. The proof has since appeared in preprint form.
Details
Suppose Q is a symmetric matrix with real entries. For any vector x with integer components, define
Q(x) = x^T Q x.
This function is called a quadratic form. We say Q is positive definite if Q(x) > 0 whenever x ≠ 0. If Q(x) is always an integer, we call the function Q an integral quadratic form.
We get an integral quadratic form whenever the matrix entries are integers; then Q is said to have integer matrix. However, Q will still be an integral quadratic form if the off-diagonal entries are integers divided by 2, while the diagonal entries are integers. For example, x^2 + xy + y^2 is integral but does not have integer matrix.
A positive integral quadratic form taking all positive integers as values is called universal. The 15 theorem says that a quadratic form with integer matrix is universal if it takes the numbers from 1 to 15 as values. A more precise version says that, if a positive definite quadratic form with integral matrix takes the values 1, 2, 3, 5, 6, 7, 10, 14, and 15, then it takes all positive integers as values. Moreover, for each of these 9 numbers, there is such a quadratic form that takes as values all positive integers except that one number.
For example, the quadratic form
x_1^2 + x_2^2 + x_3^2 + x_4^2
is universal, because every positive integer can be written as a sum of 4 squares, by Lagrange's four-square theorem. By the 15 theorem, to verify this, it is sufficient to check that every positive integer up to 15 is a sum of 4 squares. (This does not give an alternative proof of Lagrange's theorem, because Lagrange's theorem is used in the proof of the 15 theorem.)
On the other hand,
x^2 + 2y^2 + 5z^2 + 5w^2
is a positive definite quadratic form with integral matrix that takes as values all positive integers other than 15.
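For diagonal forms, checks of this kind can be brute-forced: a diagonal form d_1 x_1^2 + … + d_k x_k^2 with positive coefficients can only represent n using coordinates with |x_i| ≤ √n, so a finite search decides representability. The function below is a sketch, not from the source.

```python
import itertools

def represents(diag, n):
    """Does the diagonal quadratic form sum(d * x**2) represent n?"""
    bound = int(n ** 0.5) + 1  # |x_i| <= sqrt(n) suffices when every d_i >= 1
    rng = range(-bound, bound + 1)
    return any(sum(d * x * x for d, x in zip(diag, xs)) == n
               for xs in itertools.product(rng, repeat=len(diag)))

# Sum of four squares represents 1..15, hence is universal by the 15 theorem:
print(all(represents((1, 1, 1, 1), n) for n in range(1, 16)))  # True
# The form x^2 + 2y^2 + 5z^2 + 5w^2 fails to represent 15:
print(represents((1, 2, 5, 5), 15))                            # False
```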
The 290 theorem says a positive definite integral quadratic form is universal if it takes the numbers from 1 to 290 as values. A more precise version states that, if an integer-valued quadratic form represents all the numbers 1, 2, 3, 5, 6, 7, 10, 13, 14, 15, 17, 19, 21, 22, 23, 26, 29, 30, 31, 34, 35, 37, 42, 58, 93, 110, 145, 203, and 290, then it represents all positive integers, and for each of these 29 numbers, there is such a quadratic form representing all positive integers with the exception of that one number.
Bhargava has found analogous criteria for a quadratic fo
|
https://en.wikipedia.org/wiki/Australian%20Statistician
|
The Australian Statistician is the head of the Australian Bureau of Statistics.
On 18 June 1906, the first Statistician of the Commonwealth of Australia was appointed to carry out the provisions of the Census and Statistics Act 1905. Later in the same year the Commonwealth Bureau of Census and Statistics was formed (renamed the Australian Bureau of Statistics in 1975).
Timothy Augustine Coghlan was offered the position in December 1905, but had to decline due to his obligations to the New South Wales government.
Commonwealth Statisticians
George Handley Knibbs (1906–1921)
Charles Henry Wickens (August 1922 – April 1932, although Lyndhurst Falkiner Giblin was appointed acting Commonwealth Statistician following Wickens' stroke in 1931)
Edward Tannock McPhee (1933–1936)
Sir Roland Wilson (1936–1940; 1946–1951)
Sir Stanley Roy Carver (acting from 1940 to 1946, and again from 1948 to 1951. Formally appointed Commonwealth Statistician from 20 August 1957 to 1961 or 6 February 1962)
Keith Archer (1962–1970)
Jack O'Neill (acting from 1969 to 1972; Commonwealth Statistician from 1972 to 1975)
Australian Statisticians
Robert William Cole (1976)
Roy James Cameron (1977 – 1985)
Ian Castles (1986 – 1994)
Bill McLennan (1995 – July 2000)
Dennis Trewin (July 2000 – January 2007)
Brian Pink (March 2007 – January 2014)
David Kalisch (December 2014 – December 2019)
David Gruen (December 2019 – present)
References
|
https://en.wikipedia.org/wiki/Dennis%20Trewin
|
Dennis John Trewin (born 14 August 1946) is an Australian former public servant, who was the Australian Statistician, the head of the Australian Bureau of Statistics, between July 2000 and January 2007.
Trewin joined the ABS in 1966 as a statistics cadet. Between 1992 and 1995 he was the Deputy Government Statistician in Statistics New Zealand and a Deputy Australian Statistician from 1995 to 2000, when he was appointed as the Australian Statistician.
Trewin was the driving force behind the ABS's pioneering 'Measures of Australia's Progress' (MAP), a new system of integrated national progress measurement, linking economic, social, environmental and governance dimensions of progress, a project which gained wide respect among other national statistical offices and helped bring about the OECD's global project, 'Measuring the Progress of Societies'.
He holds other senior appointments in Australia such as non-judicial member of the Australian Electoral Commission and an adjunct professor at Swinburne University. He has held the office of president of the Statistical Society of Australia.
Internationally, in 2005 he completed a term as president of the International Statistical Institute having previously been vice-president and president of the International Association of Survey Statisticians. He is a past editor of the International Statistical Review. He is chairman of the global executive board at the World Bank, chairman of the Asia/Pacific Committee of Statistics, and chairman of the advisory board of Swinburne University of Technology's Swinburne Institute for Social Research.
Trewin holds honorary life memberships of the International Statistical Institute and the Statistical Society of Australia. He was listed as one of Australia's Smart 100 in a 2003 poll run by the Australian magazine The Bulletin.
Notes
References and further reading
1946 births
Living people
Presidents of the International Statistical Institute
Elected Members of the International Statistical Institute
Australian public servants
Australian statisticians
Officers of the Order of Australia
|
https://en.wikipedia.org/wiki/Slice%20genus
|
In mathematics, the slice genus of a smooth knot K in S^3 (sometimes called its Murasugi genus or 4-ball genus) is the least integer g such that K is the boundary of a connected, orientable 2-manifold S of genus g properly embedded in the 4-ball D^4 bounded by S^3.
More precisely, if S is required to be smoothly embedded, then this integer g is the smooth slice genus of K and is often denoted g_s(K) or g_4(K), whereas if S is required only to be topologically locally flatly embedded then g is the topologically locally flat slice genus of K. (There is no point considering g if S is required only to be a topological embedding, since the cone on K is a 2-disk with genus 0.) There can be an arbitrarily great difference between the smooth and the topologically locally flat slice genus of a knot; a theorem of Michael Freedman says that if the Alexander polynomial of K is 1, then the topologically locally flat slice genus of K is 0, but it can be proved in many ways (originally with gauge theory) that for every g there exist knots K such that the Alexander polynomial of K is 1 while the genus and the smooth slice genus of K both equal g.
The (smooth) slice genus of a knot K is bounded below by a quantity involving the Thurston–Bennequin invariant of K: the slice-Bennequin inequality states that tb(K) ≤ 2g_s(K) − 1, where tb(K) denotes the maximal Thurston–Bennequin number over Legendrian representatives of K.
The (smooth) slice genus is zero if and only if the knot is concordant to the unknot.
See also
Slice knot
Knot genus
Milnor conjecture (topology)
Further reading
Charles Livingston, "A survey of classical knot concordance", in: Handbook of Knot Theory, pp. 319–347, Elsevier, Amsterdam, 2005.
Knot theory
|
https://en.wikipedia.org/wiki/UEFA%20Cup%20and%20Europa%20League%20records%20and%20statistics
|
This page details statistics of the UEFA Cup and UEFA Europa League. Unless notified these statistics concern all seasons since inception of the UEFA Cup in the 1971–72 season, including qualifying rounds. The UEFA Cup replaced the Inter-Cities Fairs Cup in the 1971–72 season, so the Fairs Cup is not considered a UEFA competition, and hence clubs' records in the Fairs Cup are not considered part of their European record.
General performances
By club
A total of 29 clubs have won the tournament since its 1971 inception, with Sevilla being the only team to win it seven times, and the only one to win it three times in a row. A total of fifteen clubs have won the tournament multiple times: the aforementioned club, along with Liverpool, Juventus, Inter Milan, Atlético Madrid, Borussia Mönchengladbach, Tottenham Hotspur, Real Madrid, IFK Göteborg, Parma, Feyenoord, Chelsea, Porto and Eintracht Frankfurt. A total of 32 clubs have reached the final without ever managing to win the tournament.
Clubs from eleven countries have provided tournament winners. Spanish clubs have been the most successful, winning a total of fourteen titles. Italy and England are second with nine each, while the other multiple-time winners are Germany with seven, Netherlands with four, and Portugal, Sweden and Russia with two each. The only other countries to provide a tournament winner are Belgium, Ukraine, and Turkey. France, Scotland, Yugoslavia, Hungary, and Austria have all provided losing finalists.
The 1980 UEFA Cup saw four Bundesliga teams (Bayern Munich, Eintracht Frankfurt, Borussia Mönchengladbach, and VfB Stuttgart) make up all of the semi-final places, a unique record for one country. Frankfurt beat Mönchengladbach in the final.
Clubs from a total of 53 European cities have participated in the tournament final. Clubs from 27 cities have provided winners, with the clear city leaders being Sevilla and Madrid (seven and five respectively).
By nation
By city
By player
Most titles: José Antonio Reyes (5)
Atlético Madrid (2): 2009–10, 2011–12
Sevilla (3): 2013–14, 2014–15, 2015–16
By manager
Most titles: Unai Emery (4)
Sevilla (3): 2013–14, 2014–15, 2015–16
Villarreal (1): 2020–21
All-time top 25 UEFA Cup and Europa League rankings
Note: Clubs ranked on theoretical points total (2 points for a win, 1 point for a draw; results after extra time count, and all matches that went to penalties count as draws). Includes qualifying matches.
Number of participating clubs by country of the Europa League era
The following is a list of clubs that have played or will be playing in the Europa League group stage.
Season in Bold: Team qualified for knockout phase that season
Number of participating clubs in the group stage of the UEFA Cup era
Team in Bold: qualified for knockout phase
Club appearances
Performance review
By semi-final appearances
Consecutive appearances
As of 3 November 2022
Bold = Ongoing streak. Italics = Currently in Champions League, but may sti
|
https://en.wikipedia.org/wiki/Track%20geometry%20car
|
A track geometry car (also known as a track recording car) is an automated track inspection vehicle on a rail transport system used to test several parameters of the track geometry without obstructing normal railroad operations. Some of the parameters generally measured include position, curvature, alignment of the track, smoothness, and the crosslevel of the two rails. The cars use a variety of sensors, measuring systems, and data management systems to create a profile of the track being inspected.
History
Track geometry cars emerged in the 1920s when rail traffic became sufficiently dense that manual and visual inspections were no longer practical. Furthermore, the increased operating speeds of trains of that era required more meticulously maintained tracks. In 1925, the Chemins de fer de l'Est put a track geometry car into operation carrying an accelerograph developed by Emile Hallade, the inventor of the Hallade method. The accelerograph could record horizontal and vertical movement as well as roll. It was fitted with a manual button to record milestones and stations in the record. Such a car was developed by Travaux Strasbourg, now part of the GEISMAR Group.
By 1927 the Atchison, Topeka and Santa Fe Railway had a track car in operation followed by the Estrada de Ferro Central do Brasil in 1929. These two cars were built by Baldwin using the gyroscope technology of Sperry Corporation.
The first track geometry car in Germany appeared in 1929 and was operated by Deutsche Reichsbahn. The equipment for this car came from Anschütz in Kiel, a company currently owned by Raytheon. In Switzerland, the first track geometry recording equipment was integrated in an already existing dynamometer car in 1930.
One of the earliest track geometry cars was Car T2 used by the U.S. Department of Transportation's Project HISTEP (High-Speed Train Evaluation Program). It was built by the Budd Company for Project HISTEP to evaluate track conditions between Trenton and New Brunswick, NJ, where the DOT had established a section of track for testing high-speed trains, and accordingly, the T2 ran at 150 miles per hour or faster.
Many of the first regular service geometry cars were created from old passenger cars outfitted with the appropriate sensors, instruments, and recording equipment, coupled behind a locomotive. By at least 1977, self-propelled geometry cars had emerged. Southern Pacific's GC-1 (built by Plasser American) was among the first and utilized twelve measuring wheels in conjunction with strain gauges, computers, and spreadsheets to give managers a clear picture of the condition of the railroad. Even in 1981, the Encyclopedia of North American Railroads considered this the most advanced track geometry car in North America.
Advantages
Track inspection was originally done by track inspectors walking the railroad and visually inspecting every section of track. This was hazardous as it had to be done while trains were running. It was also manpower intensive, an
|
https://en.wikipedia.org/wiki/Artin%20billiard
|
In mathematics and physics, the Artin billiard is a type of a dynamical billiard first studied by Emil Artin in 1924. It describes the geodesic motion of a free particle on the non-compact Riemann surface H/Γ, where H is the upper half-plane endowed with the Poincaré metric and Γ = PSL(2, Z) is the modular group. It can be viewed as the motion on the fundamental domain of the modular group with the sides identified.
The system is notable in that it is an exactly solvable system that is strongly chaotic: it is not only ergodic, but is also strong mixing. As such, it is an example of an Anosov flow. Artin's paper used symbolic dynamics for analysis of the system.
The quantum mechanical version of Artin's billiard is also exactly solvable. The eigenvalue spectrum consists of a bound state and a continuous spectrum above the energy E = 1/4. The wave functions are given by Bessel functions.
Exposition
The motion studied is that of a free particle sliding frictionlessly, namely, one having the Hamiltonian
H(p, q) = (1/2m) p_i p_j g^{ij}(q),
where m is the mass of the particle, q^i are the coordinates on the manifold, p_i are the conjugate momenta:
p_i = m g_{ij} (dq^j/dt),
and
g^{ij}(q)
is the metric tensor on the manifold. Because this is the free-particle Hamiltonian, the solution to the Hamilton-Jacobi equations of motion are simply given by the geodesics on the manifold.
In the case of the Artin billiards, the metric is given by the canonical Poincaré metric
ds^2 = (dx^2 + dy^2) / y^2
on the upper half-plane. The non-compact Riemann surface H/Γ is a symmetric space, and is defined as the quotient of the upper half-plane modulo the action of the elements of Γ = PSL(2, Z) acting as Möbius transformations. The set
F = { z = x + iy : y > 0, |z| > 1, |x| < 1/2 }
is a fundamental domain for this action.
The manifold has, of course, one cusp. This is the same manifold, when taken as the complex manifold, that is the space on which elliptic curves and modular functions are studied.
References
E. Artin, "Ein mechanisches System mit quasi-ergodischen Bahnen", Abh. Math. Sem. d. Hamburgischen Universität, 3 (1924), pp. 170–175.
Chaotic maps
Ergodic theory
|
https://en.wikipedia.org/wiki/Chevalley%20basis
|
In mathematics, a Chevalley basis for a simple complex Lie algebra is a basis constructed by Claude Chevalley with the property that all structure constants are integers. Chevalley used these bases to construct analogues of Lie groups over finite fields, called Chevalley groups. The Chevalley basis is the Cartan–Weyl basis, but with a different normalization.
The generators of the Lie algebra are split into the generators H, indexed by the simple roots, and E, indexed by the roots and their negatives. The Cartan–Weyl basis may be written as
Defining the dual root or coroot of a root α as α^∨ = 2α/(α, α),
One may perform a change of basis to define
The Cartan integers are A_ij = (α_i, α_j^∨).
The resulting relations among the generators are the following:
where p in the last relation is the greatest positive integer such that β − pα is a root, and we take [E_α, E_β] = 0 if α + β is not a root.
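For reference, a standard form of the commutation relations among the Chevalley generators (in a common normalization; this is the general textbook form, reconstructed rather than quoted) is:

```latex
\begin{aligned}
{}[H_{\alpha}, H_{\beta}] &= 0, \\
{}[H_{\alpha}, E_{\beta}] &= A_{\alpha\beta}\, E_{\beta},
  \qquad A_{\alpha\beta} = \langle \beta, \alpha^{\vee} \rangle
  = \frac{2(\alpha,\beta)}{(\alpha,\alpha)}, \\
{}[E_{\alpha}, E_{-\alpha}] &= H_{\alpha}, \\
{}[E_{\alpha}, E_{\beta}] &= \pm (p+1)\, E_{\alpha+\beta}
  \quad \text{if } \alpha + \beta \text{ is a root, and } 0 \text{ otherwise,}
\end{aligned}
```

with p the greatest positive integer such that β − pα is a root.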
For determining the sign in the last relation one fixes an ordering of roots which respects addition, i.e., if α < β then α + γ < β + γ, provided that all four are roots. We then call (α, β) an extraspecial pair of roots if they are both positive and α is minimal among all roots γ that occur in pairs of positive roots (γ, δ) satisfying γ + δ = α + β. The sign in the last relation can be chosen arbitrarily whenever (α, β) is an extraspecial pair of roots. This then determines the signs for all remaining pairs of roots.
References
Lie groups
Lie algebras
|
https://en.wikipedia.org/wiki/Hadamard%27s%20dynamical%20system
|
In physics and mathematics, the Hadamard dynamical system (also called Hadamard's billiard or the Hadamard–Gutzwiller model) is a chaotic dynamical system, a type of dynamical billiards. Introduced by Jacques Hadamard in 1898, and studied by Martin Gutzwiller in the 1980s, it is the first dynamical system to be proven chaotic.
The system considers the motion of a free (frictionless) particle on the Bolza surface, i.e., a two-dimensional surface of genus two (a donut with two holes) and constant negative curvature; this is a compact Riemann surface. Hadamard was able to show that every particle trajectory moves away from every other: that all trajectories have a positive Lyapunov exponent.
Frank Steiner argues that Hadamard's study should be considered to be the first-ever examination of a chaotic dynamical system, and that Hadamard should be considered the first discoverer of chaos. He points out that the study was widely disseminated, and considers the impact of the ideas on the thinking of Albert Einstein and Ernst Mach.
The system is particularly important in that in 1963, Yakov Sinai, in studying Sinai's billiards as a model of the classical ensemble of a Boltzmann–Gibbs gas, was able to show that the motion of the atoms in the gas follow the trajectories in the Hadamard dynamical system.
Exposition
The motion studied is that of a free particle sliding frictionlessly on the surface, namely, one having the Hamiltonian
H(p, q) = (1/2m) p_i p_j g^{ij}(q),
where m is the mass of the particle, q^i are the coordinates on the manifold, p_i are the conjugate momenta:
p_i = m g_{ij} (dq^j/dt),
and
g^{ij}(q)
is the metric tensor on the manifold. Because this is the free-particle Hamiltonian, the solution to the Hamilton–Jacobi equations of motion are simply given by the geodesics on the manifold.
Hadamard was able to show that all geodesics are unstable, in that they all diverge exponentially from one another, as e^(λt) with positive Lyapunov exponent
λ = √(2E |K| / m),
with E the energy of a trajectory, and K being the constant negative curvature of the surface.
References
Chaotic maps
Ergodic theory
|
https://en.wikipedia.org/wiki/Strahler%20number
|
In mathematics, the Strahler number or Horton–Strahler number of a mathematical tree is a numerical measure of its branching complexity.
These numbers were first developed in hydrology, as a way of measuring the complexity of rivers and streams, by Robert E. Horton and Arthur Newell Strahler. In this application, they are referred to as the Strahler stream order and are used to define stream size based on a hierarchy of tributaries.
The same numbers also arise in the analysis of L-systems and of hierarchical biological structures such as (biological) trees and animal respiratory and circulatory systems, in register allocation for compilation of high-level programming languages and in the analysis of social networks.
Definition
All trees in this context are directed graphs, oriented from the root towards the leaves; in other words, they are arborescences. The degree of a node in a tree is just its number of children. One may assign a Strahler number to all nodes of a tree, in bottom-up order, as follows:
If the node is a leaf (has no children), its Strahler number is one.
If the node has one child with Strahler number i, and all other children have Strahler numbers less than i, then the Strahler number of the node is i again.
If the node has two or more children with Strahler number i, and no children with greater number, then the Strahler number of the node is i + 1.
The Strahler number of a tree is the number of its root node.
Algorithmically, these numbers may be assigned by performing a depth-first search and assigning each node's number in postorder.
The same numbers may also be generated via a pruning process in which the tree is simplified in a sequence of stages, where in each stage one removes all leaf nodes and all of the paths of degree-one nodes leading to leaves: the Strahler number of a node is the stage at which it would be removed by this process, and the Strahler number of a tree is the number of stages required to remove all of its nodes. Another equivalent definition of the Strahler number of a tree is that it is the height of the largest complete binary tree that can be homeomorphically embedded into the given tree; the Strahler number of a node in a tree is similarly the height of the largest complete binary tree that can be embedded below that node.
Any node with Strahler number i must have at least two descendants with Strahler number i − 1, at least four descendants with Strahler number i − 2, etc., and at least 2i − 1 leaf descendants. Therefore, in a tree with n nodes, the largest possible Strahler number is log2 n + 1. However, unless the tree forms a complete binary tree its Strahler number will be less than this bound. In an n-node binary tree, chosen uniformly at random among all possible binary trees, the expected index of the root is with high probability very close to log4 n.
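The bottom-up rules above translate directly into a short recursive procedure. The following is a minimal sketch in Python; the nested-list encoding of trees is an assumption made here for illustration, not part of the standard definition:

```python
# Strahler number of a rooted tree, following the bottom-up rules above.
# A tree is encoded (for illustration) as a nested list of its children;
# a leaf is the empty list [].
def strahler(children):
    if not children:                      # leaf: Strahler number 1
        return 1
    nums = sorted((strahler(c) for c in children), reverse=True)
    if len(nums) >= 2 and nums[0] == nums[1]:
        return nums[0] + 1                # two or more children attain the maximum
    return nums[0]                        # a single child attains the maximum

leaf = []
t2 = [leaf, leaf]        # complete binary tree of height 1: Strahler number 2
t3 = [t2, t2]            # complete binary tree of height 2: Strahler number 3
print(strahler(t3))      # 3
```

Because the recursion visits all children before their parent, this is exactly the postorder depth-first traversal mentioned above.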
Applications
River networks
In the application of the Strahler stream order to hydrology, each segment of a stream or river within a river network is treated as a node in a
|
https://en.wikipedia.org/wiki/Algebraic%20operation
|
In mathematics, a basic algebraic operation is any one of the common operations of elementary algebra, which include addition, subtraction, multiplication, division, raising to a whole number power, and taking roots (fractional power). These operations may be performed on numbers, in which case they are often called arithmetic operations. They may also be performed, in a similar way, on variables, algebraic expressions, and more generally, on elements of algebraic structures, such as groups and fields. An algebraic operation may also be defined simply as a function from a Cartesian power of a set to the same set.
The term algebraic operation may also be used for operations that may be defined by compounding basic algebraic operations, such as the dot product. In calculus and mathematical analysis, algebraic operation is also used for the operations that may be defined by purely algebraic methods. For example, exponentiation with an integer or rational exponent is an algebraic operation, but not the general exponentiation with a real or complex exponent. Also, the derivative is an operation that is not algebraic.
Notation
Multiplication symbols are usually omitted, and implied, when there is no operator between two variables or terms, or when a coefficient is used. For example, 3 × x2 is written as 3x2, and 2 × x × y is written as 2xy. Sometimes, multiplication symbols are replaced with either a dot or center-dot, so that x × y is written as either x . y or x · y. Plain text, programming languages, and calculators also use a single asterisk to represent the multiplication symbol, and it must be explicitly used; for example, 3x is written as 3 * x.
Rather than using the ambiguous division sign (÷), division is usually represented with a vinculum, a horizontal line, as in . In plain text and programming languages, a slash (also called a solidus) is used, e.g. 3 / (x + 1).
Exponents are usually formatted using superscripts, as in x2. In plain text, the TeX mark-up language, and some programming languages such as MATLAB and Julia, the caret symbol, ^, represents exponents, so x2 is written as x ^ 2. In programming languages such as Ada, Fortran, Perl, Python and Ruby, a double asterisk is used, so x2 is written as x ** 2.
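In a programming language, the conventions above look as follows; this is an illustrative Python fragment with arbitrarily chosen values:

```python
x, y = 3.0, 2.0
product  = 3 * x          # "3x": the asterisk must be written explicitly
dot_style = x * y         # corresponds to x · y
power    = x ** 2         # x^2; Python uses the double asterisk, not the caret
quotient = 3 / (x + 1)    # the vinculum becomes a slash plus parentheses
print(product, power, quotient)   # 9.0 9.0 0.75
```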
The plus–minus sign, ±, is used as a shorthand notation for two expressions written as one, representing one expression with a plus sign, the other with a minus sign. For example, y = x ± 1 represents the two equations y = x + 1 and y = x − 1. Sometimes, it is used for denoting a positive-or-negative term such as ±x.
Arithmetic vs algebraic operations
Algebraic operations work in the same way as arithmetic operations, as can be seen in the table below.
Note: the choice of letters is arbitrary; the examples would have been equally valid with any other pair of variable names.
Properties of arithmetic and algebraic operations
See also
Algebraic expression
Algebraic function
Elementary algebra
Factoring a quadratic expression
Ord
|
https://en.wikipedia.org/wiki/Engalsvik
|
Engelsviken is a village in Fredrikstad municipality, Norway. As of 2003 it is considered by Statistics Norway as a part of the Greater Lervik area.
In popular culture
In the television show Bones, a real human skeleton tied to a cross was found being used as a stage prop for a Black Metal band in Engelsvik.
References
Fredrikstad
Villages in Østfold
|
https://en.wikipedia.org/wiki/Digon
|
In geometry, a digon is a polygon with two sides (edges) and two vertices. Its construction is degenerate in a Euclidean plane because either the two sides would coincide or one or both would have to be curved; however, it can be easily visualised in elliptic space.
A regular digon has both angles equal and both sides equal and is represented by Schläfli symbol {2}. It may be constructed on a sphere as a pair of 180 degree arcs connecting antipodal points, when it forms a lune.
The digon is the simplest abstract polytope of rank 2.
A truncated digon, t{2} is a square, {4}. An alternated digon, h{2} is a monogon, {1}.
In Euclidean geometry
The digon can have one of two visual representations if placed in Euclidean space.
One representation is degenerate, and visually appears as a double-covering of a line segment. Appearing when the minimum distance between the two edges is 0, this form arises in several situations. This double-covering form is sometimes used for defining degenerate cases of some other polytopes; for example, a regular tetrahedron can be seen as an antiprism formed of such a digon. It can be derived from the alternation of a square (h{4}), as it requires two opposing vertices of said square to be connected. When higher-dimensional polytopes involving squares or other tetragonal figures are alternated, these digons are usually discarded and considered single edges.
A second visual representation, infinite in size, is as two parallel lines stretching to (and projectively meeting at; i.e. having vertices at) infinity, arising when the shortest distance between the two edges is greater than zero. This form arises in the representation of some degenerate polytopes, a notable example being the apeirogonal hosohedron, the limit of a general spherical hosohedron at infinity, composed of an infinite number of digons meeting at two antipodal points at infinity. However, as the vertices of these digons are at infinity and hence are not bound by closed line segments, this tessellation is usually not considered to be an additional regular tessellation of the Euclidean plane, even when its dual order-2 apeirogonal tiling (infinite dihedron) is.
Any straight-sided digon is regular even though it is degenerate, because its two edges are the same length and its two angles are equal (both being zero degrees). As such, the regular digon is a constructible polygon.
Some definitions of a polygon do not consider the digon to be a proper polygon because of its degeneracy in the Euclidean case.
In elementary polyhedra
A digon as a face of a polyhedron is degenerate because it is a degenerate polygon. But sometimes it can have a useful topological existence in transforming polyhedra.
As a spherical lune
A spherical lune is a digon whose two vertices are antipodal points on the sphere.
A spherical polyhedron constructed from such digons is called a hosohedron.
Theoretical significance
The digon is an important construct in the topologica
|
https://en.wikipedia.org/wiki/Ornstein%20isomorphism%20theorem
|
In mathematics, the Ornstein isomorphism theorem is a deep result in ergodic theory. It states that if two Bernoulli schemes have the same Kolmogorov entropy, then they are isomorphic. The result, given by Donald Ornstein in 1970, is important because it states that many systems previously believed to be unrelated are in fact isomorphic; these include all finite stationary stochastic processes, including Markov chains and subshifts of finite type, Anosov flows and Sinai's billiards, ergodic automorphisms of the n-torus, and the continued fraction transform.
Discussion
The theorem is actually a collection of related theorems. The first theorem states that if two different Bernoulli shifts have the same Kolmogorov entropy, then they are isomorphic as dynamical systems. The third theorem extends this result to flows: namely, that there exists a flow $(T_t)$ such that the time-one map $T_1$ is a Bernoulli shift. The fourth theorem states that, for a given fixed entropy, this flow is unique, up to a constant rescaling of time. The fifth theorem states that there is a single, unique flow (up to a constant rescaling of time) that has infinite entropy. The phrase "up to a constant rescaling of time" means simply that if $(T_t)$ and $(S_t)$ are two Bernoulli flows with the same entropy, then $S_t = T_{ct}$ for some constant c. The developments also included proofs that factors of Bernoulli shifts are isomorphic to Bernoulli shifts, and gave criteria for a given measure-preserving dynamical system to be isomorphic to a Bernoulli shift.
A corollary of these results is a solution to the root problem for Bernoulli shifts: for example, given a shift T, there is another shift $\sqrt{T}$ whose square is isomorphic to T.
History
The question of isomorphism dates to von Neumann, who asked if the two Bernoulli schemes BS(1/2, 1/2) and BS(1/3, 1/3, 1/3) were isomorphic or not. In 1959, Ya. Sinai and Kolmogorov replied in the negative, showing that two different schemes cannot be isomorphic if they do not have the same entropy. Specifically, they showed that the entropy of a Bernoulli scheme BS(p1, p2, ..., pn) is given by

$$H = -\sum_{i=1}^{n} p_i \log p_i.$$
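Since the entropy of a Bernoulli scheme is just the Shannon entropy of its probability vector, the Sinai–Kolmogorov criterion is a one-line computation. The sketch below (Python; the example schemes are chosen here for illustration) shows that BS(1/2, 1/2) and BS(1/3, 1/3, 1/3) have different entropies, while BS(1/4, 1/4, 1/4, 1/4) and BS(1/2, 1/8, 1/8, 1/8, 1/8) have the same entropy and are therefore isomorphic by the Ornstein theorem:

```python
from math import log2, isclose

def bernoulli_entropy(probs):
    """Kolmogorov entropy of the Bernoulli scheme BS(p1, ..., pn), in bits."""
    return -sum(p * log2(p) for p in probs)

h_half  = bernoulli_entropy([1/2, 1/2])          # 1 bit
h_third = bernoulli_entropy([1/3, 1/3, 1/3])     # log2(3) ≈ 1.585 bits
print(h_half, h_third)                           # different, so not isomorphic

# Equal entropies (2 bits each), hence isomorphic by Ornstein's theorem:
print(isclose(bernoulli_entropy([1/4] * 4),
              bernoulli_entropy([1/2, 1/8, 1/8, 1/8, 1/8])))   # True
```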
The Ornstein isomorphism theorem, proved by Donald Ornstein in 1970, states that two Bernoulli schemes with the same entropy are isomorphic. The result is sharp, in that very similar, non-scheme systems do not have this property; specifically, there exist Kolmogorov systems with the same entropy that are not isomorphic. Ornstein received the Bôcher prize for this work.
A simplified proof of the isomorphism theorem for symbolic Bernoulli schemes was given by Michael S. Keane and M. Smorodinsky in 1979.
References
Further reading
Steven Kalikow, Randall McCutcheon (2010) Outline of Ergodic Theory, Cambridge University Press
Donald Ornstein (2008), "Ornstein theory" Scholarpedia, 3(3):3957.
Daniel J. Rudolph (1990) Fundamentals of measurable dynamics: Ergodic theory on Lebesgue spaces, Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1990.
Ergodic theory
Symbolic
|
https://en.wikipedia.org/wiki/Poussin%20proof
|
In number theory, the Poussin proof is the proof of an identity related to the fractional part of a ratio.
In 1838, Peter Gustav Lejeune Dirichlet proved an approximate formula for the average number of divisors of all the numbers from 1 to n:

$$\frac{1}{n}\sum_{k=1}^{n} d(k) \approx \ln n + 2\gamma - 1,$$

where d represents the divisor function, and γ represents the Euler–Mascheroni constant.
In 1898, Charles Jean de la Vallée-Poussin proved that if a large number n is divided by all the primes up to n, then the average fraction by which the quotient falls short of the next whole number is γ:

$$\lim_{n\to\infty}\frac{1}{\pi(n)}\sum_{p\le n}\left(1 - \left\{\frac{n}{p}\right\}\right) = \gamma,$$

where {x} represents the fractional part of x, and π represents the prime-counting function.
For example, if we divide 29 by 2, we get 14.5, which falls short of 15 by 0.5.
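The identity lends itself to a direct numerical check. The following Python sketch (with a naive sieve; the cutoff 100000 is arbitrary) averages the shortfall 1 − {n/p} over all primes p ≤ n; convergence to γ ≈ 0.5772 is slow, so the value printed for moderate n is only a rough approximation:

```python
from math import floor

def primes_upto(n):
    """Naive sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

n = 100_000
shortfalls = [1 - (n / p - floor(n / p)) for p in primes_upto(n)]
average = sum(shortfalls) / len(shortfalls)
print(average)   # tends toward the Euler–Mascheroni constant 0.5772... as n grows
```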
References
Dirichlet, G. L. "Sur l'usage des séries infinies dans la théorie des nombres", Journal für die reine und angewandte Mathematik 18 (1838), pp. 259–274. Cited in MathWorld article "Divisor Function" below.
de la Vallée Poussin, C.-J. Untitled communication. Annales de la Societe Scientifique de Bruxelles 22 (1898), pp. 84–90. Cited in MathWorld article "Euler-Mascheroni Constant" below.
External links
Number theory
|
https://en.wikipedia.org/wiki/Markov%20renewal%20process
|
Markov renewal processes are a class of random processes in probability and statistics that generalize the class of Markov jump processes. Other classes of random processes, such as Markov chains and Poisson processes, can be derived as special cases among the class of Markov renewal processes, while Markov renewal processes are special cases among the more general class of renewal processes.
Definition
In the context of a jump process that takes states in a state space $S$, consider the set of random variables $(X_n, T_n)$, where $T_n$ represents the jump times and $X_n$ represents the associated states in the sequence of states (see Figure). Let the sequence of inter-arrival times be $\tau_n = T_n - T_{n-1}$. In order for the sequence $(X_n, T_n)$ to be considered a Markov renewal process the following condition should hold:

$$\Pr(\tau_{n+1} \le t,\, X_{n+1} = j \mid (X_0, T_0), (X_1, T_1), \ldots, (X_n, T_n)) = \Pr(\tau_{n+1} \le t,\, X_{n+1} = j \mid X_n)$$
Relation to other stochastic processes
Let $\tau_n$ and $(X_n, T_n)$ be as defined in the previous statement. Defining a new stochastic process $Y_t := X_n$ for $t \in [T_n, T_{n+1})$, the process $Y_t$ is called a semi-Markov process. The process is Markovian only at the specified jump instants, justifying the name semi-Markov. (See also: hidden semi-Markov model.)
A semi-Markov process (defined in the above bullet point) in which all the holding times are exponentially distributed is called a continuous-time Markov chain. In other words, if the inter-arrival times are exponentially distributed and if the waiting time in a state and the next state reached are independent, we have a continuous-time Markov chain.
The sequence in the Markov renewal process is a discrete-time Markov chain. In other words, if the time variables are ignored in the Markov renewal process equation, we end up with a discrete-time Markov chain.
If the sequence of inter-arrival times $\tau_n$ is independent and identically distributed, and if their distribution does not depend on the state $X_n$, then the process is a renewal process. So, if the states are ignored and we have a chain of iid times, then we have a renewal process.
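A short simulation makes the definitions concrete. The sketch below (Python; the two-state kernel and the rates are arbitrary illustrative choices, and since the holding times are exponential the result is in fact a continuous-time Markov chain) generates the sequence of states and jump times:

```python
import random

random.seed(0)   # reproducible run

states = ["A", "B"]
P = {"A": {"A": 0.3, "B": 0.7},     # transition matrix of the embedded chain
     "B": {"A": 0.6, "B": 0.4}}
rate = {"A": 1.0, "B": 2.0}         # exponential holding-time rates per state

def simulate(n_jumps, x0="A"):
    """Return jump times T_0..T_n and states X_0..X_n of the process."""
    t, x = 0.0, x0
    times, chain = [t], [x]
    for _ in range(n_jumps):
        t += random.expovariate(rate[x])   # inter-arrival (holding) time
        x = random.choices(states, weights=[P[x][s] for s in states])[0]
        times.append(t)
        chain.append(x)
    return times, chain

times, chain = simulate(10)
print(list(zip(chain, [round(t, 2) for t in times])))
```

Ignoring the times recovers the embedded discrete-time Markov chain; replacing the exponential holding times by arbitrary state-dependent distributions gives a general semi-Markov process.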
See also
Markov process
Renewal theory
Variable-order Markov model
Hidden semi-Markov model
References
Markov processes
|
https://en.wikipedia.org/wiki/Tom%20Maibaum
|
Thomas Stephen Edward Maibaum Fellow of the Royal Society of Arts (FRSA) is a computer scientist.
Maibaum has a Bachelor of Science (B.Sc.) undergraduate degree in pure mathematics from the University of Toronto, Canada (1970), and a Doctor of Philosophy (Ph.D.) in computer science from Queen Mary and Royal Holloway Colleges, University of London, England (1974).
Maibaum has held academic posts at Imperial College, London, King's College London (UK) and McMaster University (Canada). His research interests have concentrated on the theory of specification, together with its application in different contexts, in the general area of software engineering.
From 1996 to 2005, he was involved with developing international standards in programming and informatics, as a member of the International Federation for Information Processing (IFIP) IFIP Working Group 2.1 on Algorithmic Languages and Calculi, which specified, maintains, and supports the programming languages ALGOL 60 and ALGOL 68.
He is a Fellow of the Institution of Engineering and Technology and the Royal Society of Arts.
References
External links
KCL home page
, McMaster University
Living people
20th-century Hungarian people
Hungarian expatriates in Canada
University of Toronto alumni
Hungarian expatriates in the United Kingdom
Alumni of Queen Mary University of London
Alumni of Royal Holloway, University of London
Academics of Imperial College London
Academics of King's College London
Hungarian computer scientists
Formal methods people
Academic staff of McMaster University
Fellows of the Institution of Engineering and Technology
Year of birth missing (living people)
|
https://en.wikipedia.org/wiki/Whitney%20extension%20theorem
|
In mathematics, in particular in mathematical analysis, the Whitney extension theorem is a partial converse to Taylor's theorem. Roughly speaking, the theorem asserts that if A is a closed subset of a Euclidean space, then it is possible to extend a given function defined on A in such a way as to have prescribed derivatives at the points of A. It is a result of Hassler Whitney.
Statement
A precise statement of the theorem requires careful consideration of what it means to prescribe the derivative of a function on a closed set. One difficulty, for instance, is that closed subsets of Euclidean space in general lack a differentiable structure. The starting point, then, is an examination of the statement of Taylor's theorem.
Given a real-valued Cm function f(x) on Rn, Taylor's theorem asserts that for each a, x, y ∈ Rn, there is a function Rα(x,y) approaching 0 uniformly as x,y → a such that

$$f(x) = \sum_{|\alpha| \le m} \frac{D^\alpha f(y)}{\alpha!} (x - y)^\alpha + \sum_{|\alpha| = m} R_\alpha(x, y) \frac{(x - y)^\alpha}{\alpha!} \qquad (1)$$

where the sum is over multi-indices α.

Let fα = Dαf for each multi-index α. Differentiating (1) with respect to x, and possibly replacing R as needed, yields

$$f_\alpha(x) = \sum_{|\beta| \le m - |\alpha|} \frac{f_{\alpha+\beta}(y)}{\beta!} (x - y)^\beta + R_\alpha(x, y) \qquad (2)$$

where Rα is o(|x − y|m−|α|) uniformly as x,y → a.
Note that (2) may be regarded as purely a compatibility condition between the functions fα which must be satisfied in order for these functions to be the coefficients of the Taylor series of the function f. It is this insight which facilitates the following statement:
Theorem. Suppose that fα are a collection of functions on a closed subset A of Rn for all multi-indices α with |α| ≤ m satisfying the compatibility condition (2) at all points x, y, and a of A. Then there exists a function F(x) of class Cm such that:
F = f0 on A.
DαF = fα on A.
F is real-analytic at every point of Rn − A.
Proofs are given in the original paper of Whitney and in a number of later expositions.
Extension in a half space
Seeley proved a sharpening of the Whitney extension theorem in the special case of a half space. A smooth function on the half space Rn,+ of points where xn ≥ 0 is a smooth function f on the interior xn > 0 for which the derivatives ∂α f extend to continuous functions on the half space. On the boundary xn = 0, f restricts to a smooth function. By Borel's lemma, f can be extended to a
smooth function on the whole of Rn. Since Borel's lemma is local in nature, the same argument shows that if Ω is a (bounded or unbounded) domain in Rn with smooth boundary, then any smooth function on the closure of Ω can be extended to a smooth function on Rn.
Seeley's result for a half line gives a uniform extension map

$$E : C^\infty([0,\infty)) \to C^\infty(\mathbf{R}),$$

which is linear, continuous (for the topology of uniform convergence of functions and their derivatives on compacta) and takes functions supported in [0,R] into functions supported in [−R,R].
To define E, set
where φ is a smooth function of compact support on R equal to 1 near 0 and the sequences (am), (bm) satisfy:
tends to ;
for with the sum absolutely convergent.
A solution to this system of equations can be obtained by taking and seeking an entire function
such that That such a function can b
|
https://en.wikipedia.org/wiki/Method%20of%20moments%20%28probability%20theory%29
|
In probability theory, the method of moments is a way of proving convergence in distribution by proving convergence of a sequence of moment sequences. Suppose X is a random variable and that all of the moments

$$\mathbb{E}[X^k], \quad k = 1, 2, \ldots$$

exist. Further suppose the probability distribution of X is completely determined by its moments, i.e., there is no other probability distribution with the same sequence of moments
(cf. the problem of moments). If $X_1, X_2, \ldots$ is a sequence of random variables such that

$$\lim_{n\to\infty} \mathbb{E}[X_n^k] = \mathbb{E}[X^k]$$

for all values of k, then the sequence {Xn} converges to X in distribution.
The method of moments was introduced by Pafnuty Chebyshev for proving the central limit theorem; Chebyshev cited earlier contributions by Irénée-Jules Bienaymé. More recently, it has been applied by Eugene Wigner to prove Wigner's semicircle law, and has since found numerous applications in the theory of random matrices.
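The mechanism can be illustrated exactly, without random sampling. The Python sketch below (an illustrative check, not a proof) computes the first four moments of a standardized Binomial(n, 1/2) sum in closed form and compares them with the standard normal moments 0, 1, 0, 3, the computation underlying the central limit theorem for coin flips:

```python
from math import comb, sqrt

def standardized_binomial_moment(n, k):
    """Exact k-th moment of (S_n - n/2) / sqrt(n/4) for S_n ~ Binomial(n, 1/2)."""
    mean, sd = n / 2, sqrt(n / 4)
    return sum(comb(n, j) * 0.5 ** n * ((j - mean) / sd) ** k
               for j in range(n + 1))

# As n grows, the moments approach those of N(0, 1): 0, 1, 0, 3.
for k, gaussian in [(1, 0), (2, 1), (3, 0), (4, 3)]:
    print(k, standardized_binomial_moment(200, k), gaussian)
```

For instance, the fourth moment equals 3 − 2/n exactly, so it converges to the Gaussian value 3.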
Notes
Moment (mathematics)
|
https://en.wikipedia.org/wiki/Carter%20subgroup
|
In mathematics, especially in the field of group theory, a Carter subgroup of a finite group G is a self-normalizing subgroup of G that is nilpotent. These subgroups were introduced by Roger Carter, and marked the beginning of the post-1960 theory of solvable groups.
Carter proved that any finite solvable group has a Carter subgroup, and that all its Carter subgroups are conjugate subgroups (and therefore isomorphic). If a group is not solvable it need not have any Carter subgroups: for example, the alternating group A5 of order 60 has no Carter subgroups. Vdovin showed that even if a finite group is not solvable then any two Carter subgroups are conjugate.
A Carter subgroup is a maximal nilpotent subgroup, because of the normalizer condition for nilpotent groups, but not all maximal nilpotent subgroups are Carter subgroups. For example, any non-identity proper subgroup of the nonabelian group of order six is a maximal nilpotent subgroup, but only those of order two are Carter subgroups. Every subgroup containing a Carter subgroup of a soluble group is also self-normalizing, and a soluble group is generated by any Carter subgroup and its nilpotent residual.
Gaschütz viewed the Carter subgroups as analogues of Sylow subgroups and Hall subgroups, and unified their treatment with the theory of formations. In the language of formations, a Sylow p-subgroup is a covering group for the formation of p-groups, a Hall π-subgroup is a covering group for the formation of π-groups, and a Carter subgroup is a covering group for the formation of nilpotent groups. Together with an important generalization, Schunck classes, and an important dualization, Fischer classes, formations formed the major research themes of the late 20th century in the theory of finite soluble groups.
A dual notion to Carter subgroups was introduced by Bernd Fischer. A Fischer subgroup of a group is a nilpotent subgroup containing every other nilpotent subgroup it normalizes. A Fischer subgroup is a maximal nilpotent subgroup, but not every maximal nilpotent subgroup is a Fischer subgroup: again the nonabelian group of order six provides an example, as every non-identity proper subgroup is a maximal nilpotent subgroup, but only the subgroup of order three is a Fischer subgroup.
See also
Cartan subalgebra
Cartan subgroup
References
, especially Kap VI, §12, pp736–743
translation in Siberian Mathematical Journal 47 (2006), no. 4, 597–600.
Finite groups
Solvable groups
Subgroup properties
|
https://en.wikipedia.org/wiki/Ergodicity
|
In mathematics, ergodicity expresses the idea that a point of a moving system, either a dynamical system or a stochastic process, will eventually visit all parts of the space that the system moves in, in a uniform and random sense. This implies that the average behavior of the system can be deduced from the trajectory of a "typical" point. Equivalently, a sufficiently large collection of random samples from a process can represent the average statistical properties of the entire process. Ergodicity is a property of the system; it is a statement that the system cannot be reduced or factored into smaller components. Ergodic theory is the study of systems possessing ergodicity.
Ergodic systems occur in a broad range of systems in physics and in geometry. This can be roughly understood to be due to a common phenomenon: the motion of particles along geodesics on a hyperbolic manifold is divergent; when that manifold is compact, that is, of finite size, those orbits return to the same general area, eventually filling the entire space.
Ergodic systems capture the common-sense, everyday notions of randomness, such as the idea that smoke will come to fill all of a smoke-filled room, that a block of metal will eventually come to have the same temperature throughout, or that flips of a fair coin come up heads and tails about half the time each. A stronger concept than ergodicity is that of mixing, which aims to mathematically describe the common-sense notions of mixing, such as mixing drinks or mixing cooking ingredients.
The proper mathematical formulation of ergodicity is founded on the formal definitions of measure theory and dynamical systems, and rather specifically on the notion of a measure-preserving dynamical system. The origins of ergodicity lie in statistical physics, where Ludwig Boltzmann formulated the ergodic hypothesis.
Informal explanation
Ergodicity occurs in broad settings in physics and mathematics. All of these settings are unified by a common mathematical description, that of the measure-preserving dynamical system. Equivalently, ergodicity can be understood in terms of stochastic processes. They are one and the same, despite using dramatically different notation and language.
Measure-preserving dynamical systems
The mathematical definition of ergodicity aims to capture ordinary every-day ideas about randomness. This includes ideas about systems that move in such a way as to (eventually) fill up all of space, such as diffusion and Brownian motion, as well as common-sense notions of mixing, such as mixing paints, drinks, cooking ingredients, industrial process mixing, smoke in a smoke-filled room, the dust in Saturn's rings and so on. To provide a solid mathematical footing, descriptions of ergodic systems begin with the definition of a measure-preserving dynamical system. This is written as $(X, \mathcal{A}, \mu, T)$.
The set $X$ is understood to be the total space to be filled: the mixing bowl, the smoke-filled room, etc. The measure $\mu$ is understood to define the n
|
https://en.wikipedia.org/wiki/Collocation%20method
|
In mathematics, a collocation method is a method for the numerical solution of ordinary differential equations, partial differential equations and integral equations. The idea is to choose a finite-dimensional space of candidate solutions (usually polynomials up to a certain degree) and a number of points in the domain (called collocation points), and to select that solution which satisfies the given equation at the collocation points.
Ordinary differential equations
Suppose that the ordinary differential equation

$$y'(t) = f(t, y(t)), \quad y(t_0) = y_0,$$

is to be solved over the interval $[t_0, t_0 + h]$. Choose n values $c_k$ with 0 ≤ c1 < c2 < … < cn ≤ 1.
The corresponding (polynomial) collocation method approximates the solution y by the polynomial p of degree n which satisfies the initial condition $p(t_0) = y_0$, and the differential equation

$$p'(t_k) = f(t_k, p(t_k))$$

at all collocation points $t_k = t_0 + c_k h$ for $k = 1, \ldots, n$. This gives n + 1 conditions, which matches the n + 1 parameters needed to specify a polynomial of degree n.
All these collocation methods are in fact implicit Runge–Kutta methods. The coefficients ck in the Butcher tableau of a Runge–Kutta method are the collocation points. However, not all implicit Runge–Kutta methods are collocation methods.
Example: The trapezoidal rule
Pick, as an example, the two collocation points c1 = 0 and c2 = 1 (so n = 2). The collocation conditions are

$$p(t_0) = y_0,$$
$$p'(t_0) = f(t_0, p(t_0)),$$
$$p'(t_0 + h) = f(t_0 + h, p(t_0 + h)).$$

There are three conditions, so p should be a polynomial of degree 2. Write p in the form

$$p(t) = \alpha (t - t_0)^2 + \beta (t - t_0) + \gamma$$

to simplify the computations. Then the collocation conditions can be solved to give the coefficients

$$\alpha = \frac{f(t_1, y_1) - f(t_0, y_0)}{2h}, \quad \beta = f(t_0, y_0), \quad \gamma = y_0.$$
The collocation method is now given (implicitly) by

$$y_1 = y_0 + \frac{h}{2} \bigl( f(t_0, y_0) + f(t_1, y_1) \bigr),$$

where y1 = p(t0 + h) is the approximate solution at t = t1 = t0 + h.
This method is known as the "trapezoidal rule" for differential equations. Indeed, this method can also be derived by rewriting the differential equation as

$$y(t) = y(t_0) + \int_{t_0}^{t} f(\tau, y(\tau))\, d\tau$$

and approximating the integral on the right-hand side by the trapezoidal rule for integrals.
Other examples
The Gauss–Legendre methods use the points of Gauss–Legendre quadrature as collocation points. The Gauss–Legendre method based on s points has order 2s. All Gauss–Legendre methods are A-stable.
In fact, one can show that the order of a collocation method corresponds to the order of the quadrature rule that one would get using the collocation points as weights.
Orthogonal collocation method
In direct collocation method, we are essentially performing variational calculus with the finite-dimensional subspace of piecewise linear functions (as in trapezoidal rule), or cubic functions, or other piecewise polynomial functions. In orthogonal collocation method, we instead use the finite-dimensional subspace spanned by the first N vectors in some orthogonal polynomial basis, such as the Legendre polynomials.
Notes
References
.
.
.
.
Curve fitting
Numerical differential equations
|
https://en.wikipedia.org/wiki/Topological%20entropy
|
In mathematics, the topological entropy of a topological dynamical system is a nonnegative extended real number that is a measure of the complexity of the system. Topological entropy was first introduced in 1965 by Adler, Konheim and McAndrew. Their definition was modelled after the definition of the Kolmogorov–Sinai, or metric entropy. Later, Dinaburg and Rufus Bowen gave a different, weaker definition reminiscent of the Hausdorff dimension. The second definition clarified the meaning of the topological entropy: for a system given by an iterated function, the topological entropy represents the exponential growth rate of the number of distinguishable orbits of the iterates. An important variational principle relates the notions of topological and measure-theoretic entropy.
Definition
A topological dynamical system consists of a Hausdorff topological space X (usually assumed to be compact) and a continuous self-map f. Its topological entropy is a nonnegative extended real number that can be defined in various ways, which are known to be equivalent.
Definition of Adler, Konheim, and McAndrew
Let X be a compact Hausdorff topological space. For any finite open cover C of X, let H(C) be the logarithm (usually to base 2) of the smallest number of elements of C that cover X. For two covers C and D, let $C \vee D$ be their (minimal) common refinement, which consists of all the non-empty intersections of a set from C with a set from D, and similarly for multiple covers.
For any continuous map f: X → X, the following limit exists:

$$H(f, C) = \lim_{n\to\infty} \frac{1}{n} H\bigl(C \vee f^{-1}C \vee \cdots \vee f^{-(n-1)}C\bigr).$$

Then the topological entropy of f, denoted h(f), is defined to be the supremum of H(f,C) over all possible finite covers C of X.
Interpretation
The parts of C may be viewed as symbols that (partially) describe the position of a point x in X: all points x ∈ Ci are assigned the symbol Ci. Imagine that the position of x is (imperfectly) measured by a certain device and that each part of C corresponds to one possible outcome of the measurement. The quantity $H\bigl(C \vee f^{-1}C \vee \cdots \vee f^{-(n-1)}C\bigr)$ then represents the logarithm of the minimal number of "words" of length n needed to encode the points of X according to the behavior of their first n − 1 iterates under f, or, put differently, the total number of "scenarios" of the behavior of these iterates, as "seen" by the partition C. Thus the topological entropy is the average (per iteration) amount of information needed to describe long iterations of the map f.
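For symbolic systems the growth-rate interpretation can be computed directly. The sketch below (Python) uses the golden-mean shift, the set of binary sequences with no two consecutive 1s, as an illustrative example: counting the admissible words of length n and taking log/n approximates its topological entropy, which is known to be log φ ≈ 0.4812 for the golden ratio φ:

```python
from math import log

def count_words(n):
    """Number of binary words of length n containing no '11' (golden-mean shift)."""
    end0, end1 = 1, 1                       # length-1 words ending in 0 and in 1
    for _ in range(n - 1):
        end0, end1 = end0 + end1, end0      # 0 may follow anything; 1 only follows 0
    return end0 + end1

n = 60
estimate = log(count_words(n)) / n
print(estimate)   # approaches log((1 + sqrt(5)) / 2) ≈ 0.4812
```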
Definition of Bowen and Dinaburg
This definition uses a metric on X (actually, a uniform structure would suffice). This is a narrower definition than that of Adler, Konheim, and McAndrew, as it requires the additional metric structure on the topological space (but is independent of the choice of metrics generating the given topology). However, in practice, the Bowen-Dinaburg topological entropy is usually much easier to calculate.
Let (X, d) be a compact metric space and f: X → X be a continuous map. For each natural number n, a new metric dn is defined on X by the formul
|
https://en.wikipedia.org/wiki/K%C3%B6nig%27s%20theorem
|
There are several theorems associated with the name König or Kőnig:
König's theorem (set theory), named after the Hungarian mathematician Gyula Kőnig.
König's theorem (complex analysis), named after the Hungarian mathematician Gyula König.
Kőnig's theorem (graph theory), named after his son Dénes Kőnig.
König's theorem (kinetics), named after the German mathematician Samuel König.
See also
Kőnig's lemma (also known as Kőnig's infinity lemma), named after Dénes Kőnig
|
https://en.wikipedia.org/wiki/Coefficient%20diagram%20method
|
In control theory, the coefficient diagram method (CDM) is an algebraic approach applied to a polynomial loop in the parameter space, where a special diagram called a "coefficient diagram" is used as the vehicle to carry the necessary information, and as the criterion of good design. The performance of the closed loop system is monitored by the coefficient diagram.
The most considerable advantages of CDM can be listed as follows:
The design procedure is easily understandable, systematic and useful. The coefficients of the CDM controller polynomials can therefore be determined more easily than those of a PID or other type of controller, which makes it possible even for a new designer to control almost any kind of system.
There are explicit relations between the performance parameters specified before the design and the coefficients of the controller polynomials. For this reason, the designer can easily realize many control systems with different performance properties for a given control problem, over a wide range of freedom.
In PID control, different tuning methods must be developed for time-delay processes with different properties. In the CDM technique, by contrast, a single design procedure suffices. This is an outstanding advantage.
It is particularly hard to design robust controllers realizing the desired performance properties for unstable, integrating and oscillatory processes having poles near the imaginary axis. It has been reported that successful designs can be achieved even in these cases by using CDM.
It is theoretically proven that CDM design is equivalent to LQ design with proper state augmentation. Thus, CDM can be considered an "improved LQG", because the order of the controller is smaller and weight selection rules are also given.
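The explicit relations mentioned in the list above run through Manabe's stability indices γ_i = a_i²/(a_{i+1}·a_{i−1}) and the equivalent time constant τ = a_1/a_0 of the closed-loop characteristic polynomial P(s) = Σ a_i s^i. Given target values of τ and the γ_i (Manabe's standard form takes γ_1 = 2.5 and γ_i = 2 for i ≥ 2), the target coefficients follow as a_i = a_0 τ^i / Π_{j=1}^{i−1} γ_j^{i−j}. A minimal sketch of this construction (the function name and interface are mine, not a standard API):

```python
def target_poly(tau, gammas, a0=1.0):
    """CDM target characteristic polynomial coefficients a_0 ... a_n
    (ascending powers of s), built from the equivalent time constant
    tau and the stability indices gamma_1 ... gamma_{n-1} via
    a_i = a0 * tau**i / prod_{j=1}^{i-1} gamma_j**(i-j)."""
    coeffs = [a0, a0 * tau]
    for i in range(2, len(gammas) + 2):
        denom = 1.0
        for j in range(1, i):
            denom *= gammas[j - 1] ** (i - j)
        coeffs.append(a0 * tau ** i / denom)
    return coeffs

# Manabe's standard form for a 5th-degree target polynomial.
a = target_poly(tau=0.5, gammas=[2.5, 2.0, 2.0, 2.0])
# Sanity check: recover the indices, gamma_i = a_i^2 / (a_{i+1} * a_{i-1}).
recovered = [a[i] ** 2 / (a[i + 1] * a[i - 1]) for i in range(1, 5)]
```

Plotting the a_i on a logarithmic scale against i gives the coefficient diagram itself; the γ_i measure its convexity, which is what the method monitors for stability and robustness.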
It is usually required that the controller for a given plant should be designed under some practical limitations.
The controller is desired to be of minimum degree, minimum phase (if possible) and stable, with sufficient bandwidth and within power rating limitations. If the controller is designed without considering these limitations, its robustness will be very poor, even though the stability and time response requirements are met. A CDM controller designed with all these limitations in mind is of the lowest possible degree, has a convenient bandwidth, and produces a unit step response without overshoot. These properties guarantee robustness, sufficient damping of disturbance effects, and low cost.
Although the main principles of CDM have been known since the 1950s, the first systematic method was proposed by Shunji Manabe. He developed a new method that easily builds a target characteristic polynomial to meet the desired time response. CDM is an algebraic approach combining classical and modern control theories and uses polynomial representation in the mathematical expression. The advantages of the cl
|