https://en.wikipedia.org/wiki/Half-space
|
Half-space may refer to:
Half-space (geometry), either of the two parts into which a plane divides Euclidean space
(Poincaré) Half-space model, a model of hyperbolic geometry using a Euclidean half-space
Siegel upper half-space, a set of complex matrices with positive definite imaginary part
Half-space (punctuation), a spacing character half the width of a regular space
Half-space model (oceanography), an estimate for seabed height in areas without significant subduction
|
https://en.wikipedia.org/wiki/Williams%27s%20p%20%2B%201%20algorithm
|
In computational number theory, Williams's p + 1 algorithm is an integer factorization algorithm, one of the family of algebraic-group factorisation algorithms. It was invented by Hugh C. Williams in 1982.
It works well if the number N to be factored contains one or more prime factors p such that p + 1 is smooth, i.e. p + 1 contains only small factors. It uses Lucas sequences to perform exponentiation in a quadratic field.
It is analogous to Pollard's p − 1 algorithm.
Algorithm
Choose some integer A greater than 2 which characterizes the Lucas sequence:
V_0 = 2,  V_1 = A,  V_j = A·V_(j−1) − V_(j−2),
where all operations are performed modulo N.
Then any odd prime p divides gcd(N, V_M − 2) whenever M is a multiple of p − (D/p), where D = A^2 − 4 and (D/p) is the Jacobi symbol.
We require that (D/p) = −1, that is, D should be a quadratic non-residue modulo p. But as we don't know p beforehand, more than one value of A may be required before finding a solution. If (D/p) = +1, this algorithm degenerates into a slow version of Pollard's p − 1 algorithm.
So, for different values of M we calculate gcd(N, V_M − 2), and when the result is not equal to 1 or to N, we have found a non-trivial factor of N.
The values of M used are successive factorials, and V_(M!) is the M-th value of the sequence characterized by V_((M−1)!).
To find the M-th element V of the sequence characterized by B, we proceed in a manner similar to left-to-right exponentiation:
x := B
y := (B^2 − 2) mod N
for each bit of M to the right of the most significant bit do
    if the bit is 1 then
        x := (x × y − B) mod N
        y := (y^2 − 2) mod N
    else
        y := (x × y − B) mod N
        x := (x^2 − 2) mod N
V := x
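A minimal Python sketch of the whole method follows (the function names lucas_v and williams_p1 are illustrative, and the bound on M is an arbitrary cutoff); it implements the binary ladder above together with the successive-factorial schedule:

```python
from math import gcd

def lucas_v(m, b, n):
    # V_m of the Lucas sequence V_0 = 2, V_1 = b, V_j = b*V_(j-1) - V_(j-2) (mod n),
    # computed with the left-to-right binary ladder given above.
    x, y = b % n, (b * b - 2) % n
    for bit in bin(m)[3:]:  # bits of m after the most significant one
        if bit == '1':
            x, y = (x * y - b) % n, (y * y - 2) % n
        else:
            x, y = (x * x - 2) % n, (x * y - b) % n
    return x

def williams_p1(n, a, max_m=1000):
    # Successive factorials: after the m-th step, v holds V_(m!) of seq(a).
    v = a
    for m in range(2, max_m):
        v = lucas_v(m, v, n)
        g = gcd(v - 2, n)
        if 1 < g < n:
            return g
    return None
```

With the worked example below, williams_p1(112729, 5) returns 139 (found at the 7! step) and williams_p1(112729, 9) returns 811 (found at the 9! step).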
Example
With N = 112729 and A = 5, successive values of V_(M!) of seq(5) are:
V_1 of seq(5) = V_(1!) of seq(5) = 5
V_2 of seq(5) = V_(2!) of seq(5) = 23
V_3 of seq(23) = V_(3!) of seq(5) = 12098
V_4 of seq(12098) = V_(4!) of seq(5) = 87680
V_5 of seq(87680) = V_(5!) of seq(5) = 53242
V_6 of seq(53242) = V_(6!) of seq(5) = 27666
V_7 of seq(27666) = V_(7!) of seq(5) = 110229.
At this point, gcd(110229 − 2, 112729) = 139, so 139 is a non-trivial factor of 112729. Notice that p + 1 = 140 = 2^2 × 5 × 7. The number 7! is the lowest factorial which is a multiple of 140, so the proper factor 139 is found in this step.
Using another initial value, say A = 9, we get:
V_1 of seq(9) = V_(1!) of seq(9) = 9
V_2 of seq(9) = V_(2!) of seq(9) = 79
V_3 of seq(79) = V_(3!) of seq(9) = 41886
V_4 of seq(41886) = V_(4!) of seq(9) = 79378
V_5 of seq(79378) = V_(5!) of seq(9) = 1934
V_6 of seq(1934) = V_(6!) of seq(9) = 10582
V_7 of seq(10582) = V_(7!) of seq(9) = 84241
V_8 of seq(84241) = V_(8!) of seq(9) = 93973
V_9 of seq(93973) = V_(9!) of seq(9) = 91645.
At this point gcd(91645 − 2, 112729) = 811, so 811 is a non-trivial factor of 112729. Notice that p − 1 = 810 = 2 × 5 × 3^4. The number 9! is the lowest factorial which is a multiple of 810, so the proper factor 811 is found in this step. The factor 139 is not found this time because p − 1 = 138 = 2 × 3 × 23, which is not a divisor of 9!
As can be seen in these examples, we do not know in advance whether the prime that will be found has a smooth p + 1 or p − 1.
|
https://en.wikipedia.org/wiki/Circular%20shift
|
In combinatorial mathematics, a circular shift is the operation of rearranging the entries in a tuple, either by moving the final entry to the first position, while shifting all other entries to the next position, or by performing the inverse operation. A circular shift is a special kind of cyclic permutation, which in turn is a special kind of permutation. Formally, a circular shift is a permutation σ of the n entries in the tuple such that either
σ(i) ≡ (i + 1) modulo n, for all entries i = 1, ..., n
or
σ(i) ≡ (i − 1) modulo n, for all entries i = 1, ..., n.
The results of repeatedly applying circular shifts to a given tuple are also called the circular shifts of the tuple.
For example, repeatedly applying circular shifts to the four-tuple (a, b, c, d) successively gives
(d, a, b, c),
(c, d, a, b),
(b, c, d, a),
(a, b, c, d) (the original four-tuple),
and then the sequence repeats; this four-tuple therefore has four distinct circular shifts. However, not all n-tuples have n distinct circular shifts. For instance, the 4-tuple (a, b, a, b) only has 2 distinct circular shifts. The number of distinct circular shifts of an n-tuple is n/k, where k is a divisor of n, indicating the maximal number of repeats over all subpatterns.
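The repeated-shift example above can be sketched in Python (a small illustrative helper, not a standard library function), collecting shifts until the original tuple reappears:

```python
def circular_shifts(t):
    # All distinct circular shifts of tuple t, obtained by repeatedly
    # moving the final entry to the first position.
    seen, shifts = set(), []
    while t not in seen:
        seen.add(t)
        shifts.append(t)
        t = t[-1:] + t[:-1]  # move last entry to the front
    return shifts
```

For ('a', 'b', 'c', 'd') this yields all four distinct shifts; for ('a', 'b', 'a', 'b') it yields only two.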
In computer programming, a bitwise rotation, also known as a circular shift, is a bitwise operation that shifts all bits of its operand. Unlike an arithmetic shift, a circular shift does not preserve a number's sign bit or distinguish a floating-point number's exponent from its significand. Unlike a logical shift, the vacant bit positions are not filled in with zeros but are filled in with the bits that are shifted out of the sequence.
Implementing circular shifts
Circular shifts are used often in cryptography in order to permute bit sequences. Unfortunately, many programming languages, including C, do not have operators or standard functions for circular shifting, even though virtually all processors have bitwise operation instructions for it (e.g. Intel x86 has ROL and ROR).
However, some compilers may provide access to the processor instructions by means of intrinsic functions. In addition, some constructs in standard ANSI C code may be optimized by a compiler to the "rotate" assembly language instruction on CPUs that have such an instruction. Most C compilers recognize the following idiom, and compile it to a single 32-bit rotate instruction.
/*
 * Shift operations in C are only defined for shift values which are
 * not negative and smaller than sizeof(value) * CHAR_BIT.
 * The mask, used with bitwise-and (&), prevents undefined behaviour
 * when the shift count is 0 or >= the width of unsigned int.
 */
#include <stdint.h>  // for uint32_t, to get 32-bit-wide rotates, regardless of the size of int.
#include <limits.h>  // for CHAR_BIT

uint32_t rotl32 (uint32_t value, unsigned int count) {
    const unsigned int mask = CHAR_BIT * sizeof(value) - 1;
    count &= mask;
    return (value << count) | (value >> (-count & mask));
}
uint32_t rotr32 (uint32_t value, unsigned int count) {
    const unsigned int mask = CHAR_BIT * sizeof(value) - 1;
    count &= mask;
    return (value >> count) | (value << (-count & mask));
}
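For experimentation outside C, the same masked-rotate idiom can be mirrored in Python; unlike C, Python integers are unbounded, so the result must be masked back to 32 bits (the _py names are illustrative, not part of any standard library):

```python
MASK32 = 0xFFFFFFFF

def rotl32_py(value, count):
    # Rotate a 32-bit value left, masking the count as in the C idiom above.
    count &= 31
    return ((value << count) & MASK32) | ((value & MASK32) >> (-count & 31))

def rotr32_py(value, count):
    # Rotate a 32-bit value right.
    count &= 31
    return ((value & MASK32) >> count) | ((value << (-count & 31)) & MASK32)
```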
|
https://en.wikipedia.org/wiki/Vaclav%20Zizler
|
Vaclav Zizler, Ph.D., Dr.Sc. (born 8 March 1943), is a Czech mathematics professor specializing in Banach space theory and non-linear spaces. As of 2006, Zizler holds the position of Professor Emeritus at the University of Alberta in Edmonton, Alberta, Canada. Formerly he was at the Mathematical Institute of the Czech Academy of Sciences, where he was Head of Research. In 2001 the Czech Minister of Education named his Functional Analysis and Infinite Dimensional Geometry the university textbook of the year. In 2008 the Czech Mathematical Society awarded him a laureate medal for his excellent lifelong work in mathematical analysis and selfless activities in favour of Czech mathematics.
Selected publications
Books
Research articles
References
External links
Zizler's homepage at the University of Alberta.
Canadian mathematicians
Czech expatriates in Canada
Czech mathematicians
Academic staff of the University of Alberta
1943 births
Living people
Place of birth missing (living people)
|
https://en.wikipedia.org/wiki/Continuum%20limit
|
In mathematical physics and mathematics, the continuum limit or scaling limit of a lattice model refers to its behaviour in the limit as the lattice spacing goes to zero. It is often useful to use lattice models to approximate real-world processes, such as Brownian motion. Indeed, according to Donsker's theorem, the discrete random walk would, in the scaling limit, approach the true Brownian motion.
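The convergence mentioned above can be illustrated with a short simulation (plain Python; the parameter choices are illustrative): by Donsker's theorem, the rescaled simple random walk S_n/√n should have variance close to 1 at time 1, matching Brownian motion.

```python
import random

def scaled_walk_var(num_walks=2000, steps=400, seed=1):
    # Estimate Var(S_n / sqrt(n)) for the simple +/-1 random walk.
    # In the scaling limit this approaches Var(W_1) = 1, the variance
    # of standard Brownian motion at time 1.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_walks):
        s = sum(rng.choice((-1, 1)) for _ in range(steps))
        total += (s / steps ** 0.5) ** 2
    return total / num_walks
```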
Terminology
The term continuum limit mostly finds use in the physical sciences, often in reference to models of aspects of quantum physics, while the term scaling limit is more common in mathematical use.
Application in quantum field theory
A lattice model that approximates a continuum quantum field theory in the limit as the lattice spacing goes to zero may correspond to finding a second order phase transition of the model. This is the scaling limit of the model.
See also
Universality classes
References
H. E. Stanley, Introduction to Phase Transitions and Critical Phenomena
H. Kleinert, Gauge Fields in Condensed Matter, Vol. I, "Superflow and Vortex Lines", pp. 1–742, Vol. II, "Stresses and Defects", pp. 743–1456, World Scientific (Singapore, 1989); paperback (also available online: Vol. I and Vol. II)
H. Kleinert and V. Schulte-Frohlinde, Critical Properties of φ4-Theories, World Scientific (Singapore, 2001); Paperback (also available online)
Lattice models
Lattice field theory
Renormalization group
Critical phenomena
Articles containing video clips
|
https://en.wikipedia.org/wiki/Arbitrated%20loop
|
The arbitrated loop, also known as FC-AL, is a Fibre Channel topology in which devices are connected in a one-way loop fashion in a ring topology. Historically it was a lower-cost alternative to a fabric topology. It allowed connection of many servers and computer storage devices without using then very costly Fibre Channel switches. The cost of the switches dropped considerably, so by 2007, FC-AL had become rare in server-to-storage communication. It is however still common within storage systems.
It is a serial architecture that can be used as the transport layer in a SCSI network, with up to 127 devices. The loop may connect into a fibre channel fabric via one of its ports.
The bandwidth on the loop is shared among all ports.
Only two ports may communicate at a time on the loop. One port wins arbitration and may open one other port in either half or full duplex mode.
A loop with two ports is valid and has the same physical topology as point-to-point, but still acts as a loop protocol-wise.
Fibre Channel ports capable of arbitrated loop communication are NL_port (node loop port) and FL_port (fabric loop port), collectively referred to as the L_ports. The ports may attach to each other via a hub, with cables running from the hub to the ports. The physical connectors on the hub are not ports in terms of the protocol. A hub does not contain ports.
An arbitrated loop with no fabric port (with only NL_ports) is a private loop.
An arbitrated loop connected to a fabric (through an FL_port) is a public loop.
An NL_port must provide fabric logon (FLOGI) and name registration facilities to initiate communication with other nodes through the fabric (that is, to be an initiator).
Arbitrated loop can be physically cabled in a ring fashion or using a hub. The physical ring ceases to work if one of the devices in the chain fails. The hub on the other hand, while maintaining a logical ring, allows a star topology on the cable level. Each receive port on the hub is simply passed to next active transmit port, bypassing any inactive or failed ports.
Fibre Channel hubs therefore have another function: they provide bypass circuits that prevent the loop from breaking if one device fails or is removed. If a device is removed from a loop (for example, by pulling its interconnect plug), the hub's bypass circuit detects the absence of signal and immediately begins to route incoming data directly to the loop's next port, bypassing the missing device entirely. This gives loops at least a measure of resiliency: failure of one device in a loop doesn't cause the entire loop to become inoperable.
See also
Storage area network
Fibre Channel
Switched fabric
List of Fibre Channel standards
References
Fibre Channel
Network topology
|
https://en.wikipedia.org/wiki/Switched%20fabric
|
Switched fabric or switching fabric is a network topology in which network nodes interconnect via one or more network switches (particularly crossbar switches). Because a switched fabric network spreads network traffic across multiple physical links, it yields higher total throughput than broadcast networks, such as the early 10BASE5 version of Ethernet and most wireless networks such as Wi-Fi.
The generation of high-speed serial data interconnects that appeared in 2001–2004 and provided point-to-point connectivity between processor and peripheral devices is sometimes referred to as fabrics; however, these interconnects lack features such as a message-passing protocol. For example, HyperTransport, the computer processor interconnect technology, continues to maintain a processor bus focus even after adopting a higher speed physical layer. Similarly, PCI Express is just a serial version of PCI; it adheres to PCI's host/peripheral load/store direct memory access (DMA)-based architecture on top of a serial physical and link layer.
Fibre Channel
In the Fibre Channel Switched Fabric (FC-SW-6) topology, devices are connected to each other through one or more Fibre Channel switches. While this topology has the best scalability of the three FC topologies (the other two are Arbitrated Loop and point-to-point), it is the only one requiring switches, which are costly hardware devices.
Visibility among devices (called nodes) in a fabric is typically controlled with Fibre Channel zoning.
Multiple switches in a fabric usually form a mesh network, with devices being on the "edges" ("leaves") of the mesh. Most Fibre Channel network designs employ two separate fabrics for redundancy. The two fabrics share the edge nodes (devices), but are otherwise unconnected. One of the advantages of such setup is capability of failover, meaning that in case one link breaks or a fabric goes out of order, datagrams can be sent via the second fabric.
The fabric topology allows the connection of up to the theoretical maximum of about 16 million devices, limited only by the available address space (2^24):
239 domains × 256 areas × 256 ports = 15,663,104
See also
Clos network
Fabric Application Interface Standard
InfiniBand
Network traffic control
RapidIO
VPX
References
External links
What is a Switch Fabric
Fibre Channel
Network topology
|
https://en.wikipedia.org/wiki/Formation
|
Formation may refer to:
Linguistics
Back-formation, the process of creating a new lexeme by removing actual or supposed affixes
Word formation, the creation of a new word by adding affixes
Mathematics and science
Cave formation or speleothem, a secondary mineral deposit formed in a cave
Class formation, a topological group acting on a module satisfying certain conditions
Formation (group theory), a class of groups that is closed under some operations
Formation constant, an equilibrium constant for the formation of a complex in solution
Formation enthalpy, standard heat of formation of a compound
Formation (geology), a formally named rock stratum or geological unit
Formation of rocks, how rocks are formed
Formation and evolution of the Solar System, history of the Solar System
Rock formation, an isolated, scenic, or spectacular surface rock outcrop
Vegetation formation, a concept used to classify vegetation communities
Military
Formation flying, the disciplined flight of two or more aircraft under the command of a flight leader
Formation (military), a high-level military organization
Tactical formation, the arrangement or deployment of moving military forces
Formation, an element in an order of battle: a formal assembly of military personnel, usually to receive an operation order or to be deployed to operations; such a formation may be tactical or ceremonial
Music
Formation Records, a record label headed by DJ SS
"Formation" (song), a song by American singer Beyoncé on her 2016 album Lemonade
The Formation World Tour, concert tour by Beyoncé for her album Lemonade
Religion
Formations or saṅkhāra, an important Buddhist concept
Formation in the Catholic Church, the personal preparation that the Catholic Church offers to people with a defined mission
Sports
Formation (association football), how team players are positioned on the pitch
Formation (American football), the positions in which players line up before the start of a down
Formation (bandy), how the players are positioned on the rink
Formation dance, a style of ballroom dancing
Formation finish, a staged motor-race finish in which multiple vehicles of the same team cross the finish line together
Writing
Formation (book), a bildungsroman by Brad Mehldau
Other
Contract formation in law: an offer, acceptance, consideration, and a mutual intent to be bound
Formation 8, an American venture capital firm in San Francisco, California
Formation level, the native material underneath a constructed road, pavement or railway
Formation of a coalition government, led by a formateur
Formation water, water that is produced as a byproduct during oil and gas production
Government formation in a parliamentary system
Formations, imprint of Ohio State University Press
See also
Form (disambiguation)
|
https://en.wikipedia.org/wiki/Volume%20element
|
In mathematics, a volume element provides a means for integrating a function with respect to volume in various coordinate systems such as spherical coordinates and cylindrical coordinates. Thus a volume element is an expression of the form
dV = ρ(u_1, u_2, u_3) du_1 du_2 du_3,
where the u_i are the coordinates, so that the volume of any set B can be computed by
Volume(B) = ∫_B ρ(u_1, u_2, u_3) du_1 du_2 du_3.
For example, in spherical coordinates (u_1, u_2, u_3) = (r, θ, φ), ρ = r^2 sin φ, and so dV = r^2 sin φ dr dθ dφ.
The notion of a volume element is not limited to three dimensions: in two dimensions it is often known as the area element, and in this setting it is useful for doing surface integrals. Under changes of coordinates, the volume element changes by the absolute value of the Jacobian determinant of the coordinate transformation (by the change of variables formula). This fact allows volume elements to be defined as a kind of measure on a manifold. On an orientable differentiable manifold, a volume element typically arises from a volume form: a top degree differential form. On a non-orientable manifold, the volume element is typically the absolute value of a (locally defined) volume form: it defines a 1-density.
Volume element in Euclidean space
In Euclidean space, the volume element is given by the product of the differentials of the Cartesian coordinates:
dV = dx dy dz.
In different coordinate systems of the form x = x(u_1, u_2, u_3), y = y(u_1, u_2, u_3), z = z(u_1, u_2, u_3), the volume element changes by the Jacobian (determinant) of the coordinate change:
dV = |∂(x, y, z)/∂(u_1, u_2, u_3)| du_1 du_2 du_3.
For example, in spherical coordinates (mathematical convention)
x = r cos θ sin φ
y = r sin θ sin φ
z = r cos φ
the Jacobian determinant is
|∂(x, y, z)/∂(r, θ, φ)| = r^2 sin φ,
so that
dV = r^2 sin φ dr dθ dφ.
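As a numerical sanity check of the spherical volume element (an illustrative midpoint-rule integration, not part of the article), integrating r^2 sin φ over the unit ball should reproduce the volume 4π/3:

```python
import math

def unit_ball_volume(n=60):
    # Midpoint-rule integration of dV = r^2 sin(phi) dr dtheta dphi
    # over 0 <= r <= 1, 0 <= phi <= pi; the theta integral is trivial
    # and contributes a factor of 2*pi.
    dr, dp = 1.0 / n, math.pi / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        for k in range(n):
            phi = (k + 0.5) * dp
            total += r * r * math.sin(phi) * dr * dp
    return 2 * math.pi * total
```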
This can be seen as a special case of the fact that differential forms transform through a pullback F* as
F*(u dy^1 ∧ ⋯ ∧ dy^n) = (u ∘ F) det(∂F^j/∂x^i) dx^1 ∧ ⋯ ∧ dx^n.
Volume element of a linear subspace
Consider the linear subspace of the n-dimensional Euclidean space R^n that is spanned by a collection of linearly independent vectors
X_1, ..., X_k.
To find the volume element of the subspace, it is useful to know the fact from linear algebra that the volume of the parallelepiped spanned by the X_i is the square root of the determinant of the Gramian matrix of the X_i:
sqrt(det(X_i · X_j)), with i, j = 1, ..., k.
Any point p in the subspace can be given coordinates (u_1, ..., u_k) such that
p = u_1 X_1 + ⋯ + u_k X_k.
At a point p, if we form a small parallelepiped with sides du_i X_i, then the volume of that parallelepiped is the square root of the determinant of the Gramian matrix of the du_i X_i:
sqrt(det(X_i · X_j)) du_1 du_2 ⋯ du_k.
This therefore defines the volume form in the linear subspace.
Volume element of manifolds
On an oriented Riemannian manifold of dimension n, the volume element is a volume form equal to the Hodge dual of the unit constant function, f(x) = 1:
ω = ⋆1.
Equivalently, the volume element is precisely the Levi-Civita tensor ε. In coordinates,
ω = sqrt(|det g|) dx^1 ∧ ⋯ ∧ dx^n,
where det g is the determinant of the metric tensor g written in the coordinate system.
Area element of a surface
A simple example of a volume element can be explored by considering a two-dimensional surface embedded in n-dimensional Euclidean space. Such a volume element is sometimes called an area element. Consider a subset U ⊂ R^2 and a mapping function
φ : U → R^n,
thus defining a surface embedded in R^n. In two dimensions, volume is just area, and a volume element gives a way to determine the area of parts of the surface.
|
https://en.wikipedia.org/wiki/Bruno%20Buchberger
|
Bruno Buchberger (born 22 October 1942) is Professor of Computer Mathematics at Johannes Kepler University in Linz, Austria. In his 1965 Ph.D. thesis, he created the theory of Gröbner bases, and has developed this theory throughout his career. He named these objects after his advisor Wolfgang Gröbner. Since 1995, he has been active in the Theorema project at the University of Linz.
Career
In 1987 Buchberger founded and chaired the Research Institute for Symbolic Computation (RISC) at Johannes Kepler University. In 1985 he started the Journal of Symbolic Computation, which has now become the premier publication in the field of computer algebra.
Buchberger also conceived Softwarepark Hagenberg in 1989 and since then has been directing the expansion of this Austrian technology park for software.
In 2014 he became a member of the Global Digital Mathematical Library Working Group of the IMU.
Awards
Wilhelm Exner Medal (1995).
Paris Kanellakis Theory and Practice Award (2007). For theory of Gröbner bases.
Golden Medal of Honor by the Upper Austrian Government
Honorary doctorates from the Universities of Nijmegen (1993), Timișoara (2000), Bath (2005), Waterloo (2011), and Innsbruck (2012).
Herbrand Award for Distinguished Contributions to Automated Reasoning (2018)
See also
Buchberger's algorithm
Gröbner bases
References
Sources
External links
Buchberger's university website
RISC website
1942 births
20th-century Austrian mathematicians
21st-century Austrian mathematicians
Living people
Scientists from Innsbruck
Academic staff of Johannes Kepler University Linz
|
https://en.wikipedia.org/wiki/Federal%20State%20Statistics%20Service%20%28Russia%29
|
The Federal State Statistics Service (Росстат, Rosstat) is the governmental statistics agency in Russia.
Since 2017, it is again part of the Ministry of Economic Development, having switched several times in the previous decades between that ministry and being directly controlled by the federal government.
History
Goskomstat (in English, the State Committee for Statistics) was the centralised agency dealing with statistics in the Soviet Union. Goskomstat was created in 1987 to replace the Central Statistical Administration, while maintaining the same basic functions in the collection, analysis, publication and distribution of state statistics, including economic, social and population statistics. This renaming amounted to a formal demotion of the status of the agency.
In addition to overseeing the collection and evaluation of state statistics, Goskomstat (and its predecessors) was responsible for planning and carrying out the population and housing censuses. It carried out seven such censuses, in 1926, 1937, 1939, 1959, 1970, 1979 and 1989.
The Tsentrosoyuz building at No. 39 Ulitsa Myasnitskaya, home to Goskomstat, was designed by the Swiss-born architect Le Corbusier.
References
External links
Official website
Interstate Statistical Committee of the Commonwealth of Independent States
Government agencies of Russia
Economy of the Soviet Union
Government of the Soviet Union
1987 establishments in the Soviet Union
State Committees of the Soviet Union
National statistical services
Demographics of the Soviet Union
|
https://en.wikipedia.org/wiki/Per%20Martin-L%C3%B6f
|
Per Erik Rutger Martin-Löf (born 8 May 1942) is a Swedish logician, philosopher, and mathematical statistician. He is internationally renowned for his work on the foundations of probability, statistics, mathematical logic, and computer science. Since the late 1970s, Martin-Löf's publications have been mainly in logic. In philosophical logic, Martin-Löf has wrestled with the philosophy of logical consequence and judgment, partly inspired by the work of Brentano, Frege, and Husserl. In mathematical logic, Martin-Löf has been active in developing intuitionistic type theory as a constructive foundation of mathematics; Martin-Löf's work on type theory has influenced computer science.
Until his retirement in 2009, Per Martin-Löf held a joint chair for Mathematics and Philosophy at Stockholm University.
His brother Anders Martin-Löf is now emeritus professor of mathematical statistics at Stockholm University; the two brothers have collaborated in research in probability and statistics. The research of Anders and Per Martin-Löf has influenced statistical theory, especially concerning exponential families, the expectation-maximization method for missing data, and model selection.
Per Martin-Löf received his PhD in 1970 from Stockholm University, under Andrey Kolmogorov.
Martin-Löf is an enthusiastic bird-watcher; his first scientific publication was on the mortality rates of ringed birds.
Randomness and Kolmogorov complexity
In 1964 and 1965, Martin-Löf studied in Moscow under the supervision of Andrei N. Kolmogorov. He wrote a 1966 article The definition of random sequences that gave the first suitable definition of a random sequence.
Earlier researchers such as Richard von Mises had attempted to formalize the notion of a test for randomness in order to define a random sequence as one that passed all tests for randomness; however, the precise notion of a randomness test was left vague. Martin-Löf's key insight was to use the theory of computation to define formally the notion of a test for randomness. This contrasts with the idea of randomness in probability; in that theory, no particular element of a sample space can be said to be random.
Martin-Löf randomness has since been shown to admit many equivalent characterizations, in terms of compression, randomness tests, and gambling, that bear little outward resemblance to the original definition, but each of which satisfies our intuitive notion of properties that random sequences ought to have: random sequences should be incompressible, they should pass statistical tests for randomness, and it should be impossible to make money betting on them. The existence of these multiple definitions of Martin-Löf randomness, and the stability of these definitions under different models of computation, give evidence that Martin-Löf randomness is a fundamental property of mathematics and not an accident of Martin-Löf's particular model. The thesis that the definition of Martin-Löf randomness "correctly" captures the intuitive notion of randomness has been called the Martin-Löf–Chaitin thesis.
|
https://en.wikipedia.org/wiki/Circumcircle
|
In geometry, the circumscribed circle or circumcircle of a triangle is a circle that passes through all three vertices. The center of this circle is called the circumcenter of the triangle, and its radius is called the circumradius. The circumcenter is the point of intersection between the three perpendicular bisectors of the triangle's sides, and is a triangle center.
More generally, an n-sided polygon with all its vertices on the same circle, also called the circumscribed circle, is called a cyclic polygon, or in the special case n = 4, a cyclic quadrilateral. All rectangles, isosceles trapezoids, right kites, and regular polygons are cyclic, but not every polygon is.
Straightedge and compass construction
The circumcenter of a triangle can be constructed by drawing any two of the three perpendicular bisectors. For three non-collinear points, these two lines cannot be parallel, and the circumcenter is the point where they cross. Any point on the bisector is equidistant from the two points that it bisects, from which it follows that this point, on both bisectors, is equidistant from all three triangle vertices.
The circumradius is the distance from it to any of the three vertices.
Alternative construction
An alternative method to determine the circumcenter is to draw any two lines each one departing from one of the vertices at an angle with the common side, the common angle of departure being 90° minus the angle of the opposite vertex. (In the case of the opposite angle being obtuse, drawing a line at a negative angle means going outside the triangle.)
In coastal navigation, a triangle's circumcircle is sometimes used as a way of obtaining a position line using a sextant when no compass is available. The horizontal angle between two landmarks defines the circumcircle upon which the observer lies.
Circumcircle equations
Cartesian coordinates
In the Euclidean plane, it is possible to give explicitly an equation of the circumcircle in terms of the Cartesian coordinates of the vertices of the inscribed triangle. Suppose that
A = (A_x, A_y), B = (B_x, B_y), C = (C_x, C_y)
are the coordinates of points A, B, C. The circumcircle is then the locus of points v = (v_x, v_y) in the Cartesian plane satisfying the equations
|v − u|^2 = r^2,  |A − u|^2 = r^2,  |B − u|^2 = r^2,  |C − u|^2 = r^2,
guaranteeing that the points A, B, C, v are all the same distance r from the common center u of the circle. Using the polarization identity, these equations reduce to the condition that the matrix
[ |v|^2  v_x  v_y  1 ]
[ |A|^2  A_x  A_y  1 ]
[ |B|^2  B_x  B_y  1 ]
[ |C|^2  C_x  C_y  1 ]
has a nonzero kernel. Thus the circumcircle may alternatively be described as the locus of zeros of the determinant of this matrix.
Using cofactor expansion, let
S_x = (1/2) det [ |A|^2  A_y  1 ; |B|^2  B_y  1 ; |C|^2  C_y  1 ],
S_y = (1/2) det [ A_x  |A|^2  1 ; B_x  |B|^2  1 ; C_x  |C|^2  1 ],
a = det [ A_x  A_y  1 ; B_x  B_y  1 ; C_x  C_y  1 ],
b = det [ A_x  A_y  |A|^2 ; B_x  B_y  |B|^2 ; C_x  C_y  |C|^2 ];
we then have a|v|^2 − 2 S · v − b = 0, where S = (S_x, S_y), and, assuming the three points were not in a line (otherwise the circumcircle is that line, which can also be seen as a generalized circle with S at infinity), this gives the circumcenter S/a and the circumradius sqrt(b/a + |S|^2/a^2). A similar approach allows one to deduce the equation of the circumsphere of a tetrahedron.
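After expansion, the circumcenter reduces to a short computation; the following Python sketch (illustrative names, points as coordinate tuples) solves the equal-distance conditions |u − A|^2 = |u − B|^2 = |u − C|^2 by Cramer's rule:

```python
from math import hypot

def circumcircle(a, b, c):
    # Circumcenter u and circumradius r of the triangle with vertices
    # a, b, c, obtained from the two perpendicular-bisector equations.
    ax, ay = a
    bx, by = b
    cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if d == 0:
        raise ValueError("points are collinear; the circumcircle degenerates")
    a2, b2, c2 = ax * ax + ay * ay, bx * bx + by * by, cx * cx + cy * cy
    ux = (a2 * (by - cy) + b2 * (cy - ay) + c2 * (ay - by)) / d
    uy = (a2 * (cx - bx) + b2 * (ax - cx) + c2 * (bx - ax)) / d
    return (ux, uy), hypot(ux - ax, uy - ay)
```

For the right triangle with vertices (0, 0), (2, 0), (0, 2), this gives center (1, 1) and radius √2.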
Parametric equation
A unit vector perpendicular to the plane containing the circle is given by
n̂ = (P_2 − P_1) × (P_3 − P_1) / |(P_2 − P_1) × (P_3 − P_1)|.
Hence, given the radius, , center, , a point
|
https://en.wikipedia.org/wiki/Ben%20Green%20%28mathematician%29
|
Ben Joseph Green FRS (born 27 February 1977) is a British mathematician, specialising in combinatorics and number theory. He is the Waynflete Professor of Pure Mathematics at the University of Oxford.
Early life and education
Ben Green was born on 27 February 1977 in Bristol, England. He studied at local schools in Bristol, Bishop Road Primary School and Fairfield Grammar School, competing in the International Mathematical Olympiad in 1994 and 1995. He entered Trinity College, Cambridge in 1995 and completed his BA in mathematics in 1998, winning the Senior Wrangler title. He stayed on for Part III and earned his doctorate under the supervision of Timothy Gowers, with a thesis entitled Topics in arithmetic combinatorics (2003). During his PhD he spent a year as a visiting student at Princeton University. He was a research Fellow at Trinity College, Cambridge between 2001 and 2005, before becoming a Professor of Mathematics at the University of Bristol from January 2005 to September 2006 and then the first Herchel Smith Professor of Pure Mathematics at the University of Cambridge from September 2006 to August 2013. He became the Waynflete Professor of Pure Mathematics at the University of Oxford on 1 August 2013. He was also a Research Fellow of the Clay Mathematics Institute and held various positions at institutes such as Princeton University, University of British Columbia, and Massachusetts Institute of Technology.
Mathematics
The majority of Green's research is in the fields of analytic number theory and additive combinatorics, but he also has results in harmonic analysis and in group theory. His best known theorem, proved jointly with his frequent collaborator Terence Tao, states that there exist arbitrarily long arithmetic progressions in the prime numbers: this is now known as the Green–Tao theorem.
Amongst Green's early results in additive combinatorics are an improvement of a result of Jean Bourgain on the size of arithmetic progressions in sumsets, as well as a proof of the Cameron–Erdős conjecture on sum-free sets of natural numbers. He also proved an arithmetic regularity lemma for functions defined on the first N natural numbers, somewhat analogous to the Szemerédi regularity lemma for graphs.
From 2004 to 2010, in joint work with Terence Tao and Tamar Ziegler, he developed so-called higher order Fourier analysis. This theory relates Gowers norms with objects known as nilsequences. The theory derives its name from these nilsequences, which play a role analogous to the role that characters play in classical Fourier analysis. Green and Tao used higher order Fourier analysis to present a new method for counting the number of solutions to simultaneous equations in certain sets of integers, including in the primes. This generalises the classical approach using the Hardy–Littlewood circle method. Many aspects of this theory, including the quantitative aspects of the inverse theorem for the Gowers norms, are still the subject of ongoing research.
|
https://en.wikipedia.org/wiki/SO%2810%29
|
In particle physics, SO(10) refers to a grand unified theory (GUT) based on the spin group Spin(10). The shortened name SO(10) is conventional among physicists, and derives from the Lie algebra or less precisely the Lie group of SO(10), which is a special orthogonal group that is double covered by Spin(10).
SO(10) subsumes the Georgi–Glashow and Pati–Salam models, and unifies all fermions in a generation into a single field. This requires 12 new gauge bosons, in addition to the 12 of SU(5) and 9 of SU(4)×SU(2)×SU(2).
History
Before the SU(5) theory behind the Georgi–Glashow model, Harald Fritzsch and Peter Minkowski, and independently Howard Georgi, found that all the matter contents are incorporated into a single representation, the spinorial 16 of SO(10). Georgi in fact found the SO(10) theory just a few hours before finding SU(5), at the end of 1973.
Important subgroups
SO(10) has a branching rule to [SU(5)×U(1)χ]/Z5.
If the hypercharge is contained within SU(5), this is the conventional Georgi–Glashow model, with the 16 as the matter fields, the 10 as the electroweak Higgs field and the 24 within the 45 as the GUT Higgs field. The superpotential may then include renormalizable terms of the form Tr(45 ⋅ 45); Tr(45 ⋅ 45 ⋅ 45); 10 ⋅ 45 ⋅ 10, 10 ⋅ 16* ⋅ 16 and 16* ⋅ 16. The first three are responsible for the gauge symmetry breaking at low energies and give the Higgs mass, and the latter two give the matter particles their masses and Yukawa couplings to the Higgs.
There is another possible branching, under which the hypercharge is a linear combination of an SU(5) generator and χ. This is known as flipped SU(5).
Another important subgroup is either [SU(4) × SU(2)L × SU(2)R]/Z2 or Z2 ⋊ [SU(4) × SU(2)L × SU(2)R]/Z2 depending upon whether or not the left-right symmetry is broken, yielding the Pati–Salam model, whose branching rule is
Spontaneous symmetry breaking
The symmetry breaking of SO(10) is usually achieved with a combination of either a 45H or a 54H, together with either a 16H and its conjugate or a 126H and its conjugate.
Let's say we choose a 54H. When this Higgs field acquires a GUT scale VEV, we have a symmetry breaking to Z2 ⋊ [SU(4) × SU(2)L × SU(2)R]/Z2, i.e. the Pati–Salam model with a Z2 left-right symmetry.
If we have a 45H instead, this Higgs field can acquire any VEV in a two dimensional subspace without breaking the standard model. Depending on the direction of this linear combination, we can break the symmetry to SU(5)×U(1), the Georgi–Glashow model with a U(1) (diag(1,1,1,1,1,-1,-1,-1,-1,-1)), flipped SU(5) (diag(1,1,1,-1,-1,-1,-1,-1,1,1)), SU(4)×SU(2)×U(1) (diag(0,0,0,1,1,0,0,0,-1,-1)), the minimal left-right model (diag(1,1,1,0,0,-1,-1,-1,0,0)) or SU(3)×SU(2)×U(1)×U(1) for any other nonzero VEV.
The choice diag(1,1,1,0,0,-1,-1,-1,0,0) is called the Dimopoulos-Wilczek mechanism aka the "missing VEV mechanism" and it is proportional to B−L.
The choice of a 16H and its conjugate breaks the gauge group down to the Georgi–Glashow SU(5). The same c
|
https://en.wikipedia.org/wiki/Von%20Mangoldt%20function
|
In mathematics, the von Mangoldt function is an arithmetic function named after German mathematician Hans von Mangoldt. It is an example of an important arithmetic function that is neither multiplicative nor additive.
Definition
The von Mangoldt function, denoted by Λ(n), is defined as
Λ(n) = log p if n = p^k for some prime p and integer k ≥ 1, and Λ(n) = 0 otherwise.
The values of Λ(n) for the first nine positive integers (i.e. natural numbers) are
0, log 2, log 3, log 2, log 5, 0, log 7, log 2, log 3.
Properties
The von Mangoldt function satisfies the identity
log n = Σ_{d | n} Λ(d).
The sum is taken over all integers d that divide n. This is proved by the fundamental theorem of arithmetic, since the terms that are not powers of primes are equal to 0. For example, consider the case n = 12 = 2^2 × 3. Then
log 12 = Λ(1) + Λ(2) + Λ(3) + Λ(4) + Λ(6) + Λ(12) = 0 + log 2 + log 3 + log 2 + 0 + 0 = 2 log 2 + log 3.
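The divisor-sum identity can be checked numerically; a minimal sketch in Python (trial division, fine for small n; the function name is ours):

```python
from math import log, isclose

def mangoldt(n):
    """Von Mangoldt function: log p if n is a power of a prime p, else 0."""
    if n < 2:
        return 0.0
    p = 2
    while n % p:          # find the smallest prime factor p of n
        p += 1
    while n % p == 0:     # strip p out completely
        n //= p
    return log(p) if n == 1 else 0.0

# check the identity  log n = sum of Λ(d) over the divisors d of n
n = 12
divisor_sum = sum(mangoldt(d) for d in range(1, n + 1) if n % d == 0)
print(isclose(divisor_sum, log(n)))  # True
```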
By Möbius inversion, we have
Λ(n) = Σ_{d | n} μ(d) log(n/d),
and using the product rule for the logarithm we get
Λ(n) = −Σ_{d | n} μ(d) log d.
For all x ≥ 1, we have
log(⌊x⌋!) = Σ_{n ≤ x} ψ(x/n),
where ψ is the second Chebyshev function. Also, there exist positive constants c1 and c2 such that
ψ(x) ≤ c1 x
for all x ≥ 1, and
ψ(x) ≥ c2 x
for all sufficiently large x.
Dirichlet series
The von Mangoldt function plays an important role in the theory of Dirichlet series, and in particular, the Riemann zeta function. For example, one has
log ζ(s) = Σ_{n=2}^∞ Λ(n)/(log n) · n^{−s},  for Re(s) > 1.
The logarithmic derivative is then
ζ′(s)/ζ(s) = −Σ_{n=1}^∞ Λ(n) n^{−s}.
These are special cases of a more general relation on Dirichlet series. If one has
F(s) = Σ_{n=1}^∞ f(n) n^{−s}
for a completely multiplicative function f(n), and the series converges for Re(s) > σ0, then
F′(s)/F(s) = −Σ_{n=1}^∞ f(n) Λ(n) n^{−s}
converges for Re(s) > σ0.
Chebyshev function
The second Chebyshev function ψ(x) is the summatory function of the von Mangoldt function:
ψ(x) = Σ_{n ≤ x} Λ(n).
It was introduced by Pafnuty Chebyshev, who used it to show that the true order of the prime counting function π(x) is x/log x. Von Mangoldt provided a rigorous proof of an explicit formula for ψ(x) involving a sum over the non-trivial zeros of the Riemann zeta function. This was an important part of the first proof of the prime number theorem.
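That ψ(x) grows like x (a statement equivalent to the prime number theorem) can be observed numerically; a brute-force sketch, adequate for small x:

```python
from math import log

def psi(x):
    """Second Chebyshev function: sum of log p over prime powers p^k <= x."""
    total = 0.0
    for p in range(2, int(x) + 1):
        if all(p % d for d in range(2, int(p ** 0.5) + 1)):  # p is prime
            pk = p
            while pk <= x:
                total += log(p)
                pk *= p
    return total

for x in (100, 1000, 10000):
    print(x, round(psi(x) / x, 4))  # the ratio tends to 1
```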
The Mellin transform of the Chebyshev function can be found by applying Perron's formula:
ζ′(s)/ζ(s) = −s ∫_1^∞ ψ(x) x^{−s−1} dx,
which holds for Re(s) > 1.
Exponential series
Hardy and Littlewood examined the series
F(y) = Σ_{n=2}^∞ (Λ(n) − 1) e^{−ny}
in the limit y → 0+. Assuming the Riemann hypothesis, they demonstrated that
F(y) = O(1/√y).
In particular this function is oscillatory with diverging oscillations: there exists a value K > 0 such that both inequalities
F(y) < −K/√y  and  F(y) > K/√y
hold infinitely often in any neighbourhood of 0. The graphic to the right indicates that this behaviour is not at first numerically obvious: the oscillations are not clearly seen until the series is summed in excess of 100 million terms, and are only readily visible when .
Riesz mean
The Riesz mean of the von Mangoldt function is given by
Σ_{n ≤ λ} (1 − n/λ)^δ Λ(n) = −(1/2πi) ∫_{c−i∞}^{c+i∞} [Γ(1+δ)Γ(s)/Γ(1+δ+s)] (ζ′(s)/ζ(s)) λ^s ds = λ/(1+δ) + Σ_ρ [Γ(1+δ)Γ(ρ)/Γ(1+δ+ρ)] λ^ρ + Σ_n c_n λ^{−n}.
Here, λ and δ are numbers characterizing the Riesz mean. One must take c > 1. The sum over ρ is the sum over the zeroes of the Riemann zeta function, and the series
Σ_n c_n λ^{−n}
can be shown to be convergent for λ > 1.
Approximation by Riemann zeta zeros
There is an explicit formula for the summatory von Mangoldt function ψ(x), given by
ψ(x) = x − Σ_ρ x^ρ/ρ − log(2π),
where the sum runs over all zeros ρ of the Riemann zeta function. If we separate out the trivial zeros of the zeta function, which are the negative even integers, we obtain
ψ(x) = x − Σ_ρ x^ρ/ρ − log(2π) − (1/2) log(1 − x^{−2}),
where now ρ runs over the non-trivial zeros only.
(The sum is not absolutely convergent, so we take the zeros in order of the absolute value of their imaginary part.)
Taking the derivative of both sides, ignoring convergence issues, we
|
https://en.wikipedia.org/wiki/Picture%20Pages
|
Picture Pages is an American educational television program, broadcast from 1978 to 1984 and aimed at preschool children, presented by Bill Cosby. It taught basic arithmetic, geometry, and drawing through interactive lessons that viewers followed along with in a workbook.
Picture Pages was created by Julius Oleinick and started on a local Pittsburgh children's show in 1974 with the Picture Pages puzzle booklets given away at a supermarket chain. It debuted as a national segment of the Captain Kangaroo show in 1978 (then directed by Jimmy Hirschfeld), in which Captain Kangaroo would do the lessons on his "magic drawing board". Bill Cosby took over hosting the segments in 1980, presenting the lessons with a marker named "Mortimer Ichabod Marker" (M.I. for short), which was topped with a cartoon figure that played musical notes whenever he drew with it.
When the Captain Kangaroo show left CBS in 1984, the Cosby-era Picture Pages series was rerun as an interstitial program on Nickelodeon from 1984 to 1993.
The show also aired in Canada on the YTV cable network.
References
External links
1970s American children's television series
1970s preschool education television series
1974 American television series debuts
1980s American children's television series
1980s preschool education television series
1984 American television series endings
American preschool education television series
Culture of Pittsburgh
|
https://en.wikipedia.org/wiki/Impredicativity
|
In mathematics, logic and philosophy of mathematics, something that is impredicative is a self-referencing definition. Roughly speaking, a definition is impredicative if it invokes (mentions or quantifies over) the set being defined, or (more commonly) another set that contains the thing being defined. There is no generally accepted precise definition of what it means to be predicative or impredicative. Authors have given different but related definitions.
The opposite of impredicativity is predicativity, which essentially entails building stratified (or ramified) theories where quantification over lower levels results in variables of some new type, distinguished from the lower types that the variable ranges over. A prototypical example is intuitionistic type theory, which retains ramification so as to discard impredicativity.
Russell's paradox is a famous example of an impredicative construction—namely the set of all sets that do not contain themselves. The paradox is that such a set cannot exist: if it existed, we could ask whether it contains itself or not — if it does then by definition it should not, and if it does not then by definition it should.
The greatest lower bound of a set X, glb(X), also has an impredicative definition: y = glb(X) if and only if y is less than or equal to every element of X, and any z less than or equal to all elements of X is less than or equal to y. This definition quantifies over the set (potentially infinite, depending on the order in question) whose members are the lower bounds of X, one of which being the glb itself. Hence predicativism would reject this definition.
History
The terms "predicative" and "impredicative" were introduced by , though the meaning has changed a little since then.
Solomon Feferman provides a historical review of predicativity, connecting it to current outstanding research problems.
The vicious circle principle was suggested by Henri Poincaré (1905-6, 1908) and Bertrand Russell in the wake of the paradoxes as a requirement on legitimate set specifications. Sets that do not meet the requirement are called impredicative.
The first modern paradox appeared with Cesare Burali-Forti's 1897 A question on transfinite numbers and would become known as the Burali-Forti paradox. Cantor had apparently discovered the same paradox in his (Cantor's) "naive" set theory, and this became known as Cantor's paradox. Russell's awareness of the problem originated in June 1901 with his reading of Frege's treatise on mathematical logic, his 1879 Begriffsschrift; the offending sentence in Frege is the following:
In other words, given the function is the variable and is the invariant part. So why not substitute the value for itself? Russell promptly wrote Frege a letter pointing out that:
Frege promptly wrote back to Russell acknowledging the problem:
While the problem had adverse personal consequences for both men (both had works at the printers that had to be emended), van Heijenoort obser
|
https://en.wikipedia.org/wiki/List%20of%20English%20districts%20by%20population%20density
|
This is a list of the districts of England ordered by population density, based on population estimates for from the Office for National Statistics. The densities are calculated by dividing the latest Population Estimate by the Standard Area Measurement.
Less than 100 / km²
See also
List of English districts by population
List of English districts by area
List of English districts and their ethnic composition
References
Districts of England
Districts of England by Population Density
English districts
|
https://en.wikipedia.org/wiki/Q-matrix
|
In mathematics, a Q-matrix is a square matrix whose associated linear complementarity problem LCP(M,q) has a solution for every vector q.
Properties
M is a Q-matrix if there exists d > 0 such that LCP(M,0) and LCP(M,d) have a unique solution.
Any P-matrix is a Q-matrix. Conversely, if a matrix is a Z-matrix and a Q-matrix, then it is also a P-matrix.
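The defining condition can be made concrete for small systems. The sketch below is a naive brute-force LCP solver for 2×2 matrices (the function name and tolerances are ours, not a standard API): it tries the four complementary patterns and returns a solution z ≥ 0 with w = Mz + q ≥ 0 and zᵀw = 0 if one exists.

```python
def solve_lcp_2x2(M, q):
    """Brute-force solver for the 2x2 linear complementarity problem
    LCP(M, q): find z >= 0 with w = M z + q >= 0 and z[i] * w[i] = 0,
    by trying all four complementary patterns of active indices."""
    (a, b), (c, d) = M
    eps = 1e-9
    candidates = [(0.0, 0.0)]                 # z = 0 entirely
    if abs(a) > eps:
        candidates.append((-q[0] / a, 0.0))   # w1 = 0, z2 = 0
    if abs(d) > eps:
        candidates.append((0.0, -q[1] / d))   # z1 = 0, w2 = 0
    det = a * d - b * c
    if abs(det) > eps:                        # w = 0: solve M z = -q by Cramer
        candidates.append(((-q[0] * d + q[1] * b) / det,
                           (-q[1] * a + q[0] * c) / det))
    for z in candidates:
        w = (a * z[0] + b * z[1] + q[0], c * z[0] + d * z[1] + q[1])
        if (all(v >= -eps for v in z) and all(v >= -eps for v in w)
                and abs(z[0] * w[0]) < 1e-6 and abs(z[1] * w[1]) < 1e-6):
            return z
    return None

# The identity matrix is a P-matrix, hence a Q-matrix: a solution exists
# for every q.
print(solve_lcp_2x2(((1, 0), (0, 1)), (-3, 4)))  # (3.0, 0.0)
```

For a P-matrix such as ((2, 1), (1, 2)), a solution exists (and is unique) for every right-hand side q, illustrating the first claim above.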
See also
P-matrix
Z-matrix
References
Matrix theory
Matrices
|
https://en.wikipedia.org/wiki/List%20of%20Liverpool%20F.C.%20records%20and%20statistics
|
Liverpool Football Club is an English professional association football club based in Liverpool, Merseyside, who currently play in the Premier League. They have played at their current home ground, Anfield, since their foundation in 1892. Liverpool joined the Football League in 1894, and were founding members of the Premier League in 1992.
This list encompasses the major honours won by Liverpool, records set by the club, their managers and their players. The player records section includes details of the club's leading goalscorers and those who have made most appearances in first-team competitions. It also records notable achievements by Liverpool players on the international stage, and the highest transfer fees paid and received by the club. Attendance records at Anfield are also included in the list.
The club have won 19 top-flight titles, and also hold the record for the most European Cup victories by an English team, winning the competition six times. The club's record appearance maker is Ian Callaghan, who made 857 appearances between 1958 and 1978. Ian Rush is the club's record goalscorer, scoring 346 goals in total.
All statistics are correct as of 21 February 2023.
Honours
Liverpool have won honours both domestically and in European cup competitions. They have won the English top league 19 times and the League Cup a record nine times. In their first season, 1892–93, they won the Lancashire League title and the Liverpool District Cup, and their most recent success came in 2022, when they won their eighth FA Cup title.
Player records
Appearances
Most appearances in all competitions: Ian Callaghan, 857
Most league appearances: Ian Callaghan, 640
Most FA Cup appearances: Ian Callaghan, 79
Most League Cup appearances: Ian Rush, 78
Most continental appearances: Jamie Carragher, 150
Youngest first-team player: Jerome Sinclair, 16 years and 6 days (against West Bromwich Albion, 26 September 2012)
Youngest player to start a first-team match: Harvey Elliott, 16 years and 174 days (against Milton Keynes Dons, 25 September 2019)
Oldest first-team player: Ned Doig, 41 years and 165 days (against Newcastle United, 11 April 1908)
Oldest debutant: Ned Doig, 37 years and 307 days (against Burton United, 1 September 1904)
Most consecutive appearances: Phil Neal, 417 (23 October 1976 – 24 September 1983)
Most seasons playing every minute of every league and cup game: Phil Neal, 9 (from 1976–77 to 1983–84)
Longest-serving player: Elisha Scott, 21 years and 52 days (1913–1934)
Most red cards while playing for Liverpool: Steven Gerrard, 7
Most appearances
Competitive, professional matches only, appearances as substitute in brackets.
Goalscorers
Most goals in all competitions: Ian Rush, 346
Most league goals: Roger Hunt, 244
Most FA Cup goals: Ian Rush, 39
Most League Cup goals: Ian Rush, 48
Most continental goals: Mohamed Salah, 43
First player to score for Liverpool: Malcolm McVean (against Rotherham Town, 1 September 1892)
Most goals in a season:
|
https://en.wikipedia.org/wiki/Jerzy%20Pniewski
|
Jerzy Pniewski (June 1, 1913 – June 16, 1989) was a Polish physicist.
Pniewski was born in Płock. He studied mathematics and physics at the University of Warsaw.
In 1952, he co-discovered the hypernucleus with Marian Danysz. In 1962, he discovered hypernuclear isomery.
References
1913 births
1989 deaths
20th-century Polish physicists
|
https://en.wikipedia.org/wiki/Del%20%28disambiguation%29
|
Del is a vector differential operator represented by the symbol ∇ (nabla).
Del or DEL can also refer to:
Mathematics
A name for the partial derivative symbol ∂
Dynamic epistemic logic
Abbreviations
DEL or Del, for Delaware, one of the United States
Del, for the constellation Delphinus
Del., for a non-voting delegate to the United States House of Representatives
People
Del (given name), a list of people with the given name or nickname
Del Shannon, stage name of American rock and country singer-songwriter Charles Weedon Westover (1934–1990)
Del tha Funkee Homosapien (short for "Delvon"), American hip hop artist
Del Fontaine (1904–1935), Canadian boxer and convicted murderer born Raymond Henry Bousquet
Fictional characters
Del Boy, lead character in the BBC comedy series Only Fools and Horses
Del Dingle, fictional character in the ITV soap opera Emmerdale
Del, robot alligator villager from the video game series Animal Crossing
Mascots
Del, one of the mascots of PBS Kids since 2013
Computing
DEL, Data-Entry Language, predecessor of the Lua programming language
Del (command), a DOS, OS/2, and Microsoft Windows shell command
, HTML tags used to mark text for deletion
Delete character, also known as rubout
Delete key, abbreviated Del on computer keyboards
Acronyms
Department for Employment and Learning, part of the Northern Ireland government
Deutsche Eishockey Liga, the premier ice hockey league in Germany
DNA Encoded Chemical Library, a technology for the synthesis and screening of collections of chemical compounds
Codes
DEL, IATA code for Indira Gandhi International Airport, Delhi, India
del, ISO 639-2 and 639-3 codes for the Delaware languages of Native Americans
Music
"Del", a song on the album 3rd Eye Vision by Hieroglyphics
See also
DEL2, the second tier of ice hockey in Germany, below the DEL
Deel (disambiguation)
Dell (disambiguation)
|
https://en.wikipedia.org/wiki/List%20of%20school%20districts%20in%20Sonoma%20County%2C%20California
|
List of school districts in Sonoma County, California. Statistics are as of the 2008–09 academic year.
Cazadero area:
Fort Ross (K-8, 1 school, 40 students, website)
Montgomery (K-8, 1 school, 38 students)
Cloverdale Unified (K-12, 5 schools, 1520 students, website)
Cotati-Rohnert Park Unified (K-12, 13 schools, 6,654 students)
Forestville Union (K-8, 2 schools, 486 students, website)
Geyserville Unified (K-12, 5 schools, 273 students, website)
Guerneville (K-8, 2 schools, 302 students, website)
Harmony Union (K-8, 3 schools, 834 students)
Healdsburg area:
Alexander Valley Union (K-6, 1 school, 120 students, website)
Healdsburg Unified (K-12, 4 schools, 2,267 students, website)
West Side Union (K-6, 1 school, 163 students, website)
Horicon (K-8, 1 school, 86 students)
Kashia (K-8, 1 school, 11 students)
Kenwood (K-6, 1 school, 153 students, website)
Monte Rio Union (K-8, 1 school, 104 students, website)
Petaluma area:
Cinnabar (K-6, 1 school, 205 students, website)
Dunham (K-6, 1 school, 174 students, website)
Liberty (K-6, 2 schools, 635 students, website)
Old Adobe Union (K-6, 5 schools, 1,832 students, website)
Petaluma City Schools (website):
Petaluma City (Elementary) (K-6, 8 schools, 2,272 students)
Petaluma Joint Union High (7-12, 10 schools, 5,731 students)
Two Rock Union (K-6, 1 school, 152 students, website)
Waugh (K-6, 2 schools, 899 students, website)
Wilmar Union (K-6, 1 school, 224 students, website)
Santa Rosa area:
Bellevue Union (K-6, 4 schools, 1,725 students, website)
Bennett Valley Union (K-6, 2 schools, 951 students, website)
Mark West Union (K-6, 4 schools, 1,421 students, website)
Oak Grove Union (K-8, 2 schools, 722 students, website)
Piner-Olivet Union (K-8, 6 schools, 1,683 students)
Rincon Valley Union (K-6, 9 schools, 2,965 students, website)
Roseland (K-12, 3 schools, 1,994 students, website)
Santa Rosa City Schools:
Santa Rosa City (Elementary) (K-6, 14 schools, 4,734 students)
Santa Rosa City High (7-12, 19 schools, 11,964 students)
Wright (K-6, 3 schools, 1,435 students, website)
Sebastopol area:
Gravenstein Union (K-8, 2 schools, 508 students, website)
Sebastopol Union (K-8, 4 schools, 1,173 students, website)
Twin Hills Union (K-8, 4 schools, 908 students, website)
West Sonoma County Union High (9-12, 5 schools, 2,435 students, website)
Sonoma Valley Unified (K-12, 12 schools, 4,793 students, website)
Windsor Unified (K-12, 8 schools, 5,344 students)
See also
List of school districts in California by county
References
External links
District map
Sonoma
Sonoma
|
https://en.wikipedia.org/wiki/Karsten%20M%C3%BCller
|
Karsten Müller (born November 23, 1970, in Hamburg, West Germany) is a German chess Grandmaster and author. He earned the Grandmaster title in 1998 and a PhD in mathematics in 2002 at the University of Hamburg. He placed third in the 1996 German championship and second in the 1997 German championship.
He has written about endgames, including in Fundamental Chess Endings (Gambit Publications, 2001) and Secrets of Pawn Endings (Everyman Chess, 2000), both with Frank Lamprecht. He also wrote How to Play Chess Endgames, with Wolfgang Pajeken (Gambit, 2008) and Magic of Chess Tactics (Russell Enterprises 2003) with FIDE Master Claus Dieter Meyer. His column "Endgame Corner" has appeared at ChessCafe.com since January 2001 and he has been a regular contributor to ChessBase Magazine since 1997. He also contributed material to some of the early issues of the online daily chess newspaper Chess Today.
The seventh chapter of Tibor Karolyi's 2009 book Genius in the Background is devoted to him. His main interests apart from chess are football and mathematical games.
Books
Corrected edition by Gambit in 2007, .
ChessBase Products
Karsten has authored a large number of ChessBase products.
Notes
Further reading
External links
1970 births
Living people
Chess grandmasters
German chess players
German chess writers
German male non-fiction writers
University of Hamburg alumni
|
https://en.wikipedia.org/wiki/Hyperstructure
|
Hyperstructures are algebraic structures equipped with at least one multi-valued operation, called a hyperoperation. The largest classes of hyperstructures are the ones called Hv-structures.
A hyperoperation ∗ on a nonempty set H is a mapping from H × H to the nonempty power set P*(H), the set of all nonempty subsets of H, i.e.
∗ : H × H → P*(H).
For nonempty subsets A, B ⊆ H and x ∈ H we define
A ∗ B = ⋃_{a ∈ A, b ∈ B} a ∗ b
and
A ∗ x = A ∗ {x},  x ∗ B = {x} ∗ B.
(H, ∗) is a semihypergroup if ∗ is an associative hyperoperation, i.e. (x ∗ y) ∗ z = x ∗ (y ∗ z) for all x, y, z ∈ H.
Furthermore, a hypergroup is a semihypergroup (H, ∗) in which the reproduction axiom is valid, i.e.
x ∗ H = H ∗ x = H
for all x ∈ H.
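The axioms of a hypergroup are easy to verify mechanically for small examples. The sketch below (names are ours) checks associativity and reproduction for the "total" hyperoperation x ∗ y = H, the simplest example of a hypergroup:

```python
H = frozenset({0, 1, 2})

def hyperop(x, y):
    """A 'total' hyperoperation: every product is all of H."""
    return H

def set_product(A, B):
    """Extend the hyperoperation to subsets: A * B = union of a * b."""
    out = set()
    for a in A:
        for b in B:
            out |= hyperop(a, b)
    return frozenset(out)

# associativity on sets: (x * y) * z == x * (y * z) for all x, y, z in H
assoc = all(set_product(hyperop(x, y), {z}) == set_product({x}, hyperop(y, z))
            for x in H for y in H for z in H)
# reproduction axiom: x * H == H * x == H for all x in H
repro = all(set_product({x}, H) == H and set_product(H, {x}) == H for x in H)
print(assoc, repro)  # True True
```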
References
AHA (Algebraic Hyperstructures & Applications). A scientific group at Democritus University of Thrace, School of Education, Greece. aha.eled.duth.gr
Applications of Hyperstructure Theory, Piergiulio Corsini, Violeta Leoreanu, Springer, 2003, ,
Functional Equations on Hypergroups, László, Székelyhidi, World Scientific Publishing, 2012,
Abstract algebra
|
https://en.wikipedia.org/wiki/Sum-free%20sequence
|
In mathematics, a sum-free sequence is an increasing sequence of positive integers,
such that no term can be represented as a sum of any subset of the preceding elements of the sequence.
This differs from a sum-free set, where only pairwise sums must be avoided, but where those sums may come from the whole set rather than just the preceding terms.
Example
The powers of two,
1, 2, 4, 8, 16, ...
form a sum-free sequence: each term in the sequence is one more than the sum of all preceding terms, and so cannot be represented as a sum of preceding terms.
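The defining property is easy to test by brute force; a sketch (the function name is ours), practical only for short sequences:

```python
from itertools import combinations

def is_sum_free_sequence(seq):
    """Check that no term equals the sum of any nonempty subset of the
    preceding terms (brute force, fine for short sequences)."""
    for i, term in enumerate(seq):
        prev = seq[:i]
        for r in range(1, len(prev) + 1):
            if any(sum(s) == term for s in combinations(prev, r)):
                return False
    return True

print(is_sum_free_sequence([2 ** k for k in range(10)]))  # True
print(is_sum_free_sequence([1, 2, 3]))                    # False (3 = 1 + 2)
```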
Sums of reciprocals
A set of integers is said to be small if the sum of its reciprocals converges to a finite value. For instance, by the prime number theorem, the prime numbers are not small. Erdős proved that every sum-free sequence is small, and asked how large the sum of reciprocals could be. For instance, the sum of the reciprocals of the powers of two (a geometric series) is two.
If denotes the maximum sum of reciprocals of a sum-free sequence, then through subsequent research it is known that .
Density
It follows from the fact that sum-free sequences are small that they have zero Schnirelmann density; that is, if is defined to be the number of sequence elements that are less than or equal to , then . showed that for every sum-free sequence there exists an unbounded sequence of numbers for which where is the golden ratio, and he exhibited a sum-free sequence for which, for all values of , , subsequently improved to by Deshouillers, Erdős and Melfi in 1999 and to by Luczak and Schoen in 2000, who also proved that the exponent 1/2 cannot be further improved.
Notes
References
.
.
.
.
.
.
.
.
Additive combinatorics
Integer sequences
|
https://en.wikipedia.org/wiki/Max%20Noether%27s%20theorem
|
In algebraic geometry, Max Noether's theorem may refer to the results of Max Noether:
Several closely related results of Max Noether on canonical curves
AF+BG theorem, or Max Noether's fundamental theorem, a result on algebraic curves in the projective plane, on the residual sets of intersections
Max Noether's theorem on curves lying on algebraic surfaces, which are hypersurfaces in P3, or more generally complete intersections
Noether's theorem on rationality for surfaces
Max Noether theorem on the generation of the Cremona group by quadratic transformations
See also
Noether's theorem, usually referring to a result derived from work of Max's daughter Emmy Noether
Noether inequality
Special divisor
Hirzebruch–Riemann–Roch theorem
|
https://en.wikipedia.org/wiki/List%20of%20urban%20areas%20in%20the%20Republic%20of%20Ireland
|
This is a list of urban areas in the Republic of Ireland by population. In 2022, the Central Statistics Office (CSO), the Department of Housing, Local Government and Heritage and Tailte Éireann created a new unit of urban geography called Built Up Areas (BUAs), which were used to produce data for urban areas in the 2022 census of Ireland.
There were 867 BUAs, representing the entire settlement area of each town and city (including suburbs and environs). The 250 largest cities, towns and villages are listed below with data from the 2022 census.
Cities and towns list
Notes
See also
List of urban areas in the Republic of Ireland for the 2016 census
List of urban areas in the Republic of Ireland for the 2011 census
List of urban areas in the Republic of Ireland for the 2006 census
List of urban areas in the Republic of Ireland for the 2002 census
List of towns and villages in the Republic of Ireland
List of localities in Northern Ireland by population
List of settlements on the island of Ireland by population
References
External links
Ireland
Urban
|
https://en.wikipedia.org/wiki/FK-AK%20space
|
In functional analysis and related areas of mathematics an FK-AK space or FK-space with the AK property is an FK-space which contains the space of finite sequences and has a Schauder basis.
Examples and non-examples
c0, the space of sequences convergent to zero, with the supremum norm has the AK property.
ℓp (1 ≤ p < ∞), the absolutely p-summable sequences with the ℓp norm, have the AK property.
ℓ∞, the space of bounded sequences, with the supremum norm does not have the AK property.
Properties
An FK-AK space E has the property
E′ ≅ E^β,
that is, the continuous dual of E is linearly isomorphic to the beta dual of E.
FK-AK spaces are separable spaces.
See also
References
Topological vector spaces
|
https://en.wikipedia.org/wiki/Franklin%20Merrell-Wolff
|
Franklin Merrell-Wolff (born Franklin Fowler Wolff; 11 July 1887 – 4 October 1985) was an American mystic and esoteric philosopher. After formal education in philosophy and mathematics at Stanford and Harvard, Wolff devoted himself to the goal of transcending the normal limits of human consciousness. After exploring various mystical teachings and paths, he dedicated himself to the path of jnana yoga and the writings of Shankara, the expounder of the Advaita Vedanta school of Hindu philosophy.
Life
Franklin Fowler Wolff was born in Pasadena, California in 1887. He was raised as a Methodist, but abandoned Christianity during his youth. Wolff studied mathematics and philosophy at Stanford and Harvard. At Stanford, he was elected to the Phi Beta Kappa Society in 1911. He briefly taught mathematics at Stanford in 1914, but left academia the following year. In 1920, Wolff married Sarah Merrell Briggs. The couple joined their original surnames; hence Wolff became Franklin Merrell-Wolff. Merrell-Wolff and his wife founded an esoteric group called the Assembly of Man in 1928, which congregated in an ashram in the Sierra Nevada mountains near Mount Whitney. Sarah Merrell-Wolff, also known as Sherifa, died in 1959. Franklin Merrell-Wolff remarried and lived the rest of his life in the mountains until his death in 1985. He authored various books and a great number of recorded lectures explaining his philosophy.
Publications and philosophy
Wolff's publications are "an elaboration of the significance of [his] mystical experiences," described by religious scholar Arthur Versluis as a "consistent and extensive body of work with a unique vocabulary and set of concepts". In his works, Wolff described his mystic experiences and their implications, examining his experience in the light of his extensive knowledge of mathematics and philosophy. Although he started an Ashram, his form of spirituality was not necessarily compatible with a religious structure.
In his book Pathways Through to Space, Wolff describes having a profound spiritual realization in 1936, which provided the basis for his transcendental philosophy. It was induced "in a context of sustained reflective observation and deep thought," rather than by the usual practice of meditation. He called this experience the "Fundamental Realization". In its aftermath, Wolff found himself in a state of euphoric consciousness he called the "Current of Ambrosia", which he described as being "above time, space and causality". It also led Wolff to a state of "High Indifference", or consciousness without an object. At the center of these experiences was the realization of "Primordial consciousness", which, according to Wolff, is beyond and prior to the subject or the object and is unaffected by their presence or absence.
The notion of "Introception", or "Knowledge through Identity," "[describes] the inward focus of consciousness upon its own nature".
Wolff's other published books detailing his experience and
|
https://en.wikipedia.org/wiki/Greek%20letters%20used%20in%20mathematics%2C%20science%2C%20and%20engineering
|
Greek letters are used in mathematics, science, engineering, and other areas where mathematical notation is used as symbols for constants, special functions, and also conventionally for variables representing certain quantities. In these contexts, the capital letters and the small letters represent distinct and unrelated entities. Those Greek letters which have the same form as Latin letters are rarely used: capital A, B, E, Z, H, I, K, M, N, O, P, T, Y, X. Small ι, ο and υ are also rarely used, since they closely resemble the Latin letters i, o and u. Sometimes, font variants of Greek letters are used as distinct symbols in mathematics, in particular for ε/ϵ and π/ϖ. The archaic letter digamma (Ϝ/ϝ/ϛ) is sometimes used.
The Bayer designation naming scheme for stars typically uses the first Greek letter, α, for the brightest star in each constellation, and runs through the alphabet before switching to Latin letters.
In mathematical finance, the Greeks are the variables denoted by Greek letters used to describe the risk of certain investments.
Typography
The Greek letter forms used in mathematics are often different from those used in Greek-language text: they are designed to be used in isolation, not connected to other letters, and some use variant forms which are not normally used in current Greek typography.
The OpenType font format has the feature tag "mgrk" ("Mathematical Greek") to identify a glyph as representing a Greek letter to be used in mathematical (as opposed to Greek language) contexts.
The table below shows a comparison of Greek letters rendered in TeX and HTML.
The font used in the TeX rendering is an italic style. This is in line with the convention that variables should be italicized. As Greek letters are more often than not used as variables in mathematical formulas, a Greek letter appearing similar to the TeX rendering is more likely to be encountered in works involving mathematics.
Concepts represented by a Greek letter
Αα (alpha)
represents:
the first angle in a triangle, opposite the side a
the statistical significance of a result
the false positive rate in statistics ("Type I" error)
the fine-structure constant in physics
the angle of attack of an aircraft
an alpha particle (He2+)
angular acceleration in physics
the linear thermal expansion coefficient
the thermal diffusivity
In organic chemistry the α-carbon is the backbone carbon next to the carbonyl carbon, most often for amino acids
right ascension in astronomy
the brightest star in a constellation
the ferrite phase of iron (α-iron) and numerous other phases in materials science
the return in excess of the compensation for the risk borne in investment
the α-conversion in lambda calculus
the independence number of a graph
a placeholder for ordinal numbers in mathematical logic
a type of receptor for the neurotransmitter noradrenaline in neuroscience
Ββ (beta)
represents:
the beta function
the thermodynamic beta, equal to (kBT)−1, where kB is Boltzmann's constant
|
https://en.wikipedia.org/wiki/Phi%20Sigma%20Rho
|
Phi Sigma Rho (ΦΣΡ; also known as Phi Rho or PSR) is a social sorority for individuals who identify as female or non-binary in science, technology, engineering, and mathematics. The sorority was founded in 1984 at Purdue University. It has since expanded to more than 40 colleges across the United States.
History
Phi Sigma Rho was founded on September 24, 1984, at Purdue University by Rashmi Khanna and Abby McDonald. Khanna and McDonald were unable to participate in traditional sorority rush due to the demands of the sororities and their engineering program, so they decided to start a new sorority that would take their academic program's demands into consideration.
The Alpha chapter at Purdue University was founded with ten charter members: Gail Bonney, Anita Chatterjea, Ann Cullinan, Pam Kabbes, Rashmi Khanna, Abby McDonald, Christine Mooney, Tina Kershner, Michelle Self, and Kathy Vargo.
Phi Sigma Rho accepts students pursuing degrees in science, technology, engineering, and mathematics who identify as female or who identify as non-binary. The sorority made the decision to include non-binary students in all chapters in the summer of 2021.
Phi Sigma Rho has grown to more than 40 chapters nationally. Its headquarters is located in Northville, Michigan. Its online magazine is The Key.
Symbols
The colors of Phi Sigma Rho are wine red and silver. The sorority's flower is the orchid, and its jewel is the pearl. Its mascot is Sigmand the penguin. Its motto is "together we build the future."
Objectives
The objectives of Phi Sigma Rho are:
To foster and provide the broadening experience of sorority living with its social and moral challenges and responsibilities for the individual and the chapter.
To develop the highest standard of personal integrity and character.
To promote academic excellence and support personal achievement, while providing a social balance.
To aid the individual in the transition from academic to the professional community.
To maintain sorority involvement with the alma mater and the community through responsible participation.
To maintain the bond of sisterhood with alumnae members through communication, consultation, and participation in Sorority functions.
Philanthropy
Phi Sigma Rho's national philanthropy is the Leukemia & Lymphoma Society.
The Phi Sigma Rho Foundation was established as a separate nonprofit organization in 2005. It supports the educational and philanthropic efforts of the sorority's members and offers merit-based scholarships to sorority members.
Chapters
The following table lists Phi Sigma Rho chapters, prospective chapters, and interest groups. Active chapters are indicated in bold. Inactive chapters are indicated in italic.
Notes
See also
Professional fraternities and sororities
Society of Women Engineers
References
Fraternities and sororities in the United States
Student societies in the United States
Student organizations established in 1984
1984 establishments in Indiana
Purdu
|
https://en.wikipedia.org/wiki/Renewal%20theory
|
Renewal theory is the branch of probability theory that generalizes the Poisson process for arbitrary holding times. Instead of exponentially distributed holding times, a renewal process may have any independent and identically distributed (IID) holding times that have finite mean. A renewal-reward process additionally has a random sequence of rewards incurred at each holding time, which are IID but need not be independent of the holding times.
A renewal process has asymptotic properties analogous to the strong law of large numbers and central limit theorem. The renewal function m(t) (expected number of arrivals) and reward function g(t) (expected reward value) are of key importance in renewal theory. The renewal function satisfies a recursive integral equation, the renewal equation. The key renewal equation gives the limiting value of the convolution of m′(t) with a suitable non-negative function. The superposition of renewal processes can be studied as a special case of Markov renewal processes.
Applications include calculating the best strategy for replacing worn-out machinery in a factory and comparing the long-term benefits of different insurance policies. The inspection paradox relates to the fact that observing a renewal interval at time t gives an interval with average value larger than that of an average renewal interval.
Renewal processes
Introduction
The renewal process is a generalization of the Poisson process. In essence, the Poisson process is a continuous-time Markov process on the positive integers (usually starting at zero) which has independent exponentially distributed holding times at each integer i before advancing to the next integer, i + 1. In a renewal process, the holding times need not have an exponential distribution; rather, the holding times may have any distribution on the positive numbers, so long as the holding times are independent and identically distributed (IID) and have finite mean.
Formal definition
Let (S_1, S_2, S_3, ...) be a sequence of positive independent identically distributed random variables with finite expected value
μ = E[S_i] < ∞.
We refer to the random variable S_i as the "i-th holding time".
Define for each n > 0:
J_n = S_1 + S_2 + ... + S_n;
each J_n is referred to as the "n-th jump time" and the intervals [J_{n−1}, J_n] are called "renewal intervals".
Then the renewal process is given by the random variable
X_t = Σ_{n≥1} 1{J_n ≤ t},
where 1{J_n ≤ t} is the indicator function of the event {J_n ≤ t}.
X_t represents the number of jumps that have occurred by time t, and is called a renewal process.
Interpretation
If one considers events occurring at random times, one may choose to think of the holding times as the random time elapsed between two consecutive events. For example, if the renewal process is modelling the number of breakdowns of different machines, then the holding time represents the time between one machine breaking down before another one does.
The Poisson process is the unique renewal process with the Markov property, as the exponential distribution is the unique continuous random variable with the property of memorylessness.
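The counting construction above is easy to simulate. The sketch below is illustrative, not from the article: the Uniform(0, 2) holding-time law, the seed, and the function name `renewal_count` are arbitrary choices. It draws i.i.d. holding times, accumulates the jump times, and counts jumps up to time t; by the elementary renewal property, N(t)/t approaches 1/μ (here μ = 1) for large t.

```python
import random

def renewal_count(t, holding_time, rng):
    """Count renewals (jumps) that have occurred by time t.

    holding_time: callable returning one i.i.d. positive holding time S_i.
    """
    jumps, clock = 0, 0.0
    while True:
        clock += holding_time(rng)   # next jump time J_{n+1} = J_n + S_{n+1}
        if clock > t:
            return jumps
        jumps += 1

rng = random.Random(0)
# Uniform(0, 2) holding times have mean mu = 1, so N(t)/t should approach 1.
t = 10_000.0
n = renewal_count(t, lambda r: r.uniform(0.0, 2.0), rng)
print(n / t)  # close to 1.0 for large t
```

Swapping in `r.expovariate(1.0)` for the holding-time law recovers a rate-1 Poisson process, the unique Markovian special case mentioned above.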
Renewal-reward processes
|
https://en.wikipedia.org/wiki/Eisenstein%27s%20theorem
|
In mathematics, Eisenstein's theorem, named after the German mathematician Gotthold Eisenstein, applies to the coefficients of any power series which is an algebraic function with rational number coefficients. Through the theorem, it is readily demonstrable, for example, that the exponential function must be a transcendental function.
Theorem
Suppose that
y = Σ_{n=0}^∞ a_n x^n
is a formal power series with rational coefficients a_n, which has a non-zero radius of convergence in the complex plane, and within it represents an analytic function that is in fact an algebraic function. Then Eisenstein's theorem states that there exists a non-zero integer A, such that A^n a_n are all integers.
This has an interpretation in terms of p-adic numbers: with an appropriate extension of the idea, the p-adic radius of convergence of the series is at least 1, for almost all p (i.e., the primes outside a finite set S). In fact that statement is a little weaker, in that it disregards any initial partial sum of the series, in a way that may vary according to p. For the other primes the radius is non-zero.
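As a concrete check (an illustration, not part of the article), take the algebraic function y = √(1 + x), which satisfies y² = 1 + x; its Taylor coefficients a_n are the generalized binomial coefficients C(1/2, n). Assuming the normalization A^n·a_n of the theorem statement, the integer A = 4 clears every denominator:

```python
from fractions import Fraction

def sqrt_series_coeff(n):
    """n-th Taylor coefficient a_n of sqrt(1+x) = sum a_n x^n,
    i.e. the generalized binomial coefficient C(1/2, n)."""
    a = Fraction(1)
    for k in range(n):
        a *= Fraction(1, 2) - k   # multiply by (1/2 - k) ...
        a /= k + 1                # ... and divide by (k+1)
    return a

# sqrt(1+x) is algebraic, so Eisenstein's theorem applies; A = 4 works here:
A = 4
for n in range(20):
    v = Fraction(A) ** n * sqrt_series_coeff(n)
    assert v.denominator == 1    # 4^n * a_n is an integer
print("4^n * a_n is an integer for all n < 20")
```

The exponential series, by contrast, has a_n = 1/n!, and no fixed A can make A^n/n! integral for all n, which is how the theorem shows exp is transcendental.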
History
Eisenstein's original paper is the short communication
Über eine allgemeine Eigenschaft der Reihen-Entwicklungen aller algebraischen Functionen
(1852), reproduced in Mathematische Gesammelte Werke, Band II, Chelsea Publishing Co., New York, 1975,
p. 765–767.
More recently, many authors have investigated precise and effective bounds quantifying the above "almost all".
See, e.g., Sections 11.4 and 11.55 of the book by E. Bombieri & W. Gubler.
References
Theorems in number theory
|
https://en.wikipedia.org/wiki/Moyal%20product
|
In mathematics, the Moyal product (after José Enrique Moyal; also called the star product or Weyl–Groenewold product, after Hermann Weyl and Hilbrand J. Groenewold) is an example of a phase-space star product. It is an associative, non-commutative product, ⋆, on the functions on R^2n, equipped with its Poisson bracket (with a generalization to symplectic manifolds, described below). It is a special case of the ⋆-product of the "algebra of symbols" of a universal enveloping algebra.
Historical comments
The Moyal product is named after José Enrique Moyal, but is also sometimes called the Weyl–Groenewold product as it was introduced by H. J. Groenewold in his 1946 doctoral dissertation, in a trenchant appreciation of the Weyl correspondence. Moyal actually appears not to know about the product in his celebrated article and was crucially lacking it in his legendary correspondence with Dirac, as illustrated in his biography. The popular naming after Moyal appears to have emerged only in the 1970s, in homage to his flat phase-space quantization picture.
Definition
The product for smooth functions f and g on R^2n takes the form
f ⋆ g = Σ_{n=0}^∞ ħ^n C_n(f, g),
where each C_n is a certain bidifferential operator of order 2n characterized by the following properties (see below for an explicit formula):
Deformation of the pointwise product — implicit in the formula above.
Deformation of the Poisson bracket, called Moyal bracket.
The 1 of the undeformed algebra is also the identity in the new algebra.
The complex conjugate is an antilinear antiautomorphism.
Note that, if one wishes to take functions valued in the real numbers, then an alternative version eliminates the i in the second condition and eliminates the fourth condition.
If one restricts to polynomial functions, the above algebra is isomorphic to the Weyl algebra A_n, and the two offer alternative realizations of the Weyl map of the space of polynomials in 2n variables (or the symmetric algebra of a vector space of dimension 2n).
To provide an explicit formula, consider a constant Poisson bivector Π on R^2n:
Π = Σ_{i,j} Π^{ij} ∂_i ∧ ∂_j,
where Π^{ij} is a real number for each i, j.
The star product of two functions f and g can then be defined as the pseudo-differential operator acting on both of them,
f ⋆ g = fg + (iħ/2) Σ_{i,j} Π^{ij} (∂_i f)(∂_j g) + O(ħ²),
where ħ is the reduced Planck constant, treated as a formal parameter here.
This is a special case of what is known as the Berezin formula on the algebra of symbols and can be given a closed form (which follows from the Baker–Campbell–Hausdorff formula). The closed form can be obtained by using the exponential:
f ⋆ g = m ∘ e^{(iħ/2) Π^{ij} ∂_i ⊗ ∂_j} (f ⊗ g),
where m is the multiplication map, m(a ⊗ b) = ab, and the exponential is treated as a power series,
e^A = Σ_{n=0}^∞ A^n / n!.
That is, the formula for C_n is
C_n(f, g) = (i/2)^n (1/n!) Π^{i_1 j_1} ⋯ Π^{i_n j_n} (∂_{i_1} ⋯ ∂_{i_n} f)(∂_{j_1} ⋯ ∂_{j_n} g).
As indicated, often one eliminates all occurrences of i above, and the formulas then restrict naturally to real numbers.
Note that if the functions and are polynomials, the above infinite sums become finite (reducing to the ordinary Weyl-algebra case).
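For one degree of freedom (x, p) with the canonical Poisson structure, the series can be sketched with sympy. This is an illustrative implementation under that convention, not from the article; the function name `moyal` and the truncation order are ad hoc. On polynomials the sum terminates, so the canonical commutator x ⋆ p − p ⋆ x = iħ comes out exactly:

```python
from sympy import symbols, I, diff, binomial, factorial, simplify, expand

x, p, hbar = symbols('x p hbar')

def moyal(f, g, order=4):
    """Truncated Moyal star product on one degree of freedom (x, p):
    f * g = sum_n (i*hbar/2)^n / n! * sum_k C(n,k) (-1)^k
            (d_x^{n-k} d_p^k f) (d_x^k d_p^{n-k} g).
    Exact for polynomials once `order` exceeds their total degree."""
    total = 0
    for n in range(order + 1):
        term = 0
        for k in range(n + 1):
            term += (binomial(n, k) * (-1)**k
                     * diff(f, x, n - k, p, k)      # derivatives hitting f
                     * diff(g, x, k, p, n - k))     # derivatives hitting g
        total += (I * hbar / 2)**n / factorial(n) * term
    return expand(total)

# First-order deformation: x * p = x*p + i*hbar/2, and the Moyal bracket
# reproduces the canonical commutator [x, p] = i*hbar.
print(moyal(x, p))
assert simplify(moyal(x, p) - moyal(p, x) - I * hbar) == 0
```

Setting ħ → 0 in `moyal(f, g)` recovers the pointwise product, matching the first characterizing property above.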
The relationship of the Moyal product to the generalized -product used in the definition of the "algebra
|
https://en.wikipedia.org/wiki/Fundamental%20theorem%20of%20Galois%20theory
|
In mathematics, the fundamental theorem of Galois theory is a result that describes the structure of certain types of field extensions in relation to groups. It was proved by Évariste Galois in his development of Galois theory.
In its most basic form, the theorem asserts that given a field extension E/F that is finite and Galois, there is a one-to-one correspondence between its intermediate fields and subgroups of its Galois group. (Intermediate fields are fields K satisfying F ⊆ K ⊆ E; they are also called subextensions of E/F.)
Explicit description of the correspondence
For finite extensions, the correspondence can be described explicitly as follows.
For any subgroup H of Gal(E/F), the corresponding fixed field, denoted EH, is the set of those elements of E which are fixed by every automorphism in H.
For any intermediate field K of E/F, the corresponding subgroup is Aut(E/K), that is, the set of those automorphisms in Gal(E/F) which fix every element of K.
The fundamental theorem says that this correspondence is a one-to-one correspondence if (and only if) E/F is a Galois extension.
For example, the topmost field E corresponds to the trivial subgroup of Gal(E/F), and the base field F corresponds to the whole group Gal(E/F).
The notation Gal(E/F) is only used for Galois extensions. If E/F is Galois, then Gal(E/F) = Aut(E/F). If E/F is not Galois, then the "correspondence" gives only an injective (but not surjective) map from to , and a surjective (but not injective) map in the reverse direction. In particular, if E/F is not Galois, then F is not the fixed field of any subgroup of Aut(E/F).
Properties of the correspondence
The correspondence has the following useful properties.
It is inclusion-reversing. The inclusion of subgroups H1 ⊆ H2 holds if and only if the inclusion of fields EH1 ⊇ EH2 holds.
Degrees of extensions are related to orders of groups, in a manner consistent with the inclusion-reversing property. Specifically, if H is a subgroup of Gal(E/F), then |H| = [E:EH] and |Gal(E/F)|/|H| = [EH:F].
The field EH is a normal extension of F (or, equivalently, Galois extension, since any subextension of a separable extension is separable) if and only if H is a normal subgroup of Gal(E/F). In this case, the restriction of the elements of Gal(E/F) to EH induces an isomorphism between Gal(EH/F) and the quotient group Gal(E/F)/H.
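The degree/order bookkeeping above can be checked on the standard example E = Q(√2, √3) over F = Q. The coefficient-tuple model below is a hypothetical illustration, not from the article: an element a + b√2 + c√3 + d√6 is stored as the tuple (a, b, c, d), the Galois group is the Klein four-group, and the fixed field of the subgroup H = {id, σ} is Q(√3), of degree |Gal(E/F)|/|H| = 4/2 = 2 over Q.

```python
# Element a + b*sqrt(2) + c*sqrt(3) + d*sqrt(6) of E = Q(sqrt2, sqrt3)
# is represented by its coefficient tuple (a, b, c, d).
def sigma(v):   # automorphism sending sqrt(2) -> -sqrt(2), fixing sqrt(3)
    a, b, c, d = v
    return (a, -b, c, -d)

def tau(v):     # automorphism sending sqrt(3) -> -sqrt(3), fixing sqrt(2)
    a, b, c, d = v
    return (a, b, -c, -d)

def compose(f, g):
    return lambda v: f(g(v))

ident = lambda v: v
group = [ident, sigma, tau, compose(sigma, tau)]   # Gal(E/Q), Klein four-group

sample = (1, 2, 3, 4)
for f in group:
    assert f(f(sample)) == sample          # every element squares to identity
assert sigma(tau(sample)) == tau(sigma(sample))    # the group is abelian

# Fixed field of H = {id, sigma}: elements with b = d = 0, i.e. Q(sqrt3).
fixed_by_sigma = (5, 0, 7, 0)
assert sigma(fixed_by_sigma) == fixed_by_sigma
```

The inclusion-reversing property is visible here: enlarging H from {id} to {id, σ} shrinks the fixed field from E down to Q(√3).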
Example 1
Consider the field
K = Q(√2, √3) = Q(√2)(√3).
Since K is constructed from the base field Q by adjoining √2, then √3, each element of K can be written as:
a + b√2 + c√3 + d√6,  with a, b, c, d ∈ Q.
Its Galois group G = Gal(K/Q) comprises the automorphisms of K which fix Q. Such automorphisms must send √2 to √2 or −√2, and send √3 to √3 or −√3, since they permute the roots of any irreducible polynomial. Suppose that f exchanges √2 and −√2, so
f(a + b√2 + c√3 + d√6) = a − b√2 + c√3 − d√6,
and g exchanges √3 and −√3, so
g(a + b√2 + c√3 + d√6) = a + b√2 − c√3 − d√6.
These are clearly automorphisms of K, respecting its addition and multiplication. There is also the identity automorphism e which fixes each element, and the composition of f and g which changes the signs on both radicals:
(fg)(a + b√2 + c√3 + d√6) = a − b√2 − c√3 + d√6.
Since the order of th
|
https://en.wikipedia.org/wiki/List%20of%20high%20schools%20in%20Washington%2C%20D.C.
|
This is a list of high schools in Washington, D.C.
High Schools
Map of High Schools
{
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [
-76.982326,
38.870618
]
},
"properties": {
"title": "Anacostia High School",
"marker-color": "#0000FF",
"Neighborhood": "Anacostia",
"Ward": 8,
"DCPS School Code": 450,
"Address": "1601 16th St SE, Washington, DC 20020"
}
},
{
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [
-77.000676,
38.84027
]
},
"properties": {
"title": "Ballou High School",
"marker-color": "#0000FF",
"Neighborhood": "Congress Heights",
"Ward": 8,
"DCPS School Code": 452,
"Address": "3401 4th St SE, Washington, DC 20032"
}
},
{
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [
-77.0195784,
38.9672842
]
},
"properties": {
"title": "Calvin Coolidge High School",
"marker-color": "#0000FF",
"Neighborhood": "Takoma",
"Ward": 4,
"DCPS School Code": 455,
"Address": "6315 5th St NW, Washington, DC 20011"
}
},
{
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [
-77.0284,
38.92194525
]
},
"properties": {
"title": "Cardozo Education Campus",
"marker-color": "#0000FF",
"Neighborhood": "Columbia Heights",
"Ward": 1,
"DCPS School Code": 442,
"Address": "1200 Clifton St NW, Washington, DC 20009"
}
},
{
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [
-77.012407,
38.914232
]
},
"properties": {
"title": "Dunbar High School",
"marker-color": "#0000FF",
"Neighborhood": "Truxton Circle",
"Ward": 5,
"DCPS School Code": 467,
"Address": "101 N St NW, Washington, DC 20001"
}
},
{
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [
-76.979687,
38.89041
]
},
"properties": {
"title": "Eastern High School",
"marker-color": "#0000FF",
"Neighborhood": "Kingman Park",
"Ward": 7,
"DCPS School Code": 457,
"Address": "1700 East Capitol St NE, Washington, DC 20002"
}
},
{
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [
-76.92199513,
38.89637511
]
},
"properties": {
"title": "H.D. Woodson Senior High School",
"marker-color": "#0000FF",
"Neighborhood
|
https://en.wikipedia.org/wiki/Anne%20Wheeler
|
Anne Wheeler, OC (born September 23, 1946), is a Canadian film and television writer, producer, and director.
Biography
Graduating in mathematics from the University of Alberta, she worked as a computer programmer before traveling abroad. Her years of travel inspired her to become a storyteller, and when she returned she joined a group of old friends to form a film collective. From 1975 to 1985 she worked for the NFB, where she made her first feature film, A War Story (1981), about her father, Ben Wheeler, and his time as a doctor in a P.O.W. camp during World War II. The war is a common theme in her work, and she revisited it later in her films Bye Bye Blues (1989) and The War Between Us (1995). Her first non-NFB film was Loyalties in 1986.
In addition to her films, Wheeler has directed episodes of Anne with an E, Private Eyes, Strange Empire, The Romeo Section, The Guard, This Is Wonderland, Da Vinci's Inquest, and Cold Squad.
Awards and honors
Wheeler has been nominated four times for the Genie Award for Best Achievement in Direction for her films Loyalties (1986), Cowboys Don't Cry (1988), Bye Bye Blues (1989), and Suddenly Naked (2001). Her 1998 television miniseries, The Sleep Room, won Gemini awards for best television movie and best direction.
In 2017 Wheeler won a Leo Award for Best Direction (Television Film) for the Hallmark movie Stop the Wedding.
Wheeler was made an Officer of the Order of Canada in 1995. In 2012 she received the Queen Elizabeth II Diamond Jubilee Medal. Wheeler has also been awarded seven honorary doctorates and is the first woman to be given a Lifetime Achievement Award from the Directors Guild of Canada.
Filmography
See also
List of female film and television directors
List of LGBT-related films directed by women
References
External links
Canadian Film Encyclopedia, a publication of The Film Reference Library, a division of the Toronto International Film Festival Group
Official web site
Anne Wheeler at the Canadian Women Film Directors Database
1946 births
Canadian women film directors
Canadian television directors
Canadian women television directors
Living people
Officers of the Order of Canada
Film directors from Edmonton
Writers from Edmonton
Canadian women screenwriters
Victoria School of Performing and Visual Arts alumni
Best Original Song Genie and Canadian Screen Award winners
Directors of Genie and Canadian Screen Award winners for Best Short Documentary Film
20th-century Canadian screenwriters
21st-century Canadian screenwriters
20th-century Canadian women writers
21st-century Canadian women writers
Canadian women documentary filmmakers
|
https://en.wikipedia.org/wiki/Arithmetic%20genus
|
In mathematics, the arithmetic genus of an algebraic variety is one of a few possible generalizations of the genus of an algebraic curve or Riemann surface.
Projective varieties
Let X be a projective scheme of dimension r over a field k; the arithmetic genus of X is defined as
p_a(X) = (−1)^r (χ(O_X) − 1).
Here χ(O_X) is the Euler characteristic of the structure sheaf O_X.
Complex projective manifolds
The arithmetic genus of a complex projective manifold
of dimension n can be defined as a combination of Hodge numbers, namely
p_a = h^{n,0} − h^{n−1,0} + ⋯ + (−1)^{n−1} h^{1,0}.
When n = 1, the formula becomes p_a = h^{1,0}. According to the Hodge theorem, h^{0,1} = h^{1,0}. Consequently p_a = h^{0,1} = g, where g is the usual (topological) meaning of genus of a surface, so the definitions are compatible.
When X is a compact Kähler manifold, applying h^{p,q} = h^{q,p} recovers the earlier definition for projective varieties.
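The alternating sum of Hodge numbers is trivial to evaluate; the sketch below (an illustration, not from the article, assuming the convention p_a = h^{n,0} − h^{n−1,0} + ⋯ ± h^{1,0}) checks the two low-dimensional cases: for a curve p_a = g, and for a surface p_a = p_g − q.

```python
def arithmetic_genus(h):
    """Arithmetic genus from the Hodge numbers
    h = [h^{n,0}, h^{n-1,0}, ..., h^{1,0}] (highest first),
    via the alternating sum p_a = h^{n,0} - h^{n-1,0} + ... """
    return sum((-1)**k * v for k, v in enumerate(h))

# n = 1 (a curve): the only datum is h^{1,0} = g, so p_a = g.
assert arithmetic_genus([3]) == 3
# n = 2 (a surface): p_a = h^{2,0} - h^{1,0}, i.e. geometric genus minus
# irregularity. For a K3 surface (p_g = 1, q = 0) this gives p_a = 1.
assert arithmetic_genus([1, 0]) == 1
```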
Kähler manifolds
By using h^{p,q} = h^{q,p} for compact Kähler manifolds this can be
reformulated as the Euler characteristic in coherent cohomology for the structure sheaf O_X:
p_a = (−1)^n (χ(O_X) − 1).
This definition therefore can be applied to some other
locally ringed spaces.
See also
Genus (mathematics)
Geometric genus
References
Further reading
Topological methods of algebraic geometry
|
https://en.wikipedia.org/wiki/Hilbert%27s%20irreducibility%20theorem
|
In number theory, Hilbert's irreducibility theorem, conceived by David Hilbert in 1892, states that every finite set of irreducible polynomials in a finite number of variables and having rational number coefficients admits a common specialization of a proper subset of the variables to rational numbers such that all the polynomials remain irreducible. It is a prominent theorem in number theory.
Formulation of the theorem
Hilbert's irreducibility theorem. Let
f_1(X_1, ..., X_r, Y_1, ..., Y_s), ..., f_n(X_1, ..., X_r, Y_1, ..., Y_s)
be irreducible polynomials in the ring
Q(X_1, ..., X_r)[Y_1, ..., Y_s].
Then there exists an r-tuple of rational numbers (a_1, ..., a_r) such that
f_1(a_1, ..., a_r, Y_1, ..., Y_s), ..., f_n(a_1, ..., a_r, Y_1, ..., Y_s)
are irreducible in the ring
Q[Y_1, ..., Y_s].
Remarks.
It follows from the theorem that there are infinitely many r-tuples. In fact the set of all irreducible specializations, called the Hilbert set, is large in many senses. For example, this set is Zariski dense in affine r-space.
There are always (infinitely many) integer specializations, i.e., the assertion of the theorem holds even if we demand (a1, ..., ar) to be integers.
There are many Hilbertian fields, i.e., fields satisfying Hilbert's irreducibility theorem. For example, number fields are Hilbertian.
The irreducible specialization property stated in the theorem is the most general. There are many reductions, e.g., it suffices to take n = s = 1 in the definition. A result of Bary-Soroker shows that for a field K to be Hilbertian it suffices to consider the case of n = s = 1 and f = f_1 absolutely irreducible, that is, irreducible in the ring Kalg[X,Y], where Kalg is the algebraic closure of K.
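The specialization phenomenon is easy to observe computationally. The sympy sketch below (a hypothetical example, not from the article) takes f(X, Y) = Y² − X, irreducible as a polynomial in Y over Q(X), and checks which integer specializations of X keep it irreducible over Q — only the perfect squares drop out:

```python
from sympy import symbols, Poly, QQ

X, Y = symbols('X Y')

# f(X, Y) = Y**2 - X is irreducible in Y over Q(X); Hilbert's theorem
# promises (infinitely many) rational specializations a of X for which
# f(a, Y) stays irreducible over Q.
f = Y**2 - X

irreducible_specializations = [
    a for a in range(-5, 6)
    if Poly(f.subs(X, a), Y, domain=QQ).is_irreducible
]
# Y**2 - a factors over Q exactly when a is a square (here 0, 1, 4).
print(irreducible_specializations)
```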
Applications
Hilbert's irreducibility theorem has numerous applications in number theory and algebra. For example:
The inverse Galois problem, Hilbert's original motivation. The theorem almost immediately implies that if a finite group G can be realized as the Galois group of a Galois extension N of E = Q(X_1, ..., X_n),
then it can be specialized to a Galois extension N0 of the rational numbers with G as its Galois group. (To see this, choose a monic irreducible polynomial f(X1, ..., Xn, Y) whose root generates N over E. If f(a1, ..., an, Y) is irreducible for some ai, then a root of it will generate the asserted N0.)
Construction of elliptic curves with large rank.
Hilbert's irreducibility theorem is used as a step in the Andrew Wiles proof of Fermat's Last Theorem.
If a polynomial g(x) is a perfect square for all large integer values of x, then g(x) is the square of a polynomial in Z[x]. This follows from Hilbert's irreducibility theorem applied to f(X, Y) = Y² − g(X).
(More elementary proofs exist.) The same result is true when "square" is replaced by "cube", "fourth power", etc.
Generalizations
It has been reformulated and generalized extensively, by using the language of algebraic geometry. See thin set (Serre).
References
D. Hilbert, "Über die Irreducibilität ganzer rationaler Functionen mit ganzzahligen Coefficienten", J. reine angew. Math. 110 (1892) 104–129.
J. P. Serre, Lectures on The Mordell-Weil Theorem, Vieweg, 1989.
M. D. Fried and M. Jarden, Field Arithmetic, Springer-Verlag, Berlin, 2005.
H
|
https://en.wikipedia.org/wiki/Critical%20point%20%28mathematics%29
|
Critical point is a term used in many branches of mathematics.
When dealing with functions of a real variable, a critical point is a point in the domain of the function where the function is either not differentiable or the derivative is equal to zero. Similarly, when dealing with complex variables, a critical point is a point in the function's domain where it is either not holomorphic or the derivative is equal to zero. Likewise, for a function of several real variables, a critical point is a value in its domain where the gradient is undefined or is equal to zero.
The value of the function at a critical point is a critical value.
This sort of definition extends to differentiable maps between R^m and R^n, a critical point being, in this case, a point where the rank of the Jacobian matrix is not maximal. It extends further to differentiable maps between differentiable manifolds, as the points where the rank of the Jacobian matrix decreases. In this case, critical points are also called bifurcation points.
In particular, if C is a plane curve, defined by an implicit equation f(x, y) = 0, the critical points of the projection onto the x-axis, parallel to the y-axis, are the points where the tangent to C is parallel to the y-axis, that is the points where
∂f/∂y (x, y) = 0.
In other words, the critical points are those where the implicit function theorem does not apply.
The notion of a critical point allows the mathematical description of an astronomical phenomenon that was unexplained before the time of Copernicus. A stationary point in the orbit of a planet is a point of the trajectory of the planet on the celestial sphere, where the motion of the planet seems to stop before restarting in the other direction. This occurs because of a critical point of the projection of the orbit into the ecliptic circle.
Critical point of a single variable function
A critical point of a function of a single real variable, f(x), is a value x_0 in the domain of f where f is not differentiable or its derivative is 0 (i.e., f′(x_0) = 0). A critical value is the image under f of a critical point. These concepts may be visualized through the graph of f: at a critical point, the graph has a horizontal tangent if you can assign one at all.
Notice how, for a differentiable function, critical point is the same as stationary point.
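For a differentiable function the critical points can be found symbolically by solving f′(x) = 0. The sympy sketch below (a hypothetical example, not from the article) does this for f(x) = x³ − 3x and reads off the corresponding critical values:

```python
from sympy import symbols, diff, solveset, S

x = symbols('x')
f = x**3 - 3*x

# f is differentiable everywhere, so its critical points are exactly the
# zeros of f'(x) = 3x**2 - 3.
critical_points = solveset(diff(f, x), x, domain=S.Reals)
print(critical_points)                           # the set {-1, 1}

# The critical values are the images of the critical points under f.
critical_values = {f.subs(x, c) for c in critical_points}
print(critical_values)                           # f(1) = -2, f(-1) = 2
```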
Although it is easily visualized on the graph (which is a curve), the notion of critical point of a function must not be confused with the notion of critical point, in some direction, of a curve (see below for a detailed definition). If g(x, y) is a differentiable function of two variables, then g(x, y) = 0 is the implicit equation of a curve. A critical point of such a curve, for the projection parallel to the y-axis (the map (x, y) → x), is a point of the curve where ∂g/∂y (x, y) = 0. This means that the tangent of the curve is parallel to the y-axis, and that, at this point, g does not define an implicit function from x to y (see implicit function theorem). If (x_0, y_0) is such a critical point, then x_0 is the corresponding critical value. Such a critical point
|
https://en.wikipedia.org/wiki/Sampling%20error
|
In statistics, sampling errors are incurred when the statistical characteristics of a population are estimated from a subset, or sample, of that population. Since the sample does not include all members of the population, statistics of the sample (often known as estimators), such as means and quartiles, generally differ from the statistics of the entire population (known as parameters). The difference between the sample statistic and population parameter is considered the sampling error. For example, if one measures the height of a thousand individuals from a population of one million, the average height of the thousand is typically not the same as the average height of all one million people in the country.
Since sampling is almost always done to estimate population parameters that are unknown, by definition exact measurement of the sampling errors will not be possible; however they can often be estimated, either by general methods such as bootstrapping, or by specific methods incorporating some assumptions (or guesses) regarding the true population distribution and parameters thereof.
Description
Sampling Error
The sampling error is the error caused by observing a sample instead of the whole population. The sampling error is the difference between a sample statistic used to estimate a population parameter and the actual but unknown value of the parameter.
Effective Sampling
In statistics, a truly random sample means selecting individuals from a population with equal probability; in other words, picking individuals from a group without bias. Failing to do this correctly will result in a sampling bias, which can dramatically increase the sampling error in a systematic way. For example, attempting to measure the average height of the entire human population of the Earth, but measuring a sample only from one country, could result in a large over- or under-estimation. In reality, obtaining an unbiased sample can be difficult as many parameters (in this example, country, age, gender, and so on) may strongly bias the estimator and it must be ensured that none of these factors plays a part in the selection process.
Even in a perfectly unbiased sample, sampling error will still exist due to the remaining statistical component; consider that measuring only two or three individuals and taking the average would produce a wildly varying result each time. The likely size of the sampling error can generally be reduced by taking a larger sample.
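The shrinking of sampling error with sample size is easy to demonstrate by simulation. The sketch below is illustrative, not from the article: the synthetic "height" population, the seed, and the helper name `mean_sampling_error` are arbitrary choices. It draws unbiased random samples of increasing size and measures how far the sample mean typically lands from the population mean (roughly like 1/√n):

```python
import random
import statistics

rng = random.Random(42)
# A synthetic population of 100,000 "heights" with mean ~170 and sd ~10.
population = [rng.gauss(170, 10) for _ in range(100_000)]
mu = statistics.fmean(population)

def mean_sampling_error(sample_size, trials=200):
    """Average absolute gap between the sample mean and the population mean."""
    errors = []
    for _ in range(trials):
        sample = rng.sample(population, sample_size)   # unbiased random sample
        errors.append(abs(statistics.fmean(sample) - mu))
    return statistics.fmean(errors)

for n in (10, 100, 1000):
    print(n, round(mean_sampling_error(n), 3))   # error shrinks as n grows
```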
Sample Size Determination
The cost of increasing a sample size may be prohibitive in reality. Since the sampling error can often be estimated beforehand as a function of the sample size, various methods of sample size determination are used to weigh the predicted accuracy of an estimator against the predicted cost of taking a larger sample.
Bootstrapping and Standard Error
As discussed, a sample statistic, such as an average or percentage, will generally be subject to sample-to-sample variation. By compari
|
https://en.wikipedia.org/wiki/Fredholm%27s%20theorem
|
In mathematics, Fredholm's theorems are a set of celebrated results of Ivar Fredholm in the Fredholm theory of integral equations. There are several closely related theorems, which may be stated in terms of integral equations, in terms of linear algebra, or in terms of the Fredholm operator on Banach spaces.
The Fredholm alternative is one of the Fredholm theorems.
Linear algebra
Fredholm's theorem in linear algebra is as follows: if M is a matrix, then the orthogonal complement of the row space of M is the null space of M:
(row M)^⊥ = ker M.
Similarly, the orthogonal complement of the column space of M is the null space of the adjoint:
(col M)^⊥ = ker M^*.
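A small numerical illustration of the linear-algebra statement (a hypothetical example, not from the article): for the 2×3 matrix below, the vector (1, −2, 1) spans the null space, and it is orthogonal to every row, hence to every vector in the row space.

```python
# Concrete check that the null space of M is orthogonal to its row space.
M = [[1, 2, 3],
     [4, 5, 6]]          # rank 2, so the null space in R^3 is one-dimensional
null_vec = [1, -2, 1]    # satisfies M @ null_vec = 0

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# null_vec is killed by M ...
assert all(dot(row, null_vec) == 0 for row in M)
# ... hence it is orthogonal to any linear combination of the rows:
combo = [3 * a + (-7) * b for a, b in zip(M[0], M[1])]
assert dot(combo, null_vec) == 0
print("null space is orthogonal to the row space")
```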
Integral equations
Fredholm's theorem for integral equations is expressed as follows. Let K(x, y) be an integral kernel, and consider the homogeneous equations
∫_a^b K(x, y) φ(y) dy = λ φ(x)
and its complex adjoint
∫_a^b K̄(y, x) ψ(y) dy = λ̄ ψ(x).
Here, λ̄ denotes the complex conjugate of the complex number λ, and similarly for K̄(y, x). Then, Fredholm's theorem is that, for any fixed value of λ, these equations have either the trivial solution φ(x) = ψ(x) = 0 or have the same number of linearly independent solutions φ_1(x), ..., φ_n(x), ψ_1(x), ..., ψ_n(x).
A sufficient condition for this theorem to hold is for to be square integrable on the rectangle (where a and/or b may be minus or plus infinity).
Here, the integral is expressed as a one-dimensional integral on the real number line. In Fredholm theory, this result generalizes to integral operators on multi-dimensional spaces, including, for example, Riemannian manifolds.
Existence of solutions
One of Fredholm's theorems, closely related to the Fredholm alternative, concerns the existence of solutions to the inhomogeneous Fredholm equation
λ φ(x) − ∫_a^b K(x, y) φ(y) dy = f(x).
Solutions to this equation exist if and only if the function f(x) is orthogonal to the complete set of solutions {ψ_k(x)} of the corresponding homogeneous adjoint equation:
∫_a^b ψ̄_k(x) f(x) dx = 0,
where ψ̄_k(x) is the complex conjugate of ψ_k(x) and the former is one of the complete set of solutions to
λ̄ ψ(x) − ∫_a^b K̄(y, x) ψ(y) dy = 0.
A sufficient condition for this theorem to hold is for to be square integrable on the rectangle .
References
E.I. Fredholm, "Sur une classe d'equations fonctionnelles", Acta Math., 27 (1903) pp. 365–390.
Fredholm theory
Linear algebra
Theorems in functional analysis
|
https://en.wikipedia.org/wiki/Product%20order
|
In mathematics, given partial orders ≤_A and ≤_B on sets A and B, respectively, the product order (also called the coordinatewise order or componentwise order) is a partial ordering on the Cartesian product A × B. Given two pairs (a_1, b_1) and (a_2, b_2) in A × B, declare that (a_1, b_1) ≤ (a_2, b_2) if a_1 ≤_A a_2 and b_1 ≤_B b_2.
Another possible ordering on A × B is the lexicographical order, which is a total ordering. However the product order of two total orders is not in general total; for example, the pairs (0, 1) and (1, 0) are incomparable in the product order of the ordering 0 < 1 with itself. The lexicographic combination of two total orders is a linear extension of their product order, and thus the product order is a subrelation of the lexicographic order.
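The componentwise comparison, the incomparable pair, and the linear-extension claim can all be sketched in a few lines (an illustration, not from the article; the helper names are ad hoc):

```python
def product_le(p, q):
    """Product (componentwise) order on pairs: p <= q iff both coordinates do."""
    return p[0] <= q[0] and p[1] <= q[1]

assert product_le((0, 0), (1, 1))
# (0, 1) and (1, 0) are incomparable: neither direction holds.
assert not product_le((0, 1), (1, 0)) and not product_le((1, 0), (0, 1))

def lex_le(p, q):
    return p <= q   # Python tuple comparison is exactly the lexicographic order

# The lexicographic order is a linear extension of the product order:
# whenever product_le(p, q) holds, so does lex_le(p, q).
pairs = [(a, b) for a in range(3) for b in range(3)]
assert all(lex_le(p, q) for p in pairs for q in pairs if product_le(p, q))
print("lexicographic order extends the product order on {0,1,2}^2")
```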
The Cartesian product with the product order is the categorical product in the category of partially ordered sets with monotone functions.
The product order generalizes to arbitrary (possibly infinitary) Cartesian products.
Suppose A is a set and for every a ∈ A, (I_a, ≤) is a preordered set.
Then the product preorder on the product of the I_a is defined by declaring for any elements x = (x_a) and y = (y_a) in the product that
x ≤ y if and only if x_a ≤ y_a for every a ∈ A.
If every (I_a, ≤) is a partial order then so is the product preorder.
Furthermore, given a set A, the product order over the Cartesian product {0, 1}^A (ordering {0, 1} by 0 < 1) can be identified with the inclusion ordering of subsets of A.
The notion applies equally well to preorders. The product order is also the categorical product in a number of richer categories, including lattices and Boolean algebras.
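As a concrete sketch (illustrative only, not part of the article; the helper name product_le is made up), the componentwise comparison on tuples can be written directly, and Python's built-in tuple comparison supplies the lexicographic order for contrast:

```python
from itertools import product

def product_le(x, y):
    """Product (componentwise) order on equal-length tuples."""
    return all(a <= b for a, b in zip(x, y))

# The product order of two total orders need not be total:
# using the standard example, neither tuple is below the other componentwise.
assert not product_le((0, 1), (1, 0))
assert not product_le((1, 0), (0, 1))

# Python's built-in tuple comparison is the lexicographic order, which IS
# total, and it extends the product order (product order is a subrelation):
pts = list(product(range(3), repeat=2))
assert all(x <= y for x in pts for y in pts if product_le(x, y))
```

The last assertion checks the subrelation claim on a small grid: whenever x is below y componentwise, x is also below y lexicographically.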
References
See also
Direct product of binary relations
Examples of partial orders
Star product, a different way of combining partial orders
Orders on the Cartesian product of totally ordered sets
Ordinal sum of partial orders
Order theory
|
https://en.wikipedia.org/wiki/Stephanie%20Pace%20Marshall
|
Stephanie Anne Pace Marshall (born July 19, 1945), is an American educator and the founding president of the Illinois Mathematics and Science Academy.
Education
Stephanie Anne Pace was born to Dominick Martin and Anne (née Price) Pace in the Bronx, New York on July 19, 1945, and grew up in the New York City area. She graduated from East Meadow High School in 1963. Pace attended Muhlenberg College from 1963 to 1965 before transferring to Queens College, City University of New York where she completed a B.A. in education and sociology in 1967. In 1971, she earned an M.A. in curriculum philosophy from the University of Chicago. In January 1983, she completed a Ph.D. in Educational Administration and Industrial Relations from Loyola University Chicago. Her dissertation was titled, An analysis of the profile, roles, functions, and behavior of women on boards of education in DuPage County, Illinois. Marshall's doctoral advisor was Melvin P. Heller.
Career
Marshall was a schoolteacher in elementary and junior high schools in Alsip, Illinois. She taught graduate courses at National Louis University. In 1976, Marshall became assistant superintendent for instruction for Batavia Public School District 101. From 1983 to 1985, she served as Batavia's superintendent.
Marshall served as president of the Illinois Mathematics and Science Academy from its 1985 founding until 2007. She was president of the Association for Supervision and Curriculum Development (ASCD).
Her philosophy of education was influenced by anthropologist Margaret Mead and educators Ernie Boyer and Elliot Eisner.
Awards and honors
Marshall was inducted as a laureate of The Lincoln Academy of Illinois and received the Order of Lincoln Award in the area of education from the Governor of Illinois in 2005. She is a Fellow of the Royal Society of Arts. She was awarded honorary degrees from Illinois Wesleyan University, Aurora University, and North Central College.
Personal life
Marshall was married to educator Robert Dean Marshall before his death in 2014.
Selected works
References
External links
Living people
School superintendents in Illinois
Queens College, City University of New York alumni
University of Chicago alumni
Loyola University Chicago alumni
National Louis University faculty
People from Batavia, Illinois
1945 births
Educators from New York City
20th-century American educators
21st-century American educators
People from the Bronx
Schoolteachers from New York (state)
Schoolteachers from Illinois
20th-century American women educators
21st-century American women educators
American women academics
|
https://en.wikipedia.org/wiki/Strongly%20compact%20cardinal
|
In set theory, a branch of mathematics, a strongly compact cardinal is a certain kind of large cardinal.
A cardinal κ is strongly compact if and only if every κ-complete filter can be extended to a κ-complete ultrafilter.
Strongly compact cardinals were originally defined in terms of infinitary logic, where logical operators are allowed to take infinitely many operands. The logic on a regular cardinal κ is defined by requiring the number of operands for each operator to be less than κ; then κ is strongly compact if its logic satisfies an analog of the compactness property of finitary logic.
Specifically, a statement which follows from some other collection of statements should also follow from some subcollection having cardinality less than κ.
The property of strong compactness may be weakened by only requiring this compactness property to hold when the original collection of statements has cardinality below a certain cardinal λ; we may then refer to λ-compactness. A cardinal is weakly compact if and only if it is κ-compact; this was the original definition of that concept.
Strong compactness implies measurability, and is implied by supercompactness. Given that the relevant cardinals exist, it is consistent with ZFC either that the first measurable cardinal is strongly compact, or that the first strongly compact cardinal is supercompact; these cannot both be true, however. A measurable limit of strongly compact cardinals is strongly compact, but the least such limit is not supercompact.
The consistency strength of strong compactness is strictly above that of a Woodin cardinal. Some set theorists conjecture that existence of a strongly compact cardinal is equiconsistent with that of a supercompact cardinal. However, a proof is unlikely until a canonical inner model theory for supercompact cardinals is developed.
Extendibility is a second-order analog of strong compactness.
See also
List of large cardinal properties
References
Large cardinals
|
https://en.wikipedia.org/wiki/Metabelian%20group
|
In mathematics, a metabelian group is a group whose commutator subgroup is abelian. Equivalently, a group G is metabelian if and only if there is an abelian normal subgroup A such that the quotient group G/A is abelian.
Subgroups of metabelian groups are metabelian, as are images of metabelian groups under group homomorphisms.
Metabelian groups are solvable. In fact, they are precisely the solvable groups of derived length at most 2.
Examples
Any dihedral group is metabelian, as it has a cyclic normal subgroup of index 2. More generally, any generalized dihedral group is metabelian, as it has an abelian normal subgroup of index 2.
If F is a field, the group of affine maps x ↦ ax + b (where a ≠ 0) acting on F is metabelian. Here the abelian normal subgroup is the group of pure translations x ↦ x + b, and the abelian quotient group is isomorphic to the group of homotheties x ↦ ax, i.e. to the multiplicative group of F. If F is a finite field with q elements, this metabelian group is of order q(q − 1).
The group of direct isometries of the Euclidean plane is metabelian. This is similar to the above example, as the elements are again affine maps. The translations of the plane form an abelian normal subgroup of the group, and the corresponding quotient is the circle group.
The finite Heisenberg group H3,p of order p3 is metabelian. The same is true for any Heisenberg group defined over a ring (group of upper-triangular 3 × 3 matrices with entries in a commutative ring).
All nilpotent groups of class 3 or less are metabelian.
The lamplighter group is metabelian.
All groups of order p5 are metabelian (for prime p).
All groups of order less than 24 are metabelian.
In contrast to this last example, the symmetric group S4 of order 24 is not metabelian, as its commutator subgroup is the non-abelian alternating group A4.
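The last claim can be verified by brute force on permutations. The sketch below (helper names are made up for illustration; permutations are written as tuples) computes derived subgroups by closure under composition and confirms that the commutator subgroup of S4 has 12 elements and is non-abelian, while its own commutator subgroup (the Klein four-group) is abelian:

```python
from itertools import permutations

def compose(p, q):
    # (p . q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def closure(gens):
    # smallest set containing gens and closed under composition
    H, grew = set(gens), True
    while grew:
        grew = False
        for a in list(H):
            for b in list(H):
                c = compose(a, b)
                if c not in H:
                    H.add(c)
                    grew = True
    return H

def derived_subgroup(G):
    # subgroup generated by all commutators g^-1 h^-1 g h
    return closure({compose(compose(inverse(g), inverse(h)), compose(g, h))
                    for g in G for h in G})

def is_abelian(G):
    return all(compose(a, b) == compose(b, a) for a in G for b in G)

S4 = set(permutations(range(4)))
D1 = derived_subgroup(S4)                      # the alternating group A4
assert len(D1) == 12 and not is_abelian(D1)    # so S4 is NOT metabelian
D2 = derived_subgroup(D1)                      # the Klein four-group
assert len(D2) == 4 and is_abelian(D2)         # so A4 IS metabelian
```

Since the derived series of S4 reaches the trivial group only after three steps (S4 ▷ A4 ▷ V ▷ 1), S4 has derived length 3.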
References
External links
Ryan Wisnesky, Solvable groups (subsection Metabelian Groups)
Groupprops, The Group Properties Wiki Metabelian group
Properties of groups
Solvable groups
|
https://en.wikipedia.org/wiki/List%20of%20presidents%20of%20the%20United%20States%20by%20date%20of%20death
|
The following is a list of presidents of the United States by date of death, plus additional lists of presidential death related statistics. Of the 45 people who have served as President of the United States since the office came into existence in 1789, 39 have died, eight of them while in office.
The oldest president at the time of death was George H. W. Bush, who died at the age of 94. John F. Kennedy, assassinated at the age of 46, was the nation's shortest-lived president; the youngest to have died by natural causes was James K. Polk, who died of cholera at the age of 53.
Presidents in order of death
Died same day, date, year, age
Same day
July 4, 1826: Thomas Jefferson at 12:50 p.m., and John Adams at 6:20 p.m.
Same date
March 8: Millard Fillmore in 1874 and William Howard Taft in 1930
July 4: John Adams and Thomas Jefferson in 1826, and James Monroe in 1831
December 26: Harry S. Truman in 1972 and Gerald Ford in 2006
Same calendar year
1826: Thomas Jefferson and John Adams, both on July 4
1862: John Tyler and Martin Van Buren, on January 18 and July 24 respectively
1901: Benjamin Harrison and William McKinley, on March 13 and September 14 respectively
Same age (rounded down to nearest year)
93: Gerald Ford and Ronald Reagan
90: John Adams and Herbert Hoover
78: Andrew Jackson and Dwight D. Eisenhower
71: John Tyler and Grover Cleveland
67: George Washington, Benjamin Harrison and Woodrow Wilson
64: Franklin Pierce and Lyndon B. Johnson
63: Ulysses S. Grant and Franklin D. Roosevelt
60: Theodore Roosevelt and Calvin Coolidge
57: Chester A. Arthur and Warren G. Harding
Died before multiple predecessors
9th president William Henry Harrison (died April 4, 1841)
before 7th president Andrew Jackson (died June 8, 1845)
before 6th president John Quincy Adams (died February 23, 1848)
before 8th president Martin Van Buren (died July 24, 1862)
11th president James K. Polk (died June 15, 1849)
before 10th president John Tyler (died January 18, 1862)
before 8th president Martin Van Buren (died July 24, 1862)
12th president Zachary Taylor (died July 9, 1850)
before 10th president John Tyler (died January 18, 1862)
before 8th president Martin Van Buren (died July 24, 1862)
15th president James Buchanan (died June 1, 1868)
before 14th president Franklin Pierce (died October 8, 1869)
before 13th president Millard Fillmore (died March 8, 1874)
16th president Abraham Lincoln (died April 15, 1865)
before 15th president James Buchanan (died June 1, 1868)
before 14th president Franklin Pierce (died October 8, 1869)
before 13th president Millard Fillmore (died March 8, 1874)
20th president James A. Garfield (died September 19, 1881)
before 18th president Ulysses S. Grant (died July 23, 1885)
before 19th president Rutherford B. Hayes (died January 17, 1893)
29th president Warren Harding (died August 2, 1923)
before 28th president Woodrow Wilson (died February 3, 1924)
before 27th president William Howard Taft (died March 8, 1930)
|
https://en.wikipedia.org/wiki/Peroxymonosulfuric%20acid
|
Peroxymonosulfuric acid, H2SO5, is also known as persulfuric acid, peroxysulfuric acid, or Caro's acid. In this acid, the S(VI) center adopts its characteristic tetrahedral geometry; the connectivity is indicated by the formula HO–O–S(O)2–OH. It is one of the strongest oxidants known (E0 = +2.51 V) and is highly explosive.
H2SO5 is sometimes confused with H2S2O8, known as peroxydisulfuric acid. The disulfuric acid, which appears to be more widely used as its alkali metal salts, has the structure HO–S(O)2–O–O–S(O)2–OH.
History
H2SO5 was first described in 1898 by the German chemist Heinrich Caro, after whom it is named.
Synthesis and production
The laboratory scale preparation of Caro's acid involves the combination of chlorosulfuric acid and hydrogen peroxide:
ClSO3H + H2O2 ⇌ H2SO5 + HCl
Published patents include more than one reaction for preparation of Caro's acid, usually as an intermediate for the production of potassium monopersulfate (PMPS), a bleaching and oxidizing agent. One patent for production of Caro's acid for this purpose gives the following reaction:
H2O2 + H2SO4 ⇌ H2SO5 + H2O
This is the reaction that produces the acid transiently in "piranha solution".
Uses in industry
H2SO5 has been used for a variety of disinfectant and cleaning applications, e.g., swimming pool treatment and denture cleaning. Alkali metal salts of H2SO5 show promise for the delignification of wood. It is also used in laboratories as a last resort in removing organic materials, since H2SO5 can fully oxidize any organic material.
Ammonium, sodium, and potassium salts of H2S2O8 are used in the plastics industry as radical initiators for polymerization. They are also used as etchants, oxidative desizing agents for textile fabrics, and for decolorizing and deodorizing oils.
Potassium peroxymonosulfate, KHSO5, is the potassium acid salt of peroxymonosulfuric acid. It is widely used as an oxidizing agent.
Hazards
Pure Caro's acid is highly explosive. Explosions have been reported at Brown University and Sun Oil. As with all strong oxidizing agents, peroxysulfuric acid should be kept away from organic compounds such as ethers and ketones because of its ability to peroxidize these compounds, creating highly unstable molecules such as acetone peroxide.
See also
Peroxydisulfuric acid
Peroxomonosulfate
References
Hydrogen compounds
Sulfur oxoacids
Liquid explosives
Persulfates
Peroxy acids
Oxidizing agents
Explosive chemicals
|
https://en.wikipedia.org/wiki/Whitney%20umbrella
|
In geometry, the Whitney umbrella (or Whitney's umbrella, named after American mathematician Hassler Whitney, and sometimes called a Cayley umbrella) is a specific self-intersecting ruled surface placed in three dimensions. It is the union of all straight lines that pass through points of a fixed parabola and are perpendicular to a fixed straight line which is parallel to the axis of the parabola and lies on its perpendicular bisecting plane.
Formulas
Whitney's umbrella can be given by the parametric equations in Cartesian coordinates
x(u, v) = uv,  y(u, v) = u,  z(u, v) = v²,
where the parameters u and v range over the real numbers. It is also given by the implicit equation
x² = y² z.
This formula also includes the negative z axis (which is called the handle of the umbrella).
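As a quick numerical check (a sketch, assuming the standard parametrization x = uv, y = u, z = v², which matches the implicit equation x² = y²z), every sampled parametric point satisfies the implicit equation, while the handle is not reached by the parametrization:

```python
# Sample the standard Whitney-umbrella parametrization and check that each
# sampled point satisfies the implicit equation x^2 - y^2 z = 0.
samples = [(u / 4.0, v / 4.0) for u in range(-8, 9) for v in range(-8, 9)]
for u, v in samples:
    x, y, z = u * v, u, v * v
    assert abs(x * x - y * y * z) < 1e-12

# The handle: points (0, 0, z) with z < 0 satisfy the implicit equation
# trivially (0 = 0), but are never hit by the parametrization since z = v^2 >= 0.
assert all(v * v >= 0.0 for _, v in samples)
```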
Properties
Whitney's umbrella is a ruled surface and a right conoid. It is important in the field of singularity theory, as a simple local model of a pinch point singularity. The pinch point and the fold singularity are the only stable local singularities of maps from R2 to R3.
It is named after the American mathematician Hassler Whitney.
In string theory, a Whitney brane is a D7-brane wrapping a variety whose singularities are locally modeled by the Whitney umbrella. Whitney branes appear naturally when taking Sen's weak coupling limit of F-theory.
See also
Cross-cap
Right conoid
Ruled surface
References
(Images and movies of the Whitney umbrella.)
Differential topology
Singularity theory
Surfaces
Algebraic geometry
|
https://en.wikipedia.org/wiki/Complementary%20event
|
In probability theory, the complement of any event A is the event [not A], i.e. the event that A does not occur. The event A and its complement [not A] are mutually exclusive and exhaustive. Generally, there is only one event B such that A and B are both mutually exclusive and exhaustive; that event is the complement of A. The complement of an event A is usually denoted as A′, Ac, ¬A or Ā. Given an event, the event and its complementary event define a Bernoulli trial: did the event occur or not?
For example, if a typical coin is tossed and one assumes that it cannot land on its edge, then it can either land showing "heads" or "tails." Because these two outcomes are mutually exclusive (i.e. the coin cannot simultaneously show both heads and tails) and collectively exhaustive (i.e. there are no other possible outcomes not represented between these two), they are therefore each other's complements. This means that [heads] is logically equivalent to [not tails], and [tails] is equivalent to [not heads].
Complement rule
In a random experiment, the probabilities of all possible events (the sample space) must total to 1; that is, some outcome must occur on every trial. For two events to be complements, they must be collectively exhaustive, together filling the entire sample space. Therefore, the probability of an event's complement must be unity minus the probability of the event. That is, for an event A, Pr(A′) = 1 − Pr(A).
Equivalently, the probabilities of an event and its complement must always total to 1. This does not, however, mean that any two events whose probabilities total to 1 are each other's complements; complementary events must also fulfill the condition of mutual exclusivity.
Example of the utility of this concept
Suppose one throws an ordinary six-sided die eight times. What is the probability that one sees a "1" at least once?
It may be tempting to say that
Pr(["1" on 1st trial] or ["1" on 2nd trial] or ... or ["1" on 8th trial])
= Pr("1" on 1st trial) + Pr("1" on 2nd trial) + ... + Pr("1" on 8th trial)
= 1/6 + 1/6 + ... + 1/6
= 8/6
= 1.3333...
This result cannot be right because a probability cannot be more than 1. The technique is wrong because the eight events whose probabilities got added are not mutually exclusive.
One may resolve this overlap by the principle of inclusion-exclusion, or, in this case, by simply finding the probability of the complementary event and subtracting it from 1, thus:
Pr(at least one "1") = 1 − Pr(no "1"s)
= 1 − Pr([no "1" on 1st trial] and [no "1" on 2nd trial] and ... and [no "1" on 8th trial])
= 1 − Pr(no "1" on 1st trial) × Pr(no "1" on 2nd trial) × ... × Pr(no "1" on 8th trial)
= 1 −(5/6) × (5/6) × ... × (5/6)
= 1 − (5/6)8
= 0.7674...
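The computation above is easy to reproduce with exact rational arithmetic (a sketch, not part of the original article):

```python
from fractions import Fraction

# Probability of no "1" in eight independent rolls of a fair die:
p_none = Fraction(5, 6) ** 8
# Complement rule: probability of at least one "1" in eight rolls.
p_at_least_one = 1 - p_none

assert p_none == Fraction(390625, 1679616)
assert abs(float(p_at_least_one) - 0.7674) < 5e-4

# The naive (wrong) sum of the eight individual probabilities exceeds 1:
assert Fraction(8, 6) > 1
```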
See also
Logical complement
Exclusive disjunction
Binomial probability
References
External links
Complementary events - (free) page from probability book of McGraw-Hill
Experiment (probability theory)
|
https://en.wikipedia.org/wiki/149%20%28number%29
|
149 (one hundred [and] forty-nine) is the natural number between 148 and 150.
In mathematics
149 is a prime number, the first prime whose difference from the previous prime is exactly 10, an emirp, and an irregular prime. After 1 and 127, it is the third smallest de Polignac number, an odd number that cannot be represented as a prime plus a power of two. More strongly, after 1, it is the second smallest number that is not a sum of two prime powers.
It is a tribonacci number, being the sum of the three preceding terms, 24, 44, 81.
There are exactly 149 integer points in a closed circular disk of radius 7, and exactly 149 ways of placing six queens (the maximum possible) on a 5 × 5 chess board so that each queen attacks exactly one other. The barycentric subdivision of a tetrahedron produces an abstract simplicial complex with exactly 149 simplices.
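These properties are all small enough to verify by direct computation (a self-contained sketch; the primality helper is ordinary trial division):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

assert is_prime(149)
assert is_prime(941)                     # 149 reversed, so 149 is an emirp

# First prime whose gap to the previous prime is exactly 10:
primes = [n for n in range(2, 150) if is_prime(n)]
gaps = {q: q - p for p, q in zip(primes, primes[1:])}
assert gaps[149] == 10 and all(g != 10 for q, g in gaps.items() if q < 149)

# Tribonacci: 24 + 44 + 81 = 149
trib = [0, 1, 1]
while trib[-1] < 149:
    trib.append(trib[-1] + trib[-2] + trib[-3])
assert trib[-1] == 149 and trib[-4:-1] == [24, 44, 81]

# Integer points in the closed disk of radius 7:
assert sum(1 for x in range(-7, 8) for y in range(-7, 8)
           if x * x + y * y <= 49) == 149
```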
See also
The year AD 149 or 149 BC
List of highways numbered 149
References
External links
Integers
|
https://en.wikipedia.org/wiki/Isotropic%20manifold
|
In mathematics, an isotropic manifold is a manifold in which the geometry does not depend on directions. Formally, we say that a Riemannian manifold M is isotropic if for any point p ∈ M and unit vectors v, w ∈ T_pM, there is an isometry φ of M with φ(p) = p and dφ_p(v) = w. Every connected isotropic manifold is homogeneous, i.e. for any p, q ∈ M there is an isometry φ of M with φ(p) = q. This can be seen by considering a geodesic from p to q and taking the isometry which fixes its midpoint and reverses the geodesic, mapping its tangent vector there to its negative.
Examples
The simply-connected space forms (the n-sphere, hyperbolic space, and Euclidean space R^n) are isotropic. It is not true in general that any constant curvature manifold is isotropic; for example, the flat torus T^2 = R^2/Z^2 is not isotropic. This can be seen by noting that any isometry of T^2 which fixes a point must lift to an isometry of R^2 which fixes a point and preserves the lattice Z^2; thus the group of isometries of T^2 which fix a given point is discrete. Moreover, it can be seen in the same way that no oriented surface with constant curvature and negative Euler characteristic is isotropic.
Moreover, there are isotropic manifolds which do not have constant curvature, such as the complex projective space CP^n (for n ≥ 2) equipped with the Fubini–Study metric. Indeed, the universal cover of any constant-curvature manifold is either a sphere, or a hyperbolic space, or Euclidean space R^n. But CP^n is simply-connected yet not a sphere (for n ≥ 2), as can be seen for example from homotopy group calculations using the long exact sequence of the fibration S^1 → S^(2n+1) → CP^n.
Further examples of isotropic manifolds are given by the rank one symmetric spaces, including the projective spaces RP^n, CP^n, HP^n, and OP^2, as well as their noncompact hyperbolic analogues.
A manifold can be homogeneous but not isotropic, such as the flat torus T^2 or a Riemannian product like S^2 × S^2 with the product metric.
See also
Cosmological principle
Isotropic Manifold on Math.StackExchange (July 2013)
Differential geometry
|
https://en.wikipedia.org/wiki/Donald%20Gillies
|
Donald Gillies may refer to:
Donald B. Gillies (1928–1975), mathematician and computer scientist
Donald A. Gillies (born 1944), historian of mathematics
Donnie Gillies (born 1951), Scottish footballer
See also
Donald Gillis (disambiguation)
|
https://en.wikipedia.org/wiki/Siegel%20zero
|
In mathematics, more specifically in the field of analytic number theory, a Landau–Siegel zero or simply Siegel zero (also known as exceptional zero), named after Edmund Landau and Carl Ludwig Siegel, is a type of potential counterexample to the generalized Riemann hypothesis, on the zeros of Dirichlet L-functions associated to quadratic number fields. Roughly speaking, these are possible zeros very near (in a quantifiable sense) to s = 1.
Motivation and definition
The way in which Siegel zeros appear in the theory of Dirichlet L-functions is as potential exceptions to the classical zero-free regions, which can only occur when the L-function is associated to a real Dirichlet character.
Real primitive Dirichlet characters
For an integer q ≥ 1, a Dirichlet character modulo q is an arithmetic function χ : Z → C satisfying the following properties:
(Completely multiplicative) for every m, n, χ(mn) = χ(m)χ(n);
(Periodic) χ(n + q) = χ(n) for every n;
(Support) χ(n) = 0 if gcd(n, q) > 1.
That is, χ is the lifting of a homomorphism (Z/qZ)^× → C^×.
The trivial character is the character modulo 1, and the principal character modulo q, denoted χ0, is the lifting of the trivial homomorphism (Z/qZ)^× → C^×.
A character χ modulo q is called imprimitive if there exists some integer d with d | q and d < q such that the induced homomorphism factors as
(Z/qZ)^× → (Z/dZ)^× → C^×
for some character χ′ modulo d; otherwise, χ is called primitive.
A character is real (or quadratic) if it equals its complex conjugate (defined by n ↦ conjugate of χ(n)), or equivalently if χ² = χ0. The real primitive Dirichlet characters are in one-to-one correspondence with the Kronecker symbols (D/·) for D a fundamental discriminant (i.e., the discriminant of a quadratic number field). One way to define (D/·) is as the completely multiplicative arithmetic function determined by (for p prime): (D/p) = 0 if p | D; (D/p) equal to the Legendre symbol of D modulo p if p is an odd prime not dividing D; and (D/2) equal to 1 if D ≡ 1 (mod 8) and −1 if D ≡ 5 (mod 8).
It is thus common to write χ_D = (D/·); these are real primitive characters modulo |D|.
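As a concrete illustration (a sketch, not from the article; the helper name legendre_character is made up), the quadratic character modulo an odd prime p can be computed by Euler's criterion, and the three defining properties of a Dirichlet character checked directly:

```python
def legendre_character(p):
    """Real Dirichlet character mod an odd prime p, via Euler's criterion."""
    def chi(n):
        n %= p
        if n == 0:
            return 0                       # support: gcd(n, p) > 1
        return 1 if pow(n, (p - 1) // 2, p) == 1 else -1
    return chi

chi = legendre_character(7)
rng = range(1, 50)
# completely multiplicative
assert all(chi(m * n) == chi(m) * chi(n) for m in rng for n in rng)
# periodic with period 7
assert all(chi(n + 7) == chi(n) for n in range(50))
# real: chi equals its complex conjugate, i.e. takes values in {-1, 0, 1}
assert {chi(n) for n in range(7)} == {-1, 0, 1}
```

By quadratic reciprocity, this Legendre-symbol character corresponds to the Kronecker symbol with fundamental discriminant D = p when p ≡ 1 (mod 4), and D = −p when p ≡ 3 (mod 4).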
Classical zero-free regions
The Dirichlet L-function associated to a character χ is defined as the analytic continuation of the Dirichlet series L(s, χ) = Σ_{n ≥ 1} χ(n) n^(−s), defined for Re(s) > 1, where s is a complex variable. For χ non-principal, this continuation is entire; otherwise it has a simple pole of residue φ(q)/q at s = 1 as its only singularity. For Re(s) > 1, Dirichlet L-functions can be expanded into an Euler product L(s, χ) = Π_p (1 − χ(p) p^(−s))^(−1), from where it follows that L(s, χ) has no zeros in this region. The prime number theorem for arithmetic progressions is equivalent (in a certain sense) to L(1 + it, χ) ≠ 0 (t ∈ R). Moreover, via the functional equation, we can reflect these regions through the critical line Re(s) = 1/2 to conclude that, with the exception of negative integers of same parity as χ, all the other zeros of L(s, χ) must lie inside 0 ≤ Re(s) ≤ 1. This region is called the critical strip, and zeros in this region are called non-trivial zeros.
The classical theorem on zero-free regions (Grönwall, Landau, Titchmarsh) states that there exists an effectively computable real number c > 0 such that, writing s = σ + it for the complex variable, the function L(s, χ) has no zeros in the region
σ ≥ 1 − c / log(q(2 + |t|))
if χ is non-real. If χ is real, then there is at most one zero in this region, which must necessarily be real and simple. This possible zero is the so-called Siegel zero.
The General
|
https://en.wikipedia.org/wiki/Distribution
|
Distribution may refer to:
Mathematics
Distribution (mathematics), generalized functions used to formulate solutions of partial differential equations
Probability distribution, the probability of a particular value or value range of a variable
Cumulative distribution function, in which the probability of being no greater than a particular value is a function of that value
Frequency distribution, a list of the values recorded in a sample
Inner distribution, and outer distribution, in coding theory
Distribution (differential geometry), a subset of the tangent bundle of a manifold
Distributed parameter system, systems that have an infinite-dimensional state-space
Distribution of terms, a situation in which all members of a category are accounted for
Distributivity, a property of binary operations that generalises the distributive law from elementary algebra
Distribution (number theory)
Distribution problems, a common type of problems in combinatorics where the goal is to enumerate the number of possible distributions of objects to recipients, subject to various conditions; see Twelvefold way
Computing and telecommunications
Distribution (concurrency), the projection operator in a history monoid, a representation of the histories of concurrent computer processes
Data distribution or dissemination, to distribute information without direct feedback
Digital distribution, publishing media digitally
Distributed computing, the coordinated use of physically distributed computers (distributed systems) for tasks or storage
Electronic brakeforce distribution, an automotive technology that varies brake force based on prevailing conditions
Key distribution center, part of a cryptosystem intended to reduce the risks inherent in exchanging keys
Software distribution, bundles of a specific software already compiled and configured
A specific packaging of an operating system containing a kernel, toolchain, utilities and other software
Linux distribution, one of several distributions built on the Linux kernel
Natural sciences
Distribution (pharmacology), the movement of a drug from one location to another within the body
Species distribution, the manner in which a species is spatially arranged
Cosmopolitan distribution, in which a species appears in appropriate environments around the world
Spectral power distribution of light sources
Economics
Distribution (economics), distribution of income or output among individuals or factors of production (or to help others)
Distribution in kind, concerning the transfer of non-cash assets by a company to a shareholder, see Companies Act 2006
Distribution (marketing), or place, one of the four elements of marketing mix
Distribution resource planning, method used in business administration for planning orders within a supply chain
Distributionism, an economic ideology
Distribution of wealth, among members in a society
Division of property, or equitable distribution, of property between spouses during divorce
Food distribu
|
https://en.wikipedia.org/wiki/Large%20sieve
|
The large sieve is a method (or family of methods and related ideas) in analytic number theory. It is a type of sieve where up to half of all residue classes of numbers are removed, as opposed to small sieves such as the Selberg sieve wherein only a few residue classes are removed. The method has been further heightened by the larger sieve which removes arbitrarily many residue classes.
Name
Its name comes from its original application: given a set S ⊂ {1, ..., N} such that the elements of S are forbidden to lie in a set Ap ⊂ Z/pZ modulo every prime p, how large can S be? Here Ap is thought of as being large, i.e., at least as large as a constant times p; if this is not the case, we speak of a small sieve.
History
The early history of the large sieve traces back to work of Yu. B. Linnik, in 1941, working on the problem of the least quadratic non-residue. Subsequently Alfréd Rényi worked on it, using probability methods. It was only two decades later, after quite a number of contributions by others, that the large sieve was formulated in a way that was more definitive. This happened in the early 1960s, in independent work of Klaus Roth and Enrico Bombieri. It is also around that time that the connection with the duality principle became better understood. In the mid-1960s, the Bombieri–Vinogradov theorem was proved as a major application of large sieves using estimations of mean values of Dirichlet characters. In the late 1960s and early 1970s, many of the key ingredients and estimates were simplified by Patrick X. Gallagher.
Development
Large-sieve methods have been developed enough that they are applicable to small-sieve situations as well. Something is commonly seen as related to the large sieve not necessarily in terms of whether it is related to the kind of situation outlined above, but, rather, if it involves one of the two methods of proof traditionally used to yield a large-sieve result:
Approximate Plancherel inequality
If a set S is ill-distributed modulo p (by virtue, for example, of being excluded from the congruence classes Ap) then the Fourier coefficients of the characteristic function fp of the set S mod p are on average large. These coefficients can be lifted to values of the Fourier transform of the characteristic function f of the set S.
By bounding derivatives, we can see that |f̂(x)| must be large, on average, for all x near rational numbers of the form a/p. Large here means "a relatively large constant times |S|". Since
we get a contradiction with the Plancherel identity
unless |S| is small. (In practice, to optimise bounds, people nowadays modify the Plancherel identity into an equality rather than bound derivatives as above.)
Duality principle
One can prove a strong large-sieve result easily by noting the following basic fact from functional analysis: the norm of a linear operator (i.e.,
‖A‖ = sup { ‖Av‖ / ‖v‖ : v ≠ 0 },
where A is an operator from a normed linear space V to a normed linear space W) equals the norm of its adjoint (i.e., ‖A‖ = ‖A*‖).
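In finite dimensions this duality is simply the statement that a matrix and its transpose have the same operator (spectral) norm, which can be checked numerically. The sketch below uses plain power iteration (illustrative only, not the sieve argument itself; helper names are made up):

```python
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def transpose(M):
    return [list(row) for row in zip(*M)]

def op_norm(M, iters=500):
    """Spectral norm ||M|| = sup ||Mv||/||v||, by power iteration on M^T M."""
    n = len(M[0])
    v = [1.0] * n
    for _ in range(iters):
        w = matvec(transpose(M), matvec(M, v))
        s = sum(x * x for x in w) ** 0.5
        v = [x / s for x in w]          # v converges to the top right singular vector
    Mv = matvec(M, v)
    return sum(x * x for x in Mv) ** 0.5  # v has unit length

A = [[1.0, 2.0], [0.0, 3.0], [4.0, -1.0]]
# duality: the norm of the operator equals the norm of its adjoint
assert abs(op_norm(A) - op_norm(transpose(A))) < 1e-9
```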
This principle itself ha
|
https://en.wikipedia.org/wiki/Cartan%E2%80%93K%C3%A4hler%20theorem
|
In mathematics, the Cartan–Kähler theorem is a major result on the integrability conditions for differential systems, in the case of analytic functions, for differential ideals I. It is named for Élie Cartan and Erich Kähler.
Meaning
It is not true that merely having dI contained in I is sufficient for integrability. There is a problem caused by singular solutions. The theorem computes certain constants that must satisfy an inequality in order that there be a solution.
Statement
Let (M, I) be a real analytic EDS. Assume that P ⊆ M is a connected, k-dimensional, real analytic, regular integral manifold of I with r(P) ≥ 0 (i.e., the tangent spaces T_pP are "extendable" to higher dimensional integral elements).
Moreover, assume there is a real analytic submanifold R ⊆ M of codimension r(P) containing P and such that T_pR ∩ H(T_pP) has dimension k + 1 for all p ∈ P.
Then there exists a (locally) unique connected, (k + 1)-dimensional, real analytic integral manifold X ⊆ M of I that satisfies P ⊆ X ⊆ R.
Proof and assumptions
The Cauchy–Kovalevskaya theorem is used in the proof, so the analyticity assumption is necessary.
References
Jean Dieudonné, Eléments d'analyse, vol. 4, (1977) Chapt. XVIII.13
R. Bryant, S. S. Chern, R. Gardner, H. Goldschmidt, P. Griffiths, Exterior Differential Systems, Springer Verlag, New York, 1991.
External links
R. Bryant, "Nine Lectures on Exterior Differential Systems", 1999
E. Cartan, "On the integration of systems of total differential equations," transl. by D. H. Delphenich
E. Kähler, "Introduction to the theory of systems of differential equations," transl. by D. H. Delphenich
Partial differential equations
Theorems in analysis
|
https://en.wikipedia.org/wiki/Municipality%20of%20the%20District%20of%20Clare
|
Clare, officially named the Municipality of the District of Clare, is a district municipality in western Nova Scotia, Canada. Statistics Canada classifies the district municipality as a municipal district.
Geography
The Municipality of the District of Clare occupies the western half of Digby County. Most of the municipality's settled areas are located along St. Marys Bay, a sub-basin of the Gulf of Maine.
History
The township was settled in 1768 by Acadian families who had returned to Nova Scotia from exile.
Prior to the establishment of Clare, the Mi'kmaw knew the area as Wagweiik. The mouth of Salmon River is thought to be a traditional summer settlement of the Mi'kmaw, and several artifacts have been found there, as well as at Meteghan, Major's Point and other sites. Place names like Hectanooga, Mitihikan (Meteghan), and Chicaben (Church Point) are found in the area. The Mi'kmaw also had a principal settlement by River Allen near Cape Sainte-Marie, used for fishing and as a canoe route. They used a fishing weir system for catching mackerel and herring, which they taught to the new settlers, who continued to use it well into the 1900s, along with fish-drying techniques that continue today. They also caught eels, seals, clams, urchins and other sea life, and harvested berries, medicinal plants and other coastal resources. As new settlers arrived in the 1760s–1780s, the Mi'kmaw were instrumental in helping the Acadians survive the harsh winters along the coast. By the 1800s most Mi'kmaw had left the area to live on the Reserve in Bear River, while still returning for fishing, hunting, trade and ceremony throughout the year.
It was named "Clare" by the then Lieutenant Governor of Nova Scotia, Michael Francklin, after County Clare in Ireland.
Present day
The municipality is inhabited by many Acadians and their descendants and conducts its business in both English and French, although English is the official language. The only French university in the province of Nova Scotia, Université Sainte-Anne, is located in Church Point (Pointe-de-l'Église / Chicoben), and 47% of the adult population has a postsecondary education. The area hosts the oldest and largest annual Acadian Festival, as well as Nova Scotia's first Gran Fondo cycling event, which was cancelled in 2020 due to COVID-19.
Demographics
In the 2021 Census of Population conducted by Statistics Canada, the Municipality of the District of Clare had a population of living in of its total private dwellings, a change of from its 2016 population of . With a land area of , it had a population density of in 2021.
Communities
Access routes
Highways and numbered routes that run through the district municipality, including external routes that start or finish at the municipal boundary:
Highways
Trunk Routes
Collector Routes:
External Routes:
None
Culture
Musical groups from the area include:
Grand
|
https://en.wikipedia.org/wiki/Proportion%20%28architecture%29
|
Proportion is a central principle of architectural theory and an important connection between mathematics and art. It is the visual effect of the relationship of the various objects and spaces that make up a structure to one another and to the whole. These relationships are often governed by multiples of a standard unit of length known as a "module".
Proportion in architecture was discussed by Vitruvius, Leon Battista Alberti, Andrea Palladio, and Le Corbusier among others.
Roman architecture
Vitruvius
Architecture in Roman antiquity was rarely documented except in the writings of Vitruvius' treatise De architectura. Vitruvius served as an engineer under Julius Caesar during the first Gallic Wars (58–50 BC). The treatise was dedicated to Emperor Augustus. As Vitruvius defined the concept in the first chapters of the treatise, he mentioned the three prerequisites of architecture are firmness (firmitas), commodity (utilitas), and delight (venustas), which require the architects to be equipped with a varied kind of learning and knowledge of many branches. Moreover, Vitruvius identified the "Six Principles of Design" as order (ordinatio), arrangement (dispositio), proportion (eurythmia), symmetry (symmetria), propriety (decor) and economy (distributio). Among the six principles, proportion interrelates and supports all the other factors in geometrical forms and arithmetical ratios.
The word symmetria, usually translated to "symmetry" in modern renderings, in ancient times meant something more closely related to "mathematical harmony" and measurable proportions. Vitruvius tried to describe his theory in the makeup of the human body, which he referred to as the perfect or golden ratio. The principles of measurement units digit, foot, and cubit also came from the dimensions of a Vitruvian Man. More specifically, Vitruvius used the total height of 6 feet of a person, and each part of the body takes up a different ratio. For example, the face is about 1/10 of the total height, and the head is about 1/8 of the total height. Vitruvius used these ratios to prove that the composition of classical orders mimicked the human body, thereby ensuring aesthetic harmonization when people viewed architectural columns.
Classical architecture
In classical architecture, the module was established as the radius of the lower shaft of a classical column, with proportions expressed as a fraction or multiple of that module.
Le Corbusier
In his Le Modulor (1948), Le Corbusier presented a system of proportions which took the golden ratio and a man with a raised arm as the scalable modules of proportion.
See also
History of architecture
Mathematics and architecture
Mathematics and art
References
Further reading
P. H. Scholfield (1958). The Theory of Proportion in Architecture. Cambridge University Press.
Hanno-Walter Kruft (1994). History of Architectural Theory. Princeton Architectural Press. .
Architectural terminology
Architectural theory
|
https://en.wikipedia.org/wiki/Weil%E2%80%93Ch%C3%A2telet%20group
|
In arithmetic geometry, the Weil–Châtelet group or WC-group of an algebraic group such as an abelian variety A defined over a field K is the abelian group of principal homogeneous spaces for A, defined over K. Serge Lang named it for François Châtelet, who introduced it for elliptic curves, and André Weil, who introduced it for more general groups. It plays a basic role in the arithmetic of abelian varieties, in particular for elliptic curves, because of its connection with infinite descent.
It can be defined directly from Galois cohomology, as H1(GK, A), where GK is the absolute Galois group of K. It is of particular interest for local fields and global fields, such as algebraic number fields. For K a finite field, F. K. Schmidt proved that the Weil–Châtelet group is trivial for elliptic curves, and Serge Lang proved that it is trivial for any connected algebraic group.
See also
The Tate–Shafarevich group of an abelian variety A defined over a number field K consists of the elements of the Weil–Châtelet group that become trivial in all of the completions of K.
The Selmer group, named after Ernst S. Selmer, of A with respect to an isogeny of abelian varieties is a related group which can be defined in terms of Galois cohomology as
where Av[f] denotes the f-torsion of Av and is the local Kummer map
.
References
English translation in his collected mathematical papers.
Number theory
|
https://en.wikipedia.org/wiki/Selmer%20group
|
In arithmetic geometry, the Selmer group, named in honor of the work of Ernst Sejersted Selmer by John William Scott Cassels, is a group constructed from an isogeny of abelian varieties.
The Selmer group of an isogeny
The Selmer group of an abelian variety A with respect to an isogeny f : A → B of abelian varieties can be defined in terms of Galois cohomology as
where Av[f] denotes the f-torsion of Av and is the local Kummer map . Note that is isomorphic to . Geometrically, the principal homogeneous spaces coming from elements of the Selmer group have Kv-rational points for all places v of K. The Selmer group is finite. This implies that the part of the Tate–Shafarevich group killed by f is finite due to the following exact sequence
0 → B(K)/f(A(K)) → Sel(f)(A/K) → Ш(A/K)[f] → 0.
The Selmer group in the middle of this exact sequence is finite and effectively computable. This implies the weak Mordell–Weil theorem that its subgroup B(K)/f(A(K)) is finite. There is a notorious problem about whether this subgroup can be effectively computed: there is a procedure for computing it that will terminate with the correct answer if there is some prime p such that the p-component of the Tate–Shafarevich group is finite. It is conjectured that the Tate–Shafarevich group is in fact finite, in which case any prime p would work. However, if (as seems unlikely) the Tate–Shafarevich group has an infinite p-component for every prime p, then the procedure may never terminate.
Ralph Greenberg has generalized the notion of Selmer group to more general p-adic Galois representations and to p-adic variations of motives in the context of Iwasawa theory.
The Selmer group of a finite Galois module
More generally one can define the Selmer group of a finite Galois module M (such as the kernel of an isogeny) as the elements of H1(GK,M) that have images inside certain given subgroups of H1(GKv,M).
References
See also
Wiles's proof of Fermat's Last Theorem
Number theory
|
https://en.wikipedia.org/wiki/Municipality%20of%20the%20District%20of%20Argyle
|
Argyle, officially named the Municipality of the District of Argyle, is a district municipality in Yarmouth County, Nova Scotia. Statistics Canada classifies the district municipality as a municipal district.
The district municipality occupies the eastern portion of the county and is one of three municipal units - the other two being the Town of Yarmouth and the Municipality of the District of Yarmouth. Argyle is a bilingual community, in which native speakers of English and French each account for about half of the population. As of 2016, 60% of the population speaks both French and English, one of the highest rates of bilingualism in Canada.
History
Originally inhabited by the Mi'kmaq, it was called "Bapkoktek". In 1766, after his service in the French and Indian Wars, Lt. Ranald MacKinnon was given a land grant of . He called it Argyle (Argyll) because he was reminded of his previous home in the Highlands of Scotland. The township was granted in 1771.
Demographics
In the 2021 Census of Population conducted by Statistics Canada, the Municipality of the District of Argyle had a population of living in of its total private dwellings, a change of from its 2016 population of . With a land area of , it had a population density of in 2021.
Education:
No certificate, diploma or degree: 41.64%
High school certificate: 16.38%
Apprenticeship or trade certificate or diploma: 14.16%
Community college, CEGEP or other non-university certificate or diploma: 19.36%
University certificate or diploma: 8.40%
Unemployment rate:
10.7%
Average house value:
$147,574
Communities
See also
List of municipalities in Nova Scotia
References
External links
Communities in Yarmouth County
Argyle
Bilingualism in Canada
Linguistic geography of Canada
|
https://en.wikipedia.org/wiki/Secretary%20problem
|
The secretary problem demonstrates a scenario involving optimal stopping theory that is studied extensively in the fields of applied probability, statistics, and decision theory. It is also known as the marriage problem, the sultan's dowry problem, the fussy suitor problem, the googol game, and the best choice problem.
The basic form of the problem is the following: imagine an administrator who wants to hire the best secretary out of n rankable applicants for a position. The applicants are interviewed one by one in random order. A decision about each particular applicant is to be made immediately after the interview. Once rejected, an applicant cannot be recalled. During the interview, the administrator gains information sufficient to rank the applicant among all applicants interviewed so far, but is unaware of the quality of yet unseen applicants. The question is about the optimal strategy (stopping rule) to maximize the probability of selecting the best applicant. If the decision can be deferred to the end, this can be solved by the simple maximum selection algorithm of tracking the running maximum (and who achieved it), and selecting the overall maximum at the end. The difficulty is that the decision must be made immediately.
The shortest rigorous proof known so far is provided by the odds algorithm. It implies that the optimal win probability is always at least 1/e (where e is the base of the natural logarithm), and that the latter holds even in a much greater generality. The optimal stopping rule prescribes always rejecting the first ~n/e applicants that are interviewed and then stopping at the first applicant who is better than every applicant interviewed so far (or continuing to the last applicant if this never occurs). Sometimes this strategy is called the 1/e stopping rule, because the probability of stopping at the best applicant with this strategy is about 1/e already for moderate values of n. One reason why the secretary problem has received so much attention is that the optimal policy for the problem (the 1/e stopping rule) is simple and selects the single best candidate about 37% of the time, irrespective of whether there are 100 or 100 million applicants.
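A quick Monte Carlo check of the cutoff rule (reject the first ~n/e applicants, then accept the first one who beats them all); the seed, trial count, and n here are arbitrary choices, not from the source:

```python
import random
from math import e

def run_trial(n, rng):
    # Applicants arrive in random order; rank 1 is the best.
    ranks = list(range(1, n + 1))
    rng.shuffle(ranks)
    r = round(n / e)                       # size of the observation phase
    best_seen = min(ranks[:r]) if r else float("inf")
    for rank in ranks[r:]:
        if rank < best_seen:               # first applicant beating all observed
            return rank == 1               # success iff overall best
    return ranks[-1] == 1                  # otherwise forced to take the last one

rng = random.Random(0)
n, trials = 100, 20000
wins = sum(run_trial(n, rng) for _ in range(trials))
print(wins / trials)  # ≈ 0.37
```

For n = 100 the simulated success rate comes out near 1/e ≈ 0.368, matching the 37% figure above.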
Formulation
Although there are many variations, the basic problem can be stated as follows:
There is a single position to fill.
There are n applicants for the position, and the value of n is known.
The applicants, if all seen together, can be ranked from best to worst unambiguously.
The applicants are interviewed sequentially in random order, with each order being equally likely.
Immediately after an interview, the interviewed applicant is either accepted or rejected, and the decision is irrevocable.
The decision to accept or reject an applicant can be based only on the relative ranks of the applicants interviewed so far.
The objective of the general solution is to have the highest probability of selecting the best applicant of the whole group. This is the same as maximizing the expected payoff, defined to be one if the best applicant is selected and zero otherwise.
|
https://en.wikipedia.org/wiki/Spurious%20correlation%20of%20ratios
|
In statistics, spurious correlation of ratios is a form of spurious correlation that arises between ratios of absolute measurements which themselves are uncorrelated.
The phenomenon of spurious correlation of ratios is one of the main motives for the field of compositional data analysis, which deals with the analysis of variables that carry only relative information, such as proportions, percentages and parts-per-million.
Spurious correlation is distinct from misconceptions about correlation and causality.
Illustration of spurious correlation
Pearson states a simple example of spurious correlation:
The scatter plot above illustrates this example using 500 observations of x, y, and z. Variables x, y and z are drawn from normal distributions with means 10, 10, and 30, respectively, and standard deviations 1, 1, and 3 respectively, i.e.,
Even though x, y, and z are statistically independent and therefore uncorrelated, in the depicted typical sample the ratios x/z and y/z have a correlation of 0.53. This is because of the common divisor (z) and can be better understood if we colour the points in the scatter plot by the z-value. Trios of (x, y, z) with relatively large z values tend to appear in the bottom left of the plot; trios with relatively small z values tend to appear in the top right.
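The construction can be reproduced with a short simulation (a sketch; the sample size, seed, and helper function are ours, but the distributions match those stated above):

```python
import random
from math import sqrt

def pearson(a, b):
    # Sample Pearson correlation coefficient of two equal-length sequences.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    return cov / sqrt(sum((u - ma) ** 2 for u in a)
                      * sum((v - mb) ** 2 for v in b))

rng = random.Random(42)
n = 5000
x = [rng.gauss(10, 1) for _ in range(n)]   # independent draws
y = [rng.gauss(10, 1) for _ in range(n)]
z = [rng.gauss(30, 3) for _ in range(n)]

print(pearson(x, y))                               # ≈ 0 (independent)
print(pearson([a / c for a, c in zip(x, z)],
              [b / c for b, c in zip(y, z)]))      # ≈ 0.5 (spurious, common divisor z)
```

The raw variables are essentially uncorrelated, while the two ratios sharing the divisor z show a correlation near 0.5, as the theory below predicts for equal coefficients of variation.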
Approximate amount of spurious correlation
Pearson derived an approximation of the correlation that would be observed between two indices ( and ), i.e., ratios of the absolute measurements :
where is the coefficient of variation of , and the Pearson correlation between and .
This expression can be simplified for situations where there is a common divisor by setting , and are uncorrelated, giving the spurious correlation:
For the special case in which all coefficients of variation are equal (as is the case in the illustrations at right),
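In symbols (a reconstruction of Pearson's approximation in its usual form, since the formulas were stripped from this extract; here v_i denotes the coefficient of variation of x_i and ρ_ij the correlation of x_i and x_j):

```latex
% Pearson's approximation for the correlation of two indices x_1/x_3 and x_2/x_4:
\rho_{\frac{x_1}{x_3},\,\frac{x_2}{x_4}} \approx
  \frac{\rho_{12} v_1 v_2 + \rho_{34} v_3 v_4 - \rho_{14} v_1 v_4 - \rho_{23} v_2 v_3}
       {\sqrt{v_1^2 + v_3^2 - 2\rho_{13} v_1 v_3}\;\sqrt{v_2^2 + v_4^2 - 2\rho_{24} v_2 v_4}}

% Common divisor (x_3 = x_4) with mutually uncorrelated x_1, x_2, x_3:
\rho_{\frac{x_1}{x_3},\,\frac{x_2}{x_3}} \approx
  \frac{v_3^2}{\sqrt{v_1^2 + v_3^2}\;\sqrt{v_2^2 + v_3^2}}
  \;=\; \tfrac{1}{2} \quad \text{when } v_1 = v_2 = v_3 .
```

The last value, 1/2, is consistent with the correlation of about 0.5 observed in the illustration above.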
Relevance to biology and other sciences
Pearson was joined by Sir Francis Galton and Walter Frank Raphael Weldon in cautioning scientists to be wary of spurious correlation, especially in biology where it is common to scale or normalize measurements by dividing them by a particular variable or total. The danger he saw was that conclusions would be drawn from correlations that are artifacts of the analysis method, rather than actual “organic” relationships.
However, it would appear that spurious correlation (and its potential to mislead) is not yet widely understood. In 1986 John Aitchison, who pioneered the log-ratio approach to compositional data analysis wrote:
More recent publications suggest that this lack of awareness prevails, at least in molecular bioscience.
References
Covariance and correlation
|
https://en.wikipedia.org/wiki/Arithmetic%20geometry
|
In mathematics, arithmetic geometry is roughly the application of techniques from algebraic geometry to problems in number theory. Arithmetic geometry is centered around Diophantine geometry, the study of rational points of algebraic varieties.
In more abstract terms, arithmetic geometry can be defined as the study of schemes of finite type over the spectrum of the ring of integers.
Overview
The classical objects of interest in arithmetic geometry are rational points: sets of solutions of a system of polynomial equations over number fields, finite fields, p-adic fields, or function fields, i.e. fields that are not algebraically closed excluding the real numbers. Rational points can be directly characterized by height functions which measure their arithmetic complexity.
The structure of algebraic varieties defined over non-algebraically closed fields has become a central area of interest that arose with the modern abstract development of algebraic geometry. Over finite fields, étale cohomology provides topological invariants associated to algebraic varieties. p-adic Hodge theory gives tools to examine when cohomological properties of varieties over the complex numbers extend to those over p-adic fields.
History
19th century: early arithmetic geometry
In the early 19th century, Carl Friedrich Gauss observed that non-zero integer solutions to homogeneous polynomial equations with rational coefficients exist if non-zero rational solutions exist.
In the 1850s, Leopold Kronecker formulated the Kronecker–Weber theorem, introduced the theory of divisors, and made numerous other connections between number theory and algebra. He then conjectured his "liebster Jugendtraum" ("dearest dream of youth"), a generalization that was later put forward by Hilbert in a modified form as his twelfth problem, which outlines a goal to have number theory operate only with rings that are quotients of polynomial rings over the integers.
Early-to-mid 20th century: algebraic developments and the Weil conjectures
In the late 1920s, André Weil demonstrated profound connections between algebraic geometry and number theory with his doctoral work leading to the Mordell–Weil theorem which demonstrates that the set of rational points of an abelian variety is a finitely generated abelian group.
Modern foundations of algebraic geometry were developed based on contemporary commutative algebra, including valuation theory and the theory of ideals by Oscar Zariski and others in the 1930s and 1940s.
In 1949, André Weil posed the landmark Weil conjectures about the local zeta-functions of algebraic varieties over finite fields. These conjectures offered a framework between algebraic geometry and number theory that propelled Alexander Grothendieck to recast the foundations making use of sheaf theory (together with Jean-Pierre Serre), and later scheme theory, in the 1950s and 1960s. Bernard Dwork proved one of the four Weil conjectures (rationality of the local zeta function) in 1960.
|
https://en.wikipedia.org/wiki/Ring%20structure
|
Ring structure may refer to:
Chiastic structure, a literary technique
Heterocyclic compound, a chemical structure
Ring (mathematics), an algebraic structure
See also
Ring (disambiguation)
|
https://en.wikipedia.org/wiki/Generalized%20method%20of%20moments
|
In econometrics and statistics, the generalized method of moments (GMM) is a generic method for estimating parameters in statistical models. Usually it is applied in the context of semiparametric models, where the parameter of interest is finite-dimensional, whereas the full shape of the data's distribution function may not be known, and therefore maximum likelihood estimation is not applicable.
The method requires that a certain number of moment conditions be specified for the model. These moment conditions are functions of the model parameters and the data, such that their expectation is zero at the parameters' true values. The GMM method then minimizes a certain norm of the sample averages of the moment conditions, and can therefore be thought of as a special case of minimum-distance estimation.
The GMM estimators are known to be consistent, asymptotically normal, and most efficient in the class of all estimators that do not use any extra information aside from that contained in the moment conditions. GMM was advocated by Lars Peter Hansen in 1982 as a generalization of the method of moments, introduced by Karl Pearson in 1894. However, these estimators are mathematically equivalent to those based on "orthogonality conditions" (Sargan, 1958, 1959) or "unbiased estimating equations" (Huber, 1967; Wang et al., 1997).
Description
Suppose the available data consists of T observations Y1, …, YT, where each observation Yt is an n-dimensional multivariate random variable. We assume that the data come from a certain statistical model, defined up to an unknown parameter θ. The goal of the estimation problem is to find the “true” value of this parameter, θ0, or at least a reasonably close estimate.
A general assumption of GMM is that the data Yt be generated by a weakly stationary ergodic stochastic process. (The case of independent and identically distributed (iid) variables Yt is a special case of this condition.)
In order to apply GMM, we need to have "moment conditions", that is, we need to know a vector-valued function g(Y,θ) such that
m(θ0) ≡ E[ g(Yt, θ0) ] = 0,
where E denotes expectation, and Yt is a generic observation. Moreover, the function m(θ) must differ from zero for θ ≠ θ0, otherwise the parameter θ will not be point-identified.
The basic idea behind GMM is to replace the theoretical expected value E[⋅] with its empirical analog—sample average:
and then to minimize the norm of this expression with respect to θ. The minimizing value of θ is our estimate for θ0.
By the law of large numbers, for large values of T, and thus we expect that . The generalized method of moments looks for a number which would make as close to zero as possible. Mathematically, this is equivalent to minimizing a certain norm of (norm of m, denoted as ||m||, measures the distance between m and zero). The properties of the resulting estimator will depend on the particular choice of the norm function, and therefore the theory of GMM considers an entire family of norms, defined as
where W is a positive-definite weighting matrix.
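As a concrete sketch (illustration only; the model, seed, and grid are our invention, not from the source), consider estimating θ = (μ, σ²) from the two moment conditions E[Y − μ] = 0 and E[(Y − μ)² − σ²] = 0, with W taken as the identity, by minimizing the squared norm of the sample moments:

```python
import random

rng = random.Random(1)
data = [rng.gauss(2.0, 1.5) for _ in range(4000)]   # true μ = 2.0, σ² = 2.25
n = len(data)
ybar = sum(data) / n
y2bar = sum(y * y for y in data) / n

# Sample moment conditions for θ = (μ, σ²):
#   m1(θ) = mean(y) − μ,   m2(θ) = mean((y − μ)²) − σ²
def gmm_objective(mu, sig2):
    m1 = ybar - mu
    m2 = y2bar - 2 * mu * ybar + mu * mu - sig2
    return m1 * m1 + m2 * m2              # ||m(θ)||² with W = identity

# Crude grid minimization of the GMM objective (illustration only;
# real implementations use numerical optimizers).
candidates = ((m / 100, s / 100)
              for m in range(150, 251) for s in range(150, 301))
mu_hat, s2_hat = min(candidates, key=lambda p: gmm_objective(*p))
print(mu_hat, s2_hat)   # near the sample mean and variance of the data
```

Because this system is just-identified (two moments, two parameters), the minimizer coincides with the classical method-of-moments estimate and the choice of W does not matter; W becomes important when there are more moment conditions than parameters.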
|
https://en.wikipedia.org/wiki/Mixture%20%28disambiguation%29
|
A mixture is a combination of two or more chemicals, in which the chemicals retain their identity.
Mixture may also refer to:
Mixture (probability), a set of probability distributions often used for statistical classification
Mixture (organ stop), a special kind of pipe organ stop which has several pipes to each note
Bombay mix, called "mixture" in southern India
The Mixtures, an Australian rock band formed in 1965
See also
Mixtur, a 1964 composition by Karlheinz Stockhausen
Mix (disambiguation)
|
https://en.wikipedia.org/wiki/Skew%20lines
|
In three-dimensional geometry, skew lines are two lines that do not intersect and are not parallel. A simple example of a pair of skew lines is the pair of lines through opposite edges of a regular tetrahedron. Two lines that both lie in the same plane must either cross each other or be parallel, so skew lines can exist only in three or more dimensions. Two lines are skew if and only if they are not coplanar.
General position
If four points are chosen at random uniformly within a unit cube, they will almost surely define a pair of skew lines. After the first three points have been chosen, the fourth point will define a non-skew line if, and only if, it is coplanar with the first three points. However, the plane through the first three points forms a subset of measure zero of the cube, and the probability that the fourth point lies on this plane is zero. If it does not, the lines defined by the points will be skew.
Similarly, in three-dimensional space a very small perturbation of any two parallel or intersecting lines will almost certainly turn them into skew lines. Therefore, the two lines determined by any four points in general position are always skew.
In this sense, skew lines are the "usual" case, and parallel or intersecting lines are special cases.
Formulas
Testing for skewness
If each line in a pair of skew lines is defined by two points that it passes through, then these four points must not be coplanar, so they must be the vertices of a tetrahedron of nonzero volume. Conversely, any two pairs of points defining a tetrahedron of nonzero volume also define a pair of skew lines. Therefore, a test of whether two pairs of points define skew lines is to apply the formula for the volume of a tetrahedron in terms of its four vertices. Denoting one point as the 1×3 vector a whose three elements are the point's three coordinate values, and likewise denoting b, c, and d for the other points, we can check if the line through a and b is skew to the line through c and d by seeing if the tetrahedron volume formula gives a non-zero result:
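A minimal sketch of this volume test (the function names are ours): the tetrahedron on the four points has volume |triple product|/6, so the lines are skew exactly when the scalar triple product of the three edge vectors from one point is nonzero.

```python
def triple_product(u, v, w):
    # Scalar triple product u · (v × w), i.e. the 3×3 determinant of rows u, v, w.
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

def are_skew(p1, p2, p3, p4, eps=1e-12):
    # Line 1 through p1, p2; line 2 through p3, p4.
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    w = [p4[i] - p1[i] for i in range(3)]
    return abs(triple_product(u, v, w)) > eps   # tetrahedron volume = |triple|/6

# The x-axis and the vertical line {x = 0, y = 1} are skew:
print(are_skew((0, 0, 0), (1, 0, 0), (0, 1, 1), (0, 1, 2)))  # True
# Two lines in the z = 0 plane are coplanar, hence not skew:
print(are_skew((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)))  # False
```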
Nearest points
Expressing the two lines as vectors:
The cross product of and is perpendicular to the lines.
The plane formed by the translations of Line 2 along contains the point and is perpendicular to .
Therefore, the intersecting point of Line 1 with the above-mentioned plane, which is also the point on Line 1 that is nearest to Line 2 is given by
Similarly, the point on Line 2 nearest to Line 1 is given by (where )
Distance
The nearest points and form the shortest line segment joining Line 1 and Line 2:
The distance between nearest points in two skew lines may also be expressed using other vectors:
Here the 1×3 vector represents an arbitrary point on the line through particular point with representing the direction of the line and with the value of the real number determining where the point is on the line, and similarly for arbitrary point on the line through particular point in direction .
The cross
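The nearest-point and distance formulas above can be sketched numerically as follows (helper names are ours; the lines are assumed skew, so d1 × d2 ≠ 0):

```python
from math import sqrt

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def nearest_points(p1, d1, p2, d2):
    # Line 1: p1 + t·d1,  Line 2: p2 + s·d2.
    n = cross(d1, d2)                 # perpendicular to both lines
    n1, n2 = cross(d1, n), cross(d2, n)
    t = dot(sub(p2, p1), n2) / dot(d1, n2)
    s = dot(sub(p1, p2), n1) / dot(d2, n1)
    c1 = [p1[i] + t * d1[i] for i in range(3)]   # point on line 1 nearest line 2
    c2 = [p2[i] + s * d2[i] for i in range(3)]   # point on line 2 nearest line 1
    return c1, c2

def skew_distance(p1, d1, p2, d2):
    # Distance = projection of (p2 − p1) onto the common perpendicular d1 × d2.
    n = cross(d1, d2)
    return abs(dot(sub(p2, p1), n)) / sqrt(dot(n, n))

# x-axis vs. the line through (0, 0, 1) in the y-direction:
c1, c2 = nearest_points((0, 0, 0), (1, 0, 0), (0, 0, 1), (0, 1, 0))
print(c1, c2)                                                     # origin and (0, 0, 1)
print(skew_distance((0, 0, 0), (1, 0, 0), (0, 0, 1), (0, 1, 0)))  # 1.0
```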
|
https://en.wikipedia.org/wiki/Littlewood%E2%80%93Offord%20problem
|
In the mathematical field of combinatorial geometry, the Littlewood–Offord problem is the problem of determining the number of subsums of a set of vectors that fall in a given convex set. More formally, if V is a vector space of dimension d, the problem is to determine, given a finite subset of vectors S and a convex subset A, the number of subsets of S whose summation is in A.
The first upper bound for this problem was proven (for d = 1 and d = 2) in 1938 by John Edensor Littlewood and A. Cyril Offord. This Littlewood–Offord lemma states that if S is a set of n real or complex numbers of absolute value at least one and A is any disc of radius one, then not more than c·2^n·(log n)/√n of the 2^n possible subsums of S fall into the disc, for some absolute constant c.
In 1945 Paul Erdős improved the upper bound for d = 1 to the central binomial coefficient C(n, ⌊n/2⌋), using Sperner's theorem. This bound is sharp; equality is attained when all vectors in S are equal. In 1966, Kleitman showed that the same bound held for complex numbers. In 1970, he extended this to the setting when V is a normed space.
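Erdős's bound can be checked by brute force for small n (exponential-time, purely illustrative; in the subset-sum form, with all values at least 1, the disc of radius one corresponds after the rescaling described below to an open interval of length 1):

```python
from itertools import combinations
from math import comb

def max_subsums_in_unit_interval(values):
    # All 2^n subset sums, sorted; the longest run with spread < 1 is the
    # largest number of subsums any open interval of length 1 can contain.
    sums = sorted(sum(c) for r in range(len(values) + 1)
                  for c in combinations(values, r))
    best, lo = 0, 0
    for hi in range(len(sums)):
        while sums[hi] - sums[lo] >= 1:
            lo += 1
        best = max(best, hi - lo + 1)
    return best

n = 6
bound = comb(n, n // 2)                  # Erdős's bound: C(6, 3) = 20
print(max_subsums_in_unit_interval([1] * n))                  # 20: equality case
print(max_subsums_in_unit_interval([1.0, 1.5, 2.2, 1.1, 3.7, 1.9]) <= bound)  # True
```

With all values equal to 1 the subset sums are the integers 0..n with binomial multiplicities, so the bound is attained at the middle value, matching the sharpness remark above.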
Suppose S = {v1, …, vn}. By subtracting
(v1 + ⋯ + vn)/2
from each possible subsum (that is, by changing the origin and then scaling by a factor of 2), the Littlewood–Offord problem is equivalent to the problem of determining the number of sums of the form
ε1v1 + ⋯ + εnvn
that fall in the (correspondingly translated and scaled) target set A, where each εi takes the value 1 or −1. This makes the problem into a probabilistic one, in which the question is of the distribution of these random vectors, and what can be said knowing nothing more about the vi.
References
Combinatorics
Probability problems
Lemmas
Mathematical problems
|
https://en.wikipedia.org/wiki/IEEE%20Transactions%20on%20Information%20Theory
|
IEEE Transactions on Information Theory is a monthly peer-reviewed scientific journal published by the IEEE Information Theory Society. It covers information theory and the mathematics of communications. It was established in 1953 as IRE Transactions on Information Theory. The editor-in-chief is Muriel Médard (Massachusetts Institute of Technology). As of 2007, the journal allows the posting of preprints on arXiv.
According to Jack van Lint, it is the leading research journal in the whole field of coding theory. A 2006 study using the PageRank network analysis algorithm found that, among hundreds of computer science-related journals, IEEE Transactions on Information Theory had the highest ranking and was thus deemed the most prestigious. ACM Computing Surveys, with the highest impact factor, was deemed the most popular.
References
External links
List of past editors-in-chief
Engineering journals
Information theory
Transactions on Information Theory
Computer science journals
Cryptography journals
Academic journals established in 1953
|
https://en.wikipedia.org/wiki/Crossed%20module
|
In mathematics, and especially in homotopy theory, a crossed module consists of groups G and M, where G acts on M by automorphisms (which we will write on the left, g · m), and a homomorphism of groups d : M → G
that is equivariant with respect to the conjugation action of G on itself:
and also satisfies the so-called Peiffer identity:
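Written out (a reconstruction in the standard notation, with d : M → G the structure homomorphism and g · m the action, since the displayed formulas were stripped from this extract), the two axioms read:

```latex
% Equivariance of d with respect to conjugation:
d(g \cdot m) = g \, d(m) \, g^{-1}
  \qquad \text{for all } g \in G,\ m \in M,

% and the Peiffer identity:
d(m) \cdot m' = m \, m' \, m^{-1}
  \qquad \text{for all } m, m' \in M .
```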
Origin
The first mention of the second identity for a crossed module seems to be in footnote 25 on p. 422 of J. H. C. Whitehead's 1941 paper cited below, while the term 'crossed module' is introduced in his 1946 paper cited below. These ideas were well worked up in his 1949 paper 'Combinatorial homotopy II', which also introduced the important idea of a free crossed module. Whitehead's ideas on crossed modules and their applications are developed and explained in the book by Brown, Higgins, Sivera listed below. Some generalisations of the idea of crossed module are explained in the paper of Janelidze.
Examples
Let be a normal subgroup of a group . Then, the inclusion
is a crossed module with the conjugation action of on .
For any group G, modules over the group ring are crossed G-modules with d = 0.
For any group H, the homomorphism from H to Aut(H) sending any element of H to the corresponding inner automorphism is a crossed module.
Given any central extension of groups
the surjective homomorphism
together with the action of on defines a crossed module. Thus, central extensions can be seen as special crossed modules. Conversely, a crossed module with surjective boundary defines a central extension.
If (X, A, x) is a pointed pair of topological spaces (i.e. A is a subspace of X, and x is a point in A), then the homotopy boundary
from the second relative homotopy group to the fundamental group, may be given the structure of crossed module. The functor
satisfies a form of the van Kampen theorem, in that it preserves certain colimits.
The result on the crossed module of a pair can also be phrased as: if
is a pointed fibration of spaces, then the induced map of fundamental groups
may be given the structure of crossed module. This example is useful in algebraic K-theory. There are higher-dimensional versions of this fact using n-cubes of spaces.
These examples suggest that crossed modules may be thought of as "2-dimensional groups". In fact, this idea can be made precise using category theory. It can be shown that a crossed module is essentially the same as a categorical group or 2-group: that is, a group object in the category of categories, or equivalently a category object in the category of groups. This means that the concept of crossed module is one version of the result of blending the concepts of "group" and "category". This equivalence is important for higher-dimensional versions of groups.
Classifying space
Any crossed module
has a classifying space BM with the property that its homotopy groups are Coker d in dimension 1, Ker d in dimension 2, and 0 in dimensions above 2. It is possible
|
https://en.wikipedia.org/wiki/Conic%20constant
|
In geometry, the conic constant (or Schwarzschild constant, after Karl Schwarzschild) is a quantity describing conic sections, and is represented by the letter K. The constant is given by K = −e², where e is the eccentricity of the conic section.
The equation for a conic section with apex at the origin and tangent to the y axis is
y² − 2Rx + (K + 1)x² = 0
alternately
x = y² / (R + √(R² − (K + 1)y²))
where R is the radius of curvature at x = 0.
This formulation is used in geometric optics to specify oblate elliptical (K > 0), spherical (K = 0), prolate elliptical (−1 < K < 0), parabolic (K = −1), and hyperbolic (K < −1) lens and mirror surfaces. When the paraxial approximation is valid, the optical surface can be treated as a spherical surface with the same radius.
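The conic-section equation, solved for the surface height as a function of radial distance, gives the "sag" formula used for optical surfaces. A minimal sketch (the function name and argument order are choices of this sketch):

```python
# Sag (surface height) of a conicoid at radial distance y,
# for radius of curvature R and conic constant K.
from math import sqrt

def sag(y, R, K):
    """Height x of the surface at radial distance y from the axis."""
    return y ** 2 / (R + sqrt(R ** 2 - (K + 1) * y ** 2))

# With K = 0 this reduces to the sphere sag R - sqrt(R^2 - y^2),
# and with K = -1 to the parabola sag y^2 / (2R).
print(sag(10.0, 100.0, 0.0))
```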
Some non-optical design references use the letter p as the conic constant. In these cases, p = K + 1.
References
Mathematical constants
Conic sections
Geometrical optics
|
https://en.wikipedia.org/wiki/William%20Alvin%20Howard
|
William Alvin Howard (born 1926) is a proof theorist best known for his work demonstrating formal similarity between intuitionistic logic and the simply typed lambda calculus that has come to be known as the Curry–Howard correspondence. He has also been active in the theory of proof-theoretic ordinals. He earned his Ph.D. at the University of Chicago in 1956 for his dissertation "k-fold recursion and well-ordering". He was a student of Saunders Mac Lane.
The Howard ordinal (also known as the Bachmann–Howard ordinal) was named after him.
He was elected to the 2018 class of fellows of the American Mathematical Society.
References
External links
Entry for William Alvin Howard at the Mathematics Genealogy Project.
20th-century American mathematicians
21st-century American mathematicians
American logicians
University of Chicago alumni
1926 births
Living people
Proof theorists
Fellows of the American Mathematical Society
|
https://en.wikipedia.org/wiki/Debabrata%20Basu
|
Debabrata Basu (5 July 1924 – 24 March 2001) was an Indian statistician who made fundamental contributions to the foundations of statistics. Basu invented simple examples that displayed some difficulties of likelihood-based statistics and frequentist statistics; Basu's paradoxes were especially important in the development of survey sampling. In statistical theory, Basu's theorem established the independence of a complete sufficient statistic and an ancillary statistic.
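Basu's theorem can be illustrated numerically. The simulation below is an illustrative assumption of this sketch, not from the article: for i.i.d. normal data with known variance, the sample mean (complete sufficient for the mean) is independent of the sample variance (ancillary), so their empirical correlation should be near zero.

```python
# Simulate many N(0, 1) samples and correlate sample mean with sample variance.
import random
from statistics import mean, variance

random.seed(42)
xbars, s2s = [], []
for _ in range(4000):
    sample = [random.gauss(0.0, 1.0) for _ in range(10)]
    xbars.append(mean(sample))
    s2s.append(variance(sample))

# Pearson correlation, computed by hand for portability
mx, ms = mean(xbars), mean(s2s)
cov = sum((a - mx) * (b - ms) for a, b in zip(xbars, s2s)) / len(xbars)
sd_x = (sum((a - mx) ** 2 for a in xbars) / len(xbars)) ** 0.5
sd_s = (sum((b - ms) ** 2 for b in s2s) / len(s2s)) ** 0.5
r = cov / (sd_x * sd_s)
print(round(r, 3))  # close to 0, as Basu's theorem predicts
```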
Basu was associated with the Indian Statistical Institute in India, and Florida State University in the United States.
Biography
Debabrata Basu was born in Dacca, Bengal, unpartitioned India, now Dhaka, Bangladesh. His father, N. M. Basu, was a mathematician specialising in number theory. Young Basu studied mathematics at Dacca University. He took a course in statistics as part of the under-graduate honours programme in Mathematics but his ambition was to become a pure mathematician. After getting his master's degree from Dacca University, Basu taught there from 1947 to 1948.
Following the partition of India in 1947, Basu made several trips to India. In 1948, he moved to Calcutta, where he worked for some time as an actuary in an insurance company. In 1950, he joined the Indian Statistical Institute as a research scholar under C.R. Rao.
In 1950, the Indian Statistical Institute was visited by Abraham Wald, who was giving a lecture tour sponsored by the International Statistical Institute. Wald greatly impressed Basu. Wald had developed a decision-theoretic foundation for statistics in which Bayesian statistics was a central part, because of Wald's theorem characterising admissible decision rules as Bayesian decision rules (or limits of Bayesian decision rules). Wald also showed the power of using measure-theoretic probability theory in statistics.
He married Kalyani Ray in 1952 and subsequently had two children, Monimala (Moni) Basu and Shantanu Basu. Moni is a journalism professor at the University of Florida and former CNN reporter, and Shantanu is an astrophysicist at the University of Western Ontario.
In 1953, after submitting his thesis to the University of Calcutta, Basu went as a Fulbright scholar to the University of California, Berkeley. There Basu had intensive discussions with Jerzy Neyman and "his brilliant younger colleagues" like Erich Leo Lehmann. Basu's theorem comes from this time. Basu thus had a good understanding of the decision-theoretic approach to statistics of Neyman, Pearson and Wald. In fact, Basu is described as having returned from Berkeley to India as a "complete Neyman Pearsonian" by J. K. Ghosh.
Basu met Ronald Fisher in the winter of 1954–1955; he wrote in 1988, "With his reference set argument, Sir Ronald was trying to find a via media between the two poles of Statistics – Berkeley and Bayes. My efforts to understand this Fisher compromise led me to the likelihood principle". In their festschrift for Basu, the editors Malay Ghosh and Patak
|
https://en.wikipedia.org/wiki/Latin%20letters%20used%20in%20mathematics%2C%20science%2C%20and%20engineering
|
Many letters of the Latin alphabet, both capital and small, are used in mathematics, science, and engineering to denote by convention specific or abstracted constants, variables of a certain type, units, multipliers, or physical entities. Certain letters, when combined with special formatting, take on special meaning.
Below is an alphabetical list of the letters of the alphabet with some of their uses. The field in which the convention applies is mathematics unless otherwise noted.
Aa
A represents:
the first point of a triangle
the digit "10" in hexadecimal and other positional numeral systems with a radix of 11 or greater
the unit ampere for electric current in physics
the area of a figure
the mass number or nucleon number of an element in chemistry
the Helmholtz free energy of a closed thermodynamic system at constant volume and temperature
a vector potential; in electromagnetics it can refer to the magnetic vector potential
an Abelian group in abstract algebra
the Glaisher–Kinkelin constant
atomic weight, denoted by Ar
work in classical mechanics
the pre-exponential factor in the Arrhenius Equation
electron affinity
represents the algebraic numbers or affine space in algebraic geometry.
A blood type
A spectral type
a represents:
the first side of a triangle (opposite point A)
the scale factor of the expanding universe in cosmology
the acceleration in mechanics equations
the first constant in a linear equation
a constant in a polynomial
the unit are for area (100 m2)
the unit prefix atto (10−18)
the first term in a sequence or series
Reflectance
Bb
B represents:
the digit "11" in hexadecimal and other positional numeral systems with a radix of 12 or greater
the second point of a triangle
a ball (also denoted by ℬ)
a basis of a vector space or of a filter (both also denoted by ℬ)
in econometrics and time-series statistics it is often used for the backshift or lag operator, the formal parameter of the lag polynomial
the magnetic field, denoted or
B with various subscripts represents several variations of Brun's constant and Betti numbers; it can also be used to mean the Bernoulli numbers.
B meson
A blood type
Boron
Buoyancy
Bulk modulus
Luminance
A spectral type
b represents:
the second side of a triangle (opposite point B)
the impact parameter in nuclear scattering
the second constant in a linear equation
usually with an index, sometimes with an arrow over it, a basis vector
a breadth
the molality of a solution
Bottom quark
Barn (10−24 cm2)
Cc
C represents:
the third point of a triangle
the digit "12" in hexadecimal and other positional numeral systems with a radix of 13 or greater
the unit coulomb of electrical charge
capacitance in electrical theory
with indices denoting the number of combinations, a binomial coefficient
together with a degree symbol (°), the Celsius scale of temperature (°C)
the circumference of a circle or other closed curve
the complement of a set (lowercase c and the symbol ∁ are
|
https://en.wikipedia.org/wiki/Culture%20of%20Goa
|
Goa is a state of India. Goans are commonly said to be born with music and football in their blood because both are deeply entrenched in Goan culture.
Religion
According to the 1909 statistics in the Catholic Encyclopedia, the total Catholic population was 293,628 out of a total population of 365,291 (80.33%). Within Goa, there has been a steady decline of Christianity due to Goan emigration, and a steady rise of other religions, due to massive non-Goan immigration since the Annexation of Goa. (Native Goans are outnumbered by non-Goans in Goa.) Conversion seems to play little role in the demographic change. According to the 2011 census, in a population of 1,458,545 people, 66.1% were Hindu, 25.1% were Christian, 8.3% were Muslim and 0.1% were Sikh.
Festivals
The most popular celebrations in the Indian state of Goa include the Goa Carnival, (Konkani: Intruz), São João (Feast of John the Baptist), Ganesh Chaturthi (Konkani: Chavath), Diwali, Christmas (Konkani: Natalam), Easter (Konkani: Paskanchem Fest), Samvatsar Padvo or Sanvsar Padvo, and Shigmo. The largest festival in the state is the Feast of St. Francis Xavier, who is known as Goencho Saib.
Education
Cuisine
Rice with fish curry (Xit kodi in Konkani) is the staple diet in Goa. Goan cuisine is renowned for its rich variety of fish dishes cooked with elaborate recipes. Coconut and coconut oil are widely used in Goan cooking along with chili peppers, spices and vinegar, giving the food a unique flavour. Pork and beef dishes such as Vindaloo, Xacuti and Sorpotel are cooked for major occasions among the Catholics. An exotic Goan vegetable stew, known as Khatkhate, is a very popular dish during the celebrations of festivals, Hindu and Christian alike. Khatkhate contains at least five vegetables, fresh coconut, and special Goan spices that add to the aroma. A rich egg-based multi-layered sweet dish known as bebinca is a favourite at Christmas. Cashew feni is made from the fermentation of the fruit of the cashew tree, while coconut feni is made from the sap of toddy palms.
Architecture
The architecture of Goa shows a distinct Portuguese influence. Fontainhas in Panaji has been declared a cultural quarter, showcasing the life, architecture and culture of Goa.
The Churches and Convents of Goa are a group of six churches that are a UNESCO World Heritage Site. The Basilica of Bom Jesus holds the mortal remains of St. Francis Xavier, the patron saint of Goa. Once every ten years, the body is taken down for veneration and for public viewing. The last such event was conducted in 2014.
Influences from other eras (Kadambas of Goa, Maratha Empire) are visible in some of Goa's temples, notably the Mahadev Temple and Saptakoteshwar Temple.
Sports
Football is the most popular sport in Goa, followed by hockey. Cricket, athletics, chess, swimming, table tennis and basketball are other popular sports in Goa. Fishing is also a popular recreational activity.
Science
Arts
Music
Goan Catholics have been pe
|
https://en.wikipedia.org/wiki/Mark%20Rudd
|
Mark William Rudd (born June 2, 1947) is an American political organizer, mathematics instructor, anti-war activist and counterculture icon who was involved with the Weather Underground in the 1960s.
Rudd became a member of the Columbia University chapter of Students for a Democratic Society (SDS) in 1963. By 1968, he had emerged as a leader for Columbia's SDS chapter. During the 1968 Columbia University Protests, he served as spokesperson for dissident students protesting a variety of issues, particularly the Vietnam War. As the war escalated, Mark Rudd worked with other youth movement leaders to take SDS in a more militant direction. While much of the general membership of SDS refused to countenance violence, Rudd together with some other prominent SDS members formed a paramilitary organization inspired by the Red Guard, referring to themselves collectively as "Weatherman" after the lyrics from a famous Bob Dylan song.
Rudd went "underground" in 1970, hiding from law enforcement following the Greenwich Village townhouse explosion that killed three of his Weather Underground peers. He surrendered to authorities in 1977 and served a short jail sentence. He taught mathematics at Central New Mexico Community College, and retired in Albuquerque, New Mexico. Rudd has since expressed regret for his role in the Weather Underground, and advocates for nonviolence and electoral change.
Early life
Rudd was born in Irvington, New Jersey. His father, Jacob S. Rudd (1909–1995), was born Jacob Shmuel Rudnitsky in Stanislower, Poland; he was a former army officer who sold real estate in Maplewood, New Jersey. His mother, Bertha Bass (1912–2009), was born in Elizabeth, New Jersey, the year after her parents emigrated from Lithuania. Rudd had a brother, David R. Rudd (1939–2009), who became an attorney. His family was Jewish. Rudd attended Columbia High School in his hometown of Maplewood, New Jersey, and later Columbia University in New York.
Campus activism
Mark Rudd's website says that his commitment to "fighting U.S. imperialism" was inspired by the revolutionary movement in Cuba, which at that time was in its ninth year.
In 1968, Rudd and Bernardine Dohrn and other leaders of SDS were invited to Cuba to meet with Cuban, Soviet, and North Vietnamese delegates. His experiences in Cuba strengthened Rudd's anti-war and pro-Communist sentiments. Rudd had described the life of Cuba as "extremely humanistic" and he idealized Ernesto "Che" Guevara, referring to him as the "Heroic Guerrilla."
Once he returned from Cuba, Rudd was elected President of the Columbia chapter of SDS. In 1968, during his junior year, Rudd was expelled from Columbia after a series of sit-ins and riots that disrupted campus life and attracted nationwide attention. These events culminated in the dramatic occupation of several campus buildings, including the Administration building and Low Memorial Library, and ended only after violent clashes between students and the New York Poli
|
https://en.wikipedia.org/wiki/Bundle%20gerbe
|
In mathematics, a bundle gerbe is a geometrical model of certain 1-gerbes with connection, or equivalently of a 2-class in Deligne cohomology.
Topology
U(1)-principal bundles over a space M (see circle bundle) are geometrical realizations of 1-classes in Deligne cohomology which consist of 1-form connections and 2-form curvatures. The topology of a bundle is classified by its Chern class, which is an element of H²(M, Z), the second integral cohomology of M.
Gerbes, or more precisely 1-gerbes, are abstract descriptions of Deligne 2-classes, which each define an element of H³(M, Z), the third integral cohomology of M.
As a cohomology class in Deligne cohomology
Recall for a smooth manifold the p-th Deligne cohomology groups are defined by the hypercohomology of the complex called the weight q Deligne complex, where is the sheaf of germs of smooth differential k-forms tensored with . So, we write for the Deligne-cohomology groups of weight . In the case the Deligne complex is then
We can understand the Deligne cohomology groups by looking at the Cech resolution giving a double complex. There is also an associated short exact sequence
where are the closed germs of complex valued 2-forms on and is the subspace of such forms where period integrals are integral. This can be used to show are the isomorphism classes of bundle-gerbes on a smooth manifold , or equivalently, the isomorphism classes of -bundles on .
History
Historically the most popular construction of a gerbe is a category-theoretic model featured in Giraud's theory of gerbes, which are roughly sheaves of groupoids over M.
In 1994 Murray introduced bundle gerbes, which are geometric realizations of 1-gerbes.
For many purposes these are more suitable for calculations than Giraud's realization, because their construction is entirely within the framework of classical geometry. In fact, as their name suggests, they are fiber bundles.
This notion was extended to higher gerbes the following year.
Relationship with twisted K-theory
In Twisted K-theory and the K-theory of Bundle Gerbes the authors defined modules of bundle gerbes and used this to define a K-theory for bundle gerbes. They then showed that this K-theory is isomorphic to Rosenberg's twisted K-theory, and provides an analysis-free construction.
In addition they defined a notion of twisted Chern character which is a characteristic class for an element of twisted K-theory. The twisted Chern character is a differential form that represents a class in the twisted cohomology with respect to the nilpotent operator
where is the ordinary exterior derivative and the twist is a closed 3-form. This construction was extended to equivariant K-theory and to holomorphic K-theory by Mathai and Stevenson.
Relationship with field theory
Bundle gerbes have also appeared in the context of conformal field theories. Gawedzki and Reis have interpreted the Wess–Zumino term in the Wess–Zumino–Witten model (WZW) of string propagation on a group
|
https://en.wikipedia.org/wiki/Lanczos%20approximation
|
In mathematics, the Lanczos approximation is a method for computing the gamma function numerically, published by Cornelius Lanczos in 1964. It is a practical alternative to the more popular Stirling's approximation for calculating the gamma function with fixed precision.
Introduction
The Lanczos approximation consists of the formula
Γ(z + 1) = √(2π) (z + g + 1/2)^(z + 1/2) e^(−(z + g + 1/2)) A_g(z)
for the gamma function, with
A_g(z) = (1/2)p_0(g) + p_1(g) z/(z + 1) + p_2(g) z(z − 1)/((z + 1)(z + 2)) + ⋯
Here g is a real constant that may be chosen arbitrarily subject to the restriction that Re(z + g + 1/2) > 0. The coefficients p_k(g), which depend on g, are slightly more difficult to calculate (see below). Although the formula as stated here is only valid for arguments in the right complex half-plane, it can be extended to the entire complex plane by the reflection formula,
The series A is convergent, and may be truncated to obtain an approximation with the desired precision. By choosing an appropriate g (typically a small integer), only some 5–10 terms of the series are needed to compute the gamma function with typical single or double floating-point precision. If a fixed g is chosen, the coefficients can be calculated in advance and, thanks to partial fraction decomposition, the sum is recast into the following form:
A_g(z) = c_0 + c_1/(z + 1) + c_2/(z + 2) + ⋯ + c_N/(z + N)
Thus computing the gamma function becomes a matter of evaluating only a small number of elementary functions and multiplying by stored constants. The Lanczos approximation was popularized by Numerical Recipes, according to which computing the gamma function becomes "not much more difficult than other built-in functions that we take for granted, such as sin x or ex." The method is also implemented in the GNU Scientific Library, Boost, CPython and musl.
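A compact sketch of this partial-fraction scheme is given below. The particular coefficients, for g = 7 with 9 terms, are an assumption taken from widely reproduced tabulations rather than from this article:

```python
# Lanczos approximation of the gamma function, partial-fraction form.
from cmath import exp, pi, sin, sqrt

g = 7
p = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
     771.32342877765313, -176.61502916214059, 12.507343278686905,
     -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def gamma(z):
    z = complex(z)
    if z.real < 0.5:
        # reflection formula extends the approximation to the left half-plane
        return pi / (sin(pi * z) * gamma(1 - z))
    z -= 1
    x = p[0]
    for i in range(1, len(p)):
        x += p[i] / (z + i)            # the partial-fraction sum
    t = z + g + 0.5
    return sqrt(2 * pi) * t ** (z + 0.5) * exp(-t) * x

print(abs(gamma(5) - 24))  # tiny: Γ(5) = 4! = 24
```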
Coefficients
The coefficients are given by
where represents the (n, m)th element of the matrix of coefficients for the Chebyshev polynomials, which can be calculated recursively from these identities:
Godfrey (2001) describes how to obtain the coefficients and also the value of the truncated series A as a matrix product.
Derivation
Lanczos derived the formula from Leonhard Euler's integral
performing a sequence of basic manipulations to obtain
and deriving a series for the integral.
Simple implementation
The following implementation in the Python programming language works for complex arguments and typically gives 13 correct decimal places.
Note that omitting the smallest coefficients (in pursuit of speed, for example) gives totally inaccurate results; the coefficients must be recomputed from scratch for an expansion with fewer terms.
from cmath import sin, sqrt, pi, exp
"""
The coefficients used in the code are for when g = 7 and n = 9
Here are some other samples
g = 5
n = 5
p = [
1.0000018972739440364,
76.180082222642137322,
-86.505092037054859197,
24.012898581922685900,
-1.2296028490285820771
]
g = 5
n = 7
p = [
1.0000000001900148240,
76.180091729471463483,
-86.505320329416767652,
24.014098240830910490,
-1.2317395724501553875,
0.0012086509738661785061,
-5.39523938495312
|
https://en.wikipedia.org/wiki/Antoni%20%C5%81omnicki
|
Antoni Marian Łomnicki (17 January 1881 – 4 July 1941) was a Polish mathematician. He contributed to applied mathematics and cartography. He was the author of several textbooks of mathematics and was an influential mathematics teacher at the University of Lwów.
Life and work
Antoni Łomnicki was born in Lwów, the son of Marian Łomnicki. He was educated at Lwów's IV Gymnasium and at Jan Kazimierz University in Lwów (1899–1903). His teachers included Józef Puzyna, Jan Rajewski, Stanisław Kępiński, Marian Smoluchowski, and Kazimierz Twardowski. He passed the teachers' exam in 1903 and received a government scholarship in 1906 to study at the University of Göttingen, where he attended lectures by David Hilbert, Felix Klein, Hermann Minkowski, and others. He taught at the 7th Gymnasium in Lwów and from 1913 at the Lwów Polytechnic School. In 1918–19 he took part in the Polish-Ukrainian war. In 1920 he became professor of the Lwów University of Technology and taught for the next twenty years. He was part of the Lwów school of mathematics and influenced many other mathematicians including Stefan Banach, Kazimierz Kuratowski, Włodzimierz Stożek, Antoni Nikliborc, Stefan Kaczmarz, Władysław Orlicz, and Stanisław Mazur. In 1938 he became a member of the Warsaw Scientific Society (TNW). He worked on probability, calculus, statistics and mathematical cartography and wrote on the teaching of mathematics. His major works included Kartografia matematyczna (Warszawa 1927).
Łomnicki was arrested on July 3, 1941 by the invading Nazi Germans during the Second World War and shot along with several other professors (see Massacre of Lwów professors) the next day on the Wzgórza Wuleckie in Lwów.
In December 1944 Stefan Banach wrote the following tribute to Łomnicki:
A native of Lwów, he worked for over twenty years as a mathematics professor at the Lwów University of Technology. He prepared hundreds of engineers for their profession. I was his assistant. He was the first to instil in me the importance and responsibility of a professor’s task. He was an unrivalled educator, one of the best I ever knew. He was the author of many popular schoolbooks as well as textbooks on advanced analysis for technologists, surpassing in quality those published abroad. His work in the field of cartography was at a high level. Equally effective were his teaching and pedagogic efforts. Professor Łomnicki had tremendous energy and a great work ethic.
References
External links
1881 births
1941 deaths
Lwów School of Mathematics
Victims of the Massacre of Lwów professors
Academic staff of Lviv Polytechnic
Textbook writers
|
https://en.wikipedia.org/wiki/Herman%20Auerbach
|
Herman Auerbach (October 26, 1901, Tarnopol – August 17, 1942) was a Polish mathematician and member of the Lwów School of Mathematics.
Auerbach was professor at Lwów University. During the Second World War, because of his Jewish descent, he was imprisoned by the Germans in the Lwów ghetto. In 1942 he was murdered at the Bełżec extermination camp.
See also
Jewish ghettos in German-occupied Poland
List of Nazi-German concentration camps
The Holocaust in Poland
World War II casualties of Poland
References
External links
Author profile in the database zbMATH
1901 births
1942 deaths
People from Ternopil
People from the Kingdom of Galicia and Lodomeria
Jews from Galicia (Eastern Europe)
Austro-Hungarian Jews
Lwów School of Mathematics
Lwów Ghetto inmates
People who died in Belzec extermination camp
Polish civilians killed in World War II
Polish people who died in Nazi concentration camps
Polish Jews who died in the Holocaust
|
https://en.wikipedia.org/wiki/148%20%28number%29
|
148 (one hundred [and] forty-eight) is the natural number following 147 and before 149.
In mathematics
148 is the second number to be both a heptagonal number and a centered heptagonal number (the first is 1). It is the twelfth member of the Mian–Chowla sequence, the lexicographically smallest sequence of distinct positive integers with distinct pairwise sums.
There are 148 perfect graphs with six vertices, and 148 ways of partitioning four people into subsets, ordering the subsets, and selecting a leader for each subset.
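The second count can be checked by brute force. The set-partition enumerator below is an ad-hoc helper written for this illustration:

```python
# Count the ways to partition four people into subsets, order the subsets,
# and select a leader for each subset.
from math import factorial, prod

def set_partitions(items):
    """Yield all partitions of `items` as lists of blocks."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in set_partitions(rest):
        yield [[first]] + part                        # first in a new block
        for i in range(len(part)):                    # or join an existing block
            yield part[:i] + [[first] + part[i]] + part[i + 1:]

count = sum(factorial(len(part)) * prod(len(block) for block in part)
            for part in set_partitions(list(range(4))))
print(count)  # 148
```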
In other fields
In the Book of Nehemiah 7:44 there are 148 singers, sons of Asaph, at the census of men of Israel upon return from exile. This differs from Ezra 2:41, where the number is given as 128.
Dunbar's number is a theoretical cognitive limit to the number of people with whom one can maintain stable interpersonal relationships. Dunbar predicted a "mean group size" of 148, but this is commonly rounded to 150.
See also
The year AD 148 or 148 BC
List of highways numbered 148
References
Integers
|
https://en.wikipedia.org/wiki/F-coalgebra
|
In mathematics, specifically in category theory, an F-coalgebra is a structure defined according to a functor F, with specific properties as defined below. For both algebras and coalgebras, a functor is a convenient and general way of organizing a signature. This has applications in computer science: examples of coalgebras include lazy evaluation, infinite data structures, such as streams, and also transition systems.
F-coalgebras are dual to F-algebras. Just as the class of all algebras for a given signature and equational theory forms a variety, so the class of all F-coalgebras satisfying a given equational theory forms a covariety, where the signature is given by F.
Definition
Let
F : C → C
be an endofunctor on a category C.
An F-coalgebra is an object A of C together with a morphism
α : A → F(A)
of C, usually written as (A, α).
An F-coalgebra homomorphism from (A, α) to another F-coalgebra (B, β)
is a morphism
f : A → B
in C such that
F(f) ∘ α = β ∘ f.
Thus the F-coalgebras for a given functor F constitute a category.
Examples
Consider the endofunctor that sends a set X to its disjoint union with the singleton set 1 = {∗}. A coalgebra of this endofunctor is given by (N̄, α), where N̄ is the set of so-called conatural numbers, consisting of the nonnegative integers and also infinity, and the function α is given by α(0) = ∗, α(n) = n − 1 for n ≥ 1, and α(∞) = ∞. In fact, (N̄, α) is the terminal coalgebra of this endofunctor.
More generally, fix some set A, and consider the functor F that sends X to (A × X) + 1. Then an F-coalgebra (S, α) is a finite or infinite stream over the alphabet A, where S is the set of states and α is the state-transition function. Applying the state-transition function to a state may yield two possible results: either an element of A together with the next state of the stream, or the element ∗ of the singleton set as a separate "final state" indicating that there are no more values in the stream.
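The stream coalgebra just described can be sketched concretely. In the encoding below, which is an illustration rather than standard notation, the singleton summand is represented by None and the product summand by a pair:

```python
# A coalgebra for F(X) = A x X + 1: states are integers, and the
# transition either emits a value with a next state, or stops.
def countdown(n):
    """Transition function: emit n and move to n - 1, or stop below 1."""
    if n < 1:
        return None            # the 'final state' branch (singleton summand)
    return (n, n - 1)          # the A x X branch: output and next state

def unfold(transition, state, limit=1000):
    """Observe the (possibly infinite) stream generated from `state`,
    truncated at `limit` steps so the run stays finite."""
    out = []
    for _ in range(limit):
        step = transition(state)
        if step is None:
            break
        value, state = step
        out.append(value)
    return out

print(unfold(countdown, 3))  # [3, 2, 1]
```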
In many practical applications, the state-transition function of such a coalgebraic object may be of the form , which readily factorizes into a collection of "selectors", "observers", "methods" . Special cases of practical interest include observers yielding attribute values, and mutator methods of the form taking additional parameters and yielding states. This decomposition is dual to the decomposition of initial -algebras into sums of 'constructors'.
Let P be the power set construction on the category of sets, considered as a covariant functor. The P-coalgebras are in bijective correspondence with sets with a binary relation.
Now fix another set, A. Then coalgebras for the endofunctor P(A×(-)) are in bijective correspondence with labelled transition systems, and homomorphisms between coalgebras correspond to functional bisimulations between labelled transition systems.
Applications
In computer science, coalgebra has emerged as a convenient and suitably general way of specifying the behaviour of systems and data structures that are potentially infinite, for example classes in object-oriented programming, streams and transition systems. While algebraic specifi
|
https://en.wikipedia.org/wiki/Gelfond%27s%20constant
|
In mathematics, Gelfond's constant, named after Aleksandr Gelfond, is e^π, that is, e raised to the power π. Like both e and π, this constant is a transcendental number. This was first established by Gelfond and may now be considered as an application of the Gelfond–Schneider theorem, noting that
e^π = (e^(iπ))^(−i) = (−1)^(−i),
where i is the imaginary unit. Since −i is algebraic but not rational, e^π is transcendental. The constant was mentioned in Hilbert's seventh problem. A related constant is 2^√2, known as the Gelfond–Schneider constant. The related value π + e^π is also irrational.
Numerical value
The decimal expansion of Gelfond's constant begins
23.1406926327792690...
Construction
If one defines k_0 = 1/√2 and
k_{n+1} = (1 − √(1 − k_n²)) / (1 + √(1 − k_n²))
for n ≥ 0, then the sequence
(4/k_{n+1})^(2^(−n))
converges rapidly to e^π.
Continued fraction expansion
The simple continued fraction expansion of e^π is given by the integer sequence A058287.
Geometric property
The volume of the n-dimensional ball (or n-ball) is given by
V_n = π^(n/2) R^n / Γ(n/2 + 1),
where R is its radius and Γ is the gamma function. Any even-dimensional ball has volume
V_{2n} = π^n R^(2n) / n!,
and summing up all the unit-ball (R = 1) volumes of even dimension gives
∑_{n=0}^∞ V_{2n}(R = 1) = e^π.
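This sum is easy to check numerically, since the even-dimensional unit-ball volumes are π^n / n!:

```python
# Partial sums of pi^n / n! converge to Gelfond's constant e^pi.
from math import exp, factorial, pi

total = sum(pi ** n / factorial(n) for n in range(80))
print(abs(total - exp(pi)) < 1e-9)  # True
```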
Similar or related constants
Ramanujan's constant
The number e^(π√163) is known as Ramanujan's constant. It is an application of Heegner numbers, where 163 is the Heegner number in question.
Similar to e^π − π, e^(π√163) is very close to an integer:
262537412640768743.99999999999925...
This number was discovered in 1859 by the mathematician Charles Hermite.
In a 1975 April Fool article in Scientific American magazine, "Mathematical Games" columnist Martin Gardner made the hoax claim that the number was in fact an integer, and that the Indian mathematical genius Srinivasa Ramanujan had predicted it—hence its name.
The coincidental closeness, to within 0.000 000 000 000 75, of the number e^(π√163) to the integer 640320³ + 744 is explained by complex multiplication and the q-expansion of the j-invariant, specifically:
j((1 + √−163)/2) = −640320³
and,
e^(π√163) = 640320³ + 744 − ε, where ε ≈ 196884 e^(−π√163) is the error term,
which explains why e^(π√163) is 0.000 000 000 000 75 below 640320³ + 744.
(For more detail on this proof, consult the article on Heegner numbers.)
The number e^π − π
The decimal expansion of e^π − π is given by A018938:
19.9990999791894757...
Despite this being nearly the integer 20, no explanation has been given for this fact and it is believed to be a mathematical coincidence.
The number π^e
The decimal expansion of π^e is given by A059850:
22.4591577183610454...
It is not known whether or not this number is transcendental. Note that, by the Gelfond–Schneider theorem, we can only infer definitively that a^b is transcendental if a is algebraic (and not 0 or 1) and b is algebraic but not rational (a and b are both considered complex numbers).
In the case of e^π, we are only able to prove this number transcendental because it has the equivalent form (−1)^(−i), allowing the application of the Gelfond–Schneider theorem.
π^e has no such equivalence, and hence, as both π and e are transcendental, we can make no conclusion about the transcendence of π^e.
The number π^π
As with π^e, it is not known whether π^π is transcendental. Further, no proof ex
|
https://en.wikipedia.org/wiki/Simply%20typed%20lambda%20calculus
|
The simply typed lambda calculus (λ→), a form
of type theory, is a typed interpretation of the lambda calculus with only one type constructor (→) that builds function types. It is the canonical and simplest example of a typed lambda calculus. The simply typed lambda calculus was originally introduced by Alonzo Church in 1940 as an attempt to avoid paradoxical use of the untyped lambda calculus.
The term simple type is also used to refer to extensions of the simply typed lambda calculus such as products, coproducts or natural numbers (System T) or even full recursion (like PCF). In contrast, systems which introduce polymorphic types (like System F) or dependent types (like the Logical Framework) are not considered simply typed. The simple types, except for full recursion, are still considered simple because the Church encodings of such structures can be done using only → and suitable type variables, while polymorphism and dependency cannot.
Syntax
In this article, the symbols σ and τ are used to range over types. Informally, the function type σ → τ refers to the type of functions that, given an input of type σ, produce an output of type τ.
By convention, → associates to the right: σ → τ → ρ is read as σ → (τ → ρ).
To define the types, a set of base types, , must first be defined. These are sometimes called atomic types or type constants. With this fixed, the syntax of types is:
.
For example, , generates an infinite set of types starting with
A set of term constants is also fixed for the base types. For example, it might be assumed that a base type , and the term constants could be the natural numbers. In the original presentation, Church used only two base types: for "the type of propositions" and for "the type of individuals". The type has no term constants, whereas has one term constant. Frequently the calculus with only one base type, usually , is considered.
The syntax of the simply typed lambda calculus is essentially that of the lambda calculus itself. The term denotes that the variable is of type . The term syntax, in BNF, is then:
where is a term constant.
That is, variable reference, abstractions, application, and constant. A variable reference is bound if it is inside of an abstraction binding . A term is closed if there are no unbound variables.
In comparison, the syntax of untyped lambda calculus has no such typing or term constants:
Whereas in typed lambda calculus every abstraction (i.e. function) must specify the type of its argument.
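The typing discipline described above can be sketched as a small type checker. This is an illustrative implementation, not from the article: the representation of types and terms, the single base type `nat`, and all function names are assumptions chosen for the sketch.

```python
# A minimal type checker for the simply typed lambda calculus, assuming a
# single base type "nat"; names and representation are illustrative choices.
from dataclasses import dataclass

@dataclass(frozen=True)
class Base:        # a base (atomic) type, e.g. nat
    name: str

@dataclass(frozen=True)
class Arrow:       # the function type sigma -> tau
    src: object
    dst: object

@dataclass(frozen=True)
class Var:         # variable reference
    name: str

@dataclass(frozen=True)
class Lam:         # abstraction: every abstraction annotates its argument type
    var: str
    ty: object
    body: object

@dataclass(frozen=True)
class App:         # application
    fn: object
    arg: object

def typecheck(term, ctx=None):
    """Return the type of `term` in typing context `ctx`, or raise TypeError."""
    ctx = ctx or {}
    if isinstance(term, Var):
        if term.name not in ctx:
            raise TypeError(f"unbound variable {term.name}")
        return ctx[term.name]
    if isinstance(term, Lam):
        body_ty = typecheck(term.body, {**ctx, term.var: term.ty})
        return Arrow(term.ty, body_ty)
    if isinstance(term, App):
        fn_ty = typecheck(term.fn, ctx)
        arg_ty = typecheck(term.arg, ctx)
        if not (isinstance(fn_ty, Arrow) and fn_ty.src == arg_ty):
            raise TypeError("ill-typed application")
        return fn_ty.dst
    raise TypeError("unknown term")

nat = Base("nat")
# λf: nat→nat. λx: nat. f (f x)  —  has type (nat→nat) → nat → nat
twice = Lam("f", Arrow(nat, nat),
            Lam("x", nat, App(Var("f"), App(Var("f"), Var("x")))))
print(typecheck(twice) == Arrow(Arrow(nat, nat), Arrow(nat, nat)))  # True
```

The checker mirrors the three typing rules: variables are looked up in the context, abstractions extend the context with their annotated argument type, and applications require the argument type to match the function's domain.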
Typing rules
To define the set of well-typed lambda terms of a given type, one defines a typing relation between terms and types. First, one introduces typing contexts or typing environments , which are sets of typing assumptions. A typing assumption has the form , meaning has type .
The typing relation indicates that is a term of type in context . In this case is said to be well-typed (having type ). Instances of the typing relation are called typing judgements. The validity of a typing judgemen
|
https://en.wikipedia.org/wiki/Symbolic%20dynamics
|
In mathematics, symbolic dynamics is the practice of modeling a topological or smooth dynamical system by a discrete space consisting of infinite sequences of abstract symbols, each of which corresponds to a state of the system, with the dynamics (evolution) given by the shift operator. Formally, a Markov partition is used to provide a finite cover for the smooth system; each set of the cover is associated with a single symbol, and the sequences of symbols result as a trajectory of the system moves from one covering set to another.
History
The idea goes back to Jacques Hadamard's 1898 paper on the geodesics on surfaces of negative curvature. It was applied by Marston Morse in 1921 to the construction of a nonperiodic recurrent geodesic. Related work was done by Emil Artin in 1924 (for the system now called Artin billiard), Pekka Myrberg, Paul Koebe, Jakob Nielsen, G. A. Hedlund.
The first formal treatment was developed by Morse and Hedlund in their 1938 paper. George Birkhoff, Norman Levinson and the pair Mary Cartwright and J. E. Littlewood have applied similar methods to qualitative analysis of nonautonomous second order differential equations.
Claude Shannon used symbolic sequences and shifts of finite type in his 1948 paper A mathematical theory of communication that gave birth to information theory.
During the late 1960s the method of symbolic dynamics was developed to hyperbolic toral automorphisms by Roy Adler and Benjamin Weiss, and to Anosov diffeomorphisms by Yakov Sinai who used the symbolic model to construct Gibbs measures. In the early 1970s the theory was extended to Anosov flows by Marina Ratner, and to Axiom A diffeomorphisms and flows by Rufus Bowen.
A spectacular application of the methods of symbolic dynamics is Sharkovskii's theorem about periodic orbits of a continuous map of an interval into itself (1964).
Examples
Concepts such as heteroclinic orbits and homoclinic orbits have a particularly simple representation in symbolic dynamics.
Itinerary
The itinerary of a point with respect to the partition is a sequence of symbols. It describes the dynamics of the point.
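As a concrete sketch, the following computes itineraries for the doubling map x → 2x (mod 1) on [0, 1) with the two-set partition [0, 1/2) and [1/2, 1); both the map and the partition are illustrative choices, not fixed by the article.

```python
# Itinerary of a point under the doubling map x -> 2x (mod 1), with the
# partition P0 = [0, 1/2) -> symbol 0 and P1 = [1/2, 1) -> symbol 1.
def itinerary(x, steps):
    """Record which partition element the orbit of x visits at each step."""
    symbols = []
    for _ in range(steps):
        symbols.append(0 if x < 0.5 else 1)
        x = (2 * x) % 1.0
    return symbols

# For the doubling map, the itinerary of x is its binary expansion:
print(itinerary(0.375, 5))  # 0.375 = 0.011 in binary -> [0, 1, 1, 0, 0]
```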
Applications
Symbolic dynamics originated as a method to study general dynamical systems; now its techniques and ideas have found significant applications in data storage and transmission, linear algebra, the motions of the planets and many other areas. The distinct feature in symbolic dynamics is that time is measured in discrete intervals. So at each time interval the system is in a particular state. Each state is associated with a symbol and the evolution of the system is described by an infinite sequence of symbols—represented effectively as strings. If the system states are not inherently discrete, then the state vector must be discretized, so as to get a coarse-grained description of the system.
See also
Measure-preserving dynamical system
Combinatorics and dynamical systems
Shift space
Shift of finite type
Complex dynamics
Arithmetic dynamics
Referen
|
https://en.wikipedia.org/wiki/Subshift%20of%20finite%20type
|
In mathematics, subshifts of finite type are used to model dynamical systems, and in particular are the objects of study in symbolic dynamics and ergodic theory. They also describe the set of all possible sequences executed by a finite state machine. The most widely studied shift spaces are the subshifts of finite type.
Definition
Let be a finite set of symbols (alphabet). Let denote the set of all bi-infinite sequences of elements of together with the shift operator . We endow with the discrete topology and with the product topology. A symbolic flow or subshift is a closed -invariant subset of and the associated language is the set of finite subsequences of .
Now let be an adjacency matrix with entries in Using these elements we construct a directed graph with the set of vertices and the set of edges containing the directed edge in if and only if . Let be the set of all infinite admissible sequences of edges, where by admissible it is meant that the sequence is a walk of the graph, and the sequence can be either one-sided or two-sided infinite. Let be the left shift operator on such sequences; it plays the role of the time-evolution operator of the dynamical system. A subshift of finite type is then defined as a pair obtained in this way. If the sequence extends to infinity in only one direction, it is called a one-sided subshift of finite type, and if it is bilateral, it is called a two-sided subshift of finite type.
Formally, one may define the sequences of edges as
This is the space of all sequences of symbols such that the symbol can be followed by the symbol only if the -th entry of the matrix is 1. The space of all bi-infinite sequences is defined analogously:
The shift operator maps a sequence in the one- or two-sided shift to another by shifting all symbols to the left, i.e.
Clearly this map is only invertible in the case of the two-sided shift.
A subshift of finite type is called transitive if is strongly connected: there is a sequence of edges from any one vertex to any other vertex. It is precisely transitive subshifts of finite type which correspond to dynamical systems with orbits that are dense.
An important special case is the full -shift: it has a graph with an edge that connects every vertex to every other vertex; that is, all of the entries of the adjacency matrix are 1. The full -shift corresponds to the Bernoulli scheme without the measure.
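The admissibility condition above is easy to check for a finite block of symbols. The sketch below uses the golden-mean shift (the subshift on two symbols forbidding the block "11") as a standard example; the function name is an illustrative choice.

```python
# A finite word is admissible for the subshift of finite type given by an
# adjacency matrix A when symbol j may follow symbol i only if A[i][j] == 1.
def admissible(word, A):
    return all(A[i][j] == 1 for i, j in zip(word, word[1:]))

golden_mean = [[1, 1],
               [1, 0]]   # forbids the block "11"

print(admissible([0, 1, 0, 0, 1], golden_mean))  # True
print(admissible([0, 1, 1, 0], golden_mean))     # False: contains "11"

full_2_shift = [[1, 1],
                [1, 1]]  # all entries 1: every word is admissible
print(admissible([1, 1, 0, 1], full_2_shift))    # True
```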
Terminology
By convention, the term shift is understood to refer to the full -shift. A subshift is then any subspace of the full shift that is shift-invariant (that is, a subspace that is invariant under the action of the shift operator), non-empty, and closed for the product topology defined below. Some subshifts can be characterized by a transition matrix, as above; such subshifts are then called subshifts of finite type. Often, subshifts of finite type are called simply shifts of finite type. Subshifts of finite type are also sometimes cal
|
https://en.wikipedia.org/wiki/Marcel%20J.%20E.%20Golay
|
Marcel Jules Edouard Golay (; May 3, 1902 – April 27, 1989) was a Swiss mathematician, physicist, and information theorist, who applied mathematics to real-world military and industrial problems. He was born in Neuchâtel, Switzerland.
Career
Golay studied electrical engineering at the Eidgenössische Technische Hochschule (Swiss Federal Institute of Technology) in Zürich. He joined Bell laboratories in New York City in 1924, spending four years there. He received a Ph.D. in physics from the University of Chicago in 1931.
Golay then joined the US Army Signal Corps, eventually rising to the post of Chief Scientist. He was based mostly at Fort Monmouth, New Jersey. He developed an infrared "radar", the SCR-268T, based on his Golay detector and specifically designed for the detection of vessels (the S/S Normandie was detected on its inaugural crossing). The SCR-268 (using Barkhausen vacuum tubes) and the SCR-268T were intended to work together; however, the SCR-268T, used only in the Pacific theater, was abandoned before the end of the war.
Between 1955 and 1963, Golay was a consultant for the Philco Corporation of Philadelphia, PA, and the Perkin-Elmer Corporation of Norwalk, Connecticut. In 1963, Golay joined Perkin-Elmer full-time as senior research scientist. Golay worked on many problems, including gas chromatography and optical spectroscopy. It was during this period that he patented an "Analysis of Images" method for two-dimensional parallel data processing, and worked to develop the idea, called "Golay Logic for Optical Pattern Recognition", along with Kendall Preston, Philip Norgren, David Dacava and Joseph Carvalko, Jr. He remained with Perkin-Elmer for the rest of his life.
Achievements
Discoverer of the famous binary and ternary Golay codes, which are perfect error-correcting codes that generalize the Hamming code. They were used in the Voyager probes, and led to advances in the theory of finite groups.
Co-author with Abraham Savitzky of the Savitzky–Golay filter.
Inventor of the Golay cell, a type of infrared detector.
He introduced complementary sequences. Those are pairs of binary sequences whose autocorrelation functions add up to zero for all non-zero time shifts. Today they are used in various WiFi and 3G standards.
He introduced the theory of dispersion in open tubular columns (capillary columns) and demonstrated their efficacy at the Second International Symposium on Gas Chromatography at Amsterdam in 1958.
Significant bibliography
References
External links
Bibliography of writings by and about Marcel Golay, including awards. Compiled by his daughter, Nona Golay Bloomer (2007).
Reprints of papers are in the Archives of the Science History Institute:
1902 births
1989 deaths
Swiss mathematicians
20th-century American engineers
American information theorists
Members of the French Academy of Sciences
United States Army officers
20th-century American mathematicians
University of Chicago alumni
ETH Zurich alumni
Swiss emigrants to the United States
20th-cent
|
https://en.wikipedia.org/wiki/Euclidean%20plane%20isometry
|
In geometry, a Euclidean plane isometry is an isometry of the Euclidean plane, or more informally, a way of transforming the plane that preserves geometrical properties such as length. There are four types: translations, rotations, reflections, and glide reflections (see below under Classification).
The set of Euclidean plane isometries forms a group under composition: the Euclidean group in two dimensions. It is generated by reflections in lines, and every element of the Euclidean group is the composite of at most three distinct reflections.
Informal discussion
Informally, a Euclidean plane isometry is any way of transforming the plane without "deforming" it. For example, suppose that the Euclidean plane is represented by a sheet of transparent plastic sitting on a desk. Examples of isometries include:
Shifting the sheet one inch to the right.
Rotating the sheet by ten degrees around some marked point (which remains motionless).
Turning the sheet over to look at it from behind. Notice that if a picture is drawn on one side of the sheet, then after turning the sheet over, we see the mirror image of the picture.
These are examples of translations, rotations, and reflections respectively. There is one further type of isometry, called a glide reflection (see below under classification of Euclidean plane isometries).
However, folding, cutting, or melting the sheet are not considered isometries. Neither are less drastic alterations like bending, stretching, or twisting.
Formal definition
An isometry of the Euclidean plane is a distance-preserving transformation of the plane. That is, it is a map
such that for any points p and q in the plane,
where d(p, q) is the usual Euclidean distance between p and q.
Classification
It can be shown that there are four types of Euclidean plane isometries. (Note: the notations for the types of isometries listed below are not completely standardised.)
Reflections
Reflections, or mirror isometries, denoted by Fc,v, where c is a point in the plane and v is a unit vector in R2 (F is for "flip"), have the effect of reflecting the point p in the line L that is perpendicular to v and that passes through c. The line L is called the reflection axis or the associated mirror. To find a formula for Fc,v, we first use the dot product to find the component t of p − c in the v direction,
and then we obtain the reflection of p by subtraction,
The combination of rotations about the origin and reflections about a line through the origin is obtained with all orthogonal matrices (i.e. with determinant 1 or −1) forming the orthogonal group O(2). In the case of a determinant of −1 we have:
which is a reflection in the x-axis followed by a rotation by an angle θ, or equivalently, a reflection in a line making an angle of θ/2 with the x-axis. Reflection in a parallel line corresponds to adding a vector perpendicular to it.
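The reflection formula described above — compute the component t = (p − c)·v and subtract twice it along v — can be sketched numerically. The function name and example parameters are illustrative.

```python
# Reflection F_{c,v}(p): with t = (p - c).v, the image is p - 2*t*v, i.e.
# the reflection of p in the line through c perpendicular to the unit vector v.
def reflect(p, c, v):
    t = (p[0] - c[0]) * v[0] + (p[1] - c[1]) * v[1]   # component along v
    return (p[0] - 2 * t * v[0], p[1] - 2 * t * v[1])

# Reflection in the y-axis: c = origin, v = (1, 0), so the mirror is x = 0.
print(reflect((3.0, 2.0), (0.0, 0.0), (1.0, 0.0)))  # (-3.0, 2.0)
```

As a sanity check, applying the reflection twice returns the original point, as expected of an involution.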
Translations
Translations, denoted by Tv, where v is a vector in R2 have the effect of shifting the plane in t
|
https://en.wikipedia.org/wiki/David%20Cox%20%28statistician%29
|
Sir David Roxbee Cox (15 July 1924 – 18 January 2022) was a British statistician and educator. His wide-ranging contributions to the field of statistics included introducing logistic regression, the proportional hazards model and the Cox process, a point process named after him.
He was a professor of statistics at Birkbeck College, London, Imperial College London and the University of Oxford, and served as Warden of Nuffield College, Oxford. The first recipient of the International Prize in Statistics, he also received the Guy, George Box and Copley medals, as well as a knighthood.
Early life
Cox was born in Birmingham on 15 July 1924. His father was a die sinker and part-owner of a jewellery shop, and they lived near the Jewellery Quarter. The aeronautical engineer Harold Roxbee Cox was a distant cousin. He attended Handsworth Grammar School, Birmingham. He received a Master of Arts in mathematics at St John's College, Cambridge, and obtained his PhD from the University of Leeds in 1949, advised by Henry Daniels and Bernard Welch. His dissertation was entitled Theory of Fibre Motion.
Career
Cox was employed from 1944 to 1946 at the Royal Aircraft Establishment, from 1946 to 1950 at the Wool Industries Research Association in Leeds, and from 1950 to 1955 worked at the Statistical Laboratory at the University of Cambridge. From 1956 to 1966 he was Reader and then Professor of Statistics at Birkbeck College, London. In 1966, he took up the Chair position in Statistics at Imperial College London where he later became head of the mathematics department. In 1988 he became Warden of Nuffield College and a member of the Department of Statistics at Oxford University. He formally retired from these positions in 1994, but continued to work at Oxford.
Cox supervised, collaborated with, and encouraged many notable researchers prominent in statistics. He collaborated with George Box on a study of transformations such as the Box–Cox transformation and they were especially delighted to be credited as Box and Cox. He was the doctoral advisor of David Hinkley, Peter McCullagh, Basilio de Bragança Pereira, Wally Smith, Gauss Moutinho Cordeiro, Valerie Isham, Henry Wynn and Jane Hutton. He served as president of the Bernoulli Society from 1979 to 1981, of the Royal Statistical Society from 1980 to 1982, and of the International Statistical Institute from 1995 to 1997. He was an Honorary Fellow of Nuffield College and St John's College, Cambridge, and was a member of the Department of Statistics at the University of Oxford.
Personal life
In 1947, Cox married Joyce Drummond, and they had four children. He died on 18 January 2022, at the age of 97.
Research
Cox made pioneering and important contributions to numerous areas of statistics and applied probability, of which the best known are:
Logistic regression, which is employed when the variable to be predicted is categorical (i.e., can take a limited number of values, e.g., gender, race (in the US census)),
|
https://en.wikipedia.org/wiki/Self-descriptive%20number
|
In mathematics, a self-descriptive number is an integer m that in a given base b is b digits long in which each digit d at position n (the most significant digit being at position 0 and the least significant at position b−1) counts how many instances of digit n are in m.
Example
For example, in base 10, the number 6210001000 is self-descriptive because of the following reasons:
In base 10, the number has 10 digits, indicating its base;
It contains 6 at position 0, indicating that there are six 0s in 6210001000;
It contains 2 at position 1, indicating that there are two 1s in 6210001000;
It contains 1 at position 2, indicating that there is one 2 in 6210001000;
It contains 0 at position 3, indicating that there is no 3 in 6210001000;
It contains 0 at position 4, indicating that there is no 4 in 6210001000;
It contains 0 at position 5, indicating that there is no 5 in 6210001000;
It contains 1 at position 6, indicating that there is one 6 in 6210001000;
It contains 0 at position 7, indicating that there is no 7 in 6210001000;
It contains 0 at position 8, indicating that there is no 8 in 6210001000;
It contains 0 at position 9, indicating that there is no 9 in 6210001000.
In different bases
There are no self-descriptive numbers in bases 2, 3 or 6. In bases 7 and above, there is, if nothing else, a self-descriptive number of the form , which has b−4 instances of the digit 0, two instances of the digit 1, one instance of the digit 2, one instance of digit b – 4, and no instances of any other digits. The following table lists some self-descriptive numbers in a few selected bases:
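The defining property, and the general construction for bases b ≥ 7 described above (b − 4 zeros, two 1s, one 2, one digit b − 4), can be verified directly. The function names are illustrative.

```python
# A list of digits (position 0 = most significant) is self-descriptive when
# the digit at position n counts the occurrences of digit n.
def is_self_descriptive(digits):
    b = len(digits)                     # the number is b digits long in base b
    return all(digits[n] == digits.count(n) for n in range(b))

print(is_self_descriptive([6, 2, 1, 0, 0, 0, 1, 0, 0, 0]))  # 6210001000: True

def canonical(b):
    """The base-b self-descriptive number of the form described above, b >= 7."""
    digits = [0] * b
    digits[0] = b - 4   # there are b-4 zeros
    digits[1] = 2       # two 1s
    digits[2] = 1       # one 2
    digits[b - 4] = 1   # one instance of the digit b-4 (at position 0)
    return digits

print(all(is_self_descriptive(canonical(b)) for b in range(7, 16)))  # True
```

For b = 10 the construction reproduces the digits of 6210001000.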
Properties
From the numbers listed in the table, it would seem that all self-descriptive numbers have digit sums equal to their base, and that they're multiples of that base. The first fact follows trivially from the fact that the digit sum equals the total number of digits, which is equal to the base, from the definition of self-descriptive number.
That a self-descriptive number in base b must be a multiple of that base (or equivalently, that the last digit of the self-descriptive number must be 0) can be proven by contradiction as follows: assume that there is in fact a self-descriptive number m in base b that is b-digits long but not a multiple of b. The digit at position b – 1 must be at least 1, meaning that there is at least one instance of the digit b – 1 in m. At whatever position x that digit b – 1 falls, there must be at least b – 1 instances of digit x in m. Therefore, we have at least one instance of the digit 1, and b – 1 instances of x. If x > 1, then m has more than b digits, leading to a contradiction of our initial statement. And if x = 0 or 1, that also leads to a contradiction.
It follows that a self-descriptive number in base b is a Harshad number in base b.
Autobiographical numbers
A generalization of the self-descriptive numbers, called the autobiographical numbers, allow fewer digits than the base, as long as the digits that are included in the n
|
https://en.wikipedia.org/wiki/Frege%27s%20propositional%20calculus
|
In mathematical logic, Frege's propositional calculus was the first axiomatization of propositional calculus. It was invented by Gottlob Frege, who also invented predicate calculus, in 1879 as part of his second-order predicate calculus (although Charles Peirce was the first to use the term "second-order" and developed his own version of the predicate calculus independently of Frege).
It makes use of just two logical operators: implication and negation, and it is constituted by six axioms and one inference rule: modus ponens.
Frege's propositional calculus is equivalent to any other classical propositional calculus, such as the "standard PC" with 11 axioms. Frege's PC and standard PC share two common axioms: THEN-1 and THEN-2. Notice that axioms THEN-1 through THEN-3 only make use of (and define) the implication operator, whereas axioms FRG-1 through FRG-3 define the negation operator.
The following theorems will aim to find the remaining nine axioms of standard PC within the "theorem-space" of Frege's PC, showing that the theory of standard PC is contained within the theory of Frege's PC.
(A theory, also called here, for figurative purposes, a "theorem-space", is a set of theorems that are a subset of a universal set of well-formed formulas. The theorems are linked to each other in a directed manner by inference rules, forming a sort of dendritic network. At the roots of the theorem-space are found the axioms, which "generate" the theorem-space much like a generating set generates a group.)
Rules
Theorems
Note: ¬(A→¬B)→A (TH4), ¬(A→¬B)→B (TH6), and A→(B→¬(A→¬B)) (TH10), so ¬(A→¬B) behaves just like A∧B (compare with axioms AND-1, AND-2, and AND-3).
TH11 is axiom NOT-1 of standard PC, called "reductio ad absurdum".
Theorem TH15 is the converse of axiom THEN-2.
Compare TH17 with theorem TH5.
Note: A→((A→B)→B) (TH8), B→((A→B)→B) (TH9), and
(A→C)→((B→C)→(((A→B)→B)→C)) (TH19), so ((A→B)→B) behaves just like A∨B. (Compare with axioms OR-1, OR-2, and OR-3.)
TH20 corresponds to axiom NOT-3 of standard PC, called "tertium non datur".
TH21 corresponds to axiom NOT-2 of standard PC, called "ex contradictione quodlibet".
All the axioms of standard PC have been derived from Frege's PC, after having let
A∧B := ¬(A→¬B) and A∨B := (A→B)→B. These expressions are not unique, e.g. A∨B could also have been defined as (B→A)→A, ¬A→B, or ¬B→A. Notice, though, that the definition A∨B := (A→B)→B contains no negations. On the other hand, A∧B cannot be defined in terms of implication alone, without using negation.
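That these definitions really do behave as conjunction and disjunction can be confirmed by a truth-table check over implication and negation alone:

```python
# Verify by truth table that, with imp and negation only,
#   A∧B := ¬(A→¬B)   and   A∨B := (A→B)→B
# agree with ordinary conjunction and disjunction on all valuations.
from itertools import product

imp = lambda a, b: (not a) or b   # material implication

for A, B in product([False, True], repeat=2):
    and_def = not imp(A, not B)   # ¬(A→¬B)
    or_def = imp(imp(A, B), B)    # (A→B)→B
    assert and_def == (A and B)
    assert or_def == (A or B)
print("definitions agree with conjunction and disjunction on all valuations")
```

The same check confirms the alternative disjunction definitions mentioned above, such as ¬A→B.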
In a sense, the expressions A∧B and A∨B can be thought of as "black boxes". Inside, these black boxes contain formulas made up only of implication and negation. The black boxes can contain anything, as long as when plugged into the AND-1 through AND-3 and OR-1 through OR-3 axioms of standard PC the axioms remain true. These axioms provide complete syntactic definitions of the conjunction and disjunction operators.
The next set of t
|
https://en.wikipedia.org/wiki/Normed%20algebra
|
In mathematics, a normed algebra A is an algebra over a field which has a sub-multiplicative norm:
Some authors require it to have a multiplicative identity 1 such that ║1║ = 1.
See also
Banach algebra
Composition algebra
Division algebra
Gelfand–Mazur theorem
Hurwitz's theorem (composition algebras)
External reading
Algebras
|
https://en.wikipedia.org/wiki/Clelia
|
Clelia may refer to:
Clelia (given name) (includes a list of people with the name)
Cloelia, a legendary Roman figure
Clelia curve, an algebraic curve
Clelia (snake genus), a genus of snakes
|
https://en.wikipedia.org/wiki/Pohlig%E2%80%93Hellman%20algorithm
|
In group theory, the Pohlig–Hellman algorithm, sometimes credited as the Silver–Pohlig–Hellman algorithm, is a special-purpose algorithm for computing discrete logarithms in a finite abelian group whose order is a smooth integer.
The algorithm was introduced by Roland Silver, but first published by Stephen Pohlig and Martin Hellman, who worked independently of Silver.
Groups of prime-power order
As an important special case, which is used as a subroutine in the general algorithm (see below), the Pohlig–Hellman algorithm applies to groups whose order is a prime power. The basic idea of this algorithm is to iteratively compute the -adic digits of the logarithm by repeatedly "shifting out" all but one unknown digit in the exponent, and computing that digit by elementary methods.
(Note that for readability, the algorithm is stated for cyclic groups — in general, must be replaced by the subgroup generated by , which is always cyclic.)
Input. A cyclic group of order with generator and an element .
Output. The unique integer such that .
Initialize
Compute . By Lagrange's theorem, this element has order .
For all , do:
Compute . By construction, the order of this element must divide , hence .
Using the baby-step giant-step algorithm, compute such that . It takes time .
Set .
Return .
Assuming is much smaller than , the algorithm computes discrete logarithms in time complexity , far better than the baby-step giant-step algorithm's .
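The digit-by-digit procedure above can be sketched in the multiplicative group modulo a prime. For brevity, this sketch finds each p-adic digit by brute force rather than baby-step giant-step, and the choice of group is illustrative.

```python
# Pohlig-Hellman, prime-power case: solve g^x = h (mod `mod`) where g
# generates a cyclic subgroup of order p**e. Inner digit search is brute
# force here instead of baby-step giant-step, to keep the sketch short.
def pohlig_hellman_prime_power(g, h, p, e, mod):
    n = p ** e
    gamma = pow(g, p ** (e - 1), mod)   # element of order p (Lagrange)
    x = 0
    for k in range(e):
        # strip the digits found so far, then project onto the order-p subgroup
        hk = pow(pow(g, n - x, mod) * h % mod, p ** (e - 1 - k), mod)
        # find the k-th p-adic digit d with gamma^d == hk
        d = next(d for d in range(p) if pow(gamma, d, mod) == hk)
        x += d * p ** k
    return x

# Z_17^* is cyclic of order 16 = 2^4, generated by 3:
print(pohlig_hellman_prime_power(3, pow(3, 13, 17), 2, 4, 17))  # 13
```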
The general algorithm
In this section, we present the general case of the Pohlig–Hellman algorithm. The core ingredients are the algorithm from the previous section (to compute a logarithm modulo each prime power in the group order) and the Chinese remainder theorem (to combine these to a logarithm in the full group).
(Again, we assume the group to be cyclic, with the understanding that a non-cyclic group must be replaced by the subgroup generated by the logarithm's base element.)
Input. A cyclic group of order with generator , an element , and a prime factorization .
Output. The unique integer such that .
For each , do:
Compute . By Lagrange's theorem, this element has order .
Compute . By construction, .
Using the algorithm above in the group , compute such that .
Solve the simultaneous congruence The Chinese remainder theorem guarantees there exists a unique solution .
Return .
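The general procedure — a logarithm per prime-power factor, combined by the Chinese remainder theorem — can be sketched as follows. The per-factor logarithms are found by brute force here to keep the example short; in practice the prime-power routine above would be used.

```python
# General Pohlig-Hellman: solve g^x = h (mod `mod`), given the order of g
# and its prime factorization as [(p1, e1), (p2, e2), ...].
def crt(residues, moduli):
    """Chinese remainder theorem for pairwise coprime moduli."""
    x, m = 0, 1
    for r, n in zip(residues, moduli):
        t = ((r - x) * pow(m, -1, n)) % n   # solve x + m*t == r (mod n)
        x += m * t
        m *= n
    return x

def pohlig_hellman(g, h, order, factorization, mod):
    residues, moduli = [], []
    for p, e in factorization:
        pe = p ** e
        gi = pow(g, order // pe, mod)   # generator of the order-p**e part
        hi = pow(h, order // pe, mod)   # by construction, hi = gi^(x mod p**e)
        xi = next(x for x in range(pe) if pow(gi, x, mod) == hi)
        residues.append(xi)
        moduli.append(pe)
    return crt(residues, moduli)

# Z_31^* has smooth order 30 = 2 * 3 * 5, generated by 3:
print(pohlig_hellman(3, pow(3, 17, 31), 30, [(2, 1), (3, 1), (5, 1)], 31))  # 17
```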
The correctness of this algorithm can be verified via the classification of finite abelian groups: Raising and to the power of can be understood as the projection to the factor group of order .
Complexity
The worst-case input for the Pohlig–Hellman algorithm is a group of prime order: In that case, it degrades to the baby-step giant-step algorithm, hence the worst-case time complexity is . However, it is much more efficient if the order is smooth: Specifically, if is the prime factorization of , then the algorithm's complexity is group operations.
Notes
References
Number theoretic algorithms
|
https://en.wikipedia.org/wiki/Subtended%20angle
|
In geometry, an angle is subtended by an arc, line segment or any other section of a curve when its two rays pass through the endpoints of that arc, line segment or curve section. Conversely, the arc, line segment or curve section confined within the rays of an angle is regarded as the corresponding subtension of that angle. It is also sometimes said that an arc is intercepted or enclosed by that angle.
The precise meaning varies with context. For example, one may speak of the angle subtended by an arc of a circle when the angle's vertex is the centre of the circle.
See also
Central angle
Inscribed angle
External links
Definition of subtended angle, mathisfun.com, with interactive applet
How an object subtends an angle, Math Open Reference, with interactive applet
Angle definition pages, Math Open Reference, with interactive applets that are also useful in a classroom setting.
Angle
|
https://en.wikipedia.org/wiki/George%20Blakley
|
George Robert (Bob) Blakley Jr. was an American cryptographer and a professor of mathematics at Texas A&M University, best known for inventing a secret sharing scheme in 1979 (see ).
Biography
Blakley did his undergraduate studies in physics at Georgetown University, and received his Ph.D. in mathematics from the University of Maryland in 1960. After postdoctoral studies at Cornell University and Harvard University, he held faculty positions at the University of Illinois at Urbana–Champaign and the State University of New York at Buffalo before joining Texas A&M in 1970. At Texas A&M, he was chairman of the mathematics department from 1970 to 1978.
Blakley served on the board of directors of the International Association for Cryptologic Research from 1993 to 1995. He co-founded the International Journal of Information Security, published by Springer-Verlag, in 2000, and then served on its advisory board.
His son, George Robert (Bob) Blakley III, is also a computer security researcher.
Secret-sharing scheme
In order to split a secret into several shares, Blakley's scheme specifies the secret as a point in n-dimensional space, and gives out shares that correspond to hyperplanes that intersect the secret point. Any n such hyperplanes will specify the point, while fewer than n hyperplanes will leave at least one degree of freedom, and thus leave the point unspecified.
In contrast, Shamir's secret sharing scheme represents the secret as the y-intercept of an n-degree polynomial, and shares correspond to points on the polynomial.
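A toy sketch of Blakley's idea in two dimensions over a prime field: the secret is one coordinate of a point, each share is a random line through the point, and any two non-parallel lines determine it. The modulus, function names, and 2-D setting are illustrative simplifications of the n-dimensional scheme.

```python
# Blakley-style sharing, 2-D toy version: shares are lines a*x + b*y = c
# (mod P) through the secret point; two non-parallel shares recover it.
from random import randrange

P = 7919  # a prime modulus (illustrative choice)

def make_share(secret_point):
    """A random line through the secret point, as coefficients (a, b, c)."""
    x, y = secret_point
    a, b = randrange(1, P), randrange(1, P)
    return (a, b, (a * x + b * y) % P)

def recover(share1, share2):
    """Intersect two lines mod P by Cramer's rule (assumes non-parallel)."""
    a1, b1, c1 = share1
    a2, b2, c2 = share2
    inv = pow((a1 * b2 - a2 * b1) % P, -1, P)
    x = (c1 * b2 - c2 * b1) * inv % P
    y = (a1 * c2 - a2 * c1) * inv % P
    return (x, y)

secret = (1234, randrange(P))          # the secret is the x-coordinate 1234
s1 = make_share(secret)
s2 = make_share(secret)
while (s1[0] * s2[1] - s2[0] * s1[1]) % P == 0:
    s2 = make_share(secret)            # re-draw in the rare parallel case
print(recover(s1, s2)[0])              # 1234
```

A single share leaves one degree of freedom (a whole line of candidate points), matching the description above: fewer than n hyperplanes leave the point unspecified.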
Awards and honors
In 2001 Blakley received an honorary doctorate from Queensland University of Technology.
In 2009 he was named a fellow of the International Association for Cryptologic Research.
References
American cryptographers
Modern cryptographers
Georgetown University College of Arts & Sciences alumni
University of Maryland, College Park alumni
University of Illinois Urbana-Champaign faculty
University at Buffalo faculty
Texas A&M University faculty
20th-century American mathematicians
21st-century American mathematicians
Number theorists
International Association for Cryptologic Research fellows
2018 deaths
|