https://en.wikipedia.org/wiki/Beira%20Airport
|
Beira Airport is an airport in Beira, Mozambique. It has three asphalt runways.
Airlines and destinations
Statistics
References
External links
Airports in Mozambique
Buildings and structures in Beira, Mozambique
Buildings and structures in Sofala Province
|
https://en.wikipedia.org/wiki/Nampula%20Airport
|
Nampula Airport is an airport in Nampula, in the northeastern part of Mozambique. It has two paved runways.
Airlines and destinations
Statistics
References
External links
Mozambique Airport Authority
Airports in Mozambique
Buildings and structures in Nampula
|
https://en.wikipedia.org/wiki/Pemba%20Airport%20%28Mozambique%29
|
Pemba Airport is a small international airport in Pemba, Mozambique.
Airlines and destinations
Passenger
Cargo
Statistics
References
Airports in Mozambique
Buildings and structures in Cabo Delgado Province
|
https://en.wikipedia.org/wiki/Quelimane%20Airport
|
Quelimane Airport is an airport in Quelimane, Mozambique.
Airlines and destinations
Statistics
Accidents and incidents
On 23 February 1944, a Lockheed L-14 (CR-AAV) of DETA - Direcção de Exploração de Transportes Aéreos crashed on takeoff at Quelimane Airport, killing all 13 on board.
On 27 March 1983, a Boeing 737-200 (C9-BAB) of LAM Mozambique Airlines suffered an undercarriage failure after landing some 400 metres (1,300 ft) short of the runway at Quelimane Airport. All 110 on board survived.
On 21 April 1988, a Douglas C-47A (N47FE) of African Air Carriers was damaged beyond economic repair in a take-off accident. Both crew members were killed, and one other person on board was seriously injured. The aircraft may have been shot down.
References
Airports in Mozambique
Buildings and structures in Zambezia Province
|
https://en.wikipedia.org/wiki/Benin%20Airport
|
Benin Airport is an airport serving Benin City, the capital of Edo State in Nigeria. The runway is in the middle of the city.
Airlines and destinations
Statistics
See also
Transport in Nigeria
List of airports in Nigeria
List of the busiest airports in Africa
References
External links
SkyVector Aeronautical Charts
OurAirports - Benin
Airports in Nigeria
Benin City
|
https://en.wikipedia.org/wiki/Milnor%20conjecture%20%28topology%29
|
In knot theory, the Milnor conjecture says that the slice genus of the (p, q) torus knot is (p − 1)(q − 1)/2.
It is in a similar vein to the Thom conjecture.
It was first proved by gauge theoretic methods by Peter Kronheimer and Tomasz Mrowka. Jacob Rasmussen later gave a purely combinatorial proof using Khovanov homology, by means of the s-invariant.
References
Geometric topology
Knot theory
4-manifolds
Conjectures that have been proved
|
https://en.wikipedia.org/wiki/List%20of%20Swindon%20Town%20F.C.%20records%20and%20statistics
|
This page details Swindon Town Football Club records.
Player records
Appearances
Youngest first-team player – Paul Rideout, 16 years 107 days (v. Hull City, 29 November 1980)
Most appearances
As of 1 February 2007. (Former players only, competitive matches only, includes appearances as substitute):
Goalscorers
Most goals in a season – 48, Harry Morris (1926–27)
Most League goals in a season – 47, Harry Morris (1926–27)
Most goals in a single match – 5
Harry Morris (v. Queens Park Rangers, Third Division South, 18 December 1926)
Harry Morris (v. Norwich City, Third Division South, 26 April 1930)
Keith East (v. Mansfield Town, Third Division, 20 November 1965)
Most goals in the League – 216, Harry Morris
Top scorers
As of 18 November 2006 (competitive matches only):
Club records
Wins
Most League wins in a season – 32 in 46 matches, Fourth Division, 1985–86
Fewest League wins in a season –
0 in 16 matches, Western League, 1901–02
2 in 16 matches, Western League, 1900–01
2 in 30 matches, Southern League First Division, 1901–02
Defeats
Most League defeats in a season – 26 in 46 matches, First Division, 1999–2000
Fewest League defeats in a season –
2 in 8 matches, Western League, 1898–99
4 in 46 matches, Second Division, 1995–96
Goals
Most League goals scored in a season – 100 in 42 matches, Third Division South, 1926–27
Fewest League goals scored in a season –
7 in 6 matches, Western League, 1899–1900
17 in 30 matches, Southern League First Division, 1901–02
Most League goals conceded in a season – 105 in 42 matches, Third Division South, 1932–33
Fewest League goals conceded in a season –
7 in 6 matches, Western League, 1899–1900
31 in 38 matches, Southern League First Division, 1910–11
Points
Most points in a League season (2 for a win) – 64 in 46 matches, Third Division, 1968–69
Most points in a League season (3 for a win) – 102 in 46 matches, Fourth Division, 1985–86
Matches
Firsts
First match
(Unofficial history) – v. Rovers F.C., Friendly, 29 November 1879 (lost 4–0)
(Official history) – v. St. Mark's Young Men's Friendly Society, Friendly, 12 November 1881 (drew 2–2)
First FA Cup match (pre-qualifying) – v. Watford Rovers, First Round, 23 October 1886 (won 1–0)
First FA Cup match (proper) – v. Brighton and Hove Albion, First Round, 13 January 1906 (lost 3–0)
First League match – v. Reading, Southern League First Division, 22 September 1894 (lost 4–3)
First European match – v. A.S. Roma, Anglo-Italian League Cup, 27 August 1969 (lost 2–1)
First League Cup match – v. Shrewsbury Town, 12 October 1960 (won 2–1)
Record wins
Record League win – 9–1 (home v. Luton Town, Third Division South, 28 August 1920)
Record FA Cup win – 10–1 (away v. Farnham United Breweries, 28 November 1925)
Record defeats
Record League defeat – 0–8 (away v. Loughborough, Second Division, 12 December 1896)
Record FA Cup defeat – 1–10 (away v. Manchester City, 25 January 1930)
Transfers
Record transfer fee received – £4,000,000 from Q.P.
|
https://en.wikipedia.org/wiki/Qualitative%20variation
|
An index of qualitative variation (IQV) is a measure of statistical dispersion in nominal distributions. There are a variety of these, but they have been relatively little-studied in the statistics literature. The simplest is the variation ratio, while more complex indices include the information entropy.
Properties
There are several types of indices used for the analysis of nominal data. Several are standard statistics that are used elsewhere - range, standard deviation, variance, mean deviation, coefficient of variation, median absolute deviation, interquartile range and quartile deviation.
In addition to these, several statistics have been developed with nominal data in mind. A number have been summarized and devised by Wilcox, who requires the following standardization properties to be satisfied:
Variation varies between 0 and 1.
Variation is 0 if and only if all cases belong to a single category.
Variation is 1 if and only if cases are evenly divided across all categories.
In particular, the value of these standardized indices does not depend on the number of categories or number of samples.
For any such index, the closer the distribution is to uniform, the larger the value, and the larger the differences in frequencies across categories, the smaller the value.
Indices of qualitative variation are then analogous to information entropy, which is minimized when all cases belong to a single category and maximized in a uniform distribution. Indeed, information entropy can be used as an index of qualitative variation.
One characterization of a particular index of qualitative variation (IQV) is as a ratio of observed differences to maximum differences.
Wilcox's indexes
Wilcox gives a number of formulae for various indices of QV. The first, which he designates DM for "Deviation from the Mode", is a standardized form of the variation ratio and is analogous to variance as deviation from the mean.
ModVR
The formula for the variation around the mode (ModVR) is derived as follows:
M = Σ (fm − fi), summed over the K categories,
where fm is the modal frequency, K is the number of categories and fi is the frequency of the ith group.
This can be simplified to
M = K·fm − N
where N is the total size of the sample.
Freeman's index (or variation ratio) is
v = 1 − fm/N
This is related to M as follows:
v = 1 − (M + N)/(K·N)
The ModVR is defined as
ModVR = 1 − M/(N(K − 1)) = K(N − fm)/(N(K − 1)) = K·v/(K − 1)
where v is Freeman's index.
Low values of ModVR correspond to small amounts of variation and high values to larger amounts of variation.
When K is large, ModVR is approximately equal to Freeman's index v.
RanVR
This is based on the range around the mode. It is defined to be
RanVR = 1 − (fm − fl)/fm = fl/fm
where fm is the modal frequency and fl is the lowest frequency.
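As a quick numerical check of these definitions, a short Python sketch with made-up category frequencies (the function names and the `freqs` values are illustrative, not from any source):

```python
def freeman_v(freqs):
    """Freeman's variation ratio: v = 1 - fm/N."""
    n = sum(freqs)
    return 1 - max(freqs) / n

def mod_vr(freqs):
    """Variation around the mode: ModVR = K(N - fm) / (N(K - 1))."""
    n, k, fm = sum(freqs), len(freqs), max(freqs)
    return k * (n - fm) / (n * (k - 1))

def ran_vr(freqs):
    """Range around the mode: RanVR = fl/fm."""
    return min(freqs) / max(freqs)

# Illustrative nominal sample: frequencies 10, 5, 3, 2 (N = 20, K = 4)
freqs = [10, 5, 3, 2]
v = freeman_v(freqs)       # 1 - 10/20 = 0.5
m = mod_vr(freqs)          # 4 * 10 / (20 * 3) = 2/3, which equals K*v/(K-1)
r = ran_vr(freqs)          # 2/10 = 0.2
print(v, m, r)
```

With K = 4 the ratio K/(K − 1) = 4/3 is still noticeably above 1; as K grows, ModVR approaches Freeman's v, as stated above.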
AvDev
This is an analog of the mean deviation. It is defined as the arithmetic mean of the absolute differences of each value from the mean.
MNDif
This is an analog of the mean difference - the average of the differences of all the possible pairs of variate values, taken regardless of sign. The mean difference differs from the mean and standard
|
https://en.wikipedia.org/wiki/Deviation%20%28statistics%29
|
In mathematics and statistics, deviation is a measure of difference between the observed value of a variable and some other value, often that variable's mean. The sign of the deviation reports the direction of that difference (the deviation is positive when the observed value exceeds the reference value). The magnitude of the value indicates the size of the difference.
Types
A deviation that is a difference between an observed value and the true value of a quantity of interest (where true value denotes the Expected Value, such as the population mean) is an error.
A deviation that is the difference between the observed value and an estimate of the true value (e.g. the sample mean; the Expected Value of a sample can be used as an estimate of the Expected Value of the population) is a residual. These concepts are applicable for data at the interval and ratio levels of measurement.
Unsigned or absolute deviation
In statistics, the absolute deviation of an element of a data set is the absolute difference between that element and a given point. Typically the deviation is reckoned from the central value, being construed as some type of average, most often the median or sometimes the mean of the data set:
Di = |xi − m(X)|
where
Di is the absolute deviation,
xi is the data element, and
m(X) is the chosen measure of central tendency of the data set (sometimes the mean, but most often the median).
Measures
Mean signed deviation
For an unbiased estimator, the average of the signed deviations across the entire set of all observations from the unobserved population parameter value averages zero over an arbitrarily large number of samples. However, by construction the average of signed deviations of values from the sample mean value is always zero, though the average signed deviation from another measure of central tendency, such as the sample median, need not be zero.
Dispersion
Statistics of the distribution of deviations are used as measures of statistical dispersion.
Standard deviation is the frequently used measure of dispersion: it uses squared deviations, and has desirable properties, but is not robust.
Average absolute deviation is the sum of the absolute values of the deviations divided by the number of observations.
Median absolute deviation is a robust statistic, which uses the median, not the mean, of absolute deviations.
Maximum absolute deviation is a highly non-robust measure, based on the largest absolute deviation from a fixed point.
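These dispersion measures can be computed directly with Python's standard library; a minimal sketch using an illustrative data set:

```python
from statistics import mean, median

data = [2, 4, 4, 4, 5, 5, 7, 9]
mu = mean(data)  # 5

# Mean signed deviation about the sample mean is zero by construction
mean_signed_dev = sum(x - mu for x in data) / len(data)

# Average absolute deviation about the mean: (3+1+1+1+0+0+2+4)/8 = 1.5
avg_abs_dev = sum(abs(x - mu) for x in data) / len(data)

# Median absolute deviation: median of absolute deviations from the median
med = median(data)  # 4.5
mad = median(abs(x - med) for x in data)

# Maximum absolute deviation about the mean
max_abs_dev = max(abs(x - mu) for x in data)

print(mean_signed_dev, avg_abs_dev, mad, max_abs_dev)
```

Note how the median absolute deviation (0.5) is unaffected by the outlying value 9, while the maximum absolute deviation (4) is determined entirely by it, illustrating the robustness contrast described above.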
Normalization
Deviations have units of the measurement scale (for instance, meters if measuring lengths).
One can nondimensionalize in two ways.
One way is by dividing by a measure of scale (statistical dispersion), most often either the population standard deviation, in standardizing, or the sample standard deviation, in studentizing (e.g., Studentized residual).
One can scale instead by location, not dispersion: the formula for a percent deviation is the observed value minus the accepted value, divided by the accepted value, multiplied by 100%.
|
https://en.wikipedia.org/wiki/Univariate%20distribution
|
In statistics, a univariate distribution is a probability distribution of only one random variable. This is in contrast to a multivariate distribution, the probability distribution of a random vector (consisting of multiple random variables).
Examples
One of the simplest examples of a discrete univariate distribution is the discrete uniform distribution, where all elements of a finite set are equally likely. It is the probability model for the outcomes of tossing a fair coin, rolling a fair die, etc. The univariate continuous uniform distribution on an interval [a, b] has the property that all sub-intervals of the same length are equally likely.
Other examples of discrete univariate distributions include the binomial, geometric, negative binomial, and Poisson distributions. At least 750 univariate discrete distributions have been reported in the literature.
Examples of commonly applied continuous univariate distributions include the normal distribution, Student's t distribution, the chi-squared distribution, the F distribution, and the exponential and gamma distributions.
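The fair-die and uniform-interval examples above can be made concrete in a short Python sketch (the function names are illustrative):

```python
from fractions import Fraction

# Discrete uniform distribution on a finite set: every outcome equally likely.
def discrete_uniform_pmf(outcomes):
    p = Fraction(1, len(outcomes))
    return {x: p for x in outcomes}

die = discrete_uniform_pmf([1, 2, 3, 4, 5, 6])
p_even = sum(p for x, p in die.items() if x % 2 == 0)  # 3/6 = 1/2

# Continuous uniform on [a, b]: P(c <= X <= d) depends only on the length d - c.
def uniform_interval_prob(a, b, c, d):
    c, d = max(a, c), min(b, d)  # clip the interval to the support [a, b]
    return max(Fraction(0), Fraction(d - c, b - a))

p1 = uniform_interval_prob(0, 10, 1, 3)  # length-2 sub-interval
p2 = uniform_interval_prob(0, 10, 6, 8)  # another length-2 sub-interval
print(p_even, p1, p2)
```

The two sub-interval probabilities come out equal (1/5), illustrating the defining property of the continuous uniform distribution stated above.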
See also
Univariate
Bivariate distribution
List of probability distributions
References
Further reading
Types of probability distributions
|
https://en.wikipedia.org/wiki/Glen%20Van%20Brummelen
|
Glen Robert Van Brummelen (born May 20, 1965) is a Canadian historian of mathematics specializing in historical applications of mathematics to astronomy.
He is president of the Canadian Society for History and Philosophy of Mathematics, and was a co-editor of Mathematics and the Historian's Craft: The Kenneth O. May Lectures (Springer, 2005).
Life
Van Brummelen earned his PhD degree from Simon Fraser University in 1993, and served as a professor of mathematics at Bennington College from 1999 to 2006. He then transferred to Quest University Canada as a founding faculty member. In 2020, he became the dean of the Faculty of Natural and Applied Sciences at Trinity Western University in Langley, BC.
Glen Van Brummelen has published the first major history in English of the origins and early development of trigonometry, The Mathematics of the Heavens and the Earth: The Early History of Trigonometry. His second book, Heavenly Mathematics: The Forgotten Art of Spherical Trigonometry, concerns spherical trigonometry.
He teaches courses on the history of mathematics and trigonometry at MathPath, specifically Heavenly Mathematics and Spherical Trigonometry. He is also well known for the "glensheep", a two-dimensional animal he coined at MathPath, and the "glenneagon", a variant on the enneagon (as well as, to a lesser extent, the glenelephant and the glenturtle).
Works
The Mathematics of the Heavens and the Earth: The Early History of Trigonometry. Princeton; Oxford: Princeton University Press, 2009.
Heavenly Mathematics: The Forgotten Art of Spherical Trigonometry. Princeton; Oxford: Princeton University Press, 2013.
Trigonometry: A Very Short Introduction. Oxford: Oxford University Press, 2020.
The Doctrine of Triangles: The History of Modern Trigonometry. Princeton; Oxford: Princeton University Press, 2021.
References
External links
Bio at Quest's Website
Homepage at Bennington College
Publication list
Trigonometry Book page
Academic staff of Trinity Western University
Canadian mathematicians
1965 births
Living people
Simon Fraser University alumni
Bennington College faculty
|
https://en.wikipedia.org/wiki/Ruppeiner%20geometry
|
Ruppeiner geometry is thermodynamic geometry (a type of information geometry) using the language of Riemannian geometry to study thermodynamics. George Ruppeiner proposed it in 1979. He claimed that thermodynamic systems can be represented by Riemannian geometry, and that statistical properties can be derived from the model.
This geometrical model is based on the inclusion of the theory of fluctuations into the axioms of equilibrium thermodynamics: there exist equilibrium states which can be represented by points on a two-dimensional surface (manifold), and the distance between these equilibrium states is related to the fluctuation between them. This concept is associated with probabilities, i.e. the less probable a fluctuation between states, the further apart they are. This can be recognized if one considers the metric tensor gij in the distance formula (line element) between the two equilibrium states
ds² = gij dxi dxj
where the matrix of coefficients gij is the symmetric metric tensor, called the Ruppeiner metric, defined as the negative Hessian of the entropy function,
gij = −∂i ∂j S(U, Na)
where U is the internal energy (mass) of the system and Na refers to the extensive parameters of the system. Mathematically, the Ruppeiner geometry is one particular type of information geometry and it is similar to the Fisher-Rao metric used in mathematical statistics.
The Ruppeiner metric can be understood as the thermodynamic limit (large systems limit) of the more general Fisher information metric. For small systems (systems where fluctuations are large), the Ruppeiner metric may not exist, as second derivatives of the entropy are not guaranteed to be non-negative.
The Ruppeiner metric is conformally related to the Weinhold metric via
ds²R = (1/T) ds²W
where T is the temperature of the system under consideration. The conformal relation can be proved by writing the first law of thermodynamics (dU = TdS + ...) in differential form and performing a few manipulations. The Weinhold geometry is also considered a thermodynamic geometry. It is defined as the Hessian of the internal energy with respect to entropy and the other extensive parameters.
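As a sketch of the quantities involved (a reconstruction from the definitions above, with coordinates x = (U, N^a) for the Ruppeiner metric and y = (S, N^a) for the Weinhold metric):

```latex
ds^2_R = -\frac{\partial^2 S}{\partial x^i \, \partial x^j}\, dx^i\, dx^j ,
\qquad x = (U, N^a),
\\[4pt]
ds^2_W = \frac{\partial^2 U}{\partial y^i \, \partial y^j}\, dy^i\, dy^j ,
\qquad y = (S, N^a),
\\[4pt]
ds^2_R = \frac{1}{T}\, ds^2_W .
```

The sign convention makes both line elements positive for thermodynamically stable states, since entropy is concave in (U, N^a) while internal energy is convex in (S, N^a).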
It has long been observed that the Ruppeiner metric is flat for systems with noninteracting underlying statistical mechanics such as the ideal gas. Curvature singularities signal critical behaviors. In addition, it has been applied to a number of statistical systems including Van der Waals gas. Recently the anyon gas has been studied using this approach.
Application to black hole systems
In the last five years or so, this geometry has been applied to black hole thermodynamics, with some physically relevant results. The most physically significant case is for the Kerr black hole in higher dimensions, where the curvature singularity signals thermodynamic instability, as found earlier by conventional methods.
The entropy of a black hole is given by the well-known Bekenstein–Hawking formula
S = kB c³ A / (4 ħ G)
where kB is Boltzmann's constant, c the speed of light, A the area of the event horizon, ħ the reduced Planck constant, and G Newton's gravitational constant.
|
https://en.wikipedia.org/wiki/Halmos%20College%20of%20Natural%20Sciences%20and%20Oceanography
|
The Halmos College of Natural Sciences and Oceanography is a natural science college at Nova Southeastern University in Florida. The college offers programs in subjects like biology and mathematics and conducts oceanographical research.
Degree Programs
The college offers multiple bachelor's, master's and doctoral programs.
B.S. in Biology
B.S. in Chemistry
B.S. in Marine Biology
B.S. in Mathematics
B.S. in Environmental Science/Studies
M.S. in Biological Sciences
M.S. in Marine Science
Ph.D. in Marine Biology/Oceanography
Facilities
The college has a presence at two campuses: in the Parker Building on the Fort Lauderdale/Davie Campus, and the Oceanographic Center, located on a site on the ocean side of Port Everglades, adjacent to the port's entrance. The center has a boat basin and affords immediate access to the Gulf Stream, the Florida Straits, and the Bahama Banks. The center is composed of three buildings, and several modulars. The main two-story building houses seven laboratories, conference rooms, workroom, and 13 offices. A second building contains a large two-story warehouse and staging area, classroom, biology laboratory, electron microscopy laboratory, darkroom, machine shop, carpentry shop, electronics laboratory, the library, student computer lab, computing center, and 15 offices. A one-story building contains a wetlab/classroom, coral workshop, and an X-ray facility. A modular laboratory is used for aquaculture studies. The Oceanographic Center grows and sells red mangroves.
Affiliated institutions
National Coral Reef Institute
The Oceanographic Center is host to the National Coral Reef Institute, which was established by Congressional mandate in 1998. NCRI's primary objective is the assessment, monitoring, and restoration of coral reefs through basic and applied research and through training and education.
Guy Harvey Research Institute
The Guy Harvey Research Institute conducts basic and applied scientific research needed for effective conservation, biodiversity maintenance, restoration, and understanding of the world's wild fishes. The GHRI also provides scientific training to US and international students interested in ocean health. The GHRI is named for the Jamaican artist Guy Harvey, known for his marine-themed works.
Broward County, Florida Sea Turtle Conservation
The college supplies the Broward County Sea Turtle Conservation Program with contract employees and research facilities.
References
External links
Official website
Nova Southeastern University
Oceanographic organizations
Universities and colleges in Broward County, Florida
|
https://en.wikipedia.org/wiki/Azaz
|
Azaz () is a city in northwest Syria, north-northwest of Aleppo. According to the Syria Central Bureau of Statistics (CBS), Azaz had a population of 31,623 in the 2004 census. Its inhabitants were almost entirely Sunni Muslims, mostly Arabs but also some Kurds and Turkmen.
It is historically significant as the site of the Battle of Azaz between the Crusader States and the Seljuk Turks on June 11, 1125. It is close to a Syria–Turkey border crossing, which enters Turkey at Öncüpınar, south of the city of Kilis. It is the capital of the Syrian Interim Government.
History
The city was known in ancient times by different names: in Hurrian as Azazuwa, in Medieval Greek as Αζάζιον (Azázion), and in Old Aramaic as Ḥzz (which later evolved into Neo-Assyrian Ḫazazu).
Early Islamic period
In excavations of the site of Tell Azaz, considerable quantities of ceramics from the early and middle Islamic periods were found. Despite the importance of Azaz as indicated by archaeological finds, the settlement was rarely mentioned in Islamic texts prior to the 12th century. However, a visit to the town by the Muslim musician Ishaq al-Mawsili (767–850) gives some indication of Azaz's importance during Abbasid rule. The Hamdanids of Aleppo (945–1002) built a brick citadel at Azaz. It was a square fortress with two enclosures, situated atop a tell.
On 10 August 1030, Tubbal near Azaz became the scene of a humiliating defeat of the Byzantine emperor Romanos III at the hands of the Mirdasids. In December of the same year, the Byzantine generals Niketas of Mistheia and Symeon besieged and captured Azaz, and burned Tubbal to the ground in retaliation.
Crusader period
During the Crusader era, Azaz, which was referred to in Crusader sources as "Hazart", became of particular strategic significance due to its topography and location, overlooking the surrounding region. In the hands of the Muslims, Azaz stymied communications between the Crusader states of Edessa and Antioch, while in Crusader hands it threatened the major Muslim city of Aleppo. Around December 1118, the Crusader prince Roger of Antioch and the Armenian prince Leo I besieged and captured Azaz from the Turcoman prince Ilghazi of Mardin.
In January 1124, Balak and Toghtekin, the Burid atabeg of Damascus, breached Azaz's defenses, but were repulsed by Crusader reinforcements. In April 1125, the Seljuk atabeg Aqsunqur al-Bursuqi of Mosul and Toghtekin invaded the Principality of Antioch and surrounded Azaz. In response, in May or June 1125, a 3,000-strong Crusader coalition commanded by King Baldwin II of Jerusalem confronted and defeated the 15,000-strong Muslim coalition at the Battle of Azaz, raising the siege of the town.
However, the Crusaders' strength in the region was dealt a blow following the Zengid capture of Edessa in 1144. Afterward, the other fortresses in the County of Edessa, including Azaz, gradually became neglected. In 1146, Humphrey II of Toron sent sixty knights to reinforce the garr
|
https://en.wikipedia.org/wiki/Novikov%27s%20condition
|
In probability theory, Novikov's condition gives a sufficient condition for a stochastic process that takes the form of the Radon–Nikodym derivative in Girsanov's theorem to be a martingale. If it is satisfied together with other conditions, Girsanov's theorem may be applied to a Brownian motion stochastic process to change from the original measure to the new measure defined by the Radon–Nikodym derivative.
This condition was suggested and proved by Alexander Novikov. There are other results which may be used to show that the Radon–Nikodym derivative is a martingale, such as the more general Kazamaki's condition; however, Novikov's condition is the most well-known result.
Assume that
(Xt), 0 ≤ t ≤ T,
is a real-valued adapted process on the probability space (Ω, F, (Ft), P) and (Wt) is an adapted Brownian motion.
If the condition
E[exp(½ ∫0T Xt² dt)] < ∞
is fulfilled, then the process
E(X)t = exp(∫0t Xs dWs − ½ ∫0t Xs² ds)
is a martingale under the probability measure P and the filtration (Ft). Here E(X) denotes the Doléans-Dade exponential.
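As a hedged numerical illustration (not a proof), one can take Xt equal to a bounded constant mu, for which Novikov's condition holds trivially, and check by Monte Carlo that the Doléans-Dade exponential at time T has mean 1, as the martingale property requires:

```python
import math
import random

# Take X_t = mu (constant), so Novikov's condition
#   E[exp(0.5 * mu^2 * T)] < infinity
# holds trivially, and the Doleans-Dade exponential at time T reduces to
#   E_T = exp(mu * W_T - 0.5 * mu^2 * T),  with W_T ~ N(0, T).
# The martingale property then implies E[E_T] = 1.
random.seed(0)
mu, T, n = 0.5, 1.0, 20000
total = 0.0
for _ in range(n):
    w_T = random.gauss(0.0, math.sqrt(T))
    total += math.exp(mu * w_T - 0.5 * mu**2 * T)
estimate = total / n
print(estimate)  # close to 1
```

The constant-integrand case is the simplest sanity check; for unbounded integrands the condition is exactly what rules out the exponential losing mass.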
References
External links
Martingale theory
|
https://en.wikipedia.org/wiki/Local%20independence
|
Within statistics, local independence is the underlying assumption of latent variable models: the observed items are conditionally independent of each other given an individual's score on the latent variable(s). This means that the latent variable explains why the observed items are related to one another. This can be explained by the following example.
Example
Local independence can be explained by an example of Lazarsfeld and Henry (1968). Suppose that a sample of 1000 people was asked whether they read journals A and B. Their responses were as follows:

                  Read B   Did not read B   Total
Read A               260              240     500
Did not read A       140              360     500
Total                400              600    1000

One can easily see that the two variables (reading A and reading B) are strongly related, and thus dependent on each other. Readers of A tend to read B more often (260/500 = 52%) than non-readers of A (140/500 = 28%). If reading A and B were independent, then the formula P(A&B) = P(A)×P(B) would hold. But 260/1000 is not 400/1000 × 500/1000. Thus, reading A and B are statistically dependent on each other.
If the analysis is extended to also look at the education level of these people, the following tables are found.

High education (500 people):
                  Read B   Did not read B   Total
Read A               240              160     400
Did not read A        60               40     100
Total                300              200     500

Low education (500 people):
                  Read B   Did not read B   Total
Read A                20               80     100
Did not read A        80              320     400
Total                100              400     500

Again, if reading A and B were independent, then P(A&B) = P(A)×P(B) would hold separately for each education level. And, in fact, 240/500 = 300/500×400/500 and 20/500 = 100/500×100/500. Thus if a separation is made between people with high and low education backgrounds,
there is no dependence between readership of the two journals. That is, reading A and B are independent once educational level is taken into consideration. The educational level 'explains' the difference in reading of A and B. If educational level is never actually observed or known, it may still appear as a latent variable in the model.
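The arithmetic above can be verified in a short Python sketch; the cell counts used here are those implied by the ratios quoted in the text (260/1000 reading both overall, 240/500 in the high-education group, 20/500 in the low-education group):

```python
from fractions import Fraction

# Counts implied by the ratios in the example
groups = {
    "high": {"n": 500, "A": 400, "B": 300, "AB": 240},
    "low":  {"n": 500, "A": 100, "B": 100, "AB": 20},
}

# Marginally, reading A and B are dependent ...
n = sum(g["n"] for g in groups.values())
pA = Fraction(sum(g["A"] for g in groups.values()), n)    # 500/1000
pB = Fraction(sum(g["B"] for g in groups.values()), n)    # 400/1000
pAB = Fraction(sum(g["AB"] for g in groups.values()), n)  # 260/1000
print(pAB == pA * pB)  # False: dependent overall

# ... but conditionally independent given education level.
for g in groups.values():
    p_a, p_b = Fraction(g["A"], g["n"]), Fraction(g["B"], g["n"])
    p_ab = Fraction(g["AB"], g["n"])
    print(p_ab == p_a * p_b)  # True for each level
```

Using exact fractions avoids floating-point noise in the equality checks: conditioning on the (here observed) education level makes the product rule hold exactly within each group.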
See also
Conditional independence
References
Lazarsfeld, P.F., and Henry, N.W. (1968). Latent Structure Analysis. Boston: Houghton Mifflin.
Further reading
External links
Local independence by Jeroen K. Vermunt & Jay Magidson
Local Independence and Latent Class Analysis
Econometric modeling
Independence (probability theory)
Latent variable models
|
https://en.wikipedia.org/wiki/Refinement
|
Refinement may refer to:
Mathematics
Equilibrium refinement, criteria for selecting among equilibria in game theory
Refinement of an equivalence relation, in mathematics
Refinement (topology), the refinement of an open cover in mathematical topology
Refinement (category theory)
Other uses
Refinement (computing), computer science approaches for designing correct computer programs and enabling their formal verification
Refining, a process of purification
Refining (metallurgy)
Refinement (culture), a quality of cultural sophistication
Refinement (horse), a racehorse ridden by jockey Tony McCoy
|
https://en.wikipedia.org/wiki/Victor%20Flynn
|
Eugene Victor Flynn is an American-born mathematician. He is currently a professor of mathematics at the University of Oxford.
Biography
Flynn was born in Washington, D.C., the son of the academic James Flynn, who took up a position at the University of Otago. He first studied at the University of Otago before taking a PhD at Trinity College, Cambridge, supervised by J. W. S. Cassels. He then spent a year as an assistant professor at the University of Michigan, returning to Cambridge as a research fellow at Robinson College. He then moved to the University of Liverpool, where he spent four years as head of the pure mathematics department. In 2005 he left Liverpool to move to the University of Oxford; he took up a fellowship at New College in October 2005 and was appointed a university professor of mathematics in October 2006.
His fields of specialisation are the arithmetic of elliptic curves and algebraic geometry.
Family
Flynn's father, James Flynn, was primarily involved in the research of intelligence and is noteworthy for his work on the Flynn effect; he died in 2020. Victor Flynn's parents met on a picket line protesting against segregation in the USA. Their daughter Natalie Flynn is a clinical psychologist in Auckland, New Zealand.
References
External links
Home page
1962 births
Living people
20th-century British mathematicians
21st-century British mathematicians
Alumni of Trinity College, Cambridge
Fellows of Robinson College, Cambridge
University of Michigan faculty
Academics of the University of Liverpool
Fellows of New College, Oxford
Algebraic geometers
University of Otago alumni
Academics from Washington, D.C.
American emigrants to New Zealand
Naturalised citizens of New Zealand
|
https://en.wikipedia.org/wiki/Dan%20Segal
|
Daniel Segal (born 1947) is a British mathematician and a Professor of Mathematics at the University of Oxford. He specialises in algebra and group theory.
He studied at Peterhouse, Cambridge, before taking a PhD at Queen Mary College, University of London, in 1972, supervised by Bertram Wehrfritz, with a dissertation on group theory entitled Groups of Automorphisms of Infinite Soluble Groups. He is an Emeritus Fellow of All Souls College at Oxford, where he was sub-warden from 2006 to 2008.
His postgraduate students have included Marcus du Sautoy and Geoff Smith. He is the son of psychoanalyst Hanna Segal and the brother of philosopher Gabriel Segal, as well as of Michael Segal, a senior civil servant.
Publications
Articles
Books
Polycyclic Groups, Cambridge University Press 1983; 2005 pbk edition
with J. Dixon, M. du Sautoy, and A. Mann: Analytic pro-p-groups, Cambridge University Press 1999; paperback edn. 2003
ed. with M. du Sautoy and A. Shalev: New horizons in pro-p-groups, Birkhäuser 2000; paperback edn. 2012
with Alexander Lubotzky: Subgroup growth, Birkhäuser 2003; paperback edn. 2012
Words: notes on verbal width in groups, London Mathematical Society Lecture Notes, vol. 361, Cambridge University Press 2009
References
Living people
20th-century British mathematicians
21st-century British mathematicians
Alumni of Peterhouse, Cambridge
Alumni of the University of London
Fellows of All Souls College, Oxford
Group theorists
Algebraists
1947 births
|
https://en.wikipedia.org/wiki/Kevin%20Buzzard
|
Kevin Mark Buzzard (born 21 September 1968) is a British mathematician and currently a professor of pure mathematics at Imperial College London. He specialises in arithmetic geometry and the Langlands program.
Biography
While attending the Royal Grammar School, High Wycombe he competed in the International Mathematical Olympiad, where he won a bronze medal in 1986 and a gold medal with a perfect score in 1987.
He obtained a B.A. degree (Parts I & II) in Mathematics at Trinity College, Cambridge, where he was Senior Wrangler (achiever of the highest mark), and went on to complete the C.A.S.M. He then completed his dissertation, entitled The levels of modular representations, under the supervision of Richard Taylor, for which he was awarded a Ph.D. degree.
He took a lectureship at Imperial College London in 1998, a readership in 2002, and was appointed to a professorship in 2004. From October to December 2002 he held a visiting professorship at Harvard University, having previously worked at the Institute for Advanced Study, Princeton (1995), the University of California, Berkeley (1996–97), and the Institut Henri Poincaré in Paris (2000).
He was awarded a Whitehead Prize by the London Mathematical Society in 2002 for "his distinguished work in number theory", and the Senior Berwick Prize in 2008.
In 2017, he launched an ongoing formalization project and blog involving the Lean theorem prover and has since promoted the use of computer proof assistants in future mathematics research. He gave a plenary lecture at the International Congress of Mathematicians in 2022 on the topic.
He was the PhD supervisor to musician Dan Snaith, also known as Caribou, who received a PhD in mathematics from Imperial College London for his work on Overconvergent Siegel Modular Symbols.
References
External links
Kevin Buzzard's professional webpage
Kevin Buzzard's personal webpage
Kevin Buzzard's blog (Xena Project)
1968 births
20th-century British mathematicians
21st-century British mathematicians
Institute for Advanced Study visiting scholars
Living people
Number theorists
Harvard University staff
Alumni of Trinity College, Cambridge
Academics of Imperial College London
People educated at the Royal Grammar School, High Wycombe
Whitehead Prize winners
International Mathematical Olympiad participants
|
https://en.wikipedia.org/wiki/Thom%E2%80%93Mather%20stratified%20space
|
In topology, a branch of mathematics, an abstract stratified space, or a Thom–Mather stratified space is a topological space X that has been decomposed into pieces called strata; these strata are manifolds and are required to fit together in a certain way. Thom–Mather stratified spaces provide a purely topological setting for the study of singularities analogous to the more differential-geometric theory of Whitney. They were introduced by René Thom, who showed that every Whitney stratified space was also a topologically stratified space, with the same strata. Another proof was given by John Mather in 1970, inspired by Thom's proof.
Basic examples of Thom–Mather stratified spaces include manifolds with boundary (top dimension and codimension 1 boundary) and manifolds with corners (top dimension, codimension 1 boundary, codimension 2 corners), real or complex analytic varieties, or orbit spaces of smooth transformation groups.
Definition
A Thom–Mather stratified space is a triple (V, S, J), where V is a topological space (often required to be locally compact, Hausdorff, and second countable), S is a decomposition of V into strata,
and J is the set of control data {(T_X, π_X, ρ_X) : X ∈ S}, where T_X is an open neighborhood of the stratum X (called the tubular neighborhood), π_X : T_X → X is a continuous retraction, and ρ_X : T_X → [0, ∞) is a continuous function. These data need to satisfy the following conditions.
Each stratum X is a locally closed subset of V and the decomposition S is locally finite.
The decomposition S satisfies the axiom of the frontier: if X, Y ∈ S and Y ∩ cl(X) ≠ ∅, then Y ⊂ cl(X). This condition implies that there is a partial order among strata: Y < X if and only if Y ⊂ cl(X) and Y ≠ X.
Each stratum X is a smooth manifold.
X = ρ_X⁻¹(0), so ρ_X can be viewed as a distance function from the stratum X.
For each pair of strata Y < X, the restriction (π_Y, ρ_Y) : T_Y ∩ X → Y × (0, ∞) is a smooth submersion.
For each pair of strata Y < X, there holds π_Y ∘ π_X = π_Y and ρ_Y ∘ π_X = ρ_Y (both over the common domain of both sides of each equation).
Examples
One of the original motivations for stratified spaces was the decomposition of singular spaces into smooth chunks. For example, given a singular variety X, there is a naturally defined subvariety Sing(X), the singular locus. This may not be a smooth variety, so taking the iterated singular locus will eventually give a natural stratification. A simple algebro-geometric example is the singular hypersurface
where Spec denotes the prime spectrum.
See also
Singularity theory
Whitney conditions
Stratifold
Intersection homology
Thom's first isotopy lemma
stratified space
References
Goresky, Mark; MacPherson, Robert Stratified Morse theory, Springer-Verlag, Berlin, 1988.
Goresky, Mark; MacPherson, Robert Intersection homology II, Invent. Math. 72 (1983), no. 1, 77--129.
Mather, J. Notes on topological stability, Harvard University, 1970.
Thom, R. Ensembles et morphismes stratifiés, Bulletin of the American Mathematical Society 75 (1969), pp.240-284.
Generalized manifolds
Singularity theory
Stratifications
|
https://en.wikipedia.org/wiki/Intersection%20of%20a%20polyhedron%20with%20a%20line
|
In computational geometry, the problem of computing the intersection of a polyhedron with a line has important applications in computer graphics, optimization, and even in some Monte Carlo methods. It can be viewed as a three-dimensional version of the line clipping problem.
If the polyhedron is given as the intersection of a finite number of halfspaces, then one may partition the halfspaces into three subsets: the ones that include only one infinite end of the line, the ones that include the other end, and the ones that include both ends. The halfspaces that include both ends must be parallel to the given line, and do not contribute to the solution. Each of the other two subsets (if it is non-empty) contributes a single endpoint to the intersection, which may be found by intersecting the line with each of the halfspace boundary planes and choosing the intersection point that is closest to the end of the line contained by the halfspaces in the subset. This method, a variant of the Cyrus–Beck algorithm, takes time linear in the number of face planes of the input polyhedron. Alternatively, by testing the line against each of the polygonal facets of the given polyhedron, it is possible to stop the search early when a facet pierced by the line is found.
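In Python, the halfspace-clipping computation described above can be sketched as follows (an illustrative sketch: the function name, the (n, d) halfspace representation, and the tolerance are choices made here, not taken from the literature):

```python
def line_polyhedron_intersection(p0, v, halfspaces, eps=1e-12):
    """Clip the line p(t) = p0 + t*v against a polyhedron given as a
    list of halfspaces (n, d), each meaning dot(n, x) <= d.
    Returns the parameter interval (t_lo, t_hi), or None if the line
    misses the polyhedron."""
    t_lo, t_hi = float("-inf"), float("inf")
    for n, d in halfspaces:
        nv = sum(ni * vi for ni, vi in zip(n, v))
        np0 = sum(ni * pi for ni, pi in zip(n, p0))
        if abs(nv) < eps:            # boundary plane parallel to the line
            if np0 > d:              # line lies entirely outside
                return None
            continue                 # halfspace contains both ends
        t = (d - np0) / nv
        if nv > 0:
            t_hi = min(t_hi, t)      # bounds the +v end of the line
        else:
            t_lo = max(t_lo, t)      # bounds the -v end of the line
    return (t_lo, t_hi) if t_lo <= t_hi else None

# The unit cube [0,1]^3 as six halfspaces:
cube = [((1, 0, 0), 1), ((-1, 0, 0), 0), ((0, 1, 0), 1),
        ((0, -1, 0), 0), ((0, 0, 1), 1), ((0, 0, -1), 0)]
print(line_polyhedron_intersection((0.5, 0.5, 0.5), (1, 0, 0), cube))
# (-0.5, 0.5): the segment from (0, 0.5, 0.5) to (1, 0.5, 0.5)
```

Each non-parallel halfspace tightens one side of the parameter interval, so the loop runs in time linear in the number of face planes, as stated above.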
If a single polyhedron is to be intersected with many lines, it is possible to preprocess the polyhedron into a hierarchical data structure in such a way that intersections with each query line can be determined in logarithmic time per query.
References
External links
Intersection of convex hull with a line with pseudo code
Euclidean geometry
|
https://en.wikipedia.org/wiki/List%20of%20Wolverhampton%20Wanderers%20F.C.%20records%20and%20statistics
|
Wolverhampton Wanderers Football Club is an English football club based in Wolverhampton. The club was founded as St Luke's in 1877, soon becoming Wolverhampton Wanderers, before being a founder member of the Football League in 1888. Since that time, the club has played in all four professional divisions of the English football pyramid, and been champions of all these levels. They have also been involved in European football, having been one of the first English clubs to enter the European Cup, as well as reaching the final of the first staging of the UEFA Cup.
This list encompasses all honours won by Wolverhampton Wanderers and records set by the club, their managers and their players. The player records section includes details of the club's leading goalscorers and those who have made most appearances in first-team competitions, as well as transfer fee records paid and received by the club. A list of streaks recording all elements of the game (wins, losses, clean sheets, etc.) is also presented.
Honours
In the all-time top-flight league table since the Football League's inception in 1888, Wolves sit in the top fifteen by all-time English first-level league position.
They also sit in the top four, behind only Manchester United, Liverpool and Arsenal, in terms of all-time league position from points gained at any level of English professional football.
Cumulatively, they are the joint-11th most successful club in domestic English football history, tied with Nottingham Forest and one place behind Blackburn Rovers, with nine major trophy wins, not including super cups. Counting all competitive honours, they are joint-10th with Nottingham Forest, with 13 trophy wins, behind Newcastle United.
They are the only club to have won titles in five different Football League divisions, and in 1988 became the first team to have been champions of all four professional divisions of English football, although this feat has since been matched by Burnley (in 1992) and Preston North End (in 1996). They also remain the only club to have won all the main domestic cup competitions (FA Cup, League Cup and EFL Trophy) currently contested in English football.
League
First Division/Premier League
Champions: 1953–54, 1957–58, 1958–59
Runners-up: 1937–38, 1938–39, 1949–50, 1954–55, 1959–60
Second Division/Championship
Champions: 1931–32, 1976–77, 2008–09, 2017–18
Runners-up: 1966–67, 1982–83
Play-off winners: 2003
Third Division/League One
Champions: 1923–24 (North), 1988–89, 2013–14
Fourth Division
Champions: 1987–88
Cup
UEFA Cup
Runners-up: 1972
FA Cup
Winners: 1893, 1908, 1949, 1960
Runners-up: 1889, 1896, 1921, 1939
EFL Cup
Winners: 1974, 1980
FA Charity Shield
Winners: 1949*, 1954*, 1959, 1960* (* joint holders)
Runners-up: 1958
EFL Trophy
Winners: 1988
Texaco Cup
Winners: 1971
Minor honours
Premier League Asia Trophy
Winners: 2019
Uhrencup
Winners: 2018
Football League War Cup
Winners: 1942
FA Youth Cup
Winners: 1958
Runners-up: 195
|
https://en.wikipedia.org/wiki/HP%2039/40%20series
|
The HP 39/40 series is a line of graphing calculators from Hewlett-Packard, the successors of the HP 38G. The series consists of six calculators, all of which have algebraic entry modes and can perform numeric analysis together with varying degrees of symbolic calculation. All calculators in this series are aimed at high-school-level students and are characterized by their ability to download (via cable or infrared) APLETs or E-lessons: programs of varying complexity, generally intended for classroom use, that enhance the learning of mathematics through graphical and/or numerical exploration of concepts.
HP 39g
The HP 39g (F1906A) was released in 2000.
Basic characteristics:
CPU: 4 MHz Yorke (Saturn core)
Communication: Proprietary infrared, serial RS-232 (serial port).
Memory: 256 KB RAM
Screen resolution: 131 × 64 pixels
Includes a hard cover
Limited symbolic equation functionality.
HP 40g
The HP 40g (F1907A) was released in 2000 in parallel with the HP 39g. The HP 40g's operating system is identical to the HP 39g's; differences detected in the hardware during start-up trigger the differences in software functionality.
The hardware is identical to that of the HP 49G/39G series (complete with rubber keyboard). In contrast to the 39g, it integrates the same computer algebra system (CAS) found in the HP 49G, HP CAS. Unlike its "bigger brothers", the HP 40g has no flags to set or mis-set, resulting in a "better behaved" calculator for straightforward mathematical analysis. The HP 40g also lacks infrared connectivity and is limited to 27 variables, and a list-based solver and other handicaps make this simple-to-use calculator less suited to higher-end use. The HP 40g is not allowed in many standardized tests, including the ACT; it has, however, been allowed on the SAT since 2019.
Basic characteristics:
Identical to HP 39g except:
Communication: No infrared communication, serial RS-232 (serial port).
Software: Includes an equation writer and advanced CAS.
HP 39g+
The HP 39g+ (F2224A) was released in September 2003.
Basic characteristics:
CPU: 75 MHz Samsung S3C2410X (ARM920T core)
Memory: 256 KB RAM, 1 MB flash
Communication: USB Mini-B port (using the Kermit or XModem protocols), IrDA (infrared).
Power: 3 × AAA as main power, CR2032 for memory backup
Screen resolution: 131×64 pixels
Does not come with a hard cover
Limited symbolic equation functionality.
Note: Although an ARM processor is used in this model, the operating system is substantially the same as that of the 39G, with the Saturn chip being emulated on the ARM at a higher speed than was possible for the 39G. The CAS component of the HP 40G's operating system appears to have been totally removed, rather than simply being hidden at start-up.
HP 39gs
The HP 39gs (F2223A) was released in June 2006.
Basic characteristics:
CPU: 75 MHz Samsung S3C2410A (ARM920T core)
Memory: 256 KB RAM, 1 MB flash
Communication: USB Mini-B port (using the Kermit or XModem protocols), Ir
|
https://en.wikipedia.org/wiki/Gianni%20A.%20Sarcone
|
Gianni A. Sarcone (born March 20, 1962) is a visual artist and author who collaborates with educational publications, writing articles and columns on topics related to art, science, and mathematics education. He has contributed to several science magazines, including Focus Junior (Italy), Query-CICAP (Italy), Rivista Magia (Italy), Alice & Bob / Bocconi University (Italy), Brain Games (USA), and Tangente (France). Sarcone has over 30 years of experience as a designer and researcher in the areas of visual creativity, recreational mathematics and educational games.
Visual research
Considered a leading authority on visual perception by academic institutions, Sarcone was invited to serve as a juror at the Third Annual "Best Illusion of the Year Contest" held in Sarasota, Florida (USA). His optical illusion projects 'Mask of Love' and 'Autokinetic Illusion' were named among the top 10 best optical illusions in the 2011 and 2014 "Best Illusion of the Year Contests", respectively. In 2017, he placed third in the contest for his ‘Dynamic Müller-Lyer Illusion’.
Amongst other notable projects, he created and designed a "hypnoptical" visual illusion used in the logo and institutional signage of the 2014 Grec Festival of Barcelona, a significant cultural event featuring avant-garde musical, dance, and theatre performances.
On October 16, 2021, for International Observe the Moon Night, his joint work "Moona Lisa" was selected as NASA's Astronomy Picture of the Day (APOD).
Honors and awards
Educational project
G. Sarcone has authored and published numerous educational textbooks and illustrated books in English, French and Italian on brain training and the mechanisms of vision. He is the founder of Archimedes-lab.org, a consulting network of experts specializing in improving and enhancing creativity, for which he has received a long list of accolades and awards, including the 2003 Scientific American Sci/Tech Web Award in Mathematics and recognition in the US from CNN Headline News, the National Council of Teachers of Mathematics (NCTM), and NewScientist.com.
Media and broadcasting
Some of Sarcone’s artworks such as The Other Face of Paris or Flashing Star have gone viral on the Internet.
His works were also presented in several national and international television programs, including 'Rai 3' Italy, 'RTL 9 Channel' France, 'TSR 1 Channel' Switzerland, and in the following TV series:
‘Nippon Television Network’ / NTV (Japan): "Fukashigi"; Japanese: 不可思議探偵団 (2012).
‘National Geographic Television’ (US): "Brain Games Science" (2014).
‘Beyond Production’ PTY LTD (Australia): "Wild But True" – Season 1 (2014).
"Masahiro Nakai’s Useful Library" show, a widely followed Japanese TV program (June, 2015).
Selected works
Bibliography
Recent published works (2014-20)
G. Sarcone is the author (and co-author) of the following books:
Fantastic Optical Illusions: More Than 150 Deceptive Images and Visual Tricks, Carlto
|
https://en.wikipedia.org/wiki/Kasos%20Island%20Public%20Airport
|
Kasos Island Public Airport is an airport in Kasos, Greece.
Airlines and destinations
The following airlines operate regular scheduled and charter flights at Kasos Island Airport:
Statistics
See also
Transport in Greece
References
External links
Airports in Greece
Dodecanese
Buildings and structures in the South Aegean
|
https://en.wikipedia.org/wiki/Pointclass
|
In the mathematical field of descriptive set theory, a pointclass is a collection of sets of points, where a point is ordinarily understood to be an element of some perfect Polish space. In practice, a pointclass is usually characterized by some sort of definability property; for example, the collection of all open sets in some fixed collection of Polish spaces is a pointclass. (An open set may be seen as in some sense definable because it cannot be a purely arbitrary collection of points; for any point in the set, all points sufficiently close to that point must also be in the set.)
Pointclasses find application in formulating many important principles and theorems from set theory and real analysis. Strong set-theoretic principles may be stated in terms of the determinacy of various pointclasses, which in turn implies that sets in those pointclasses (or sometimes larger ones) have regularity properties such as Lebesgue measurability (and indeed universal measurability), the property of Baire, and the perfect set property.
Basic framework
In practice, descriptive set theorists often simplify matters by working in a fixed Polish space such as Baire space or sometimes Cantor space, each of which has the advantage of being zero dimensional, and indeed homeomorphic to its finite or countable powers, so that considerations of dimensionality never arise. Yiannis Moschovakis provides greater generality by fixing once and for all a collection of underlying Polish spaces, including the set of all naturals, the set of all reals, Baire space, and Cantor space, and otherwise allowing the reader to throw in any desired perfect Polish space. Then he defines a product space to be any finite Cartesian product of these underlying spaces. Then, for example, the pointclass of all open sets means the collection of all open subsets of one of these product spaces. This approach prevents the pointclass from being a proper class, while avoiding excessive specificity as to the particular Polish spaces being considered (given that the focus is on the fact that the pointclass is the collection of open sets, not on the spaces themselves).
Boldface pointclasses
The pointclasses in the Borel hierarchy, and in the more complex projective hierarchy, are represented by sub- and super-scripted Greek letters in boldface fonts; for example, Π⁰₁ is the pointclass of all closed sets, Σ⁰₂ is the pointclass of all Fσ sets, Δ⁰₂ is the collection of all sets that are simultaneously Fσ and Gδ, and Σ¹₁ is the pointclass of all analytic sets.
Sets in such pointclasses need be "definable" only up to a point. For example, every singleton set in a Polish space is closed, and thus Π⁰₁. Therefore, it cannot be that every Π⁰₁ set must be "more definable" than an arbitrary element of a Polish space (say, an arbitrary real number, or an arbitrary countable sequence of natural numbers). Boldface pointclasses, however, may (and in practice ordinarily do) require that sets in the class be definable relative to some real number, taken as an
|
https://en.wikipedia.org/wiki/Coset%20construction
|
In mathematics, the coset construction (or GKO construction) is a method of constructing unitary highest weight representations of the Virasoro algebra, introduced by Peter Goddard, Adrian Kent and David Olive (1986). The construction produces the complete discrete series of highest weight representations of the Virasoro algebra and demonstrates their unitarity, thus establishing the classification of unitary highest weight representations.
References
Conformal field theory
Lie algebras
|
https://en.wikipedia.org/wiki/Closest%20pair%20of%20points%20problem
|
The closest pair of points problem or closest pair problem is a problem of computational geometry: given n points in a metric space, find a pair of points with the smallest distance between them. The closest pair problem for points in the Euclidean plane was among the first geometric problems that were treated at the origins of the systematic study of the computational complexity of geometric algorithms.
Time bounds
Randomized algorithms that solve the problem in linear time are known, in Euclidean spaces whose dimension is treated as a constant for the purposes of asymptotic analysis. This is significantly faster than the quadratic O(n²) time (expressed here in big O notation) that would be obtained by a naive algorithm that finds the distances between all pairs of points and selects the smallest.
It is also possible to solve the problem without randomization, in random-access machine models of computation with unlimited memory that allow the use of the floor function, in near-linear time. In even more restricted models of computation, such as the algebraic decision tree, the problem can be solved in the somewhat slower O(n log n) time bound, and this is optimal for this model, by a reduction from the element uniqueness problem. Both sweep line algorithms and divide-and-conquer algorithms with this slower time bound are commonly taught as examples of these algorithm design techniques.
Linear-time randomized algorithms
A linear expected-time randomized algorithm of Rabin, modified slightly by Richard Lipton to make its analysis easier, proceeds as follows on an input set S of n points in a Euclidean space of fixed dimension:
Select n pairs of points uniformly at random, with replacement, and let d be the minimum distance of the selected pairs.
Round the input points to a square grid of points whose size (the separation between adjacent grid points) is d, and use a hash table to collect together pairs of input points that round to the same grid point.
For each input point, compute the distance to all other inputs that either round to the same grid point or to another grid point within the Moore neighborhood of surrounding grid points.
Return the smallest of the distances computed throughout this process.
The algorithm will always correctly determine the closest pair, because it maps any pair closer than distance d to the same grid point or to adjacent grid points. The uniform sampling of pairs in the first step of the algorithm (compared to a different method of Rabin for sampling a similar number of pairs) simplifies the proof that the expected number of distances computed by the algorithm is linear.
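The three steps above can be sketched in Python (an illustrative implementation: the function name and data layout are chosen here, and no attempt is made at the careful expected-time analysis the sources give):

```python
import random
from collections import defaultdict
from itertools import combinations, product
from math import dist, floor, inf

def closest_pair(points):
    """Expected linear-time closest pair via random sampling and grid
    rounding, following the scheme described above."""
    n = len(points)
    # Step 1: sample n random pairs; d is their minimum distance.
    d = min(dist(*random.sample(points, 2)) for _ in range(n))
    if d == 0:
        return 0.0                   # two sampled points coincide
    # Step 2: round each point to a grid of cell size d (a hash table
    # keyed by the rounded coordinates collects nearby points).
    grid = defaultdict(list)
    for p in points:
        grid[tuple(floor(c / d) for c in p)].append(p)
    # Step 3: compare each point only against points whose grid cell is
    # the same or lies in the surrounding Moore neighborhood.
    best = inf
    dims = len(points[0])
    for cell, bucket in grid.items():
        for offset in product((-1, 0, 1), repeat=dims):
            other = tuple(c + o for c, o in zip(cell, offset))
            if other < cell:         # visit each pair of cells once
                continue
            for p in bucket:
                for q in grid.get(other, []):
                    if p is not q:
                        best = min(best, dist(p, q))
    return best

random.seed(2)
pts = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(300)]
result = closest_pair(pts)
brute = min(dist(a, b) for a, b in combinations(pts, 2))
print(result == brute)   # True: the grid search is exact, not approximate
```

Since the sampled distance d is itself a distance between two input points, the true closest distance is at most d, so the closest pair always lands in the same or adjacent grid cells and the answer is exact.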
Instead, a different algorithm goes through two phases: a random iterated filtering process that approximates the closest distance to within a constant approximation ratio, together with a finishing step that turns this approximate distance into the exact closest distance. The filtering process repeats the following steps, until the point set becomes empty:
Choose a point uniformly at random
|
https://en.wikipedia.org/wiki/Gabriel%20graph
|
In mathematics and computational geometry, the Gabriel graph of a set of points in the Euclidean plane expresses one notion of proximity or nearness of those points. Formally, it is the graph with vertex set in which any two distinct points and are adjacent precisely when the closed disc having as a diameter contains no other points. Another way of expressing the same adjacency criterion is that and should be the two closest given points to their midpoint, with no other given point being as close. Gabriel graphs naturally generalize to higher dimensions, with the empty disks replaced by empty closed balls. Gabriel graphs are named after K. Ruben Gabriel, who introduced them in a paper with Robert R. Sokal in 1969.
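For small point sets the defining empty-disc condition can be tested directly; the following Python sketch (a brute-force O(n³) check, with names chosen here for illustration) does exactly that:

```python
from itertools import combinations

def gabriel_graph(points):
    """Brute-force Gabriel graph in O(n^3): points p, q are adjacent
    iff the closed disc with diameter pq contains no other point,
    i.e. every other point r is strictly farther from the midpoint
    of pq than the disc's radius."""
    edges = []
    for p, q in combinations(points, 2):
        mx, my = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
        r2 = (p[0] - mx) ** 2 + (p[1] - my) ** 2      # squared radius
        if all((r[0] - mx) ** 2 + (r[1] - my) ** 2 > r2
               for r in points if r != p and r != q):
            edges.append((p, q))
    return edges

# The four corners of a unit square: the Gabriel graph keeps the four
# sides but rejects the diagonals, since each diagonal's closed disc
# passes through the other two corners.
square = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(len(gabriel_graph(square)))   # 4
```

As noted below, computing the Delaunay triangulation first reduces this to linear time in practice, but the direct test makes the definition concrete.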
Percolation
For Gabriel graphs of infinite random point sets, the finite site percolation threshold gives the fraction of points needed to support connectivity: if a random subset of fewer vertices than the threshold is given, the remaining graph will almost surely have only finite connected components, while if the size of the random subset is more than the threshold, then the remaining graph will almost surely have an infinite component (as well as finite components). This threshold was proved to exist, and more precise values of both site and bond thresholds have been given by Norrenbrock.
Related geometric graphs
The Gabriel graph is a subgraph of the Delaunay triangulation. It can be found in linear time if the Delaunay triangulation is given.
The Gabriel graph contains, as subgraphs, the Euclidean minimum spanning tree, the relative neighborhood graph, and the nearest neighbor graph.
It is an instance of a beta-skeleton. Like beta-skeletons, and unlike Delaunay triangulations, it is not a geometric spanner: for some point sets, distances within the Gabriel graph can be much larger than the Euclidean distances between points.
References
Euclidean plane geometry
Geometric graphs
|
https://en.wikipedia.org/wiki/Fractal%20sequence
|
In mathematics, a fractal sequence is one that contains itself as a proper subsequence. An example is
1, 1, 2, 1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, ...
If the first occurrence of each n is deleted, the remaining sequence is identical to the original. The process can be repeated indefinitely, so that actually, the original sequence contains not only one copy of itself, but rather, infinitely many.
Definition
The precise definition of fractal sequence depends on a preliminary definition: a sequence x = (xn) is an infinitive sequence if for every i,
(F1) xn = i for infinitely many n.
Let a(i,j) be the jth index n for which xn = i. An infinitive sequence x is a fractal sequence if two additional conditions hold:
(F2) if i+1 = xn, then there exists m < n such that xm = i;
(F3) if h < i then for every j there is exactly one k such that a(i,j) < a(h,k) < a(i,j+1).
According to (F2), the first occurrence of each i > 1 in x must be preceded at least once by each of the numbers 1, 2, ..., i-1, and according to (F3), between consecutive occurrences of i in x, each h less than i occurs exactly once.
Example
Suppose θ is a positive irrational number. Let
S(θ) = the set of numbers c + dθ, where c and d are positive integers
and let
cn(θ) + θdn(θ)
be the sequence obtained by arranging the numbers in S(θ) in increasing order. The sequence cn(θ) is the signature of θ, and it is a fractal sequence.
For example, the signature of the golden ratio (i.e., θ = (1 + sqrt(5))/2) begins with
1, 2, 1, 3, 2, 4, 1, 3, 5, 2, 4, 1, 6, 3, 5, 2, 7, 4, 1, 6, 3, 8, 5, ...
and the signature of 1/θ = θ - 1 begins with
1, 1, 2, 1, 2, 1, 3, 2, 1, 3, 2, 4, 1, 3, 2, 4, 1, 3, 2, 4, 1, 3, 5, ...
These are sequences and in the On-Line Encyclopedia of Integer Sequences, where further examples from a variety of number-theoretic and combinatorial settings are given.
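The signature construction is easy to experiment with. The following Python sketch (the function name and the cutoff bound are ad-hoc choices made here) reproduces the opening terms of the golden-ratio signature above and checks the self-similarity property from the introduction:

```python
def signature(theta, count):
    """First `count` terms of the signature of theta: sort the numbers
    c + d*theta (c, d positive integers) and read off the c's.  The
    bound 50 is an ad-hoc cutoff, ample for a few hundred terms."""
    vals = sorted((c + d * theta, c) for c in range(1, 50)
                                     for d in range(1, 50))
    return [c for _, c in vals[:count]]

phi = (1 + 5 ** 0.5) / 2
sig = signature(phi, 13)
print(sig)   # [1, 2, 1, 3, 2, 4, 1, 3, 5, 2, 4, 1, 6]

# Deleting the first occurrence of each value reproduces the sequence,
# exhibiting the fractal (self-similar) property:
longer = signature(phi, 200)
seen, rest = set(), []
for x in longer:
    if x in seen:
        rest.append(x)
    else:
        seen.add(x)
assert rest == longer[:len(rest)]
```

Because θ is irrational, no two numbers c + dθ coincide, so the sorted order (and hence the signature) is well defined.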
See also
Thue-Morse Sequence
External links
On-Line Encyclopedia of Integer Sequences:
References
Fractals
Integer sequences
|
https://en.wikipedia.org/wiki/Markov%20brothers%27%20inequality
|
In mathematics, the Markov brothers' inequality is an inequality proved in the 1890s by brothers Andrey Markov and Vladimir Markov, two Russian mathematicians. This inequality bounds the maximum of the derivatives of a polynomial on an interval in terms of the maximum of the polynomial. For k = 1 it was proved by Andrey Markov, and for k = 2,3,... by his brother Vladimir Markov.
The statement
Let P be a polynomial of degree ≤ n. Then for all nonnegative integers k,
max_{−1≤x≤1} |P^(k)(x)| ≤ [n²(n² − 1²)(n² − 2²) ⋯ (n² − (k − 1)²) / (1·3·5 ⋯ (2k − 1))] · max_{−1≤x≤1} |P(x)|.
Equality is attained for Chebyshev polynomials of the first kind.
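As a numerical illustration (a sketch added here, not part of the original statement), one can check the k = 1 case, max |P′| ≤ n² max |P| on [−1, 1], for random cubics, and verify that the Chebyshev polynomial T₃ attains the bound:

```python
import random

def poly_eval(coeffs, x):
    """Horner evaluation; coeffs[i] is the coefficient of x**i."""
    r = 0.0
    for c in reversed(coeffs):
        r = r * x + c
    return r

def poly_deriv(coeffs):
    """Coefficient list of the derivative polynomial."""
    return [i * c for i, c in enumerate(coeffs)][1:]

def markov_bound(n, k):
    """The factor n^2(n^2 - 1^2)...(n^2 - (k-1)^2) / (1*3*...*(2k-1))."""
    num = 1
    for j in range(k):
        num *= n * n - j * j
    den = 1
    for j in range(1, 2 * k, 2):
        den *= j
    return num / den

grid = [i / 1000 - 1 for i in range(2001)]    # sample points in [-1, 1]

# Random cubic polynomials stay within the k = 1 bound ...
random.seed(0)
for _ in range(50):
    p = [random.uniform(-1, 1) for _ in range(4)]
    max_p = max(abs(poly_eval(p, x)) for x in grid)
    max_d = max(abs(poly_eval(poly_deriv(p), x)) for x in grid)
    assert max_d <= markov_bound(3, 1) * max_p + 1e-4

# ... while the Chebyshev polynomial T_3(x) = 4x^3 - 3x attains it:
t3 = [0.0, -3.0, 0.0, 4.0]
max_t3 = max(abs(poly_eval(t3, x)) for x in grid)
max_t3d = max(abs(poly_eval(poly_deriv(t3), x)) for x in grid)
print(max_t3d / max_t3)   # 9.0, i.e. n^2 with n = 3 and k = 1
```

The grid sampling slightly underestimates the true maxima, hence the small tolerance in the assertion; for T₃ the extrema fall exactly on grid points, so the ratio comes out exact.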
Related inequalities
Bernstein's inequality (mathematical analysis)
Remez inequality
Applications
The Markov brothers' inequality is used to obtain lower bounds in computational complexity theory via the so-called "polynomial method".
References
Theorems in analysis
Inequalities
|
https://en.wikipedia.org/wiki/Elementary%20proof
|
In mathematics, an elementary proof is a mathematical proof that only uses basic techniques. More specifically, the term is used in number theory to refer to proofs that make no use of complex analysis. Historically, it was once thought that certain theorems, like the prime number theorem, could only be proved by invoking "higher" mathematical theorems or techniques. However, as time progresses, many of these results have also been subsequently reproven using only elementary techniques.
While there is generally no consensus as to what counts as elementary, the term is nevertheless a common part of the mathematical jargon. An elementary proof is not necessarily simple, in the sense of being easy to understand or trivial. In fact, some elementary proofs can be quite complicated — and this is especially true when a statement of notable importance is involved.
Prime number theorem
The distinction between elementary and non-elementary proofs has been considered especially important in regard to the prime number theorem. This theorem was first proved in 1896 by Jacques Hadamard and Charles Jean de la Vallée-Poussin using complex analysis. Many mathematicians then attempted to construct elementary proofs of the theorem, without success. G. H. Hardy expressed strong reservations; he considered that the essential "depth" of the result ruled out elementary proofs:
However, in 1948, Atle Selberg produced new methods which led him and Paul Erdős to find elementary proofs of the prime number theorem.
A possible formalization of the notion of "elementary" in connection with a proof of a number-theoretical result is the restriction that the proof can be carried out in Peano arithmetic. In that sense, too, these proofs of the prime number theorem are elementary.
Friedman's conjecture
Harvey Friedman conjectured, "Every theorem published in the Annals of Mathematics whose statement involves only finitary mathematical objects (i.e., what logicians call an arithmetical statement) can be proved in elementary arithmetic." The form of elementary arithmetic referred to in this conjecture can be formalized by a small set of axioms concerning integer arithmetic and mathematical induction. For instance, according to this conjecture, Fermat's Last Theorem should have an elementary proof; Wiles's proof of Fermat's Last Theorem is not elementary. However, there are other simple statements about arithmetic such as the existence of iterated exponential functions that cannot be proven in this theory.
References
Elementary mathematics
Mathematical proofs
|
https://en.wikipedia.org/wiki/Gr%C3%ADmsey%20Airport
|
Grímsey Airport ( ) is an airport serving Grímsey, a small island north of Iceland.
Airlines and destinations
Statistics
Passengers and movements
See also
Transport in Iceland
List of airports in Iceland
Notes
References
External links
OpenStreetMap - Grímsey
OurAirports - Grímsey
Airports in Iceland
Akureyri
|
https://en.wikipedia.org/wiki/Vopnafj%C3%B6r%C3%B0ur%20Airport
|
Vopnafjörður Airport ( ) is an airport serving the village of Vopnafjörður, in the Eastern Region (Austurland) of Iceland.
Airlines and destinations
Statistics
Passengers and movements
Notes
References
External links
Official online guide to Vopnafjordur
Airports in Iceland
|
https://en.wikipedia.org/wiki/Carleman%27s%20inequality
|
Carleman's inequality is an inequality in mathematics, named after Torsten Carleman, who proved it in 1923 and used it to prove the Denjoy–Carleman theorem on quasi-analytic classes.
Statement
Let a₁, a₂, a₃, ... be a sequence of non-negative real numbers; then
Σ_{n=1}^∞ (a₁a₂ ⋯ aₙ)^(1/n) ≤ e Σ_{n=1}^∞ aₙ.
The constant e (Euler's number) in the inequality is optimal, that is, the inequality does not always hold if e is replaced by a smaller number. The inequality is strict (it holds with "<" instead of "≤") if some element in the sequence is non-zero.
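As a quick numerical illustration (a sketch added here, with a helper name chosen for the purpose), one can compare both sides of the inequality Σ (a₁⋯aₙ)^(1/n) ≤ e Σ aₙ for the convergent series aₙ = 1/n²:

```python
import math

def carleman_sides(a):
    """Return (sum of geometric means, e * sum) for a finite prefix
    a_1, ..., a_N of a positive sequence, accumulating log a_k so the
    running products cannot overflow or underflow."""
    lhs, log_prod = 0.0, 0.0
    for n, an in enumerate(a, start=1):
        log_prod += math.log(an)
        lhs += math.exp(log_prod / n)
    return lhs, math.e * sum(a)

# a_n = 1/n^2: the series converges and the inequality is strict.
a = [1.0 / n ** 2 for n in range(1, 5001)]
lhs, rhs = carleman_sides(a)
print(lhs < rhs)   # True
```

Accumulating logarithms rather than the products themselves is essential here, since a₁ ⋯ aₙ underflows to zero long before n reaches 5000.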
Integral version
Carleman's inequality has an integral version, which states that
∫₀^∞ exp((1/x) ∫₀^x ln f(t) dt) dx ≤ e ∫₀^∞ f(x) dx
for any f ≥ 0.
Carleson's inequality
A generalisation, due to Lennart Carleson, states the following: for any convex function g with g(0) = 0, and for any −1 < p < ∞,
∫₀^∞ x^p e^(−g(x)/x) dx ≤ e^(p+1) ∫₀^∞ x^p e^(−g′(x)) dx.
Carleman's inequality follows from the case p = 0.
Proof
An elementary proof is sketched below. From the inequality of arithmetic and geometric means applied to the numbers 1·a₁, 2·a₂, ..., n·aₙ,
MG(a₁, ..., aₙ) = MG(1a₁, 2a₂, ..., naₙ) (n!)^(−1/n) ≤ MA(1a₁, 2a₂, ..., naₙ) (n!)^(−1/n),
where MG stands for geometric mean, and MA for arithmetic mean. The Stirling-type inequality n! ≥ √(2πn) (n/e)ⁿ applied to n + 1 implies
(n!)^(−1/n) ≤ e/(n + 1) for all n ≥ 1.
Therefore,
Σ_{n≥1} MG(a₁, ..., aₙ) ≤ Σ_{n≥1} e/(n(n + 1)) · Σ_{1≤k≤n} k·aₖ,
whence, exchanging the order of summation and using Σ_{n≥k} 1/(n(n + 1)) = 1/k,
Σ_{n≥1} MG(a₁, ..., aₙ) ≤ e Σ_{k≥1} k·aₖ Σ_{n≥k} 1/(n(n + 1)) = e Σ_{k≥1} aₖ,
proving the inequality. Moreover, the inequality of arithmetic and geometric means of non-negative numbers is known to be an equality if and only if all the numbers coincide, that is, in the present case, if and only if aₖ = C/k for k = 1, ..., n and some constant C ≥ 0. As a consequence, Carleman's inequality is never an equality for a convergent series, unless all aₙ vanish, just because the harmonic series is divergent.
One can also prove Carleman's inequality by starting with Hardy's inequality
Σ_{n=1}^∞ ((a₁ + a₂ + ⋯ + aₙ)/n)^p ≤ (p/(p − 1))^p Σ_{n=1}^∞ aₙ^p
for the non-negative numbers a₁, a₂, ... and p > 1, replacing each aₙ with aₙ^(1/p), and letting p → ∞.
Versions for specific sequences
Christian Axler and Mehdi Hassani investigated Carleman's inequality for the specific cases of where is the th prime number. They also investigated the case where . They found that if one can replace with in Carleman's inequality, but that if then remained the best possible constant.
Notes
References
External links
Real analysis
Inequalities
|
https://en.wikipedia.org/wiki/Whitney%20conditions
|
In differential topology, a branch of mathematics, the Whitney conditions are conditions on a pair of submanifolds of a manifold introduced by Hassler Whitney in 1965.
A stratification of a topological space is a finite filtration by closed subsets Fi , such that the difference between successive members Fi and F(i − 1) of the filtration is either empty or a smooth submanifold of dimension i. The connected components of the difference Fi − F(i − 1) are the strata of dimension i. A stratification is called a Whitney stratification if all pairs of strata satisfy the Whitney conditions A and B, as defined below.
The Whitney conditions in Rn
Let X and Y be two disjoint (locally closed) submanifolds of Rn, of dimensions i and j.
X and Y satisfy Whitney's condition A if whenever a sequence of points x1, x2, … in X converges to a point y in Y, and the sequence of tangent i-planes Tm to X at the points xm converges to an i-plane T as m tends to infinity, then T contains the tangent j-plane to Y at y.
X and Y satisfy Whitney's condition B if for each sequence x1, x2, … of points in X and each sequence y1, y2, … of points in Y, both converging to the same point y in Y, such that the sequence of secant lines Lm between xm and ym converges to a line L as m tends to infinity, and the sequence of tangent i-planes Tm to X at the points xm converges to an i-plane T as m tends to infinity, then L is contained in T.
John Mather first pointed out that Whitney's condition B implies Whitney's condition A in the notes of his lectures at Harvard in 1970, which have been widely distributed. He also defined the notion of Thom–Mather stratified space, and proved that every Whitney stratification is a Thom–Mather stratified space and hence is a topologically stratified space. Another approach to this fundamental result was given earlier by René Thom in 1969.
David Trotman showed in his 1977 Warwick thesis that a stratification of a closed subset in a smooth manifold M satisfies Whitney's condition A if and only if the subspace of the space of smooth mappings from a smooth manifold N into M consisting of all those maps which are transverse to all of the strata of the stratification, is open (using the Whitney, or strong, topology). The subspace of mappings transverse to any countable family of submanifolds of M is always dense by Thom's transversality theorem. The density of the set of transverse mappings is often interpreted by saying that transversality is a 'generic' property for smooth mappings, while the openness is often interpreted by saying that the property is 'stable'.
The reason that Whitney conditions have become so widely used is because of Whitney's 1965 theorem that every algebraic variety, or indeed analytic variety, admits a Whitney stratification, i.e. admits a partition into smooth submanifolds satisfying the Whitney conditions. More general singular spaces can be given Whitney stratifications, such as semialgebraic sets (due to René Thom) and
|
https://en.wikipedia.org/wiki/Indicators%20of%20spatial%20association
|
Indicators of spatial association are statistics that evaluate the existence of clusters in the spatial arrangement of a given variable. For instance, if we are studying cancer rates among census tracts in a given city, local clusters in the rates mean that there are areas that have higher or lower rates than would be expected by chance alone; that is, the values occurring are above or below those of a random distribution in space.
Global indicators
Notable global indicators of spatial association include:
Global Moran's I: The most commonly used measure of global spatial autocorrelation or the overall clustering of the spatial data developed by Patrick Alfred Pierce Moran.
Geary's C (Geary's Contiguity Ratio): A measure of global spatial autocorrelation developed by Geary in 1954. It is inversely related to Moran's I, but more sensitive to local autocorrelation than Moran's I.
Getis–Ord G (Getis–Ord global G, General G-Statistic): Introduced by Getis and Ord in 1992 to supplement Moran's I.
Local indicators
Notable local indicators of spatial association (LISA) include:
Local Moran's I: Derived from Global Moran's I, it was introduced by Luc Anselin in 1995 and can be computed using GeoDa.
Getis–Ord Gi (local Gi): Developed by Getis and Ord based on their global G.
INDICATE's IN: Originally developed to assess the spatial behaviour of stars, it can be computed for any discrete 2+D dataset using the Python-based INDICATE tool available from GitHub.
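As an illustration of global spatial autocorrelation, a minimal pure-Python sketch of global Moran's I (the toy weight matrix and values are assumptions for the example, not from the article):

```python
def morans_i(x, w):
    """Global Moran's I for values x and a spatial weight matrix w (n x n, list of lists)."""
    n = len(x)
    mean = sum(x) / n
    z = [v - mean for v in x]                       # deviations from the mean
    s0 = sum(sum(row) for row in w)                 # total weight
    num = sum(w[i][j] * z[i] * z[j] for i in range(n) for j in range(n))
    den = sum(v * v for v in z)
    return n / s0 * num / den

# Toy example: four locations on a line, adjacent ones are neighbours
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
clustered = morans_i([1, 1, 9, 9], w)    # similar values are adjacent
alternating = morans_i([1, 9, 1, 9], w)  # dissimilar values are adjacent
assert clustered > 0 > alternating       # positive vs. negative autocorrelation
```

Positive values indicate clustering of similar values; negative values indicate a checkerboard-like arrangement.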
See also
Spatial analysis
Tobler's first law of geography
References
Further reading
Spatial analysis
|
https://en.wikipedia.org/wiki/J.%20Michael%20Steele
|
John Michael Steele is C.F. Koo Professor of Statistics at the Wharton School of the University of Pennsylvania, and he was previously affiliated with Stanford University, Columbia University and Princeton University.
Steele was elected the 2009 president of the Institute of Mathematical Statistics.
Awards
Source:
Fellow, Institute for Mathematical Statistics, 1984;
Fellow, American Statistical Association, 1989;
Frank Wilcoxon Prize, American Society for Quality Control and the American Statistical Association, 1990
Chauvenet Prize (with Vladimir Pozdnyakov), in 2020, for their paper "Buses, Bullies, and Bijections"
Books
References
External links
J. Michael Steele's homepage
20th-century American mathematicians
21st-century American mathematicians
Living people
Probability theorists
Year of birth missing (living people)
Fellows of the American Statistical Association
Presidents of the Institute of Mathematical Statistics
Wharton School of the University of Pennsylvania faculty
Stanford University Department of Statistics faculty
Columbia University faculty
Princeton University faculty
|
https://en.wikipedia.org/wiki/Aleksandr%20Kuchma
|
Alexander Kuchma (born 9 December 1980) is a former Kazakh football defender.
Career
In December 2014, Kuchma left FC Taraz.
Career statistics
International goals
References
External links
1980 births
Living people
Kazakhstani men's footballers
Men's association football defenders
Kazakhstan men's international footballers
FC Taraz players
FC Zhenis players
FC Kairat players
Ruch Chorzów players
FC Tobol players
FC Ordabasy players
FC Okzhetpes players
FC Irtysh Pavlodar players
SG Sonnenhof Großaspach players
Kazakhstan Premier League players
Kazakhstani expatriate men's footballers
Expatriate men's footballers in Poland
Kazakhstani expatriate sportspeople in Poland
Kazakhstani people of Ukrainian descent
Sportspeople from Taraz
|
https://en.wikipedia.org/wiki/Boltzmann%27s%20entropy%20formula
|
In statistical mechanics, Boltzmann's equation (also known as the Boltzmann–Planck equation) is a probability equation relating the entropy $S$, also written as $S_\mathrm{B}$, of an ideal gas to the multiplicity (commonly denoted as $\Omega$ or $W$), the number of real microstates corresponding to the gas's macrostate:
$$S = k_\mathrm{B} \ln W$$
where $k_\mathrm{B}$ is the Boltzmann constant (also written as simply $k$) and equal to 1.380649 × 10⁻²³ J/K, and $\ln$ is the natural logarithm function (also written as $\log$).
In short, the Boltzmann formula shows the relationship between entropy and the number of ways the atoms or molecules of a certain kind of thermodynamic system can be arranged.
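For intuition about the magnitudes involved, a minimal numeric sketch of $S = k_\mathrm{B} \ln W$ (the two-state toy system is an illustrative assumption, not from the article):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def boltzmann_entropy(ln_w):
    """S = k_B ln W, taking ln W directly since W is astronomically large."""
    return K_B * ln_w

# Toy example: N distinguishable two-state particles, so W = 2^N microstates
N = 6.022e23              # about one mole of particles
ln_W = N * math.log(2)    # ln(2^N) = N ln 2 -- W itself would overflow
S = boltzmann_entropy(ln_W)
print(f"S = {S:.3f} J/K")  # about 5.76 J/K
```

Computing with $\ln W$ rather than $W$ is essential: $2^{6\times10^{23}}$ cannot be represented as a float.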
History
The equation was originally formulated by Ludwig Boltzmann between 1872 and 1875, but later put into its current form by Max Planck in about 1900. To quote Planck, "the logarithmic connection between entropy and probability was first stated by L. Boltzmann in his kinetic theory of gases".
A 'microstate' is a state specified in terms of the constituent particles of a body of matter or radiation that has been specified as a macrostate in terms of such variables as internal energy and pressure. A macrostate is experimentally observable, with at least a finite extent in spacetime. A microstate can be instantaneous, or can be a trajectory composed of a temporal progression of instantaneous microstates. In experimental practice, such are scarcely observable. The present account concerns instantaneous microstates.
The value of was originally intended to be proportional to the Wahrscheinlichkeit (the German word for probability) of a macroscopic state for some probability distribution of possible microstates—the collection of (unobservable microscopic single particle) "ways" in which the (observable macroscopic) thermodynamic state of a system can be realized by assigning different positions and momenta to the respective molecules.
There are many instantaneous microstates that apply to a given macrostate. Boltzmann considered collections of such microstates. For a given macrostate, he called the collection of all possible instantaneous microstates of a certain kind by the name monode, for which Gibbs' term ensemble is used nowadays. For single particle instantaneous microstates, Boltzmann called the collection an ergode. Subsequently, Gibbs called it a microcanonical ensemble, and this name is widely used today, perhaps partly because Bohr was more interested in the writings of Gibbs than of Boltzmann.
Interpreted in this way, Boltzmann's formula is the most basic formula for the thermodynamic entropy. Boltzmann's paradigm was an ideal gas of $N$ identical particles, of which $N_i$ are in the $i$-th microscopic condition (range) of position and momentum. For this case, the probability of each microstate of the system is equal, so it was equivalent for Boltzmann to calculate the number of microstates associated with a macrostate. $W$ was historically misinterpreted as literally meaning the number of microstates, a
|
https://en.wikipedia.org/wiki/Shape%20of%20a%20probability%20distribution
|
In statistics, the concept of the shape of a probability distribution arises in questions of finding an appropriate distribution to use to model the statistical properties of a population, given a sample from that population. The shape of a distribution may be considered either descriptively, using terms such as "J-shaped", or numerically, using quantitative measures such as skewness and kurtosis.
Considerations of the shape of a distribution arise in statistical data analysis, where simple quantitative descriptive statistics and plotting techniques such as histograms can lead on to the selection of a particular family of distributions for modelling purposes.
Descriptions of shape
The shape of a distribution will fall somewhere in a continuum where a flat distribution might be considered central and where types of departure from this include: mounded (or unimodal), U-shaped, J-shaped, reverse-J shaped and multi-modal. A bimodal distribution would have two high points rather than one. The shape of a distribution is sometimes characterised by the behaviours of the tails (as in a long or short tail). For example, a flat distribution can be said either to have no tails, or to have short tails. A normal distribution is usually regarded as having short tails, while an exponential distribution has exponential tails and a Pareto distribution has long tails.
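The quantitative measures mentioned above can be computed directly; a minimal sketch of sample skewness as the (biased) third standardized moment, with toy samples chosen for illustration:

```python
def skewness(xs):
    """Sample skewness: third central moment divided by the 3/2 power of the second."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n  # variance (biased)
    m3 = sum((x - m) ** 3 for x in xs) / n  # third central moment
    return m3 / m2 ** 1.5

# A reverse-J-shaped (right-skewed) sample: many small values, few large ones
sample = [1] * 50 + [2] * 25 + [4] * 15 + [8] * 7 + [16] * 3
assert skewness(sample) > 0        # long right tail -> positive skewness
assert skewness([-1, 0, 1]) == 0   # symmetric sample -> zero skewness
```

Kurtosis is computed analogously from the fourth central moment, `m4 / m2 ** 2`.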
See also
Shape parameter
List of probability distributions
Notes
References
Yule, G.U., Kendall, M.G. (1950) An Introduction to the Theory of Statistics, 14th Edition (5th Impression, 1968), Griffin, London.
den Dekker A. J., Sijbers J., (2014) "Data distributions in magnetic resonance images: a review", Physica Medica
Theory of probability distributions
|
https://en.wikipedia.org/wiki/Visibility%20%28geometry%29
|
In geometry, visibility is a mathematical abstraction of the real-life notion of visibility.
Given a set of obstacles in the Euclidean space, two points in the space are said to be visible to each other, if the line segment that joins them does not intersect any obstacles. (In the Earth's atmosphere light follows a slightly curved path that is not perfectly predictable, complicating the calculation of actual visibility.)
Computation of visibility is among the basic problems in computational geometry and has applications in computer graphics, motion planning, and other areas.
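A minimal sketch of point-to-point visibility among segment obstacles (the function names and the restriction to proper, non-touching intersections are simplifying assumptions of this example, not from the article):

```python
def ccw(a, b, c):
    """Signed orientation of the triple (a, b, c); > 0 means counter-clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    """True if segments p1p2 and q1q2 properly intersect (crossing interiors)."""
    d1, d2 = ccw(q1, q2, p1), ccw(q1, q2, p2)
    d3, d4 = ccw(p1, p2, q1), ccw(p1, p2, q2)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def visible(p, q, obstacles):
    """p and q are mutually visible iff segment pq crosses no obstacle segment."""
    return not any(segments_cross(p, q, a, b) for a, b in obstacles)

wall = [((1.0, -1.0), (1.0, 1.0))]                # one vertical wall segment
assert not visible((0.0, 0.0), (2.0, 0.0), wall)  # the wall blocks this pair
assert visible((0.0, 0.0), (0.5, 2.0), wall)      # this pair sees each other
```

A visibility graph is then built by testing `visible` for every pair of vertices of the obstacle set; degenerate (collinear or endpoint-touching) cases need extra handling in a production implementation.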
Concepts and problems
Point visibility
Edge visibility
Visibility polygon
Weak visibility
Art gallery problem or museum problem
Visibility graph
Visibility graph of vertical line segments
Watchman route problem
Computer graphics applications:
Hidden surface determination
Hidden line removal
z-buffering
portal engine
Star-shaped polygon
Kernel of a polygon
Isovist
Viewshed
Zone of Visual Influence
Painter's algorithm
References
Chapter 15: "Visibility graphs"
External links
Software
VisiLibity: A free open source C++ library of floating-point visibility algorithms and supporting data types
Geometry
Geometric algorithms
|
https://en.wikipedia.org/wiki/Double%20colon
|
The double colon ( :: ) may refer to:
an analogy symbolism operator, in logic and mathematics
a notation for equality of ratios
a scope resolution operator, in computer programming languages
See also
Colon (punctuation)
|
https://en.wikipedia.org/wiki/L0
|
L0 may refer to:
Haplogroup L0 (mtDNA), a human mitochondrial DNA haplogroup
L0 norm, a norm in mathematics
L0 Series, a high-speed maglev train operated by the Japanese railway company JR Central
See also
Level 0 (disambiguation)
|
https://en.wikipedia.org/wiki/Journal%20of%20Industrial%20and%20Management%20Optimization
|
The Journal of Industrial and Management Optimization (JIMO) is an international journal published by the American Institute of Mathematical Sciences and sponsored by the Department of Mathematics and Statistics, Curtin University of Technology, and the Department of Mathematics, Zhejiang University. The journal publishes original research papers on the non-trivial interplay between numerical optimization methods and problems in industry or management. Its objective is to develop new optimization ideas for solving industrial and management problems by means of appropriate, advanced optimization techniques.
Its impact factor has been frequently ranked by SCImago as in the top quartile of business and international management journals.
References
External links
Home page
Journal of Industrial and Management Optimization
Academic journals established in 2005
|
https://en.wikipedia.org/wiki/Fujiki%20class%20C
|
In algebraic geometry, a complex manifold is said to be of Fujiki class $\mathcal{C}$ if it is bimeromorphic to a compact Kähler manifold. This notion was defined by Akira Fujiki.
Properties
Let M be a compact manifold of Fujiki class $\mathcal{C}$, and $X \subset M$ its complex subvariety. Then X is also in Fujiki class $\mathcal{C}$ (Lemma 4.6). Moreover, the Douady space of X (that is, the moduli of deformations of a subvariety $X \subset M$, M fixed) is compact and in Fujiki class $\mathcal{C}$.
Fujiki class $\mathcal{C}$ manifolds are examples of compact complex manifolds which are not necessarily Kähler, but for which the $\partial\bar\partial$-lemma holds.
Conjectures
J.-P. Demailly and M. Pǎun have shown that a manifold is in Fujiki class $\mathcal{C}$ if and only if it supports a Kähler current.
They also conjectured that a manifold M is in Fujiki class $\mathcal{C}$ if it admits a nef current which is big, that is, satisfies
For a cohomology class which is rational, this statement is known: by the Grauert–Riemenschneider conjecture, a holomorphic line bundle L with first Chern class
nef and big has maximal Kodaira dimension, hence the corresponding rational map to
is generically finite onto its image, which is algebraic, and therefore Kähler.
Fujiki and Ueno asked whether the property is stable under deformations. This conjecture was disproven in 1992 by Y.-S. Poon and Claude LeBrun.
References
Algebraic geometry
Complex manifolds
|
https://en.wikipedia.org/wiki/Title%2013%20of%20the%20United%20States%20Code
|
Title 13 of the United States Code outlines the role of the United States Census in the United States Code.
Chapters
: Administration
: Collection and Publication of Statistics
: Censuses
: Offenses and Penalties
: Collection and Publication of Foreign Commerce and Trade Statistics
: Exchange of Census Information
References
External links
U.S. Code Title 13, via United States Government Printing Office
U.S. Code Title 13, via Cornell University
13
|
https://en.wikipedia.org/wiki/Imperfect%20induction
|
The imperfect induction is the process of inferring from a sample of a group to what is characteristic of the whole group.
References
Sampling (statistics)
Inductive reasoning
|
https://en.wikipedia.org/wiki/Philo%20line
|
In geometry, the Philo line is a line segment defined from an angle and a point inside the angle as the shortest line segment through the point that has its endpoints on the two sides of the angle. Also known as the Philon line, it is named after Philo of Byzantium, a Greek writer on mechanical devices, who lived probably during the 1st or 2nd century BC. Philo used the line to double the cube; because doubling the cube cannot be done by a straightedge and compass construction, neither can constructing the Philo line.
Geometric characterization
The defining point of a Philo line, and the base of a perpendicular from the apex of the angle to the line, are equidistant from the endpoints of the line.
That is, suppose that segment $DE$ is the Philo line for point $P$ and angle $DOE$, and let $Q$ be the base of a perpendicular line $OQ$ to $DE$. Then $DP = EQ$ and $DQ = EP$.
Conversely, if $P$ and $Q$ are any two points equidistant from the ends of a line segment $DE$, and if $O$ is any point on the line through $Q$ that is perpendicular to $DE$, then $DE$ is the Philo line for angle $DOE$ and point $P$.
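The defining minimisation can be illustrated numerically; a sketch with an assumed concrete angle (between the x-axis and the line y = x) and interior point, both chosen for this example rather than taken from the article:

```python
import math

def segment_length(t, px=1.0, py=0.5):
    """Length of the chord through P = (px, py) with slope t, cut off by the
    angle bounded by the x-axis and the line y = x (t must be negative)."""
    x_axis = (px - py / t, 0.0)            # intersection with y = 0
    xd = (py - t * px) / (1.0 - t)         # x of intersection with y = x
    return math.dist(x_axis, (xd, xd))

# Ternary search for the minimising slope: its chord is the Philo line here
lo, hi = -50.0, -0.01
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if segment_length(m1) < segment_length(m2):
        hi = m2
    else:
        lo = m1
t_star = (lo + hi) / 2
# Any other slope through P gives a longer chord
assert segment_length(t_star) <= min(segment_length(-0.3), segment_length(-3.0))
```

The search exploits that the chord length is unimodal in the slope for this configuration; the article's point is that no straightedge-and-compass construction can replace such a minimisation in general.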
Algebraic Construction
A suitable fixation of the line given the directions from to and from to and the location of in that infinite triangle is obtained by the following algebra:
The point is put into the center of the coordinate system, the direction from to defines the horizontal -coordinate, and the direction from to defines the line with the equation in the rectilinear coordinate system. is the tangent of the angle in the triangle . Then has the Cartesian Coordinates and the task is to find on the horizontal axis and on the other side of the triangle.
The equation of a bundle of lines with inclinations that
run through the point is
These lines intersect the horizontal axis at
which has the solution
These lines intersect the opposite side at
which has the solution
The squared Euclidean distance between the intersections of the horizontal line
and the diagonal is
The Philo Line is defined by the minimum of that distance at
negative .
An arithmetic expression for the location of the minimum
is obtained by setting the derivative ,
so
So calculating the root of the polynomial in the numerator,
determines the slope of the particular line in the line bundle which has the shortest length.
[The global minimum at inclination from the root of the other factor is not of interest; it does not define a triangle but means
that the horizontal line, the diagonal and the line of the bundle all intersect at .]
is the tangent of the angle .
Inverting the equation above as and plugging this into the previous equation
one finds that is a root of the cubic polynomial
So solving that cubic equation finds the intersection of the Philo line on the horizontal axis.
Plugging in the same expression into the expression for the squared distance gives
Location of
Since the line is orthogonal to , its slope is , so the points on that line are . The coordinates of the point are calculated
|
https://en.wikipedia.org/wiki/Friedlander%E2%80%93Iwaniec%20theorem
|
In analytic number theory the Friedlander–Iwaniec theorem states that there are infinitely many prime numbers of the form $a^2 + b^4$. The first few such primes are
2, 5, 17, 37, 41, 97, 101, 137, 181, 197, 241, 257, 277, 281, 337, 401, 457, 577, 617, 641, 661, 677, 757, 769, 821, 857, 881, 977, … .
The difficulty in this statement lies in the very sparse nature of this sequence: the number of integers of the form $a^2 + b^4$ less than $X$ is roughly of the order $X^{3/4}$.
History
The theorem was proved in 1997 by John Friedlander and Henryk Iwaniec. Iwaniec was awarded the 2001 Ostrowski Prize in part for his contributions to this work.
Refinements
The theorem was refined by D.R. Heath-Brown and Xiannan Li in 2017. In particular, they proved that the polynomial $a^2 + b^4$ represents infinitely many primes when the variable $b$ is also required to be prime. Namely, if is the prime numbers less than in the form then
where
Special case
When $b = 1$, the Friedlander–Iwaniec primes have the form $a^2 + 1$, forming the set
2, 5, 17, 37, 101, 197, 257, 401, 577, 677, 1297, 1601, 2917, 3137, 4357, 5477, 7057, 8101, 8837, 12101, 13457, 14401, 15377, … .
It is conjectured (one of Landau's problems) that this set is infinite. However, this is not implied by the Friedlander–Iwaniec theorem.
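The sequence above can be regenerated by brute force; a short sketch (the search limit is an arbitrary choice for the example):

```python
def is_prime(n):
    """Trial-division primality test; adequate for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def friedlander_iwaniec_primes(limit):
    """Primes of the form a^2 + b^4 below `limit`, in increasing order."""
    found = set()
    b = 1
    while b ** 4 < limit:
        a = 1
        while a * a + b ** 4 < limit:
            n = a * a + b ** 4
            if is_prime(n):
                found.add(n)  # a set de-duplicates multiple representations
            a += 1
        b += 1
    return sorted(found)

print(friedlander_iwaniec_primes(100))  # [2, 5, 17, 37, 41, 97]
```

Restricting to `b == 1` in the inner loop instead yields the $a^2 + 1$ primes of the conjectured-infinite special case.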
References
Further reading
.
Additive number theory
Theorems in analytic number theory
Theorems about prime numbers
|
https://en.wikipedia.org/wiki/Flag%20%28geometry%29
|
In (polyhedral) geometry, a flag is a sequence of faces of a polytope, each contained in the next, with exactly one face from each dimension.
More formally, a flag $\psi$ of an $n$-polytope is a set $\{F_{-1}, F_0, \dots, F_n\}$ of faces such that $F_i \le F_{i+1}$ for $-1 \le i \le n-1$, and there is precisely one face $F_i$ in $\psi$ for each dimension $i$. Since, however, the minimal face $F_{-1}$ and the maximal face $F_n$ must be in every flag, they are often omitted from the list of faces, as a shorthand. These latter two are called improper faces.
For example, a flag of a polyhedron comprises one vertex, one edge incident to that vertex, and one polygonal face incident to both, plus the two improper faces.
A polytope may be regarded as regular if, and only if, its symmetry group is transitive on its flags. This definition excludes chiral polytopes.
Incidence geometry
In the more abstract setting of incidence geometry, which is a set having a symmetric and reflexive relation called incidence defined on its elements, a flag is a set of elements that are mutually incident. This level of abstraction generalizes both the polyhedral concept given above as well as the related flag concept from linear algebra.
A flag is maximal if it is not contained in a larger flag. An incidence geometry (Ω, $I$) has rank $r$ if Ω can be partitioned into sets Ω₁, Ω₂, ..., Ω_r, such that each maximal flag of the geometry intersects each of these sets in exactly one element. In this case, the elements of set Ω_i are called elements of type $i$.
Consequently, in a geometry of rank $r$, each maximal flag has exactly $r$ elements.
An incidence geometry of rank 2 is commonly called an incidence structure with elements of type 1 called points and elements of type 2 called blocks (or lines in some situations). More formally,
An incidence structure is a triple D = (V, B, I) where V and B are any two disjoint sets and I is a binary relation between V and B, that is, I ⊆ V × B. The elements of V will be called points, those of B blocks and those of I flags.
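As a concrete check of the flag concept, the proper flags (vertex ⊂ edge ⊂ face) of a cube can be enumerated directly; the coordinate representation below is an illustrative choice, not from the article:

```python
from itertools import product

# Cube combinatorics: vertices are 0/1 triples, edges are pairs differing in
# exactly one coordinate, faces are the vertex sets with one coordinate fixed.
vertices = list(product((0, 1), repeat=3))
edges = [frozenset((v, w)) for v in vertices for w in vertices
         if v < w and sum(a != b for a, b in zip(v, w)) == 1]
faces = [frozenset(v for v in vertices if v[axis] == val)
         for axis in range(3) for val in (0, 1)]

# A proper flag is a mutually incident triple: vertex on edge, edge on face.
flags = [(v, e, f) for v in vertices for e in edges for f in faces
         if v in e and e <= f]
print(len(flags))  # 48 = the order of the cube's full symmetry group
```

The count 48 reflects the regularity criterion stated above: the cube's symmetry group acts transitively (here, simply transitively) on its flags.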
Notes
References
Peter R. Cromwell, Polyhedra, Cambridge University Press 1997,
Peter McMullen, Egon Schulte, Abstract Regular Polytopes, Cambridge University Press, 2002.
Incidence geometry
Polygons
Polyhedra
4-polytopes
|
https://en.wikipedia.org/wiki/Sophomore%27s%20dream
|
In mathematics, the sophomore's dream is the pair of identities (especially the first)
$$\int_0^1 x^{-x}\,\mathrm{d}x = \sum_{n=1}^\infty n^{-n} \qquad\text{and}\qquad \int_0^1 x^{x}\,\mathrm{d}x = \sum_{n=1}^\infty (-1)^{n+1} n^{-n} = -\sum_{n=1}^\infty (-n)^{-n},$$
discovered in 1697 by Johann Bernoulli.
The numerical values of these constants are approximately 1.291285997... and 0.7834305107..., respectively.
The name "sophomore's dream" is in contrast to the name "freshman's dream" which is given to the incorrect identity $(x+y)^n = x^n + y^n$. The sophomore's dream has a similar too-good-to-be-true feel, but is true.
Proof
The proofs of the two identities are completely analogous, so only the proof of the second is presented here.
The key ingredients of the proof are:
to write $x^x = \exp(x \ln x)$ (using the notation $\ln$ for the natural logarithm and $\exp$ for the exponential function);
to expand $\exp(x \ln x)$ using the power series for $\exp$; and
to integrate termwise, using integration by substitution.
In details, $x^x$ can be expanded as
$$x^x = \exp(x \ln x) = \sum_{n=0}^\infty \frac{x^n (\ln x)^n}{n!}.$$
Therefore,
$$\int_0^1 x^x\,\mathrm{d}x = \int_0^1 \sum_{n=0}^\infty \frac{x^n (\ln x)^n}{n!}\,\mathrm{d}x.$$
By uniform convergence of the power series, one may interchange summation and integration to yield
$$\int_0^1 x^x\,\mathrm{d}x = \sum_{n=0}^\infty \int_0^1 \frac{x^n (\ln x)^n}{n!}\,\mathrm{d}x.$$
To evaluate the above integrals, one may change the variable in the integral via the substitution $x = \exp\left(-\frac{u}{n+1}\right).$ With this substitution, the bounds of integration are transformed to $0 < u < \infty,$ giving the identity
$$\int_0^1 x^n (\ln x)^n\,\mathrm{d}x = \frac{(-1)^n}{(n+1)^{n+1}} \int_0^\infty u^n \mathrm{e}^{-u}\,\mathrm{d}u.$$
By Euler's integral identity for the Gamma function, one has
$$\int_0^\infty u^n \mathrm{e}^{-u}\,\mathrm{d}u = n!,$$
so that
$$\int_0^1 \frac{x^n (\ln x)^n}{n!}\,\mathrm{d}x = \frac{(-1)^n}{(n+1)^{n+1}}.$$
Summing these (and changing indexing so it starts at $n = 1$ instead of $n = 0$) yields the formula.
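Both identities can be verified numerically; a quick sketch (the truncation depth of the series and the grid size of the midpoint rule are arbitrary choices):

```python
# Right-hand sides: the rapidly converging series
s1 = sum(n ** -n for n in range(1, 20))                    # sum of n^{-n}
s2 = sum((-1) ** (n + 1) * n ** -n for n in range(1, 20))  # alternating version

# Left-hand sides: midpoint-rule approximation of the integrals over (0, 1)
N = 200_000
xs = [(k + 0.5) / N for k in range(N)]
i1 = sum(x ** -x for x in xs) / N  # integral of x^{-x}
i2 = sum(x ** x for x in xs) / N   # integral of x^{x}

assert abs(i1 - s1) < 1e-4 and abs(i2 - s2) < 1e-4
print(round(i1, 6), round(i2, 6))  # close to 1.291286 and 0.783431
```

The series converge so fast (the $n = 19$ term is below $10^{-24}$) that twenty terms already exceed floating-point precision.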
Historical proof
The original proof, given in Bernoulli, and presented in modernized form in Dunham, differs from the one above in how the termwise integral is computed, but is otherwise the same, omitting technical details to justify steps (such as termwise integration). Rather than integrating by substitution, yielding the Gamma function (which was not yet known), Bernoulli used integration by parts to iteratively compute these terms.
The integration by parts proceeds as follows, varying the two exponents independently to obtain a recursion. An indefinite integral is computed initially, omitting the constant of integration both because this was done historically, and because it drops out when computing the definite integral.
Integrating by parts with $u = (\ln x)^n$ and $\mathrm{d}v = x^m\,\mathrm{d}x$ yields:
$$\int x^m (\ln x)^n\,\mathrm{d}x = \frac{x^{m+1}}{m+1}(\ln x)^n - \frac{n}{m+1}\int x^m (\ln x)^{n-1}\,\mathrm{d}x \qquad (m \neq -1)$$
(also in the list of integrals of logarithmic functions). This reduces the power on the logarithm in the integrand by 1 (from $n$ to $n-1$) and thus one can compute the integral inductively, as
$$\int x^m (\ln x)^n\,\mathrm{d}x = \frac{x^{m+1}}{m+1} \sum_{i=0}^n (-1)^i \frac{(n)_i}{(m+1)^i} (\ln x)^{n-i},$$
where $(n)_i = n(n-1)\cdots(n-i+1)$ denotes the falling factorial; there is a finite sum because the induction stops at 0, since $n$ is an integer.
In this case $m = n$, and they are integers, so
$$\int x^n (\ln x)^n\,\mathrm{d}x = \frac{x^{n+1}}{n+1} \sum_{i=0}^n (-1)^i \frac{(n)_i}{(n+1)^i} (\ln x)^{n-i}.$$
Integrating from 0 to 1, all the terms vanish except the last term at 1, which yields:
$$\int_0^1 x^n (\ln x)^n\,\mathrm{d}x = \frac{(-1)^n}{(n+1)^{n+1}}\, n!.$$
This is equivalent to computing Euler's integral identity for the Gamma function on a different domain (corresponding to changing variables by substitution), as Euler's identity itself can also be computed via an analogous integration by parts.
See also
Series (mathematics)
Notes
References
Formula
OEIS, and
Max R. P. Grossmann (2017): Sophomore's dream. 1,000,000 digits of the first constant
Function
Literature for x^x and Sophomore's Dream, Tetration Forum, 03/02/2
|
https://en.wikipedia.org/wiki/Zalman%20Usiskin
|
Zalman Usiskin is an educator best known as the Director of the University of Chicago School Mathematics Project.
He was born to Nathan and Esther Usiskin.
A faculty member since 1969, he also has taught junior and senior high-school mathematics and has authored and co-authored many textbooks, including a six-volume series used as part of the University of Chicago School Mathematics Project secondary curriculum. In recognition of his work, he has received a Lifetime Achievement Award from the National Council of Teachers of Mathematics.
Usiskin's doctoral dissertation in mathematical education at the University of Michigan involved the field testing of his book, Geometry: A Transformation Approach, which was written with Arthur Coxford. This book has greatly influenced the way geometry is taught in many American high schools, according to the NCTM citation. With the founding of UCSMP in 1983, he became Director of the secondary component and has been the project's overall Director since 1987. The University of Chicago School Mathematics Project has grown to become the nation's largest university-based curriculum project for kindergarten through 12th-grade mathematics, with several million students using its elementary and secondary textbooks and other materials.
References
Profile at the University of Chicago
http://genealogy.math.ndsu.nodak.edu/id.php?id=130631
20th-century American mathematicians
21st-century American mathematicians
Mathematics educators
University of Michigan School of Education alumni
University of Chicago faculty
Year of birth missing (living people)
Living people
|
https://en.wikipedia.org/wiki/Jackson%27s%20inequality
|
In approximation theory, Jackson's inequality is an inequality bounding the value of function's best approximation by algebraic or trigonometric polynomials in terms of the modulus of continuity or modulus of smoothness of the function or of its derivatives. Informally speaking, the smoother the function is, the better it can be approximated by polynomials.
Statement: trigonometric polynomials
For trigonometric polynomials, the following was proved by Dunham Jackson:
Theorem 1: If $f:[0,2\pi]\to\mathbb{C}$ is an $r$ times differentiable periodic function such that
$$\left|f^{(r)}(x)\right| \le 1, \qquad x \in [0, 2\pi],$$
then, for every positive integer $n$, there exists a trigonometric polynomial $T_{n-1}$ of degree at most $n-1$ such that
$$\left|f(x) - T_{n-1}(x)\right| \le \frac{C(r)}{n^r}, \qquad x \in [0, 2\pi],$$
where $C(r)$ depends only on $r$.
The Akhiezer–Krein–Favard theorem gives the sharp value of $C(r)$ (called the Akhiezer–Krein–Favard constant):
$$C(r) = \frac{4}{\pi} \sum_{k=0}^\infty \frac{(-1)^{k(r+1)}}{(2k+1)^{r+1}}.$$
Jackson also proved the following generalisation of Theorem 1:
Theorem 2: One can find a trigonometric polynomial $T_n$ of degree $n$ such that
$$|f(x) - T_n(x)| \le \frac{c(r)\,\omega\!\left(\frac{1}{n}, f^{(r)}\right)}{n^r},$$
where $\omega(\delta, g)$ denotes the modulus of continuity of function $g$ with the step $\delta.$
An even more general result of four authors can be formulated as the following Jackson theorem.
Theorem 3: For every natural number $n$, if $f$ is a $2\pi$-periodic continuous function, there exists a trigonometric polynomial $T_n$ of degree $n$ such that
$$|f(x) - T_n(x)| \le c(k)\,\omega_k\!\left(\frac{1}{n}, f\right), \qquad x \in [0, 2\pi],$$
where the constant $c(k)$ depends on $k \in \mathbb{N}$, and $\omega_k$ is the $k$-th order modulus of smoothness.
For $k = 1$ this result was proved by Dunham Jackson. Antoni Zygmund proved the inequality in the case when in 1945. Naum Akhiezer proved the theorem in the case in 1956. For this result was established by Sergey Stechkin in 1967.
Further remarks
Generalisations and extensions are called Jackson-type theorems. A converse to Jackson's inequality is given by Bernstein's theorem. See also constructive function theory.
References
External links
Approximation theory
Inequalities
Theorems in approximation theory
|
https://en.wikipedia.org/wiki/Lycksele%20Airport
|
Lycksele Airport is a regional airport in Lycksele, northern Sweden.
Airlines and destinations
The following airlines operate regular scheduled and charter flights at Lycksele Airport:
Statistics
See also
List of the largest airports in the Nordic countries
References
Airports in Sweden
Buildings and structures in Västerbotten County
|
https://en.wikipedia.org/wiki/Sveg%20Airport
|
Sveg Airport is an airport in Sveg, Sweden.
Airlines and destinations
The following airlines operate regular scheduled and charter flights at Sveg Airport:
Statistics
See also
List of the largest airports in the Nordic countries
References
Airports in Sweden
|
https://en.wikipedia.org/wiki/Torsby%20Airport
|
Torsby Airport is an airport in Torsby, Sweden.
Airlines and destinations
The following airlines operate regular scheduled and charter flights at Torsby Airport:
Statistics
See also
List of the largest airports in the Nordic countries
References
External links
Airports in Sweden
|
https://en.wikipedia.org/wiki/Batman%20Airport
|
Batman Airport is an airport in Batman, Turkey.
Airlines and destinations
Traffic statistics
References
External links
Airport Profile
Airports in Turkey
Buildings and structures in Batman Province
Transport in Batman Province
|
https://en.wikipedia.org/wiki/Erzincan%20Airport
|
Erzincan Yıldırım Akbulut Airport is an airport located in Erzincan, Turkey.
Airlines and destinations
Traffic Statistics
(*)Source: DHMI.gov.tr
References
External links
Airports in Turkey
Buildings and structures in Erzincan Province
Transport in Erzincan Province
|
https://en.wikipedia.org/wiki/Mu%C5%9F%20Airport
|
Muş "Sultan Alparslan" Airport is an airport in Muş, Turkey.
Airlines and destinations
The following airlines operate regular scheduled and charter flights at Muş Airport:
Traffic Statistics
(*)Source: DHMI.gov.tr
References
External links
Airports in Turkey
Muş
Buildings and structures in Muş Province
Transport in Muş Province
|
https://en.wikipedia.org/wiki/Dikran%20Tahta
|
Dikran Tahta (, 7 August 1928 – 2 December 2006) was a British mathematician, teacher and author. He was also the maths teacher of Stephen Hawking.
Early life
Dikran Tahta was a descendant of an Istanbul-based Armenian family of cotton merchants. His father, Kevork Tahtabrounian (1895–1980), settled in Manchester with his wife in 1927, after the First World War and the Armenian genocide, shortening his surname to Tahta. Kevork ran a branch of the business, which took the name Manchester Textile Exporters, and was able to donate £100,000 to the Armenian Community Council, which enabled the community to set up a trust for the benefit of the community's religious, educational and cultural needs, including the school named after Kevork Tahta.
Much of Dikran's childhood, including the influence of his Armenian religious upbringing, is reflected in his penultimate book Ararat Associations. Dikran remembers his "father, who would be standing, like the other males, with open arms extended in their own way of praying. Kneeling was for women and children". In the book, he notes how his parents were keen for their children to have an English education, yet to speak Armenian at home. Dikran was christened by Bishop Tourian in the Holy Trinity Armenian Church in Manchester, and his first name Dikran was shortened to Dick. He never forgot his Armenian roots. In his childhood, he would visit his relatives in Istanbul every other year.
From Rossall School, in Fleetwood, Lancashire, Dikran gained a scholarship to Christ Church, Oxford, in 1946. His main subject was Mathematics, but he also read widely in English literature, philosophy and history.
Career
After graduating and before beginning his national service, Tahta took time out to catalogue the library of the late Archbishop Matheos Indjeian (1877–1950), and read a number of his books.
Tahta did national service in the RAF from 1950 to 1952, then after a brief foray into journalism, he returned to Rossall School in 1954, where he began teaching English and History. In 1955, he moved to teach mathematics at St Albans School, Hertfordshire, where the young Stephen Hawking was a pupil. When asked later to name a teacher who had inspired him, Hawking named "Mr Tahta". Tahta remained at St Albans for six years before taking up the post of lecturer in mathematics education at St Luke's College, Exeter, in 1961. By 1974, he was a Mathematics tutor at the University of Exeter's School of Education at Thornlea, on the New North Road. In 1978, the School of Education merged with St Luke's College to form the University's Department of Education. He remained there until his retirement in late 1981.
In the 1970s Tahta was involved in the ATV television programme of mathematics for schools entitled 'Leapfrogs' (produced and directed by Paul Martin), promoting visual approaches to mathematics. His paper "On Geometry" argued that geometrical approaches to mathematics could not be reduced to algebraic app
|
https://en.wikipedia.org/wiki/Error%20analysis
|
Error analysis can refer to one of the following:
Error analysis (mathematics) is concerned with the changes in the output of the model as the parameters to the model vary about a mean.
Error analysis (linguistics) studies the types and causes of language errors.
Error analysis for the Global Positioning System
"Error analysis" is sometimes used for engineering practices such as described under Fault tree analysis.
|
https://en.wikipedia.org/wiki/Logarithmically%20concave%20measure
|
In mathematics, a Borel measure μ on n-dimensional Euclidean space R^n is called logarithmically concave (or log-concave for short) if, for any compact subsets A and B of R^n and 0 < λ < 1, one has
μ(λ A + (1 − λ) B) ≥ μ(A)^λ μ(B)^(1−λ),
where λ A + (1 − λ) B denotes the Minkowski sum of λ A and (1 − λ) B.
Examples
The Brunn–Minkowski inequality asserts that the Lebesgue measure is log-concave. The restriction of the Lebesgue measure to any convex set is also log-concave.
By a theorem of Borell, a probability measure on R^d is log-concave if and only if it has a density with respect to the Lebesgue measure on some affine hyperplane, and this density is a logarithmically concave function. Thus, any Gaussian measure is log-concave.
The Prékopa–Leindler inequality shows that a convolution of log-concave measures is log-concave.
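As a numerical sketch (not part of the source article), one can check the defining inequality for the standard Gaussian measure on intervals of the real line, where the Minkowski combination of two intervals is again an interval; the helper names below are illustrative:

```python
from math import erf, sqrt

def gauss_measure(a, b):
    """Standard Gaussian measure of the interval [a, b] via the CDF Phi."""
    phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return phi(b) - phi(a)

def log_concavity_holds(A, B, lam):
    """Check mu(lam*A + (1-lam)*B) >= mu(A)**lam * mu(B)**(1-lam)
    for intervals A and B; the Minkowski combination of two intervals
    is again an interval, combined endpoint-wise."""
    mix = (lam * A[0] + (1 - lam) * B[0], lam * A[1] + (1 - lam) * B[1])
    lhs = gauss_measure(*mix)
    rhs = gauss_measure(*A) ** lam * gauss_measure(*B) ** (1 - lam)
    return lhs >= rhs

print(log_concavity_holds((0.0, 1.0), (2.0, 4.0), 0.5))  # True
```

This only exercises the inequality on a few sets; Borell's theorem guarantees it for all compact sets, since the Gaussian density is log-concave.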
See also
Convex measure, a generalisation of this concept
Logarithmically concave function
References
Measures (measure theory)
|
https://en.wikipedia.org/wiki/Title%2029%20of%20the%20United%20States%20Code
|
Title 29 of the United States Code is a code that outlines labor regulations in the United States.
Code Chapters
Title 29 has 35 chapters:
: Labor Statistics
: Women's Bureau
. Children's Bureau (Transferred)
. National Trade Unions (Repealed)
. Vocational Rehabilitation of Persons Injured in Industry
. Employment Stabilization (Omitted or Repealed)
. Federal Employment Service
. Apprentice Labor
. Labor Disputes; Mediation and Injunctive Relief
. Jurisdiction of Courts in Matters Affecting Employer and Employee
: Labor-Management Relations
. Fair Labor Standards
. Portal-To-Portal Pay
. Disclosure of Welfare and Pension Plans (Repealed)
. Labor-Management Reporting and Disclosure Procedure
. Department of Labor
. Exemplary Rehabilitation Certificates (Repealed)
. Age Discrimination in Employment
. Occupational Safety and Health
. Vocational Rehabilitation and Other Rehabilitation Services
. Comprehensive Employment and Training Programs (Repealed)
. Employee Retirement Income Security Program
. Job Training Partnership (Repealed, Transferred, or Omitted)
. Migrant and Seasonal Agricultural Worker Protection
. Helen Keller National Center for Youths and Adults Who Are Deaf-Blind
. Employee Polygraph Protection
. Worker Adjustment and Retraining Notification
. Technology Related Assistance for Individuals With Disabilities (Repealed)
. Displaced Homemakers Self-Sufficiency Assistance (Repealed)
. National Center for the Workplace (Repealed)
. Women in Apprenticeship and Nontraditional Occupations
. Family and Medical Leave
. Workers Technology Skill Development
. Workforce Investment Systems
. Assistive Technology For Individuals With Disabilities
References
External links
U.S. Code Title 29, via United States Government Printing Office
U.S. Code Title 29, via Cornell University
29
Title 29
|
https://en.wikipedia.org/wiki/Eric%20Bach
|
Eric Bach is an American computer scientist who has made contributions to computational number theory.
Bach completed his undergraduate studies at the University of Michigan, Ann Arbor, and got his Ph.D. in computer science from the University of California, Berkeley, in 1984 under the supervision of Manuel Blum. He is currently a professor at the Computer Science Department, University of Wisconsin–Madison.
Among other work, he gave explicit bounds for the Chebotarev density theorem, which imply that if one assumes the generalized Riemann hypothesis then the multiplicative group (Z/nZ)* is generated by its elements smaller than 2(log n)^2. This result shows that the generalized Riemann hypothesis implies tight bounds for the necessary run-time of the deterministic version of the Miller–Rabin primality test. Bach also did some of the first work on pinning down the actual expected run-time of the Pollard rho method, where previous work relied on heuristic estimates and empirical data. He is the namesake of Bach's algorithm for generating random factored numbers.
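A sketch of the GRH-conditional deterministic Miller–Rabin test that this bound enables (illustrative, not Bach's own code): if every base a ≤ 2(ln n)^2 passes the strong probable-prime test, then n is prime, assuming the generalized Riemann hypothesis.

```python
from math import log

def is_strong_probable_prime(n, a):
    """One Miller-Rabin round: is n a strong probable prime to base a?"""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

def is_prime_grh(n):
    """Deterministic Miller-Rabin, correct if GRH holds: by Bach's bound
    it suffices to test every base a <= 2*(ln n)^2."""
    if n < 2:
        return False
    for p in (2, 3):
        if n % p == 0:
            return n == p
    bound = min(n - 1, int(2 * log(n) ** 2))
    return all(is_strong_probable_prime(n, a) for a in range(2, bound + 1))

print([n for n in range(2, 40) if is_prime_grh(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
```

In practice randomized Miller–Rabin or the unconditional AKS test is used instead; the point here is only that Bach's explicit bound turns the witness search into a provably finite loop under GRH.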
References
American computer scientists
Living people
University of Michigan alumni
UC Berkeley College of Engineering alumni
University of Wisconsin–Madison faculty
Year of birth missing (living people)
Number theorists
|
https://en.wikipedia.org/wiki/General%20set%20theory
|
General set theory (GST) is George Boolos's (1998) name for a fragment of the axiomatic set theory Z. GST is sufficient for all mathematics not requiring infinite sets, and is the weakest known set theory whose theorems include the Peano axioms.
Ontology
The ontology of GST is identical to that of ZFC, and hence is thoroughly canonical. GST features a single primitive ontological notion, that of set, and a single ontological assumption, namely that all individuals in the universe of discourse (hence all mathematical objects) are sets. There is a single primitive binary relation, set membership; that set a is a member of set b is written a ∈ b (usually read "a is an element of b").
Axioms
The symbolic axioms below are from Boolos (1998: 196), and govern how sets behave and interact.
As with Z, the background logic for GST is first order logic with identity. Indeed, GST is the fragment of Z obtained by omitting the axioms Union, Power Set, Elementary Sets (essentially Pairing) and Infinity and then taking a theorem of Z, Adjunction, as an axiom.
The natural language versions of the axioms are intended to aid the intuition.
1) Axiom of Extensionality: The sets x and y are the same set if they have the same members.
The converse of this axiom follows from the substitution property of equality.
2) Axiom Schema of Specification (or Separation or Restricted Comprehension): If z is a set and φ is any property which may be satisfied by all, some, or no elements of z, then there exists a subset y of z containing just those elements x in z which satisfy the property φ. The restriction to z is necessary to avoid Russell's paradox and its variants. More formally, let φ(x) be any formula in the language of GST in which x may occur freely and y does not. Then all instances of the following schema are axioms:
∃y∀x [x ∈ y ↔ (x ∈ z ∧ φ(x))].
3) Axiom of Adjunction: If x and y are sets, then there exists a set w, the adjunction of x and y, whose members are just y and the members of x.
Adjunction refers to an elementary operation on two sets, and has no bearing on the use of that term elsewhere in mathematics, including in category theory.
ST is GST with the axiom schema of specification replaced by the axiom of empty set.
Discussion
Metamathematics
Note that Specification is an axiom schema. The theory given by these axioms is not finitely axiomatizable. Montague (1961) showed that ZFC is not finitely axiomatizable, and his argument carries over to GST. Hence any axiomatization of GST must include at least one axiom schema.
With its simple axioms, GST is also immune to the three great antinomies of naïve set theory: Russell's, Burali-Forti's, and Cantor's.
GST is interpretable in relation algebra because no part of any GST axiom lies in the scope of more than three quantifiers. This is the necessary and sufficient condition given in Tarski and Givant (1987).
Peano arithmetic
Setting φ(x) in Separation to x≠x, and assuming that the domain is nonempty, assures the existence of the em
|
https://en.wikipedia.org/wiki/Jean-Charles%20Faug%C3%A8re
|
Jean-Charles Faugère is the head of the POLSYS project-team (Solvers for Algebraic Systems and Applications) of the Laboratoire d'Informatique de Paris 6 (LIP6) and Paris–Rocquencourt center of INRIA, in Paris. The team was formerly known as SPIRAL and SALSA.
Faugère obtained his Ph.D. in mathematics in 1994 at the University of Paris VI, with the dissertation "Résolution des systemes d’équations algébriques" (Solving systems of algebraic equations), under the supervision of Daniel Lazard.
He works on Gröbner bases and their applications, in particular, in cryptology. With his collaborators, he has devised the FGLM algorithm for computing Gröbner bases; he has also introduced the F4 and F5 algorithms for calculating Gröbner bases. In particular, his F5 algorithm allowed him to solve various problems in cryptography such as HFE; he also introduced a new type of cryptanalysis, called algebraic cryptanalysis.
Notes
External links
POLSYS web site
The old SPIRAL web site
The old SALSA web site
Jean-Charles Faugère's page
Living people
French mathematicians
Pierre and Marie Curie University alumni
Year of birth missing (living people)
French computer scientists
|
https://en.wikipedia.org/wiki/Jari%20Tolsa
|
Jari Juha Tolsa (born April 20, 1981) is a Swedish professional ice hockey left winger who plays for Varberg Vipers in the Swedish Division 2.
Career statistics
Regular season and playoffs
External links
1981 births
Detroit Red Wings draft picks
Espoo Blues players
Frölunda HC players
Living people
Modo Hockey players
Swedish people of Finnish descent
Swedish ice hockey left wingers
Ice hockey people from Gothenburg
|
https://en.wikipedia.org/wiki/Support%20function
|
In mathematics, the support function hA of a non-empty closed convex set A in R^n
describes the (signed) distances of supporting hyperplanes of A from the origin. The support function is a convex function on R^n.
Any non-empty closed convex set A is uniquely determined by hA. Furthermore, the support function, as a function of the set A, is compatible with many natural geometric operations, like scaling, translation, rotation and Minkowski addition.
Due to these properties, the support function is one of the most central basic concepts in convex geometry.
Definition
The support function
of a non-empty closed convex set A in R^n is given by
hA(x) = sup { ⟨a, x⟩ : a ∈ A }, x ∈ R^n.
Its interpretation is most intuitive when x is a unit vector:
by definition, A is contained in the closed half space
H^−(x) = { y ∈ R^n : ⟨y, x⟩ ≤ hA(x) }
and there is at least one point of A in the boundary
H(x) = { y ∈ R^n : ⟨y, x⟩ = hA(x) }
of this half space. The hyperplane H(x) is therefore called a supporting hyperplane
with exterior (or outer) unit normal vector x.
The word exterior is important here, as
the orientation of x plays a role: the set H(x) is in general different from H(−x).
Now hA(x) is the (signed) distance of H(x) from the origin.
Examples
The support function of a singleton A = {a} is hA(x) = ⟨a, x⟩.
The support function of the Euclidean unit ball B is hB(x) = ‖x‖2, where ‖·‖2 is the 2-norm.
If A is a line segment through the origin with endpoints −a and a, then hA(x) = |⟨a, x⟩|.
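For a finite point set, the support function of its convex hull is the maximum of the inner products ⟨a, x⟩ over the points. A small sketch (function names are illustrative) checking the examples above:

```python
def support(points, x):
    """Support function h_A(x) = max over a in A of <a, x>, for the
    convex hull A of a finite point set (the max over the hull is
    attained at an extreme point, so the finite max suffices)."""
    return max(sum(ai * xi for ai, xi in zip(a, x)) for a in points)

# Singleton {a}: h(x) = <a, x>
assert support([(2.0, 3.0)], (1.0, 0.0)) == 2.0

# Segment through the origin with endpoints -a and a: h(x) = |<a, x>|
seg = [(-1.0, -2.0), (1.0, 2.0)]
assert support(seg, (0.0, 1.0)) == 2.0

# Square [-1, 1]^2: the supporting hyperplane with normal (1, 1)
# touches the vertex (1, 1), giving h((1, 1)) = 2.
square = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
assert support(square, (1.0, 1.0)) == 2.0
print("examples check out")
```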
Properties
As a function of x
The support function of a compact nonempty convex set is real valued and continuous, but if the
set is closed and unbounded, its support function is extended real valued (it takes the value +∞). As any nonempty closed convex set is the intersection of
its supporting half spaces, the function hA determines A uniquely.
This can be used to describe certain geometric properties of convex sets analytically.
For instance, a set A is point symmetric with respect to the origin if and only if hA
is an even function.
In general, the support function is not differentiable.
However, directional derivatives exist and yield support functions of support sets. If A is compact and convex,
and hA'(u;x) denotes the directional derivative of
hA at u ≠ 0 in direction x,
we have hA'(u; x) = h_{A ∩ H(u)}(x), the support function of the support set A ∩ H(u).
Here H(u) is the supporting hyperplane of A with exterior normal vector u, defined
above. If A ∩ H(u) is a singleton {y}, say, it follows that the support function is differentiable at
u and its gradient coincides with y. Conversely, if hA is differentiable at u, then A ∩ H(u) is a singleton. Hence hA is differentiable at all points u ≠ 0
if and only if A is strictly convex (the boundary of A does not contain any line segments).
More generally, when A is convex and closed, then for any u ≠ 0,
∂hA(u) = A ∩ H(u),
where ∂hA(u) denotes the set of subgradients of hA at u.
It follows directly from its definition that the support function is positive homogeneous:
hA(λx) = λ hA(x) for all λ ≥ 0,
and subadditive:
hA(x + y) ≤ hA(x) + hA(y).
It follows that hA is a convex function.
It is crucial in convex geometry that these properties characterize support functions:
Any positive homogeneous, convex, real valued
|
https://en.wikipedia.org/wiki/Radical%20of%20a%20module
|
In mathematics, in the theory of modules, the radical of a module is a component in the theory of structure and classification. It is a generalization of the Jacobson radical for rings. In many ways, it is the dual notion to that of the socle soc(M) of M.
Definition
Let R be a ring and M a left R-module. A submodule N of M is called maximal or cosimple if the quotient M/N is a simple module. The radical of the module M is the intersection of all maximal submodules of M,
rad(M) = ⋂ { N : N is a maximal submodule of M }.
Equivalently, rad(M) is the sum of all superfluous submodules of M.
These definitions have direct dual analogues for soc(M).
Properties
In addition to the fact that rad(M) is the sum of superfluous submodules, in a Noetherian module rad(M) itself is a superfluous submodule.
A ring for which rad(M) = {0} for every right R-module M is called a right V-ring.
For any module M, rad(M/rad(M)) is zero.
M is a finitely generated module if and only if the cosocle M/rad(M) is finitely generated and rad(M) is a superfluous submodule of M.
See also
Socle (mathematics)
Jacobson radical
References
Module theory
|
https://en.wikipedia.org/wiki/Integrated%20Postsecondary%20Education%20Data%20System
|
The Integrated Postsecondary Education Data System (IPEDS) is a system of interrelated surveys conducted annually by the National Center for Education Statistics (NCES), a part of the Institute for Education Sciences within the United States Department of Education. IPEDS consists of twelve interrelated survey components that are collected over three collection periods (fall, winter, and spring) each year as described in the Data Collection and Dissemination Cycle. The completion of all IPEDS surveys is mandatory for all institutions that participate in, or are applicants for participation in, any federal financial assistance program authorized by Title IV of the Higher Education Act of 1965, as amended.
The IPEDS program department of NCES was created in 1992 and began collecting data in 1993.
Data collected in IPEDS
IPEDS collects data on postsecondary education in the United States in the following areas: institutional characteristics, institutional prices, admissions, enrollment, student financial aid, degrees and certificates conferred, student persistence and success (retention rates, graduation rates, and outcome measures), institutional human resources, fiscal resources, and academic libraries.
Institutional characteristics
Institutional characteristics data are the foundation of the entire IPEDS system. These include basic institutional contact information, tuition and fees, room and board charges, control or affiliation, type of calendar system, levels of awards offered, types of programs, and admissions requirements.
Institutional prices
IPEDS collects institutional pricing data from institutions for full-time, first-time degree/certificate-seeking undergraduate students. This includes tuition and fee data as well as information on the estimated student budgets for students based on living situations (on-campus or off-campus).
Admissions
Basic information is collected from institutions that do not have an open-admissions policy on the undergraduate selection process for first-time, degree/certificate-seeking students. This includes information about admissions considerations, admissions yields, and SAT and ACT test scores.
Enrollment
Because enrollment patterns differ greatly among the various types of postsecondary institutions, there is a need for both different measures of enrollment and several indicators of access. In IPEDS, the following enrollment-related data are collected:
Fall enrollment — Fall enrollment is the traditional measure of student access to higher education. Fall enrollment data can be looked at by race/ethnicity; gender; enrollment status (part-time or full-time); and/or level of study (undergraduate or graduate).
Residence of first-time students — Data on the number of first-time freshmen by state of residence, along with data on the number who graduated from high school the previous year, serve to monitor the flow of students across state lines and calculate college-going rates by state. These data are
|
https://en.wikipedia.org/wiki/Northridge%20High%20School%20%28Indiana%29
|
Northridge High School is a secondary school in Middlebury, Indiana, serving grades 9-12 for the Middlebury Community Schools.
Statistics
In the 2020-21 school year, total enrollment was 1,412 students.
In the 2020-21 school year, the ethnicity breakdown was:
White - 83.9%
Hispanic - 11.1%
Asian - 1.3%
Black - 0.8%
American-Indian - 0.2%
Multi-racial - 2.5%
Athletics
Northridge High School is part of the Indiana High School Athletic Association, which is a voluntary, non-profit organization available for any school in the state of Indiana accredited by the Indiana Department of Education. Northridge competes in boys basketball, football, baseball, wrestling, cross-country, track, swimming, tennis, golf, and soccer. Women can participate in basketball, volleyball, swimming, cross-country, track, soccer, tennis, softball, cheerleading, and golf.
1988 IHSAA State Champions: Softball
2004 IHSAA State Champions: Boys Cross Country
Notable alumni
Eric Carpenter, soccer player
Jordon Hodges, actor
Joanna King, member of the Indiana House of Representatives
See also
List of high schools in Indiana
References
External links
http://www.mcsin-k12.org
Public high schools in Indiana
Schools in Elkhart County, Indiana
Educational institutions established in 1969
1969 establishments in Indiana
|
https://en.wikipedia.org/wiki/Grothendieck%20inequality
|
In mathematics, the Grothendieck inequality states that there is a universal constant K with the following property. If Mij is an n × n (real or complex) matrix with
|∑i,j Mij si tj| ≤ 1
for all (real or complex) numbers si, tj of absolute value at most 1, then
|∑i,j Mij ⟨Si, Tj⟩| ≤ K
for all vectors Si, Tj in the unit ball B(H) of a (real or complex) Hilbert space H, the constant K being independent of n. For a fixed Hilbert space of dimension d, the smallest constant that satisfies this property for all n × n matrices is called a Grothendieck constant and denoted K_G(d). In fact, there are two Grothendieck constants, K_G^R(d) and K_G^C(d), depending on whether one works with real or complex numbers, respectively.
The Grothendieck inequality and Grothendieck constants are named after Alexander Grothendieck, who proved the existence of the constants in a paper published in 1953.
Motivation and the operator formulation
Let A = (Aij) be an m × n matrix with real entries. Then A defines a linear operator between the normed spaces (R^n, ‖·‖q) and (R^m, ‖·‖p) for 1 ≤ p, q ≤ ∞. The (q → p)-norm of A is the quantity
‖A‖(q→p) = max { ‖Ax‖p : x ∈ R^n, ‖x‖q = 1 }.
If p = q, we denote the norm by ‖A‖p.
One can consider the following question: For what values of p and q is ‖A‖(q→p) maximized? Since A is linear, it suffices to consider q such that the unit ball of ‖·‖q contains as many points as possible, and p such that ‖Ax‖p is as large as possible. By comparing ‖x‖p for p = 1, 2, …, ∞, one sees that ‖A‖(∞→1) ≥ ‖A‖(q→p) for all 1 ≤ p, q ≤ ∞.
One way to compute ‖A‖(∞→1) is by solving the following quadratic integer program:
maximize ∑i,j Aij xi yj subject to xi, yj ∈ {−1, 1}.
To see this, note that ∑i,j Aij xi yj = ∑j (∑i Aij xi) yj, and taking the maximum over yj ∈ {−1, 1} gives ∑j |∑i Aij xi|. Then taking the maximum over xi ∈ {−1, 1} gives ‖A‖(∞→1), by the convexity of x ↦ ∑j |∑i Aij xi| and by the triangle inequality. This quadratic integer program can be relaxed to the following semidefinite program:
maximize ∑i,j Aij ⟨xi, yj⟩ subject to xi, yj being unit vectors in a Hilbert space.
It is known that exactly computing ‖A‖(q→p) for 1 ≤ p < q ≤ ∞ is NP-hard, while exactly computing ‖A‖p is NP-hard for p ∉ {1, 2, ∞}.
One can then ask the following natural question: How well does an optimal solution to the semidefinite program approximate ‖A‖(∞→1)? The Grothendieck inequality provides an answer to this question: There exists a fixed constant K > 0 such that, for any m, n ≥ 1, for any m × n matrix A, and for any Hilbert space H,
max { ∑i,j Aij ⟨xi, yj⟩ : xi, yj unit vectors in H } ≤ K · ‖A‖(∞→1).
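For tiny matrices, the quadratic integer program can simply be brute-forced over sign vectors; an illustrative sketch (exponential in n, so purely for exposition given the NP-hardness noted above):

```python
from itertools import product

def norm_inf_to_1(M):
    """Brute-force the quadratic integer program
    max sum_ij M[i][j]*s[i]*t[j] over sign vectors s, t in {-1,+1}^n.
    For fixed s, the optimal t[j] is the sign of the j-th column sum,
    so the inner maximum is a sum of absolute values."""
    n = len(M)
    best = float("-inf")
    for s in product((-1, 1), repeat=n):
        best = max(best,
                   sum(abs(sum(M[i][j] * s[i] for i in range(n)))
                       for j in range(n)))
    return best

M = [[1, 1], [1, -1]]
print(norm_inf_to_1(M))  # 2
```

The semidefinite relaxation replaces the signs by unit vectors and is solvable in polynomial time; the Grothendieck inequality bounds how much that relaxation can overshoot this brute-force value.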
Bounds on the constants
The sequences K_G^R(d) and K_G^C(d) are easily seen to be increasing, and Grothendieck's result states that they are bounded, so they have limits.
Grothendieck proved that 1.57 ≈ π/2 ≤ K_G^R ≤ sinh(π/2) ≈ 2.3, where K_G^R is defined to be sup_d K_G^R(d).
Krivine (1979) improved the result by proving that K_G^R ≤ π/(2 ln(1 + √2)) ≈ 1.7822, conjecturing that the upper bound is tight. However, this conjecture was disproved by Braverman, Makarychev, Makarychev and Naor (2011).
Grothendieck constant of order d
Boris Tsirelson showed that the Grothendieck constants K_G(d) play an essential role in the problem of quantum nonlocality: the Tsirelson bound of any full correlation bipartite Bell inequality for a quantum system of dimension d is upper-bounded by K_G(2d^2).
Lower bounds
Some historical data on best known lower bounds of is summarized in the following table.
Upper bounds
Some historical data on best known upper bounds of :
Applications
Cut norm estimation
Given an m × n real matrix A = (Aij), the cut norm of A is defined by
‖A‖□ = max { |∑i∈S, j∈T Aij| : S ⊆ [m], T ⊆ [n] }.
The notion of cut norm is essential in designing efficient approximation algorithms for dense graphs and matrices. More gener
|
https://en.wikipedia.org/wiki/Calc
|
Calc or CALC may refer to:
Short for calculation, calculator, or calculus
Windows Calculator, also known by its filename
LibreOffice Calc, an open-source spreadsheet application similar to Microsoft Excel
The Anglo-Saxon rune ᛣ, representing /k/
The Calcarea class of calcareous sponges
The Latin American and Caribbean Unity Summit, known in Spanish as the Cumbre de América Latina y el Caribe (CALC)
The Canadian Association of Lutheran Congregations (CALC)
See also
Calque
Calc–silicate rock
|
https://en.wikipedia.org/wiki/Gauss%E2%80%93Lucas%20theorem
|
In complex analysis, a branch of mathematics, the Gauss–Lucas theorem gives a geometric relation between the roots of a polynomial P and the roots of its derivative P′. The set of roots of a real or complex polynomial is a set of points in the complex plane. The theorem states that the roots of P′ all lie within the convex hull of the roots of P, that is, the smallest convex polygon containing the roots of P. When P has a single root then this convex hull is a single point and when the roots lie on a line then the convex hull is a segment of this line. The Gauss–Lucas theorem, named after Carl Friedrich Gauss and Félix Lucas, is similar in spirit to Rolle's theorem.
Formal statement
If P is a (nonconstant) polynomial with complex coefficients, all zeros of P′ belong to the convex hull of the set of zeros of P.
Special cases
It is easy to see that if P is a second degree polynomial, the zero of P′ is the average of the roots of P. In that case, the convex hull is the line segment with the two roots as endpoints and it is clear that the average of the roots is the middle point of the segment.
For a third degree complex polynomial P (cubic function) with three distinct zeros, Marden's theorem states that the zeros of P′ are the foci of the Steiner inellipse which is the unique ellipse tangent to the midpoints of the triangle formed by the zeros of P.
For a fourth degree complex polynomial P (quartic function) with four distinct zeros forming a concave quadrilateral, one of the zeros of P lies within the convex hull of the other three; all three zeros of P′ lie in two of the three triangles formed by the interior zero of P and two other zeros of P.
In addition, if a polynomial of degree n with real coefficients has n distinct real zeros x1 < x2 < ⋯ < xn, we see, using Rolle's theorem, that the zeros of the derivative polynomial are in the interval [x1, xn], which is the convex hull of the set of roots.
The convex hull of the roots of the polynomial
P(z) = an z^n + a(n−1) z^(n−1) + ⋯ + a0
particularly includes the point
−a(n−1) / (n·an),
the arithmetic mean of its roots.
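A small numerical sanity check of the theorem on the real line (an illustrative sketch, not part of the original article), using P(x) = (x−1)(x−2)(x−3), whose derivative P′(x) = 3x^2 − 12x + 11 has roots 2 ± 1/√3:

```python
from math import sqrt

# P(x) = (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6 has roots 1, 2, 3;
# on the real line their convex hull is the interval [1, 3].
roots_p = [1.0, 2.0, 3.0]

# P'(x) = 3x^2 - 12x + 11, roots by the quadratic formula: 2 +- 1/sqrt(3).
roots_dp = [2 - 1 / sqrt(3), 2 + 1 / sqrt(3)]

# Gauss-Lucas: every root of P' lies in the convex hull of the roots of P.
assert all(min(roots_p) <= r <= max(roots_p) for r in roots_dp)

# Degree-2 special case: for Q(x) = (x - a)(x - b), Q'(x) = 2x - (a + b)
# vanishes at (a + b)/2, the average of the roots.
a, b = 1.5, 4.5
assert (a + b) / 2 == 3.0
print("Gauss-Lucas holds for these examples")
```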
Proof
See also
Marden's theorem
Bôcher's theorem
Sendov's conjecture
Routh–Hurwitz theorem
Hurwitz's theorem (complex analysis)
Descartes' rule of signs
Rouché's theorem
Properties of polynomial roots
Cauchy interlacing theorem
Notes
References
.
Craig Smorynski: MVT: A Most Valuable Theorem. Springer, 2017, ISBN 978-3-319-52956-1, pp. 411–414
External links
Lucas–Gauss Theorem by Bruce Torrence, the Wolfram Demonstrations Project.
Gauss-Lucas theorem as interactive illustration
Convex analysis
Articles containing proofs
Theorems in complex analysis
Theorems about polynomials
|
https://en.wikipedia.org/wiki/Vector%20measure
|
In mathematics, a vector measure is a function defined on a family of sets and taking vector values satisfying certain properties. It is a generalization of the concept of finite measure, which takes nonnegative real values only.
Definitions and first consequences
Given a field of sets (Ω, F) and a Banach space X, a finitely additive vector measure (or measure, for short) is a function μ : F → X such that for any two disjoint sets A and B in F one has
μ(A ∪ B) = μ(A) + μ(B).
A vector measure μ is called countably additive if for any sequence (An) of disjoint sets in F such that their union is in F it holds that
μ(⋃n An) = ∑n μ(An),
with the series on the right-hand side convergent in the norm of the Banach space X.
It can be proved that an additive vector measure μ is countably additive if and only if for any sequence (An) as above one has
lim(n→∞) ‖μ(⋃(k≥n) Ak)‖ = 0,   (*)
where ‖·‖ is the norm on X.
Countably additive vector measures defined on sigma-algebras are more general than finite measures, finite signed measures, and complex measures, which are countably additive functions taking values respectively on the real interval the set of real numbers, and the set of complex numbers.
Examples
Consider the field of sets made up of the interval [0, 1] together with the family F of all Lebesgue measurable sets contained in this interval. For any such set A, define
μ(A) = χA,
where χA is the indicator function of A. Depending on where μ is declared to take values, two different outcomes are observed.
μ, viewed as a function from F to the Lp-space L∞([0, 1]), is a vector measure which is not countably-additive.
μ, viewed as a function from F to the Lp-space L1([0, 1]), is a countably-additive vector measure.
Both of these statements follow quite easily from the criterion (*) stated above.
The variation of a vector measure
Given a vector measure μ : F → X, the variation |μ| of μ is defined as
|μ|(A) = sup ∑(i=1..n) ‖μ(Ai)‖,
where the supremum is taken over all the partitions
A = ⋃(i=1..n) Ai
of A into a finite number of disjoint sets, for all A in F. Here, ‖·‖ is the norm on X.
The variation of μ is a finitely additive function taking values in [0, +∞]. It holds that
‖μ(A)‖ ≤ |μ|(A)
for any A in F. If |μ|(Ω) is finite, the measure μ is said to be of bounded variation. One can prove that if μ is a vector measure of bounded variation, then μ is countably additive if and only if |μ| is countably additive.
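On a finite atomic field, the supremum in the definition of the variation is attained by the partition into singletons (by the triangle inequality, splitting finer never decreases the sum of norms); a sketch with an illustrative vector measure into R^2:

```python
from math import hypot

# Finite model: Omega = {0, 1, 2, 3}, with a vector measure into R^2
# given by mu(A) = sum of v[i] over i in A (the data below is illustrative).
v = [(1.0, 0.0), (-1.0, 0.0), (0.0, 2.0), (3.0, -4.0)]

def mu(A):
    return (sum(v[i][0] for i in A), sum(v[i][1] for i in A))

def variation(A):
    # On an atomic field the supremum over partitions is attained by the
    # partition into singletons, so |mu|(A) is a sum of Euclidean norms.
    return sum(hypot(*v[i]) for i in A)

A = {0, 1, 3}
# mu(A) = (3, -4): the first two atoms cancel, so the vector sum has
# norm 5, strictly below the variation 1 + 1 + 5 = 7.
assert hypot(*mu(A)) <= variation(A)
print(variation(A))  # 7.0
```

The strict gap between ‖μ(A)‖ and |μ|(A) here comes from cancellation between atoms, which the variation deliberately ignores.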
Lyapunov's theorem
In the theory of vector measures, Lyapunov's theorem states that the range of a (non-atomic) finite-dimensional vector measure is closed and convex. In fact, the range of a non-atomic vector measure is a zonoid (the closed and convex set that is the limit of a convergent sequence of zonotopes). It is used in economics, in ("bang–bang") control theory, and in statistical theory.
Lyapunov's theorem has been proved by using the Shapley–Folkman lemma, which has been viewed as a discrete analogue of Lyapunov's theorem.
See also
References
Bibliography
Kluvánek, I., Knowles, G, Vector Measures and Control Systems, North-Holland Mathematics Studies 20, Amsterdam, 1976.
Control theory
Functional analysis
Measures (measure theory)
|
https://en.wikipedia.org/wiki/Tesseractic%20honeycomb
|
In four-dimensional euclidean geometry, the tesseractic honeycomb is one of the three regular space-filling tessellations (or honeycombs), represented by Schläfli symbol {4,3,3,4}, and constructed by a 4-dimensional packing of tesseract facets.
Its vertex figure is a 16-cell. Two tesseracts meet at each cubic cell, four meet at each square face, eight meet on each edge, and sixteen meet at each vertex.
It is an analog of the square tiling, {4,4}, of the plane and the cubic honeycomb, {4,3,4}, of 3-space. These are all part of the hypercubic honeycomb family of tessellations of the form {4,3,...,3,4}. Tessellations in this family are self-dual.
Coordinates
Vertices of this honeycomb can be positioned in 4-space in all integer coordinates (i,j,k,l).
Sphere packing
Like all regular hypercubic honeycombs, the tesseractic honeycomb corresponds to a sphere packing of edge-length-diameter spheres centered on each vertex, or (dually) inscribed in each cell instead. In the hypercubic honeycomb of 4 dimensions, vertex-centered 3-spheres and cell-inscribed 3-spheres will both fit at once, forming the unique regular body-centered cubic lattice of equal-sized spheres (in any number of dimensions). Since the tesseract is radially equilateral, there is exactly enough space in the hole between the 16 vertex-centered 3-spheres for another edge-length-diameter 3-sphere. (This 4-dimensional body centered cubic lattice is actually the union of two tesseractic honeycombs, in dual positions.)
This is the same densest known regular 3-sphere packing, with kissing number 24, that is also seen in the other two regular tessellations of 4-space, the 16-cell honeycomb and the 24-cell-honeycomb. Each tesseract-inscribed 3-sphere kisses a surrounding shell of 24 3-spheres, 16 at the vertices of the tesseract and 8 inscribed in the adjacent tesseracts. These 24 kissing points are the vertices of a 24-cell of radius (and edge length) 1/2.
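The claim that the unit-edge tesseract is radially equilateral (circumradius equal to edge length) can be checked directly from the vertex coordinates; an illustrative sketch:

```python
from itertools import product
from math import sqrt

# Unit-edge tesseract centered at the origin: the 16 vertices (+-1/2)^4.
verts = list(product((-0.5, 0.5), repeat=4))
assert len(verts) == 16

# Radially equilateral: circumradius equals the edge length (both 1).
radius = sqrt(sum(c * c for c in verts[0]))
assert abs(radius - 1.0) < 1e-12

def dist(u, v):
    return sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Edges join vertices differing in exactly one coordinate; all have length 1.
edge_lengths = {round(dist(u, v), 12)
                for u in verts for v in verts
                if sum(a != b for a, b in zip(u, v)) == 1}
assert edge_lengths == {1.0}
print("tesseract is radially equilateral")
```

This is the property that leaves room for one more edge-length-diameter 3-sphere in the central hole, as described above.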
Constructions
There are many different Wythoff constructions of this honeycomb. The most symmetric form is regular, with Schläfli symbol {4,3,3,4}. Another form has two alternating tesseract facets (like a checkerboard) with Schläfli symbol {4,3,3^{1,1}}. The lowest symmetry Wythoff construction has 16 types of facets around each vertex and a prismatic product Schläfli symbol {∞}^4. One can be made by stericating another.
Related polytopes and tessellations
The 24-cell honeycomb is similar, but in addition to the vertices at integers (i,j,k,l), it has vertices at half integers (i+1/2,j+1/2,k+1/2,l+1/2) of odd integers only. It is a half-filled body centered cubic (a checkerboard in which the red 4-cubes have a central vertex but the black 4-cubes do not).
The tesseract can make a regular tessellation of the 4-sphere, with three tesseracts per face, with Schläfli symbol {4,3,3,3}, called an order-3 tesseractic honeycomb. It is topologically equivalent to the regular polytope penteract in 5-space.
The tesseract can make a regular tessel
|
https://en.wikipedia.org/wiki/Weighted%20matroid
|
In combinatorics, a branch of mathematics, a weighted matroid is a matroid endowed with a function with respect to which one can perform a greedy algorithm.
A weight function for a matroid assigns a strictly positive weight to each element of . We extend the function to subsets of by summation; is the sum of over in . A matroid with an associated weight function is called a weighted matroid.
Spanning forest algorithms
As a simple example, say we wish to find the maximum spanning forest of a graph. That is, given a graph and a weight for each edge, find a forest containing every vertex and maximizing the total weight of the edges in the forest. This problem arises in some clustering applications. If we look at the definition of the forest matroid above, we see that the maximum spanning forest is simply the independent set with largest total weight — such a set must span the graph, for otherwise we can add edges without creating cycles. But how do we find it?
Finding a basis
There is a simple algorithm for finding a basis:
Initially let be the empty set.
For each in
if is independent, then set to .
The result is clearly an independent set. It is a maximal independent set because if is not independent for some subset of , then is not independent either (the contrapositive follows from the hereditary property). Thus if we pass up an element, we'll never have an opportunity to use it later. We will generalize this algorithm to solve a harder problem.
Extension to optimal
An independent set of largest total weight is called an optimal set. Optimal sets are always bases, because if an edge can be added, it should be; this only increases the total weight. As it turns out, there is a trivial greedy algorithm for computing an optimal set of a weighted matroid. It works as follows:
Initially let be the empty set.
For each in , taken in (monotonically) decreasing order by weight
if is independent, then set to .
This algorithm finds a basis, since it is a special case of the above algorithm. It always chooses the element of largest weight that it can while preserving independence (thus the term "greedy"). This always produces an optimal set: suppose that it produces and that . Now for any with , consider open sets and . Since is smaller than , there is some element of which can be put into with the result still being independent. However is an element of maximal weight that can be added to to maintain independence. Thus is of no smaller weight than some element of , and hence is of at least as large a weight as . As this is true for all , is weightier than .
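As an illustration, the greedy algorithm can be sketched for the graphic matroid, where independence ("the edge set contains no cycle") is tested with a union-find oracle. The function names below are illustrative choices, not taken from a particular library:

```python
# Greedy algorithm for a weighted matroid, sketched for the graphic matroid:
# independence means "no cycle", tested here with a union-find oracle.

def max_weight_basis(n_vertices, weighted_edges):
    """weighted_edges: list of (weight, u, v). Returns a maximum-weight forest."""
    parent = list(range(n_vertices))

    def find(x):
        # Union-find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    basis = []
    # Take elements in monotonically decreasing order by weight ...
    for w, u, v in sorted(weighted_edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:
            # ... keeping an edge iff adding it preserves independence.
            parent[ru] = rv
            basis.append((w, u, v))
    return basis

forest = max_weight_basis(4, [(5, 0, 1), (4, 1, 2), (3, 0, 2), (2, 2, 3)])
assert sum(w for w, _, _ in forest) == 11   # picks weights 5, 4, 2; skips the cycle edge
```

With the weights negated (or the sort order reversed), the same routine yields a minimum spanning tree, which is the inversion described in the next section.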
Complexity analysis
The easiest way to traverse the members of in the desired order is to sort them. This requires time using a comparison sorting algorithm. We also need to test for each whether is independent; assuming independence tests require time, the total time for the algorithm is .
If we want to find a minimum spanning tree instead, we simply "invert"
|
https://en.wikipedia.org/wiki/Pinwheel%20tiling
|
In geometry, pinwheel tilings are non-periodic tilings defined by Charles Radin and based on a construction due to John Conway.
They are the first known non-periodic tilings to each have the property that their tiles appear in infinitely many orientations.
Conway's tessellation
Let be the right triangle with side length , and .
Conway noticed that can be divided into five isometric copies of its image by the dilation of factor .
By suitably rescaling and translating/rotating, this operation can be iterated to obtain an infinite increasing sequence of growing triangles all made of isometric copies of .
The union of all these triangles yields a tiling of the whole plane by isometric copies of .
In this tiling, isometric copies of appear in infinitely many orientations (this is because the angle arctan(1/2) of the tile is an irrational multiple of π).
Despite this, all the vertices have rational coordinates.
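One concrete choice of the five-fold subdivision (the coordinates below are an illustrative reconstruction, not necessarily Conway's own labeling) takes the triangle with vertices (0,0), (2,0), (0,1), splits off the small triangle cut by the altitude from the right angle, and quarters the remainder through edge midpoints. A quick check that the five pieces are copies of the original scaled by 1/√5:

```python
from math import dist, isclose, sqrt

# One concrete five-fold subdivision of the 1-2-sqrt(5) right triangle.
A, B, C = (0, 0), (2, 0), (0, 1)
H = (2 / 5, 4 / 5)   # foot of the altitude from A to the hypotenuse BC
P = (1, 0)           # midpoint of AB
Q = (1 / 5, 2 / 5)   # midpoint of AH
R = (6 / 5, 2 / 5)   # midpoint of BH

# Tile 1 is ACH; tiles 2-5 quarter the remaining triangle ABH via midpoints.
tiles = [(A, C, H), (A, P, Q), (P, B, R), (Q, R, H), (P, R, Q)]

def sides(t):
    a, b, c = t
    return sorted([dist(a, b), dist(b, c), dist(c, a)])

def area(t):
    (ax, ay), (bx, by), (cx, cy) = t
    return abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2

s = 1 / sqrt(5)                           # dilation factor
expected = sorted([1 * s, 2 * s, sqrt(5) * s])
for t in tiles:
    # Every tile is a 1-2-sqrt(5) triangle shrunk by 1/sqrt(5).
    assert all(isclose(x, y) for x, y in zip(sides(t), expected))
# The five areas (each 1/5) sum to the area of the original triangle.
assert isclose(sum(area(t) for t in tiles), area((A, B, C)))
```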
The pinwheel tilings
Radin relied on the above construction of Conway to define pinwheel tilings.
Formally, the pinwheel tilings are the tilings whose tiles are isometric copies of , in which a tile may intersect another tile only either on a whole side or on half the length-2 side, and such that the following property holds.
Given any pinwheel tiling , there is a pinwheel tiling which, once each tile is divided in five following the Conway construction and the result is dilated by a factor , is equal to .
In other words, the tiles of any pinwheel tilings can be grouped in sets of five into homothetic tiles, so that these homothetic tiles form (up to rescaling) a new pinwheel tiling.
The tiling constructed by Conway is a pinwheel tiling, but there are uncountably many other different pinwheel tilings.
They are all locally indistinguishable (i.e., they have the same finite patches).
They all share with the Conway tiling the property that tiles appear in infinitely many orientations (and vertices have rational coordinates).
The main result proven by Radin is that there is a finite (though very large) set of so-called prototiles, with each being obtained by coloring the sides of , so that the pinwheel tilings are exactly the tilings of the plane by isometric copies of these prototiles, with the condition that whenever two copies intersect in a point, they have the same color in this point.
In terms of symbolic dynamics, this means that the pinwheel tilings form a sofic subshift.
Generalizations
Radin and Conway proposed a three-dimensional analogue which was dubbed the quaquaversal tiling. There are other variants and generalizations of the original idea.
One gets a fractal by iteratively dividing into five isometric copies, following the Conway construction, and discarding the middle triangle (ad infinitum). This "pinwheel fractal" has Hausdorff dimension .
Use in architecture
Federation Square, a building complex in Melbourne, Australia, features the pinwheel tiling. In the project, the tiling pattern is used to create
|
https://en.wikipedia.org/wiki/Ali%20Moustafa%20Mosharafa
|
Dr. Ali Moustafa Mosharafa () (11 July 1898 – 16 January 1950) was an Egyptian theoretical physicist. He was professor of applied mathematics in the Faculty of Science at Cairo University, and also served as its first dean. He contributed to the development of quantum theory as well as the theory of relativity.
Biography
Birth and early life
Mosharafa obtained his primary certificate in 1910 ranking second nationwide. He obtained his Baccalaureate at the age of 16, becoming the youngest student at that time to be awarded such a certificate, and again ranking second. He preferred to enroll in the Teachers' College rather than the faculties of Medicine or Engineering due to his deep interest in mathematics.
He graduated in 1917. Due to his excellence in mathematics, the Egyptian Ministry of Education sent him to England, where he obtained a BSc (Honors) from the University of Nottingham in 1920. The Egyptian University consented to grant Mosharafa another scholarship to complete his doctoral thesis. During his stay in London, he published many papers in prominent scientific journals. He obtained a PhD in 1923 from King's College London in the shortest time permissible under the regulations there. In 1924 Mosharafa was awarded the degree of Doctor of Science, becoming the first Egyptian, and the 11th scientist worldwide, to obtain such a degree.
Academic career
He became a teacher at the Higher Teachers' College and then an associate professor of mathematics in the Faculty of Science at Cairo University; he could not initially be appointed full professor because he was under the age of 30, the minimum age required for the post. In 1926 his promotion to professor was raised in the Parliament, then chaired by Saad Zaghloul. The Parliament lauded his qualifications and merits, which surpassed those of the English dean of the faculty, and he was promoted to professor.
He was the first Egyptian professor of applied mathematics in the Faculty of Science. He became dean of the faculty in 1936, at the age of 38. He remained in office as a dean of the Faculty of Science until he died in 1950.
Scientific achievements
During the 1920s and 1930s, he studied Maxwell's equations and special relativity, and he corresponded with Albert Einstein.
Mosharafa published 25 original papers in distinguished scientific journals about quantum theory, the theory of relativity, and the relation between radiation and matter. He published around 12 scientific books about relativity and mathematics. His books, on the theory of relativity, were translated into English, French, German and Polish. He also translated 10 books of astronomy and mathematics into Arabic.
Mosharafa was interested in the history of science, especially in studying the contributions of Arab scientists in the Middle Ages. With his student M. Morsi Ahmad, he published al-Khwārizmī's book The Compendious Book on Calculation by Completion and Balancing (Kitab al-Jabr wa-l-Muqabala).
He also was interest
|
https://en.wikipedia.org/wiki/Tricorn%20%28mathematics%29
|
In mathematics, the tricorn, sometimes called the Mandelbar set, is a fractal defined in a similar way to the Mandelbrot set, but using the mapping instead of used for the Mandelbrot set. It was introduced by W. D. Crowe, R. Hasson, P. J. Rippon, and P. E. D. Strain-Clark. John Milnor found tricorn-like sets as a prototypical configuration in the parameter space of real cubic polynomials, and in various other families of rational maps.
The characteristic three-cornered shape created by this fractal repeats with variations at different scales, showing the same sort of self-similarity as the Mandelbrot set. In addition to smaller tricorns, smaller versions of the Mandelbrot set are also contained within the tricorn fractal.
Formal definition
The tricorn is defined by a family of quadratic antiholomorphic polynomials
given by
where is a complex parameter. For each , one looks at the forward orbit
of the critical point of the antiholomorphic polynomial . In analogy with the Mandelbrot set, the tricorn is defined as the set of all parameters for which the forward orbit of the critical point is bounded. This is equivalent to saying that the tricorn is the connectedness locus of the family of quadratic antiholomorphic polynomials; i.e. the set of all parameters for which the Julia set is connected.
The higher degree analogues of the tricorn are known as the multicorns. These are the connectedness loci of the family of antiholomorphic polynomials .
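Boundedness of the critical orbit can be tested numerically by iterating the antiholomorphic map z ↦ z̄² + c from the critical point 0. The escape radius 2 and the iteration cap below are conventional choices for a sketch like this, not values from the article:

```python
def in_tricorn(c, max_iter=200, escape=2.0):
    """Boundedness test for the orbit of 0 under z -> conj(z)**2 + c."""
    z = 0j
    for _ in range(max_iter):
        z = z.conjugate() ** 2 + c
        if abs(z) > escape:
            return False
    return True

assert in_tricorn(0j)          # the center belongs to the tricorn
assert in_tricorn(-1 + 0j)     # on the real axis the map agrees with z**2 + c
assert not in_tricorn(1 + 1j)  # this orbit escapes quickly
```

Scanning `in_tricorn` over a grid of complex parameters produces the familiar three-cornered image.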
Basic properties
The tricorn is compact, and connected. In fact, Nakane modified Douady and Hubbard's proof of the connectedness of the Mandelbrot set to construct a dynamically defined real-analytic diffeomorphism from the exterior of the tricorn onto the exterior of the closed unit disc in the complex plane. One can define external parameter rays of the tricorn as the inverse images of radial lines under this diffeomorphism.
Every hyperbolic component of the tricorn is simply connected.
The boundary of every hyperbolic component of odd period of the tricorn contains real-analytic arcs consisting of quasi-conformally equivalent but conformally distinct parabolic parameters. Such an arc is called a parabolic arc of the tricorn. This is in stark contrast with the corresponding situation for the Mandelbrot set, where parabolic parameters of a given period are known to be isolated.
The boundary of every odd period hyperbolic component consists only of parabolic parameters. More precisely, the boundary of every hyperbolic component of odd period of the tricorn is a simple closed curve consisting of exactly three parabolic cusp points as well as three parabolic arcs, each connecting two parabolic cusps.
Every parabolic arc of period k has, at both ends, an interval of positive length across which bifurcation from a hyperbolic component of odd period k to a hyperbolic component of period 2k occurs.
Image gallery of various zooms
Much like the Mandelbrot set, the tricorn has many complex and
|
https://en.wikipedia.org/wiki/Coclass
|
In mathematics, the coclass of a finite p-group of order pn is n − c, where c is the class.
The coclass conjectures
The coclass conjectures were introduced by and proved by and . They are:
Conjecture A: Every p-group has a normal subgroup of class 2 with index depending only on p and its coclass.
Conjecture B: The solvable length of a p-group can be bounded in terms of p and the coclass.
Conjecture C: A pro p-group of finite coclass is solvable.
Conjecture D: There are only finitely many pro p-groups of given coclass.
Conjecture E: There are only finitely many solvable pro p-groups of given coclass.
See also
Descendant tree (group theory)
References
P-groups
|
https://en.wikipedia.org/wiki/16-cell%20honeycomb
|
In four-dimensional Euclidean geometry, the 16-cell honeycomb is one of the three regular space-filling tessellations (or honeycombs), represented by Schläfli symbol {3,3,4,3}, and constructed by a 4-dimensional packing of 16-cell facets, three around every face.
Its dual is the 24-cell honeycomb. Its vertex figure is a 24-cell. The vertex arrangement is called the B4, D4, or F4 lattice.
Alternate names
Hexadecachoric tetracomb/honeycomb
Demitesseractic tetracomb/honeycomb
Coordinates
Vertices can be placed at all integer coordinates (i,j,k,l), such that the sum of the coordinates is even.
D4 lattice
The vertex arrangement of the 16-cell honeycomb is called the D4 lattice or F4 lattice. The vertices of this lattice are the centers of the 3-spheres in the densest known packing of equal spheres in 4-space; its kissing number is 24, which Oleg Musin proved in 2003 to be the maximum possible kissing number in 4 dimensions.
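The shortest-vector shell of the D4 lattice, and hence the kissing number 24, can be confirmed by brute force; the box size below is an illustrative choice, just large enough to contain the shell:

```python
from itertools import product

# D4 lattice: integer coordinates (i, j, k, l) with even sum. Enumerate the
# nonzero lattice vectors in a small box and pick out the shortest shell.
vectors = [v for v in product(range(-2, 3), repeat=4)
           if sum(v) % 2 == 0 and any(v)]
min_norm2 = min(sum(x * x for x in v) for v in vectors)
shell = [v for v in vectors if sum(x * x for x in v) == min_norm2]

assert min_norm2 == 2    # shortest vectors are permutations of (+-1, +-1, 0, 0)
assert len(shell) == 24  # the kissing number of D4
```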
The related D lattice (also called D) can be constructed by the union of two D4 lattices, and is identical to the C4 lattice:
∪ = =
The kissing number for D is 2^3 = 8 (2^(n−1) for n < 8, 240 for n = 8, and 2n(n − 1) for n > 8).
The related D lattice (also called D and C) can be constructed by the union of all four D4 lattices, but it is identical to the D4 lattice: It is also the 4-dimensional body centered cubic, the union of two 4-cube honeycombs in dual positions.
∪ ∪ ∪ = = ∪ .
The kissing number of the D lattice (and D4 lattice) is 24 and its Voronoi tessellation is a 24-cell honeycomb, , containing all rectified 16-cells (24-cell) Voronoi cells, or .
Symmetry constructions
There are three different symmetry constructions of this tessellation. Each symmetry can be represented by different arrangements of colored 16-cell facets.
Related honeycombs
It is related to the regular hyperbolic 5-space 5-orthoplex honeycomb, {3,3,3,4,3}, with 5-orthoplex facets, the regular 4-polytope 24-cell, {3,4,3} with octahedral (3-orthoplex) cell, and cube {4,3}, with (2-orthoplex) square faces.
It has a 2-dimensional analogue, {3,6}, and as an alternated form (the demitesseractic honeycomb, h{4,3,3,4}) it is related to the alternated cubic honeycomb.
See also
Regular and uniform honeycombs in 4-space:
Tesseractic honeycomb
24-cell honeycomb
Truncated 24-cell honeycomb
Snub 24-cell honeycomb
5-cell honeycomb
Truncated 5-cell honeycomb
Omnitruncated 5-cell honeycomb
Notes
References
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition,
pp. 154–156: Partial truncation or alternation, represented by h prefix: h{4,4} = {4,4}; h{4,3,4} = {3^{1,1},4}, h{4,3,3,4} = {3,3,4,3}, ...
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (
|
https://en.wikipedia.org/wiki/Cayley%20plane
|
In mathematics, the Cayley plane (or octonionic projective plane) P2(O) is a projective plane over the octonions.
The Cayley plane was discovered in 1933 by Ruth Moufang, and is named after Arthur Cayley for his 1845 paper describing the octonions.
Properties
In the Cayley plane, lines and points may be defined in a natural way so that it becomes a 2-dimensional projective space, that is, a projective plane. It is a non-Desarguesian plane, where Desargues' theorem does not hold.
More precisely, as of 2005, there are two objects called Cayley planes, namely the real and the complex Cayley plane.
The real Cayley plane is the symmetric space F4/Spin(9), where F4 is a compact form of an exceptional Lie group and Spin(9) is the spin group of nine-dimensional Euclidean space (realized in F4). It admits a cell decomposition into three cells, of dimensions 0, 8 and 16.
The complex Cayley plane is a homogeneous space under the complexification of the group E6 by a parabolic subgroup P1. It is the closed orbit in the projectivization of the minimal complex representation of E6. The complex Cayley plane consists of two complex F4-orbits: the closed orbit is a quotient of the complexified F4 by a parabolic subgroup, the open orbit is the complexification of the real Cayley plane, retracting to it.
See also
Rosenfeld projective plane
Notes
References
Helmut Salzmann et al. "Compact projective planes. With an introduction to octonion geometry"; de Gruyter Expositions in Mathematics, 21. Walter de Gruyter & Co., Berlin, 1995. xiv+688 pp.
Projective geometry
|
https://en.wikipedia.org/wiki/Natural%20exponential%20family
|
In probability and statistics, a natural exponential family (NEF) is a class of probability distributions that is a special case of an exponential family (EF).
Definition
Univariate case
The natural exponential families (NEF) are a subset of the exponential families. A NEF is an exponential family in which the natural parameter η and the natural statistic T(x) are both the identity. A distribution in an exponential family with parameter θ can be written with probability density function (PDF)
where and are known functions.
A distribution in a natural exponential family with parameter θ can thus be written with PDF
[Note that slightly different notation is used by the originator of the NEF, Carl Morris. Morris uses ω instead of η and ψ instead of A.]
General multivariate case
Suppose that , then a natural exponential family of order p has density or mass function of the form:
where in this case the parameter
Moment and cumulant generating functions
A member of a natural exponential family has moment generating function (MGF) of the form
The cumulant generating function is by definition the logarithm of the MGF, so it is
Examples
The five most important univariate cases are:
normal distribution with known variance
Poisson distribution
gamma distribution with known shape parameter α (or k depending on notation set used)
binomial distribution with known number of trials, n
negative binomial distribution with known
These five examples – Poisson, binomial, negative binomial, normal, and gamma – are a special subset of NEF, called NEF with quadratic variance function (NEF-QVF) because the variance can be written as a quadratic function of the mean. NEF-QVF are discussed below.
Distributions such as the exponential, Bernoulli, and geometric distributions are special cases of the above five distributions. For example, the Bernoulli distribution is a binomial distribution with n = 1 trial, the exponential distribution is a gamma distribution with shape parameter α = 1 (or k = 1), and the geometric distribution is a special case of the negative binomial distribution.
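The quadratic variance functions behind the NEF-QVF label can be checked directly against the standard mean-variance identities. The coefficient triples (c0, c1, c2) in V(μ) = c0 + c1·μ + c2·μ² below restate textbook formulas; the parameter values are arbitrary illustrative choices:

```python
# NEF-QVF: variance as a quadratic in the mean, V(mu) = c0 + c1*mu + c2*mu**2.
#   Poisson:                     V = mu
#   binomial (n known):          V = mu - mu**2 / n
#   gamma (shape a known):       V = mu**2 / a
#   negative binomial (r known): V = mu + mu**2 / r
from math import isclose

def qvf(c0, c1, c2):
    return lambda mu: c0 + c1 * mu + c2 * mu ** 2

lam = 3.7                           # Poisson: variance equals the mean
assert isclose(qvf(0, 1, 0)(lam), lam)

n, p = 10, 0.3                      # binomial: mean n*p, variance n*p*(1-p)
assert isclose(qvf(0, 1, -1 / n)(n * p), n * p * (1 - p))

a, theta = 4.0, 2.5                 # gamma: mean a*theta, variance a*theta**2
assert isclose(qvf(0, 0, 1 / a)(a * theta), a * theta ** 2)

r, q = 5, 0.4                       # negative binomial: mean r*(1-q)/q
assert isclose(qvf(0, 1, 1 / r)(r * (1 - q) / q), r * (1 - q) / q ** 2)
```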
Some exponential family distributions are not NEF. The lognormal and Beta distributions are in the exponential family, but not the natural exponential family.
The gamma distribution with two parameters is an exponential family but not a NEF, and the chi-squared distribution is a special case of the gamma distribution with fixed scale parameter, and thus is also an exponential family but not a NEF (note that only a gamma distribution with fixed shape parameter is a NEF).
The inverse Gaussian distribution is a NEF with a cubic variance function.
The parameterization of most of the above distributions has been written differently from the parameterization commonly used in textbooks and the above linked pages. For example, the above parameterization differs from the parameterization in the linked article in the Poisson case. The two parameterizations are related by ,
|
https://en.wikipedia.org/wiki/Carl%20Morris%20%28statistician%29
|
Carl Neracher Morris was a professor in the Statistics Department of Harvard University and spent several years as a researcher for the RAND Corporation working on the RAND Health Insurance Experiment.
Early life
Carl Morris received his BS in Aeronautical Engineering from the California Institute of Technology in 1960 and then attended Indiana University until 1962. He obtained his Ph.D. in statistics from Stanford University under advisor Charles Stein in 1966.
Since 1990, Morris has been at Harvard Statistics Department and Harvard Medical School Department of Health Care Policy. He served as the chair of the Harvard Statistics Department from 1994 to 2000.
Morris has also been a professor at the University of California, Santa Cruz, Frederick S. Pardee RAND Graduate School, Stanford University, and the University of Texas at Austin where he served as Director of the Center for Statistical Sciences.
Morris is a Fellow of the American Statistical Association, the Institute of Mathematical Statistics, and the Royal Statistical Society, and an elected member of the ISI. He served as editor of the Theory and Methods section of the Journal of the American Statistical Association (1983–1985) and of Statistical Science (1989–1991).
Morris is best known for his work on natural exponential families with quadratic variance functions (NEF-QVF), a theory which classifies the most common statistical distributions. Morris is also well known for his work in sports statistics.
References
MathSciNet reference
Statistics Department homepage
"Natural Exponential Families with Quadratic Variance Functions," Breakthroughs in Statistics, 1997, (3), 374-394. Reprinted from Annals of Statistics (1982).
Living people
American statisticians
Year of birth missing (living people)
California Institute of Technology alumni
Indiana University alumni
Stanford University alumni
Harvard University faculty
Stanford University faculty
University of California, Santa Cruz faculty
University of Texas at Austin faculty
Fellows of the American Statistical Association
RAND Corporation people
|
https://en.wikipedia.org/wiki/Pullback
|
In mathematics, a pullback is either of two different, but related processes: precomposition and fiber-product. Its dual is a pushforward.
Precomposition
Precomposition with a function probably provides the most elementary notion of pullback: in simple terms, a function of a variable , where itself is a function of another variable , may be written as a function of . This is the pullback of by the function .
It is such a fundamental process that it is often passed over without mention.
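In code, this pullback is just precomposition; a minimal sketch with illustrative names:

```python
# Pullback by precomposition: given f : X -> Y, the pullback f* sends a
# function g on Y to the composite g(f(x)) on X.
def pullback(f):
    return lambda g: lambda x: g(f(x))

f = lambda t: t ** 2          # a change of variable y = f(t)
g = lambda y: y + 1           # a function of y
h = pullback(f)(g)            # the pullback of g by f: a function of t

assert h(3) == g(f(3)) == 10
```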
However, it is not just functions that can be "pulled back" in this sense. Pullbacks can be applied to many other objects such as differential forms and their cohomology classes; see
Pullback (differential geometry)
Pullback (cohomology)
Fiber-product
The pullback bundle is an example that bridges the notion of a pullback as precomposition, and the notion of a pullback as a Cartesian square. In that example, the base space of a fiber bundle is pulled back, in the sense of precomposition, above. The fibers then travel along with the points in the base space at which they are anchored: the resulting new pullback bundle looks locally like a Cartesian product of the new base space, and the (unchanged) fiber. The pullback bundle then has two projections: one to the base space, the other to the fiber; the product of the two becomes coherent when treated as a fiber product.
Generalizations and category theory
The notion of pullback as a fiber-product ultimately leads to the very general idea of a categorical pullback, but it has important special cases: inverse image (and pullback) sheaves in algebraic geometry, and pullback bundles in algebraic topology and differential geometry.
See also:
Pullback (category theory)
Fibred category
Inverse image sheaf
Functional analysis
When the pullback is studied as an operator acting on function spaces, it becomes a linear operator, and is known as the transpose or composition operator. Its adjoint is the push-forward, or, in the context of functional analysis, the transfer operator.
Relationship
The relation between the two notions of pullback can perhaps best be illustrated by sections of fiber bundles: if is a section of a fiber bundle over and then the pullback (precomposition) of s with is a section of the pullback (fiber-product) bundle over
See also
References
Mathematical analysis
|
https://en.wikipedia.org/wiki/Paley%E2%80%93Zygmund%20inequality
|
In mathematics, the Paley–Zygmund inequality bounds the
probability that a positive random variable is small, in terms of
its first two moments. The inequality was
proved by Raymond Paley and Antoni Zygmund.
Theorem: If Z ≥ 0 is a random variable with
finite variance, and if , then
Proof: First,
The first addend is at most , while the second is at most by the Cauchy–Schwarz inequality. The desired inequality then follows. ∎
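The inequality can also be sanity-checked numerically for a simple discrete distribution (an illustrative example, not from the article):

```python
# Check the Paley-Zygmund bound
#   P(Z > theta * E[Z]) >= (1 - theta)**2 * E[Z]**2 / E[Z**2]
# for Z uniform on {1, 2, 3}, over a grid of theta in [0, 1).
from math import isclose

values = [1.0, 2.0, 3.0]
probs = [1 / 3, 1 / 3, 1 / 3]

EZ = sum(p * v for p, v in zip(probs, values))        # E[Z] = 2
EZ2 = sum(p * v * v for p, v in zip(probs, values))   # E[Z**2] = 14/3

def tail(t):
    """P(Z > t) for this discrete distribution."""
    return sum(p for p, v in zip(probs, values) if v > t)

for k in range(10):
    theta = k / 10
    bound = (1 - theta) ** 2 * EZ ** 2 / EZ2
    assert tail(theta * EZ) >= bound - 1e-12

assert isclose(EZ, 2.0) and isclose(EZ2, 14 / 3)
```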
Related inequalities
The Paley–Zygmund inequality can be written as
This can be improved. By the Cauchy–Schwarz inequality,
which, after rearranging, implies that
This inequality is sharp; equality is achieved if Z almost surely equals a positive constant.
In turn, this implies another convenient form (known as Cantelli's inequality) which is
where and .
This follows from the substitution valid when .
A strengthened form of the Paley–Zygmund inequality states that if Z is a non-negative random variable then
for every .
This inequality follows by applying the usual Paley–Zygmund inequality to the conditional distribution of Z given that it is positive and noting that the various factors of cancel.
Both this inequality and the usual Paley–Zygmund inequality also admit L^p versions: If Z is a non-negative random variable and then
for every . This follows by the same proof as above but using Hölder's inequality in place of the Cauchy–Schwarz inequality.
See also
Cantelli's inequality
Second moment method
Concentration inequality – a summary of tail-bounds on random variables.
References
Further reading
Probabilistic inequalities
|
https://en.wikipedia.org/wiki/Hall%27s%20universal%20group
|
In algebra, Hall's universal group is
a countable locally finite group, say U, which is uniquely
characterized by the following properties.
Every finite group G admits a monomorphism to U.
All such monomorphisms are conjugate by inner automorphisms of U.
It was defined by Philip Hall in 1959, and has the universal property that all countable locally finite groups embed into it.
Construction
Take any group of order .
Denote by the group
of permutations of elements of , by
the group
and so on. Since a group acts faithfully on itself by permutations
according to Cayley's theorem, this gives a chain of monomorphisms
A direct limit (that is, a union) of all
is Hall's universal group U.
Indeed, U then contains a symmetric group of arbitrarily large order, and any
group admits a monomorphism to a group of permutations, as explained above.
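The Cayley-theorem step of the construction can be made concrete for a small group; the choice of Z/3 below is an arbitrary illustration:

```python
# One link of the chain: embed a group G into the symmetric group on its own
# elements via left multiplication (Cayley's theorem), here for G = Z/3.
G = [0, 1, 2]                                   # Z/3 under addition mod 3

def cayley(g):
    """The permutation of G given by left multiplication by g, as a tuple."""
    return tuple((g + x) % 3 for x in G)

images = {g: cayley(g) for g in G}

# The map is injective (a monomorphism) ...
assert len(set(images.values())) == len(G)
# ... and a homomorphism: cayley(g)∘cayley(h) equals cayley(g + h).
for g in G:
    for h in G:
        composed = tuple(images[g][images[h][x]] for x in G)
        assert composed == images[(g + h) % 3]
```

Iterating this step (permutations of permutations, and so on) produces the chain of monomorphisms whose direct limit is U.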
Let G be a finite group admitting two embeddings to U.
Since U is a direct limit and G is finite, the
images of these two embeddings belong to
. The group
acts on
by permutations, and conjugates all possible embeddings
.
References
Infinite group theory
Permutation groups
|
https://en.wikipedia.org/wiki/Stan%20Gibilisco
|
Stanley Gibilisco (1955 – 3 May 2020) was a nonfiction writer. He authored books in the fields of electronics, general science, mathematics, and computing.
Biography
Gibilisco began his career in 1977 as a radio technician and editorial assistant at the headquarters of the American Radio Relay League in Newington, Connecticut. Later he worked as a radio-frequency design engineer and technical writer for industry.
In 1982, Stan began writing for TAB Books with editorial offices in Blue Ridge Summit, Pennsylvania. One of the books that he compiled for TAB, the Encyclopedia of Electronics, was named by the American Library Association (ALA) in its list of "Best References of the 1980s." Another of his books, the McGraw-Hill Encyclopedia of Personal Computing, was named as a "Best Reference of 1996" by the ALA.
Stan produced instructional, technical, and general interest videos on YouTube. Subjects include electronics, computers, physics, mathematics, alternative energy, and amateur radio.
Stan lived in Lead, South Dakota, home of the Sanford Underground Research Facility. He was an active amateur radio operator and used the call sign W1GV. He died on 3 May 2020 in Lead, SD.
External links
Official website
Stan's YouTube channel
American textbook writers
Living people
People from Lead, South Dakota
Amateur radio people
1955 births
|
https://en.wikipedia.org/wiki/Order-5%20square%20tiling
|
In geometry, the order-5 square tiling is a regular tiling of the hyperbolic plane. It has Schläfli symbol of {4,5}.
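A regular tiling {p,q} is spherical, Euclidean, or hyperbolic according to whether (p − 2)(q − 2) is less than, equal to, or greater than 4; a quick sketch:

```python
# Classify a regular tiling {p, q}: q p-gons meet at each vertex, and the
# geometry is determined by the sign of (p - 2)(q - 2) - 4.
def geometry(p, q):
    d = (p - 2) * (q - 2)
    return "spherical" if d < 4 else "Euclidean" if d == 4 else "hyperbolic"

assert geometry(4, 5) == "hyperbolic"   # the order-5 square tiling {4,5}
assert geometry(4, 4) == "Euclidean"    # the ordinary square tiling {4,4}
assert geometry(4, 3) == "spherical"    # the cube {4,3}
```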
Related polyhedra and tiling
This tiling is topologically related as a part of sequence of regular polyhedra and tilings with vertex figure (4n).
This hyperbolic tiling is related to a semiregular infinite skew polyhedron with the same vertex figure in Euclidean 3-space.
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
See also
Square tiling
Uniform tilings in hyperbolic plane
List of regular polytopes
Medial rhombic triacontahedron
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Isohedral tilings
Order-5 tilings
Regular tilings
Square tilings
|
https://en.wikipedia.org/wiki/Order-4%20pentagonal%20tiling
|
In geometry, the order-4 pentagonal tiling is a regular tiling of the hyperbolic plane. It has Schläfli symbol of {5,4}. It can also be called a pentapentagonal tiling in a bicolored quasiregular form.
Symmetry
This tiling represents a hyperbolic kaleidoscope of 5 mirrors meeting as edges of a regular pentagon. This symmetry is called *22222 in orbifold notation, with 5 order-2 mirror intersections. In Coxeter notation it can be represented as [5*,4], removing two of three mirrors (passing through the pentagon center) in the [5,4] symmetry.
The kaleidoscopic domains can be seen as bicolored pentagons, representing mirror images of the fundamental domain. This coloring represents the uniform tiling t1{5,5} and as a quasiregular tiling is called a pentapentagonal tiling.
Related polyhedra and tiling
This tiling is topologically related as a part of sequence of regular polyhedra and tilings with pentagonal faces, starting with the dodecahedron, with Schläfli symbol {5,n}, and Coxeter diagram , progressing to infinity.
This tiling is also topologically related as a part of sequence of regular polyhedra and tilings with four faces per vertex, starting with the octahedron, with Schläfli symbol {n,4}, and Coxeter diagram , with n progressing to infinity.
This tiling is topologically related as a part of sequence of regular polyhedra and tilings with vertex figure (4n).
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
, invited lecture, ICM, Amsterdam, 1954.
See also
Square tiling
Tilings of regular polygons
List of uniform planar tilings
List of regular polytopes
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Isohedral tilings
Order-4 tilings
Pentagonal tilings
Regular tilings
|
https://en.wikipedia.org/wiki/Truncated%20heptagonal%20tiling
|
In geometry, the truncated heptagonal tiling is a semiregular tiling of the hyperbolic plane. There is one triangle and two tetradecagons at each vertex, giving the tiling vertex configuration 3.14.14. It has Schläfli symbol t{7,3}.
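One way to see that the configuration 3.14.14 is hyperbolic is to add up the Euclidean interior angles meeting at a vertex; a total exceeding 360° forces negative curvature. A small illustrative sketch (not part of the article):

```python
def euclidean_angle_sum(config):
    # The interior angle of a regular Euclidean k-gon is 180*(k-2)/k degrees;
    # sum these over the polygons listed in a vertex configuration.
    return sum(180.0 * (k - 2) / k for k in config)

# 3.14.14: 60 + 2*(180*12/14) ≈ 368.57 > 360, so the tiling is hyperbolic.
# By contrast, 4.4.4.4 (the square tiling) sums to exactly 360.
```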
Dual tiling
The dual tiling is called an order-7 triakis triangular tiling, seen as an order-7 triangular tiling with each triangle divided into three by a center point.
Related polyhedra and tilings
This hyperbolic tiling is topologically related as part of a sequence of uniform truncated polyhedra with vertex configurations (3.2n.2n), and [n,3] Coxeter group symmetry.
From a Wythoff construction there are eight hyperbolic uniform tilings that can be based from the regular heptagonal tiling.
Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are eight forms.
See also
Truncated hexagonal tiling
Heptagonal tiling
Tilings of regular polygons
List of uniform tilings
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strass, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Heptagonal tilings
Hyperbolic tilings
Isogonal tilings
Semiregular tilings
Truncated tilings
|
https://en.wikipedia.org/wiki/Rhombitriheptagonal%20tiling
|
In geometry, the rhombitriheptagonal tiling is a semiregular tiling of the hyperbolic plane. At each vertex there is one triangle and one heptagon, alternating between a pair of squares. The tiling has Schläfli symbol rr{7,3}. It can be seen as a rectification of the triheptagonal tiling, r{7,3}, as well as an expanded heptagonal tiling or expanded order-7 triangular tiling.
Dual tiling
The dual tiling is called a deltoidal triheptagonal tiling, and consists of congruent kites. It is formed by overlaying an order-3 heptagonal tiling and an order-7 triangular tiling.
Related polyhedra and tilings
From a Wythoff construction there are eight hyperbolic uniform tilings that can be based from the regular heptagonal tiling.
Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 8 forms.
Symmetry mutations
This tiling is topologically related as part of a sequence of cantellated polyhedra with vertex figure (3.4.n.4), and continues as tilings of the hyperbolic plane. These vertex-transitive figures have (*n32) reflectional symmetry.
See also
Rhombitrihexagonal tiling
Order-3 heptagonal tiling
Tilings of regular polygons
List of uniform tilings
Kagome lattice
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strass, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Semiregular tilings
|
https://en.wikipedia.org/wiki/Snub%20triheptagonal%20tiling
|
In geometry, the order-3 snub heptagonal tiling is a semiregular tiling of the hyperbolic plane. There are four triangles and one heptagon on each vertex. It has Schläfli symbol of sr{7,3}. The snub tetraheptagonal tiling is another related hyperbolic tiling with Schläfli symbol sr{7,4}.
Images
Drawn in chiral pairs, with edges missing between black triangles:
Dual tiling
The dual tiling is called an order-7-3 floret pentagonal tiling, and is related to the floret pentagonal tiling.
Related polyhedra and tilings
This semiregular tiling is a member of a sequence of snubbed polyhedra and tilings with vertex figure (3.3.3.3.n) and Coxeter–Dynkin diagram . These figures and their duals have (n32) rotational symmetry, being in the Euclidean plane for n=6, and hyperbolic plane for any higher n. The series can be considered to begin with n=2, with one set of faces degenerated into digons.
From a Wythoff construction there are eight hyperbolic uniform tilings that can be based from the regular heptagonal tiling.
Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 8 forms.
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strass, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
See also
Snub hexagonal tiling
Floret pentagonal tiling
Order-3 heptagonal tiling
Tilings of regular polygons
List of uniform planar tilings
Kagome lattice
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Chiral figures
Hyperbolic tilings
Isogonal tilings
Semiregular tilings
Snub tilings
|
https://en.wikipedia.org/wiki/Truncated%20order-7%20triangular%20tiling
|
In geometry, the order-7 truncated triangular tiling, sometimes called the hyperbolic soccerball, is a semiregular tiling of the hyperbolic plane. There are two hexagons and one heptagon on each vertex, forming a pattern similar to a conventional soccer ball (truncated icosahedron) with heptagons in place of pentagons. It has Schläfli symbol of t{3,7}.
Hyperbolic soccerball (football)
This tiling is called a hyperbolic soccerball (football) for its similarity to the truncated icosahedron pattern used on soccer balls. Small portions of it can be constructed in 3-space as a hyperbolic surface.
Dual tiling
The dual tiling is called a heptakis heptagonal tiling, named for being constructible as a heptagonal tiling with every heptagon divided into seven triangles by the center point.
Related tilings
This hyperbolic tiling is topologically related as part of a sequence of uniform truncated polyhedra with vertex configurations (n.6.6), and [n,3] Coxeter group symmetry.
From a Wythoff construction there are eight hyperbolic uniform tilings that can be based from the regular heptagonal tiling.
Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 8 forms.
In popular culture
This tiling features prominently in HyperRogue.
See also
Triangular tiling
Order-3 heptagonal tiling
Order-7 triangular tiling
Tilings of regular polygons
List of uniform tilings
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strass, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Geometric explorations on the hyperbolic football by Frank Sottile
Hyperbolic tilings
Isogonal tilings
Order-7 tilings
Semiregular tilings
Triangular tilings
Truncated tilings
|
https://en.wikipedia.org/wiki/Supersingular%20K3%20surface
|
In algebraic geometry, a supersingular K3 surface is a K3 surface over a field k of characteristic p > 0 such that the slopes of Frobenius on the crystalline cohomology H^2(X, W(k)) are all equal to 1. These have also been called Artin supersingular K3 surfaces. Supersingular K3 surfaces can be considered the most special and interesting of all K3 surfaces.
Definitions and main results
More generally, a smooth projective variety X over a field of characteristic p > 0 is called supersingular if all slopes of Frobenius on the crystalline cohomology H^a(X, W(k)) are equal to a/2, for all a. In particular, this gives the standard notion of a supersingular abelian variety. For a variety X over a finite field F_q, this is equivalent to saying that the eigenvalues of Frobenius on the l-adic cohomology H^a(X, Q_l) are equal to q^(a/2) times roots of unity. It follows that any variety in positive characteristic whose l-adic cohomology is generated by algebraic cycles is supersingular.
A K3 surface whose l-adic cohomology is generated by algebraic cycles is sometimes called a Shioda supersingular K3 surface. Since the second Betti number of a K3 surface is always 22, this property means that the surface has 22 independent elements in its Picard group (ρ = 22). From what we have said, a K3 surface with Picard number 22 must be supersingular.
Conversely, the Tate conjecture would imply that every supersingular K3 surface over an algebraically closed field has Picard number 22. This is now known in every characteristic p except 2, since the Tate conjecture was proved for all K3 surfaces in characteristic p at least 3, beginning with Nygaard–Ogus (1985).
To see that K3 surfaces with Picard number 22 exist only in positive characteristic, one can use Hodge theory to prove that the Picard number of a K3 surface in characteristic zero is at most 20. In fact the Hodge diamond for any complex K3 surface is the same (see classification), and the middle row reads 1, 20, 1. In other words, h2,0 and h0,2 both take the value 1, with h1,1 = 20. Therefore, the dimension of the space spanned by the algebraic cycles is at most 20 in characteristic zero; surfaces with this maximum value are sometimes called singular K3 surfaces.
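The Hodge-theoretic bookkeeping above can be checked mechanically. The snippet below (an illustrative sketch, not from any reference) sums the Hodge diamond of a complex K3 surface to recover the second Betti number b2 = 22 and the Euler characteristic 24, with h^{1,1} = 20 bounding the Picard number in characteristic zero.

```python
# Hodge diamond of any complex K3 surface: row k lists h^{p,q} with p+q = k.
hodge_rows = [
    [1],          # h^{0,0}
    [0, 0],       # h^{1,0}, h^{0,1}
    [1, 20, 1],   # h^{2,0}, h^{1,1}, h^{0,2}
    [0, 0],       # h^{2,1}, h^{1,2}
    [1],          # h^{2,2}
]

betti = [sum(row) for row in hodge_rows]               # Betti numbers [1, 0, 22, 0, 1]
euler = sum((-1) ** k * b for k, b in enumerate(betti))  # Euler characteristic 24

# b2 = 22, while in characteristic zero the algebraic classes sit inside the
# (1,1)-part, so the Picard number is at most h^{1,1} = 20.
```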
Another phenomenon which can only occur in positive characteristic is that a K3 surface may be unirational. Michael Artin observed that every unirational K3 surface over an algebraically closed field must have Picard number 22. (In particular, a unirational K3 surface must be supersingular.) Conversely, Artin conjectured that every K3 surface with Picard number 22 must be unirational. Artin's conjecture was proved in characteristic 2. Proofs in every characteristic p at least 5 were also claimed, but later refuted.
History
The first example of a K3 surface with Picard number 22 came from the observation that the Fermat quartic
w^4 + x^4 + y^4 + z^4 = 0
has Picard number 22 over algebraically closed fields of characteristic congruent to 3 mod 4. Then Shioda showe
|
https://en.wikipedia.org/wiki/Snub%20%28geometry%29
|
In geometry, a snub is an operation applied to a polyhedron. The term originates from Kepler's names for two Archimedean solids, the snub cube and the snub dodecahedron. In general, snubs have chiral symmetry with two forms, of clockwise or counterclockwise orientation. By Kepler's names, a snub can be seen as an expansion of a regular polyhedron: moving the faces apart, twisting them about their centers, adding new polygons centered on the original vertices, and adding pairs of triangles fitting between the original edges.
The terminology was generalized by Coxeter, with a slightly different definition, for a wider set of uniform polytopes.
Conway snubs
John Conway explored generalized polyhedron operators, defining what is now called Conway polyhedron notation, which can be applied to polyhedra and tilings. Conway calls Coxeter's operation a semi-snub.
In this notation, snub is defined by the dual and gyro operators, as s = dg, and is equivalent to an alternation of a truncation of an ambo operator. Conway's notation itself avoids Coxeter's alternation (half) operation, since that operation applies only to polyhedra whose faces are all even-sided.
In 4 dimensions, Conway suggests the snub 24-cell should be called a semi-snub 24-cell because, unlike 3-dimensional snub polyhedra, which are alternated omnitruncated forms, it is not an alternated omnitruncated 24-cell; it is instead an alternated truncated 24-cell.
Coxeter's snubs, regular and quasiregular
Coxeter's snub terminology is slightly different, meaning an alternated truncation: he derives the snub cube as a snub cuboctahedron, and the snub dodecahedron as a snub icosidodecahedron. This definition is used in the naming of two Johnson solids, the snub disphenoid and the snub square antiprism, and of higher-dimensional polytopes such as the 4-dimensional snub 24-cell, with extended Schläfli symbol s{3,4,3}.
A regular polyhedron (or tiling), with Schläfli symbol {p,q}, has truncation defined as t{p,q}, and has snub defined as the alternated truncation ht{p,q} = s{p,q}. This alternated construction requires q to be even.
A quasiregular polyhedron, with Schläfli symbol r{p,q}, has quasiregular truncation defined as tr{p,q}, and has quasiregular snub defined as the alternated truncated rectification htr{p,q} = sr{p,q}.
For example, Kepler's snub cube is derived from the quasiregular cuboctahedron, r{4,3}, and so is more explicitly called a snub cuboctahedron, sr{4,3}. The snub cuboctahedron is the alternation of the truncated cuboctahedron, tr{4,3}.
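The alternation described here can be sanity-checked by counting. The sketch below is illustrative (`alternate` is a hypothetical helper, and it assumes the seed is an omnitruncate: every vertex trivalent, every face even-sided); it recovers the snub cube's 24 vertices, 60 edges, and 38 faces from the truncated cuboctahedron.

```python
def alternate(vertices, face_list):
    # Alternation of an omnitruncated polyhedron: keep half the vertices,
    # halve every face (digons degenerate to edges and disappear), and add
    # one new triangle for each removed degree-3 vertex.  The edge count
    # then follows from Euler's formula V - E + F = 2.
    kept = vertices // 2
    removed = vertices - kept
    faces = [k // 2 for k in face_list if k // 2 >= 3] + [3] * removed
    f = len(faces)
    e = kept + f - 2
    return kept, e, f

# Truncated cuboctahedron: 48 vertices; 12 squares, 8 hexagons, 6 octagons.
# Its alternation is the snub cube: 24 vertices, 60 edges, 38 faces
# (32 triangles and 6 squares).
print(alternate(48, [4] * 12 + [6] * 8 + [8] * 6))
```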
Regular polyhedra with even-order vertices can also be snubbed as alternated truncations; for example, the snub octahedron, s{3,4} = ht{3,4}, is the alternation of the truncated octahedron, t{3,4}. The snub octahedron represents the pseudoicosahedron, a regular icosahedron with
|
https://en.wikipedia.org/wiki/Carleman%27s%20condition
|
In mathematics, particularly in analysis, Carleman's condition gives a sufficient condition for the determinacy of the moment problem. That is, if a measure μ satisfies Carleman's condition, there is no other measure having the same moments as μ. The condition was discovered by Torsten Carleman in 1922.
Hamburger moment problem
For the Hamburger moment problem (the moment problem on the whole real line), the theorem states the following:
Let μ be a measure on R such that all the moments
m_n = ∫ x^n dμ(x),  n = 0, 1, 2, ...
are finite. If
Σ_{n=1}^∞ m_{2n}^(−1/(2n)) = +∞,
then the moment problem for (m_n) is determinate; that is, μ is the only measure on R with (m_n) as its sequence of moments.
Stieltjes moment problem
For the Stieltjes moment problem (the moment problem on the half-line [0, ∞)), the sufficient condition for determinacy is
Σ_{n=1}^∞ m_n^(−1/(2n)) = +∞.
Notes
References
Chapter 3.3, Durrett, Richard. Probability: Theory and Examples. 5th ed. Cambridge Series in Statistical and Probabilistic Mathematics 49. Cambridge ; New York, NY: Cambridge University Press, 2019.
Mathematical analysis
Moment (mathematics)
Probability theory
Theorems in approximation theory
|