https://en.wikipedia.org/wiki/Pedro%20Gual%20Municipality
|
Pedro Gual is one of the 21 municipalities (municipios) that make up the Venezuelan state of Miranda. According to a 2007 population estimate by the National Institute of Statistics of Venezuela, the municipality has a population of 22,579. The town of Cúpira is the municipal seat of the Pedro Gual Municipality. The municipality is named for the 19th-century Venezuelan President Pedro Gual Escandón.
Demographics
The Pedro Gual Municipality, according to a 2007 population estimate by the National Institute of Statistics of Venezuela, has a population of 22,579 (up from 19,379 in 2000). This amounts to 0.8% of the state's population. The municipality's population density is .
Government
The mayor of the Pedro Gual Municipality is Manuel Alvarez, elected on October 31, 2004, with 47% of the vote. He replaced Lenis Landaeta shortly after the elections. The municipality is divided into two parishes: Cúpira and Machurucuto.
References
External links
pedrogual-miranda.gob.ve
Municipalities of Miranda (state)
|
https://en.wikipedia.org/wiki/Plaza%20Municipality
|
Plaza is one of the 21 municipalities (municipios) that make up the Venezuelan state of Miranda. According to a 2016 population estimate by the National Institute of Statistics of Venezuela, the municipality has a population of 238,750. The city of Guarenas is the administrative centre of the Plaza Municipality.
History
The city of Guarenas was established in 1621 as Nuestra Señora de Copacabana de los Guarenas. On February 27, 1989, a morning protest in this city over a recent nationwide hike in bus fares spread to Caracas, the capital of Venezuela, resulting in several days of rioting known today as the Caracazo. Today, Guarenas has virtually merged with its neighbor, Guatire.
The Curupao Power Plant, which was inaugurated in 1933, still provides electricity to Guarenas and Guatire.
Demographics
The Plaza Municipality, according to a 2016 population estimate by the National Institute of Statistics of Venezuela, has a population of 238,750 (up from 203,590 in 2000). This amounts to 8.3% of the state's population. The municipality's population density is .
Government
The mayor of the Plaza Municipality is Willian Eduardo Páez Sosa, re-elected on October 31, 2004, with 41% of the vote. The municipality consists of a single parish (Guarenas).
References
Municipalities of Miranda (state)
|
https://en.wikipedia.org/wiki/OpenEpi
|
OpenEpi is a free, web-based, open-source, operating-system-independent series of programs for use in epidemiology, biostatistics, public health, and medicine, providing a number of epidemiologic and statistical tools for summary data. OpenEpi was developed in JavaScript and HTML and can be run in modern web browsers. The program can be run from the OpenEpi website or downloaded and run without a web connection. The source code and documentation are downloadable and freely available for use by other investigators. OpenEpi has been reviewed both by media organizations and in research journals.
The OpenEpi developers have had extensive experience in the development and testing of Epi Info, a program developed by the Centers for Disease Control and Prevention (CDC) and widely used around the world for data entry and analysis. OpenEpi was developed to perform the analyses found in the StatCalc and EpiTable modules of the DOS version of Epi Info, to improve upon the types of analyses provided by these modules, and to provide a number of tools and calculations not currently available in Epi Info. It is the first step toward an entirely web-based set of epidemiologic software tools. OpenEpi can be thought of as an important companion to Epi Info and to other programs such as SAS, PSPP, SPSS, Stata, SYSTAT, Minitab, Epidata, and R. Another functionally similar Windows-based program is Winpepi. See also the list of statistical packages and the comparison of statistical packages. Both OpenEpi and Epi Info were developed with the goal of providing tools for low- and moderate-resource areas of the world. The initial development of OpenEpi was supported by a grant from the Bill and Melinda Gates Foundation to Emory University.
Types
The types of calculations currently performed by OpenEpi include:
Various confidence intervals for proportions, rates, standardized mortality ratio, mean, median, percentiles
2x2 crude and stratified tables for count and rate data
Matched case-control analysis
Test for trend with count data
Independent t-test and one-way ANOVA
Diagnostic and screening test analyses with receiver operating characteristic (ROC) curves
Sample size for proportions, cross-sectional surveys, unmatched case-control, cohort, randomized controlled trials, and comparison of two means
Power calculations for proportions (unmatched case-control, cross-sectional, cohort, randomized controlled trials) and for the comparison of two means
Random number generator
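To illustrate the flavor of one of the 2x2-table calculations listed above, here is a minimal sketch in Python rather than OpenEpi's actual JavaScript; the function name and the choice of a Woolf (logit-based) confidence interval are assumptions for illustration, not OpenEpi's exact method:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Cross-product odds ratio for the 2x2 table [[a, b], [c, d]]
    with a Woolf (logit) 95% confidence interval."""
    or_hat = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of ln(OR)
    lo = math.exp(math.log(or_hat) - z * se_log)
    hi = math.exp(math.log(or_hat) + z * se_log)
    return or_hat, lo, hi

# 20 exposed cases, 80 exposed controls, 10 unexposed cases, 90 unexposed
# controls: the cross-product odds ratio is (20*90)/(80*10) = 2.25.
print(odds_ratio_ci(20, 80, 10, 90))
```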
For epidemiologists and other health researchers, OpenEpi performs a number of calculations based on tables not found in most epidemiologic and statistical packages. For example, for a single 2x2 table, in addition to the results presented in other programs, OpenEpi provides estimates for:
Etiologic or prevented fraction in the population and in exposed with confidence intervals, based on risk, odds, or rate data
The cross-product and MLE odds ratio estimate
Mid-p exact p-
|
https://en.wikipedia.org/wiki/Manuel%20Redondo
|
Manuel Redondo García (born 11 January 1985 in Seville, Andalusia) is a Spanish former professional footballer who played as a left-back.
Career statistics
Honours
Sevilla B
Segunda División B: 2006–07
Sevilla
Copa del Rey: 2009–10
Oviedo
Segunda División B: 2014–15
References
External links
1985 births
Living people
Spanish men's footballers
Footballers from Seville
Men's association football defenders
Segunda División players
Segunda División B players
Tercera División players
Sevilla FC C players
Sevilla Atlético players
Sevilla FC players
SD Ponferradina players
CE Sabadell FC footballers
Xerez CD footballers
Real Oviedo players
Coria CF players
Cypriot First Division players
Doxa Katokopias FC players
Spanish expatriate men's footballers
Expatriate men's footballers in Thailand
Expatriate men's footballers in Cyprus
Spanish expatriate sportspeople in Thailand
Spanish expatriate sportspeople in Cyprus
|
https://en.wikipedia.org/wiki/Peter%20McCullagh
|
Peter McCullagh (born 8 January 1952) is a Northern Irish-born American statistician and John D. MacArthur Distinguished Service Professor in the Department of Statistics at the University of Chicago.
Education
McCullagh is from Plumbridge, Northern Ireland. He attended the University of Birmingham and completed his PhD at Imperial College London, supervised by David Cox and Anthony Atkinson.
Research
McCullagh is the coauthor with John Nelder of Generalized Linear Models (1983, Chapman and Hall – second edition 1989), a seminal text on the subject of generalized linear models (GLMs) with more than 23,000 citations. He also wrote "Tensor Methods in Statistics", published originally in 1987.
Awards and honours
McCullagh is a Fellow of the Royal Society and the American Academy of Arts and Sciences. He won the COPSS Presidents' Award in 1990. He was the recipient of the Royal Statistical Society's Guy Medal in Bronze in 1983 and in Silver in 2005.
He was also the recipient of the inaugural Karl Pearson Prize of the International Statistical Institute, with John Nelder, "for their monograph Generalized Linear Models (1983)". He won a Notable Alumni Award in 2007 from his grammar school, St Columb's College.
References
Fellows of the Royal Society
Fellows of the American Academy of Arts and Sciences
Irish statisticians
People from County Tyrone
Alumni of the University of Birmingham
University of Chicago faculty
Fellows of the American Statistical Association
Living people
People educated at St Columb's College
1952 births
Mathematical statisticians
|
https://en.wikipedia.org/wiki/Primes%20in%20arithmetic%20progression
|
In number theory, primes in arithmetic progression are any sequence of at least three prime numbers that are consecutive terms in an arithmetic progression. An example is the sequence of primes (3, 7, 11), which is given by 4n + 3 for n = 0, 1, 2.
According to the Green–Tao theorem, there exist arbitrarily long sequences of primes in arithmetic progression. Sometimes the phrase may also be used about primes which belong to an arithmetic progression which also contains composite numbers. For example, it can be used about primes in an arithmetic progression of the form a·n + b, where a and b are coprime, which according to Dirichlet's theorem on arithmetic progressions contains infinitely many primes, along with infinitely many composites.
For integer k ≥ 3, an AP-k (also called PAP-k) is any sequence of k primes in arithmetic progression. An AP-k can be written as k primes of the form a·n + b, for fixed integers a (called the common difference) and b, and k consecutive integer values of n. An AP-k is usually expressed with n = 0 to k − 1. This can always be achieved by defining b to be the first prime in the arithmetic progression.
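The definition can be checked directly in a few lines; the sketch below uses trial division for primality and helper names of my own choosing:

```python
def is_prime(m):
    """Trial-division primality test (fine for small illustrative values)."""
    if m < 2:
        return False
    for p in range(2, int(m ** 0.5) + 1):
        if m % p == 0:
            return False
    return True

def is_ap_k(a, b, k):
    """True if b, b+a, ..., b+(k-1)a are all prime, i.e. (a, b) gives an AP-k."""
    return all(is_prime(a * n + b) for n in range(k))

# {3, 5, 7} is an AP-3 with common difference 2, and
# {5, 11, 17, 23, 29} is an AP-5 with common difference 6.
print(is_ap_k(2, 3, 3), is_ap_k(6, 5, 5))
```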
Properties
Any given arithmetic progression of primes has a finite length. In 2004, Ben J. Green and Terence Tao settled an old conjecture by proving the Green–Tao theorem: The primes contain arbitrarily long arithmetic progressions. It follows immediately that there are infinitely many AP-k for any k.
If an AP-k does not begin with the prime k, then the common difference is a multiple of the primorial k# = 2·3·5·...·j, where j is the largest prime ≤ k.
Proof: Let the AP-k be a·n + b for k consecutive values of n. If a prime p does not divide a, then modular arithmetic says that p will divide every pth term of the arithmetic progression. (From H. J. Weber, Cor. 10 in "Exceptional Prime Number Twins, Triplets and Multiplets," arXiv:1102.3075 [math.NT]. See also Theor. 2.3 in "Regularities of Twin, Triplet and Multiplet Prime Numbers," arXiv:1103.0447 [math.NT], Global J.P.A.Math 8 (2012), in press.) If the AP is prime for k consecutive values, then a must therefore be divisible by all primes p ≤ k.
This also shows that an AP with common difference a cannot contain more consecutive prime terms than the value of the smallest prime that does not divide a.
If k is prime then an AP-k can begin with k and have a common difference which is only a multiple of (k−1)# instead of k#. (From H. J. Weber, "Less Regular Exceptional and Repeating Prime Number Multiplets," arXiv:1105.4092 [math.NT], Sect. 3.) For example, the AP-3 with primes {3, 5, 7} and common difference 2# = 2, or the AP-5 with primes {5, 11, 17, 23, 29} and common difference 4# = 6. It is conjectured that such examples exist for all primes k. The largest prime for which this had been confirmed is k = 19, for this AP-19 found by Wojciech Iżykowski in 2013:
19 + 4244193265542951705·17#·n, for n = 0 to 18.
It follows from widely believed conjectures, such as Dickson's conjecture and some variant
|
https://en.wikipedia.org/wiki/Capone%20%28footballer%29
|
Carlos Alberto de Oliveira (born 23 May 1972), known as Capone, is a former Brazilian footballer.
Club statistics
Honours
Mogi Mirim
Campeonato Paulista Série A2: 1 (1995)
Juventude
Campeonato Gaúcho: 1 (1998)
Copa do Brasil: 1 (1999)
Galatasaray
Turkish Cup: 1 (1999–2000)
UEFA Cup: 1 (1999–2000)
Turkish Super League: 2 (1999–2000, 2001–02)
UEFA Super Cup: 1 (2000)
Corinthians
Campeonato Paulista: 1 (2003)
External links
sports.geocities.jp
1972 births
Living people
Brazilian men's footballers
Brazilian expatriate men's footballers
Associação Atlética Ponte Preta players
Mogi Mirim Esporte Clube players
São Paulo FC players
Kyoto Sanga FC players
Esporte Clube Juventude players
Galatasaray S.K. footballers
Kocaelispor footballers
Sport Club Corinthians Paulista players
Club Athletico Paranaense players
Beitar Jerusalem F.C. players
Grêmio Foot-Ball Porto Alegrense players
Associação Atlética Portuguesa (Santos) players
Londrina Esporte Clube players
J1 League players
Expatriate men's footballers in Israel
Expatriate men's footballers in Japan
Brazilian expatriate sportspeople in Turkey
Expatriate men's footballers in Turkey
Süper Lig players
UEFA Cup winning players
Men's association football defenders
Footballers from Campinas
|
https://en.wikipedia.org/wiki/Messenger%20of%20Mathematics
|
The Messenger of Mathematics is a defunct British mathematics journal. The founding editor-in-chief was William Allen Whitworth, with Charles Taylor; volumes 1–58 were published between 1872 and 1929. James Whitbread Lee Glaisher succeeded Whitworth as editor-in-chief. In the nineteenth century, foreign contributions represented 4.7% of all pages of mathematics in the journal.
History
The journal was originally titled Oxford, Cambridge and Dublin Messenger of Mathematics. It was supported by mathematics students and governed by a board of editors composed of members of the universities of Oxford, Cambridge and Dublin (the last being its sole constituent college, Trinity College Dublin). Volumes 1–5 were published between 1862 and 1871. It merged with The Quarterly Journal of Pure and Applied Mathematics to form the Quarterly Journal of Mathematics.
References
Further reading
External links
Messenger of Mathematics, vols. 1–30 (1871–1901) digitized by the Center for Retrospective Digitization.
Defunct journals of the United Kingdom
English-language journals
Mathematics education in the United Kingdom
Mathematics journals
Publications established in 1862
Publications disestablished in 1929
1862 establishments in England
|
https://en.wikipedia.org/wiki/Infinite
|
Infinite may refer to:
Mathematics
Infinite set, a set that is not a finite set
Infinity, an abstract concept describing something without any limit
Music
Infinite (group), a South Korean boy band
Infinite (EP), debut EP of American musician Haywyre, released in 2012
Infinite (Eminem album), the debut album of American rapper Eminem, released in 1996
Infinite (Eminem song), the debut song of American rapper Eminem, released in 1996
Infinite (Stratovarius album), a studio album by power metal band Stratovarius, released in 2000
The Infinite (album), by trumpeter Dave Douglas, released in 2002
"Infinite...", a 2004 single by Japanese singer Beni Arashiro
Infinite (Notaker song), a 2016 single by American electronic producer Notaker
Infinite (rapper), a Canadian rapper
Infinite (Sam Concepcion album), the second studio album by Filipino singer Sam Concepcion
Infinite (Deep Purple album), the twentieth studio album by Deep Purple
"Infinite", a 1990 song by Forbidden from Twisted into Form
"Infinite", a 2017 song by Tyler Smyth and Andy Bane from Sonic Forces
Other uses
Infinite (film), a 2021 science fiction film
"The Infinites", a 1953 science fiction short story by Philip K. Dick
The Infinites, a fictional group of cosmic beings in the Avengers Infinity comic book series
Infinite, a character in the video game Sonic Forces
Infinite Flight, a flight simulator released in 2011
Halo Infinite, 2021 video game
See also
Infinity (disambiguation)
|
https://en.wikipedia.org/wiki/Superprocess
|
A (ξ, ϕ)-superprocess is, within probability theory, a measure-valued stochastic process that is usually constructed as a special limit of near-critical branching diffusions.
Informally, it can be seen as a branching process in which each particle splits and dies at infinite rates and evolves according to a diffusion equation, and in which we follow the rescaled population of particles, seen as a measure on the underlying space.
Scaling limit of a discrete branching process
Simplest setting
For any integer N ≥ 1, consider a branching Brownian process defined as follows:
Start at time 0 with N independent particles, each distributed according to a fixed probability distribution.
Each particle independently moves according to a Brownian motion.
Each particle independently dies at rate N.
When a particle dies, with probability 1/2 it gives birth to two offspring in the same location.
This notation should be interpreted as follows: at each time t, the process records the number of particles lying in each given set. In other words, it is a measure-valued random process.
Now, define a renormalized process:
Then the finite-dimensional distributions of the renormalized process converge as N → ∞ to those of a measure-valued random process, which is called a (ξ, ϕ)-superprocess with the corresponding initial value, where ξ is a Brownian motion (more precisely, a Markov process on a filtered probability space which has the law of a Brownian motion under the relevant starting point).
As will be clarified in the next section, ϕ encodes the underlying branching mechanism, and ξ encodes the motion of the particles. Here, since ξ is a Brownian motion, the resulting object is known as a super-Brownian motion.
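The rescaled particle system described above can be caricatured in a short discrete-time simulation. This is a rough sketch under stated assumptions, not the exact construction: time is discretized with step dt (so N·dt must be at most 1 for the death probability to make sense), the death rate is taken to be N, branching is critical binary (0 or 2 offspring with probability 1/2 each), and the symbols N and dt are mine:

```python
import random

def branching_bm(N, T, dt=0.001, seed=1):
    """Discrete-time sketch of a rescaled branching Brownian system:
    N initial particles at the origin; each dies at rate N and, upon
    death, leaves 0 or 2 offspring with probability 1/2 each; survivors
    take Gaussian (approximate Brownian) increments.  Returns the
    rescaled total mass (particle count)/N at time T.  Requires N*dt <= 1."""
    rng = random.Random(seed)
    particles = [0.0] * N
    t = 0.0
    while t < T and particles:
        nxt = []
        for x in particles:
            if rng.random() < N * dt:      # death event during this step
                if rng.random() < 0.5:     # critical binary branching
                    nxt.extend((x, x))     # two offspring at the same spot
            else:
                nxt.append(x + rng.gauss(0.0, dt ** 0.5))
        particles = nxt
        t += dt
    return len(particles) / N

print(branching_bm(N=50, T=0.5))
```

Because the branching is critical, the rescaled mass is a nonnegative martingale: on average it stays near 1 until (possible) extinction.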
Generalization to (ξ, ϕ)-superprocesses
Our discrete branching system can be much more sophisticated, leading to a variety of superprocesses:
Instead of Euclidean space, the state space can now be any Lusin space.
The underlying motion of the particles can now be given by a càdlàg Markov process ξ.
A particle dies at rate
When a particle dies at time , located in , it gives birth to a random number of offspring . These offspring start to move from . We require that the law of depends solely on , and that all are independent. Set and define the associated probability-generating function:
Add the following requirement that the expected number of offspring is bounded.
Define as above, and define the following crucial function:
Add the requirement, for all , that is Lipschitz continuous with respect to uniformly on , and that converges to some function as uniformly on .
Provided all of these conditions hold, the finite-dimensional distributions of the particle system converge to those of a measure-valued random process, which is called a (ξ, ϕ)-superprocess, with the given initial value.
Commentary on ϕ
Provided that the number of branching events becomes infinite, the convergence requirement implies, via a Taylor expansion, that the expected number of offspring is close to 1, and therefore that the process is near-critical.
Generalization to Dawson-Watanabe superprocesses
T
|
https://en.wikipedia.org/wiki/Truncated%20normal%20distribution
|
In probability and statistics, the truncated normal distribution is the probability distribution derived from that of a normally distributed random variable by bounding the random variable from either below or above (or both). The truncated normal distribution has wide applications in statistics and econometrics.
Definitions
Suppose X has a normal distribution with mean μ and variance σ² and lies within the interval (a, b), with −∞ ≤ a < b ≤ ∞. Then X conditional on a < X < b has a truncated normal distribution.
Its probability density function f, for a ≤ x ≤ b, is given by
f(x; μ, σ, a, b) = (1/σ) · φ((x − μ)/σ) / (Φ((b − μ)/σ) − Φ((a − μ)/σ)),
and by f = 0 otherwise.
Here,
φ(z) = (1/√(2π)) exp(−z²/2) is the probability density function of the standard normal distribution and Φ is its cumulative distribution function.
By definition, if b = ∞, then Φ((b − μ)/σ) = 1, and similarly, if a = −∞, then Φ((a − μ)/σ) = 0.
The above formulae show that the scale parameter of the truncated normal distribution may be allowed to assume negative values. The parameter σ is in this case imaginary, but the function f is nevertheless real, positive, and normalizable. The scale parameter of the untruncated normal distribution must be positive because the distribution would not be normalizable otherwise. The doubly truncated normal distribution, on the other hand, can in principle have a negative scale parameter (which is different from the variance; see summary formulae), because no such integrability problems arise on a bounded domain. In this case the distribution cannot be interpreted as an untruncated normal conditional on a < X < b, of course, but can still be interpreted as a maximum-entropy distribution with first and second moments as constraints, and has an additional peculiar feature: it presents two local maxima instead of one, located at a and b.
Properties
The truncated normal is one of two possible maximum entropy probability distributions for a fixed mean and variance constrained to the interval [a,b], the other being the truncated U. Truncated normals with fixed support form an exponential family.
Nielsen reported a closed-form formula for calculating the Kullback-Leibler divergence and the Bhattacharyya distance between two truncated normal distributions with the support of the first distribution nested into the support of the second distribution.
Moments
If the random variable has been truncated only from below, some probability mass has been shifted to higher values, giving a first-order stochastically dominating distribution and hence increasing the mean to a value higher than the mean of the original normal distribution. Likewise, if the random variable has been truncated only from above, the truncated distribution has a mean less than that of the original normal distribution.
Regardless of whether the random variable is bounded above, below, or both, the truncation is a mean-preserving contraction combined with a mean-changing rigid shift, and hence the variance of the truncated distribution is less than the variance of the original normal distribution.
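These moment effects can be checked numerically. The sketch below uses only the standard library; the two-sided mean formula E[X | a < X < b] = μ + σ(φ(α) − φ(β))/(Φ(β) − Φ(α)), with α = (a − μ)/σ and β = (b − μ)/σ, is standard material rather than quoted from this article, and it is compared against plain rejection sampling:

```python
import math, random

def phi(z):
    """Standard normal probability density function."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):
    """Standard normal cumulative distribution function via erf."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def truncnorm_mean(mu, sigma, a, b):
    """Closed-form mean of N(mu, sigma^2) truncated to (a, b)."""
    alpha, beta = (a - mu) / sigma, (b - mu) / sigma
    Z = Phi(beta) - Phi(alpha)
    return mu + sigma * (phi(alpha) - phi(beta)) / Z

# Monte Carlo check by rejection sampling from N(0, 1) restricted to (-1, 2).
rng = random.Random(0)
samples = []
while len(samples) < 100_000:
    x = rng.gauss(0.0, 1.0)
    if -1.0 < x < 2.0:
        samples.append(x)
mc = sum(samples) / len(samples)
print(truncnorm_mean(0.0, 1.0, -1.0, 2.0), mc)  # the two values agree closely
```

Note that the truncated mean is positive here even though μ = 0: the lower cut at −1 removes more mass than the upper cut at 2, shifting the mean upward, as the text describes.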
Two sided truncation
Let α = (a − μ)/σ and β = (b − μ)/σ, and write Z = Φ(β) − Φ(α). Then:
E(X | a < X < b) = μ + σ · (φ(α) − φ(β)) / Z
and
Var(X | a < X < b) = σ² · [1 + (α φ(α) − β φ(β)) / Z − ((φ(α) − φ(β)) / Z)²].
Care must be taken in the numerical evaluation of these formulas, which can
|
https://en.wikipedia.org/wiki/Kn%C3%B6del%20number
|
In number theory, an n-Knödel number for a given positive integer n is a composite number m with the property that each i < m coprime to m satisfies i^(m − n) ≡ 1 (mod m). The concept is named after Walter Knödel.
The set of all n-Knödel numbers is denoted Kn.
The special case K1 is the Carmichael numbers. There are infinitely many n-Knödel numbers for a given n.
Due to Euler's theorem, every composite number m is an n-Knödel number for n = m − φ(m), where φ is Euler's totient function.
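The definition can be checked directly; in this sketch the function name is mine and trial division is used for the compositeness test:

```python
from math import gcd

def is_knodel(m, n):
    """True if m is an n-Knödel number: m is composite and every
    i < m coprime to m satisfies i**(m - n) ≡ 1 (mod m)."""
    if m < 4 or all(m % p for p in range(2, int(m ** 0.5) + 1)):
        return False  # m must be composite
    return all(pow(i, m - n, m) == 1
               for i in range(1, m) if gcd(i, m) == 1)

# 561 = 3·11·17 is the smallest Carmichael number, i.e. the smallest
# member of K1; 4 is a 2-Knödel number (1² ≡ 3² ≡ 1 mod 4).
print(is_knodel(561, 1), is_knodel(4, 2))
```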
Examples
References
Literature
Eponymous numbers in mathematics
Number theory
|
https://en.wikipedia.org/wiki/Heun%27s%20method
|
In mathematics and computational science, Heun's method may refer to the improved or modified Euler's method (that is, the explicit trapezoidal rule), or a similar two-stage Runge–Kutta method. It is named after Karl Heun and is a numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. Both variants can be seen as extensions of the Euler method into two-stage second-order Runge–Kutta methods.
The procedure for calculating the numerical solution to the initial value problem
y′(t) = f(t, y(t)),  y(t₀) = y₀,
by way of Heun's method, is to first calculate the intermediate value ỹ_{i+1} and then the final approximation y_{i+1} at the next integration point:
ỹ_{i+1} = y_i + h·f(t_i, y_i),
y_{i+1} = y_i + (h/2)·(f(t_i, y_i) + f(t_{i+1}, ỹ_{i+1})),
where h is the step size and t_{i+1} = t_i + h.
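The predictor-corrector update (Euler predictor, trapezoidal corrector) can be sketched as follows; function and variable names are my own:

```python
import math

def heun(f, t0, y0, h, steps):
    """Heun's method (explicit trapezoidal rule) for y' = f(t, y)."""
    t, y = t0, y0
    for _ in range(steps):
        y_pred = y + h * f(t, y)                       # Euler predictor
        y = y + h / 2 * (f(t, y) + f(t + h, y_pred))   # trapezoidal corrector
        t += h
    return y

# y' = y with y(0) = 1 integrated to t = 1 should approximate e ≈ 2.71828.
approx = heun(lambda t, y: y, 0.0, 1.0, h=0.01, steps=100)
print(approx, abs(approx - math.e))
```

Because the method is second order, halving h should cut the error by roughly a factor of four, in contrast to Euler's method's factor of two.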
Description
Euler's method is used as the foundation for Heun's method. Euler's method uses the line tangent to the function at the beginning of the interval as an estimate of the slope of the function over the interval, assuming that if the step size is small, the error will be small. However, even when extremely small step sizes are used, over a large number of steps the error starts to accumulate and the estimate diverges from the actual functional value.
Where the solution curve is concave up, its tangent line will underestimate the vertical coordinate of the next point and vice versa for a concave down solution. The ideal prediction line would hit the curve at its next predicted point. In reality, there is no way to know whether the solution is concave-up or concave-down, and hence if the next predicted point will overestimate or underestimate its vertical value. The concavity of the curve cannot be guaranteed to remain consistent either and the prediction may overestimate and underestimate at different points in the domain of the solution.
Heun's Method addresses this problem by considering the interval spanned by the tangent line segment as a whole. Taking a concave-up example, the left tangent prediction line underestimates the slope of the curve for the entire width of the interval from the current point to the next predicted point. If the tangent line at the right end point is considered (which can be estimated using Euler's Method), it has the opposite problem.
The points along the tangent line of the left end point have vertical coordinates which all underestimate those that lie on the solution curve, including the right end point of the interval under consideration. The solution is to make the slope greater by some amount. Heun's Method considers the tangent lines to the solution curve at both ends of the interval, one which overestimates, and one which underestimates the ideal vertical coordinates. A prediction line must be constructed based on the right end point tangent's slope alone, approximated using Euler's Method. If this slope is passed through the left end point of the interval, the result is evidently too steep to be used as an ideal prediction line and overestimates the ideal point. Therefore, the ideal point lies approximately halfway be
|
https://en.wikipedia.org/wiki/Yamabe%20problem
|
The Yamabe problem refers to a conjecture in the mathematical field of differential geometry, which was resolved in the 1980s. It is a statement about the scalar curvature of Riemannian manifolds:
By computing a formula for how the scalar curvature of a conformally rescaled metric relates to that of the original metric, this statement can be rephrased in the following form:
The mathematician Hidehiko Yamabe gave the above statements as theorems and provided a proof; however, Neil Trudinger discovered an error in the proof. The problem of understanding whether the above statements are true or false became known as the Yamabe problem. The combined work of Yamabe, Trudinger, Thierry Aubin, and Richard Schoen provided an affirmative resolution to the problem in 1984.
It is now regarded as a classic problem in geometric analysis, with the proof requiring new methods in the fields of differential geometry and partial differential equations. A decisive point in Schoen's ultimate resolution of the problem was an application of the positive energy theorem of general relativity, which is a purely differential-geometric mathematical theorem first proved (in a provisional setting) in 1979 by Schoen and Shing-Tung Yau.
There has been more recent work due to Simon Brendle, Marcus Khuri, Fernando Codá Marques, and Schoen, dealing with the collection of all positive and smooth functions such that, for a given Riemannian manifold , the metric has constant scalar curvature. Additionally, the Yamabe problem as posed in similar settings, such as for complete noncompact Riemannian manifolds, is not yet fully understood.
The Yamabe problem in special cases
Here, we refer to a "solution of the Yamabe problem" on a Riemannian manifold as a Riemannian metric on for which there is a positive smooth function with
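For concreteness, the standard form of this condition can be recorded; the notation here is chosen by me (it is textbook material rather than quoted from the text above). Writing the conformal metric as g̃ = u^{4/(n−2)} g for a positive smooth function u on a manifold of dimension n ≥ 3, the requirement that g̃ have constant scalar curvature λ is the Yamabe equation:

```latex
% Conformal change of metric: \tilde{g} = u^{4/(n-2)}\, g, with u > 0 smooth, n \ge 3.
% Constant scalar curvature R_{\tilde{g}} \equiv \lambda is equivalent to
-\frac{4(n-1)}{n-2}\,\Delta_g u + R_g\, u \;=\; \lambda\, u^{\frac{n+2}{n-2}},
% where \Delta_g and R_g are the Laplacian and scalar curvature of g.
```

The critical exponent (n+2)/(n−2) is what makes the associated variational problem borderline for standard compactness arguments, which is why the resolution required the new methods described above.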
On a closed Einstein manifold
Let be a smooth Riemannian manifold. Consider a positive smooth function so that is an arbitrary element of the smooth conformal class of A standard computation shows
Taking the -inner product with results in
If is assumed to be Einstein, then the left-hand side vanishes. If is assumed to be closed, then one can do an integration by parts, recalling the Bianchi identity to see
If has constant scalar curvature, then the right-hand side vanishes. The consequent vanishing of the left-hand side proves the following fact, due to Obata (1971):
Obata then went on to prove that, except in the case of the standard sphere with its usual constant-sectional-curvature metric, the only constant-scalar-curvature metrics in the conformal class of an Einstein metric (on a closed manifold) are constant multiples of the given metric. The proof proceeds by showing that the gradient of the conformal factor is actually a conformal Killing field. If the conformal factor is not constant, following flow lines of this gradient field, starting at a minimum of the conformal factor, then allows one to show that the manifold is conformally related to the cylinder , and hence has v
|
https://en.wikipedia.org/wiki/Ray%20Gordon
|
Ray Gordon (born 1965) is a former NBL player for the Melbourne Tigers who was a member of the Tigers' NBL championship teams of 1993 (the club's first) and 1997. Gordon holds a degree in mathematics and science and a master's degree in law. As of 2022, aged 57, he works as a lawyer and owns a pub in Melbourne, Australia.
References
Ray Gordon's profile at Basketpedya.com
1965 births
Living people
Melbourne Tigers players
Place of birth missing (living people)
Date of birth missing (living people)
20th-century Australian people
|
https://en.wikipedia.org/wiki/Curvature%20of%20a%20measure
|
In mathematics, the curvature of a measure defined on the Euclidean plane R2 is a quantification of how much the measure's "distribution of mass" is "curved". It is related to notions of curvature in geometry. In the form presented below, the concept was introduced in 1995 by the mathematician Mark S. Melnikov; accordingly, it may be referred to as the Melnikov curvature or Menger-Melnikov curvature. Melnikov and Verdera (1995) established a powerful connection between the curvature of measures and the Cauchy kernel.
Definition
Let μ be a Borel measure on the Euclidean plane R2. Given three (distinct) points x, y and z in R2, let R(x, y, z) be the radius of the Euclidean circle that joins all three of them, or +∞ if they are collinear. The Menger curvature c(x, y, z) is defined to be
c(x, y, z) = 1 / R(x, y, z),
with the natural convention that c(x, y, z) = 0 if x, y and z are collinear. It is also conventional to extend this definition by setting c(x, y, z) = 0 if any of the points x, y and z coincide. The Menger-Melnikov curvature c2(μ) of μ is defined to be
c2(μ) = ∫∫∫ c(x, y, z)2 dμ(x) dμ(y) dμ(z).
More generally, for α ≥ 0, define c2α(μ) by
One may also refer to the curvature of μ at a given point x:
c2(μ; x) = ∫∫ c(x, y, z)2 dμ(y) dμ(z),
in which case
c2(μ) = ∫ c2(μ; x) dμ(x).
Examples
The trivial measure has zero curvature.
A Dirac measure δa supported at any point a has zero curvature.
If μ is any measure whose support is contained within a Euclidean line L, then μ has zero curvature. For example, one-dimensional Lebesgue measure on any line (or line segment) has zero curvature.
The Lebesgue measure defined on all of R2 has infinite curvature.
If μ is the uniform one-dimensional Hausdorff measure on a circle Cr of radius r, then μ has curvature 1/r.
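The Menger curvature of a point triple can be computed from the circumradius identity R = |x−y|·|y−z|·|z−x| / (4·Area), which is standard geometry rather than quoted from this article; the function name below is mine:

```python
import math

def menger_curvature(x, y, z):
    """c(x, y, z) = 1/R = 4*Area / (|x-y| |y-z| |z-x|);
    returns 0 for collinear (or coincident) points."""
    ax, ay = x
    bx, by = y
    cx, cy = z
    twice_area = abs((bx - ax) * (cy - ay) - (by - ay) * (cx - ax))
    d = math.dist(x, y) * math.dist(y, z) * math.dist(z, x)
    return 0.0 if d == 0 else 2 * twice_area / d

# Three points on the unit circle give curvature 1 (radius 1);
# collinear points give curvature 0.
print(menger_curvature((1, 0), (0, 1), (-1, 0)),
      menger_curvature((0, 0), (1, 1), (2, 2)))
```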
Relationship to the Cauchy kernel
In this section, R2 is thought of as the complex plane C. Melnikov and Verdera (1995) showed the precise relation of the boundedness of the Cauchy kernel to the curvature of measures. They proved that if there is some constant C0 such that
for all x in C and all r > 0, then there is another constant C, depending only on C0, such that
for all ε > 0. Here cε denotes a truncated version of the Menger-Melnikov curvature in which the integral is taken only over those points x, y and z such that
Similarly, denotes a truncated Cauchy integral operator: for a measure μ on C and a point z in C, define
where the integral is taken over those points ξ in C with
References
Curvature (mathematics)
Measure theory
|
https://en.wikipedia.org/wiki/Baum%E2%80%93Connes%20conjecture
|
In mathematics, specifically in operator K-theory, the Baum–Connes conjecture suggests a link between the K-theory of the reduced C*-algebra of a group and the K-homology of the classifying space of proper actions of that group. The conjecture sets up a correspondence between different areas of mathematics, with the K-homology of the classifying space being related to geometry, differential operator theory, and homotopy theory, while the K-theory of the group's reduced C*-algebra is a purely analytical object.
The conjecture, if true, would have some older famous conjectures as consequences. For instance, the surjectivity part implies the Kadison–Kaplansky conjecture for discrete torsion-free groups, and the injectivity is closely related to the Novikov conjecture.
The conjecture is also closely related to index theory, as the assembly map is a sort of index, and it plays a major role in Alain Connes' noncommutative geometry program.
The origins of the conjecture go back to Fredholm theory, the Atiyah–Singer index theorem and the interplay of geometry with operator K-theory as expressed in the works of Brown, Douglas and Fillmore, among many other motivating subjects.
Formulation
Let Γ be a second countable locally compact group (for instance a countable discrete group). One can define a morphism
called the assembly map, from the equivariant K-homology with -compact supports of the classifying space of proper actions to the K-theory of the reduced C*-algebra of Γ. The subscript index * can be 0 or 1.
Paul Baum and Alain Connes introduced the following conjecture (1982) about this morphism:
Baum-Connes Conjecture. The assembly map is an isomorphism.
As the left-hand side tends to be more easily accessible than the right-hand side, because there are hardly any general structure theorems for the -algebra, one usually views the conjecture as an "explanation" of the right-hand side.
The original formulation of the conjecture was somewhat different, as the notion of equivariant K-homology was not yet common in 1982.
In the case where is discrete and torsion-free, the left-hand side reduces to the non-equivariant K-homology with compact supports of the ordinary classifying space of .
There is also a more general form of the conjecture, known as the Baum–Connes conjecture with coefficients, where both sides have coefficients in the form of a -algebra on which acts by -automorphisms. It says in KK-language that the assembly map
is an isomorphism, containing the case without coefficients as the case
Counterexamples to the conjecture with coefficients were found in 2002 by Nigel Higson, Vincent Lafforgue and Georges Skandalis. Nevertheless, the conjecture with coefficients remains an active area of research, since it is, like the classical conjecture, often seen as a statement concerning particular groups or classes of groups.
Examples
Let be the integers . Then the left hand side is the K-homology of which is the circle. The -algebra of the
|
https://en.wikipedia.org/wiki/List%20of%20German%20skeleton%20champions
|
This is a list of the German skeleton champions since 1914.
Men
Women
Statistics
bold - still active athletes
Men
Women
External links
Statistics at the BSD-Site
Statistics at Sport-komplett
Statistics at Eiskanal
List of champions
Champions, list
|
https://en.wikipedia.org/wiki/Concordance%20correlation%20coefficient
|
In statistics, the concordance correlation coefficient measures the agreement between two variables, e.g., to evaluate reproducibility or for inter-rater reliability.
Definition
The concordance correlation coefficient has the form
where and are the means for the two variables and and are the corresponding variances. is the correlation coefficient between the two variables.
This follows from its definition as
When the concordance correlation coefficient is computed on a -length data set (i.e., paired data values , for ), the form is
where the mean is computed as
and the variance
and the covariance
Whereas the ordinary (Pearson) correlation coefficient is unaffected by whether the biased or unbiased version of the variance estimate is used, the concordance correlation coefficient is not. In the original article Lin suggested the 1/N normalization, while in another article Nickerson appears to have used 1/(N−1), i.e., the concordance correlation coefficient may be computed slightly differently between implementations.
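For concreteness, here is a minimal sketch of the coefficient with the 1/N normalization (illustrative code with made-up data, not from the source):

```python
def ccc(x, y):
    # Lin's concordance correlation coefficient, 1/N normalization:
    # ccc = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) / n
    sy = sum((v - my) ** 2 for v in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx + sy + (mx - my) ** 2)

x = [2.5, 3.1, 4.0, 5.2, 6.1]
y = [2.4, 3.3, 4.1, 5.0, 6.3]
print(round(ccc(x, x), 3))   # 1.0: perfect agreement with itself
print(round(ccc(x, y), 3))   # close to, but below, 1
```

Using the 1/(N−1) normalization instead rescales the three second-moment terms but not the squared mean difference, which is exactly why the two normalizations give slightly different values.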
Relation to other measures of correlation
The concordance correlation coefficient is nearly identical to some of the measures called intra-class correlations. Comparisons of the concordance correlation coefficient with an "ordinary" intraclass correlation on different data sets found only small differences between the two correlations, in one case on the third decimal. It has also been stated that the ideas for the concordance correlation coefficient "are quite similar to results already published by Krippendorff in 1970".
In the original article Lin suggested a form for multiple classes (not just 2). Over ten years later a correction to this form was issued.
One example of the use of the concordance correlation coefficient is in a comparison of analysis method for functional magnetic resonance imaging brain scans.
External links
Statistical Calculator. Provided by NIWA, it is an online version of Lin’s concordance used to assess the degree of agreement between two continuous variables, such as chemical or microbiological concentrations. It calculates the value of Lin’s concordance correlation coefficient. Values of ±1 denote perfect concordance and discordance; a value of zero denotes its complete absence. Statistical testing procedures for Cohen's kappa and for Lin’s concordance correlation coefficient are included in the calculator. These procedures guard against the risk of claiming good agreement when that has happened merely by "good luck".
References
For a small Excel and VBA implementation by Peter Urbani see here
Covariance and correlation
Inter-rater reliability
|
https://en.wikipedia.org/wiki/Weyl%20integral
|
In mathematics, the Weyl integral (named after Hermann Weyl) is an operator defined, as an example of fractional calculus, on functions f on the unit circle having integral 0 and a Fourier series. In other words there is a Fourier series for f of the form
with a0 = 0.
Then the Weyl integral operator of order s is defined on Fourier series by
where this is defined. Here s can take any real value, and for integer values k of s the series expansion is the expected k-th derivative, if k > 0, or (−k)th indefinite integral normalized by integration from θ = 0.
The condition a0 = 0 here plays the obvious role of excluding the need to consider division by zero. The definition is due to Hermann Weyl (1917).
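Numerically, the operator can be sketched with the FFT: multiply the n-th Fourier coefficient by (in)^s and transform back. This is an illustrative sketch, not from the source; for non-integer s the complex power (in)^s requires a branch choice, and NumPy's principal branch is used here.

```python
import numpy as np

def weyl(f_samples, s):
    # Order-s Weyl operator: multiply the n-th Fourier coefficient by (i*n)**s.
    N = len(f_samples)
    c = np.fft.fft(f_samples)
    n = np.fft.fftfreq(N, d=1.0 / N)   # integer frequencies 0, 1, ..., -1
    mult = np.zeros(N, dtype=complex)
    nz = n != 0                        # a_0 = 0 is assumed, so drop n = 0
    mult[nz] = (1j * n[nz]) ** s       # principal branch for non-integer s
    return np.fft.ifft(c * mult).real

theta = 2 * np.pi * np.arange(256) / 256
f = np.cos(theta)                      # zero mean, so a_0 = 0 as required
d1 = weyl(f, 1)                        # k = 1: the first derivative, -sin
i1 = weyl(f, -1)                       # k = -1: indefinite integral, sin
```

With f = cos θ, order s = 1 returns −sin θ (the derivative) and s = −1 returns sin θ (the zero-mean indefinite integral), matching the integer cases described above.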
See also
Sobolev space
References
Fourier series
Fractional calculus
|
https://en.wikipedia.org/wiki/Allan%20Stewart%20%28ice%20hockey%29
|
Allan Stewart (born January 31, 1964) is a Canadian former professional ice hockey left winger. He played for the New Jersey Devils and Boston Bruins.
Career statistics
External links
1964 births
Boston Bruins players
Canadian ice hockey left wingers
Ice hockey people from British Columbia
Living people
Maine Mariners (AHL) players
Moncton Hawks players
New Jersey Devils draft picks
New Jersey Devils players
People from Fort St. John, British Columbia
Prince Albert Raiders players
Prince Albert Raiders (SJHL) players
Utica Devils players
|
https://en.wikipedia.org/wiki/Tim%20Eriksson
|
Tim Eriksson (born February 5, 1982) is a Swedish professional ice hockey player. He played for Linköpings HC for a long time before moving to Djurgårdens IF after the 2007–08 season.
Career statistics
Regular season and playoffs
International
External links
1982 births
Djurgårdens IF Hockey players
Linköping HC players
Living people
Los Angeles Kings draft picks
Swedish ice hockey left wingers
Ice hockey people from Södertälje
|
https://en.wikipedia.org/wiki/Polar%20angle
|
In geometry, the polar angle may be
2D polar angle, the angular coordinate of a two-dimensional polar coordinate system
3D polar angle, one of the angular coordinates of a three-dimensional spherical coordinate system
|
https://en.wikipedia.org/wiki/Fourier%20algebra
|
Fourier and related algebras occur naturally in the harmonic analysis of locally compact groups. They play an important role in the duality theories of these groups. The Fourier–Stieltjes algebra and the Fourier–Stieltjes transform on the Fourier algebra of a locally compact group were introduced by Pierre Eymard in 1964.
Definition
Informal
Let G be a locally compact abelian group, and Ĝ the dual group of G. Then is the space of all functions on Ĝ which are integrable with respect to the Haar measure on Ĝ, and it has a Banach algebra structure where the product of two functions is convolution. We define to be the set of Fourier transforms of functions in , and it is a closed sub-algebra of , the space of bounded continuous complex-valued functions on G with pointwise multiplication. We call the Fourier algebra of G.
Similarly, we write for the measure algebra on Ĝ, meaning the space of all finite regular Borel measures on Ĝ. We define to be the set of Fourier-Stieltjes transforms of measures in . It is a closed sub-algebra of , the space of bounded continuous complex-valued functions on G with pointwise multiplication. We call the Fourier-Stieltjes algebra of G. Equivalently, can be defined as the linear span of the set of continuous positive-definite functions on G.
Since is naturally included in , and since the Fourier-Stieltjes transform of an function is just the Fourier transform of that function, we have that . In fact, is a closed ideal in .
Formal
Let be a Fourier–Stieltjes algebra and be a Fourier algebra such that the locally compact group is abelian. Let be the measure algebra of finite measures on and let be the convolution algebra of integrable functions on , where is the character group of the Abelian group .
The Fourier–Stieltjes transform of a finite measure on is the function on defined by
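In one common convention (conventions differ by a conjugation between authors, so take this as a representative form rather than the article's own), the transform of a finite measure μ on the character group is the function

```latex
\hat{\mu}(x) \;=\; \int_{\hat{G}} \chi(x)\, d\mu(\chi), \qquad x \in G .
```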
The space of these functions is an algebra under pointwise multiplication; it is isomorphic to the measure algebra . Restricted to , viewed as a subspace of , the Fourier–Stieltjes transform is the Fourier transform on and its image is, by definition, the Fourier algebra . The generalized Bochner theorem states that a measurable function on is equal, almost everywhere, to the Fourier–Stieltjes transform of a non-negative finite measure on if and only if it is positive definite. Thus, can be defined as the linear span of the set of continuous positive-definite functions on . This definition is still valid when is not abelian.
Helson–Kahane–Katznelson–Rudin theorem
Let A(G) be the Fourier algebra of a compact group G. Building upon the work of Wiener, Lévy, Gelfand, and Beurling, in 1959 Helson, Kahane, Katznelson, and Rudin proved that, when G is compact and abelian, a function f defined on a closed convex subset of the plane operates in A(G) if and only if f is real analytic. In 1969 Dunkl proved the result holds when G is compact and contains an infinite abelian subgroup.
References
"Functions that Opera
|
https://en.wikipedia.org/wiki/Double%20complex
|
In mathematics, specifically homological algebra, a double complex is a generalization of a chain complex where, instead of having a -grading, the objects in the bicomplex have a -grading. The most general definition of a double complex, or a bicomplex, is given with objects in an additive category . A bicomplex is a sequence of objects with two differentials, the horizontal differential and the vertical differential , which satisfy the compatibility relation . Hence a double complex is a commutative diagram of the form in which the rows and columns form chain complexes.
Some authors instead require that the squares anticommute. That is
This eases the definition of total complexes. By setting , we can switch between having commutativity and anticommutativity. If the commutative definition is used, this alternating sign will have to show up in the definition of the total complex.
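The sign convention can be checked on a toy model. The sketch below (illustrative code, not from the source) builds a small bicomplex of polynomial components modeled on the de Rham bicomplex of the plane, with horizontal differential ∂/∂x and vertical differential (−1)^p ∂/∂y, and verifies that the total differential D = d_h + d_v squares to zero:

```python
def dx(poly):
    # partial d/dx of a polynomial {(i, j): coeff} representing sum c*x^i*y^j
    return {(i - 1, j): i * c for (i, j), c in poly.items() if i > 0}

def dy(poly):
    return {(i, j - 1): j * c for (i, j), c in poly.items() if j > 0}

def add_into(out, bidegree, poly, sign):
    acc = out.setdefault(bidegree, {})
    for k, c in poly.items():
        acc[k] = acc.get(k, 0) + sign * c

def total(omega):
    # Total differential D = d_h + d_v on bidegrees (p, q) with p, q in {0, 1};
    # components with p > 1 or q > 1 vanish, as for forms on the plane.
    out = {}
    for (p, q), poly in omega.items():
        if p == 0:
            add_into(out, (p + 1, q), dx(poly), 1)
        if q == 0:
            add_into(out, (p, q + 1), dy(poly), (-1) ** p)  # the sign trick
    return out

omega = {(0, 0): {(2, 3): 1}}   # the function x^2 * y^3 in bidegree (0, 0)
dd = total(total(omega))
flat = [c for poly in dd.values() for c in poly.values()]
print(flat)                     # all coefficients of D(D(omega)) vanish
```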
Examples
There are many natural examples of bicomplexes. In particular, for a Lie groupoid there is an associated bicomplex which can be used to construct its de Rham complex.
Another common example of a bicomplex arises in Hodge theory: on an almost complex manifold there is a bicomplex of differential forms whose components are linear or anti-linear. For example, if are the complex coordinates of and are the complex conjugates of these coordinates, a -form is of the form
See also
Chain complex
Derived algebraic geometry
Additional applications
https://web.archive.org/web/20210708183754/http://www.dma.unifi.it/~vezzosi/papers/tou.pdf
Homological algebra
Additive categories
|
https://en.wikipedia.org/wiki/Adel%20El%20Hadi
|
Adel El Hadi (born 18 January 1980) is an Algerian former football player.
National team statistics
Honours
Top scorer of the Algerian league in 2003/2004 with 17 goals for USM Annaba
Top scorer of the Algerian second division in 2006/2007 with 19 goals for USM Annaba
Has 5 caps for the Algerian National Team
References
External links
1980 births
Living people
Algerian men's footballers
Algeria men's international footballers
Algerian Ligue Professionnelle 1 players
Algerian Ligue 2 players
Algeria men's under-23 international footballers
CA Batna players
CA Bordj Bou Arréridj players
CR Belouizdad players
ES Sétif players
JSM Béjaïa players
People from Biskra
USM Annaba players
US Biskra players
Competitors at the 2001 Mediterranean Games
Men's association football forwards
Mediterranean Games competitors for Algeria
21st-century Algerian people
|
https://en.wikipedia.org/wiki/Samir%20Zazou
|
Samir Zazou (born March 24, 1980 in Sidi Bel Abbès) is an Algerian footballer who is currently playing as a defender for ASO Chlef in the Algerian Ligue Professionnelle 1.
National team statistics
Honours
Won the Algerian Ligue Professionnelle 1 three times:
Once with CR Belouizdad in 2001
Once with JS Kabylie in 2006
Once with ASO Chlef in 2011
Has 5 caps for the Algerian National Team
References
External links
Living people
Algerian men's footballers
Algeria men's international footballers
1980 births
JS Kabylie players
CR Belouizdad players
ASO Chlef players
Algerian Ligue Professionnelle 1 players
Algeria men's A' international footballers
USM Annaba players
2011 African Nations Championship players
USM Bel Abbès players
People from Sidi Bel Abbès
Competitors at the 2001 Mediterranean Games
Men's association football defenders
Mediterranean Games competitors for Algeria
21st-century Algerian people
|
https://en.wikipedia.org/wiki/Halbert%20L.%20Dunn
|
Halbert L. Dunn, M.D. (1896–1975) was the leading figure in establishing a national vital statistics system in the United States and is known as the "father of the wellness movement".
Early life
Born in New Paris, Ohio, he attended the University of Minnesota, where he earned his M.D. in 1922 and his Ph.D. in 1923. He served as an assistant in medicine at Presbyterian Hospital of New York City (1923–1924) and as a fellow in medicine at the Mayo Clinic in Rochester, Minnesota (1924–1925).
Work in statistics
In 1929, he was the first biostatistician hired by the Mayo Clinic and established its computer coding system for deriving medical statistics. He was Chief of the National Office of Vital Statistics from 1935 through 1960, first as part of the Bureau of the Census and later under the Department of Health, Education and Welfare, where it eventually became the National Center for Health Statistics in 1960. In his final year with the U.S. Public Health Service he was Assistant Surgeon General for aging.
He was one of the founders of the National Association for Public Health Statistics and Information Systems (NAPHSIS) and of the Inter-American Statistics Institute (IASI). He was Secretary General of the IASI from 1941 to 1952. The Halbert L. Dunn Award, named in his honor, has been presented since 1981 by NAPHSIS in recognition of outstanding and lasting contributions to the field of vital and health statistics.
Wellness
Dunn is known as the "father" of the wellness movement. He distinguished between good health—not being ill—and what he termed high-level wellness, which he defined as "a condition of change in which the individual moves forward, climbing toward a higher potential of functioning". He introduced the concept in a series of twenty-nine lectures at the Unitarian Church in Arlington County, Virginia in the late 1950s, which provided the basis for his book, High Level Wellness, published in 1961. The book was reissued in a number of editions but did not have a great deal of immediate impact. It did, however, come into the hands of a number of the future leaders of the wellness and holistic health movement that bloomed more than a decade later, such as Don B. Ardell, Robert Russell, John Travis, and Elizabeth Neilson.
Four events in the mid-1970s broadened the impact of Dunn's ideas. First, John Travis opened the first US wellness center (Mill Valley, CA, 1975). This center and other organizations were then described in Don Ardell's 1977 book, using Dunn's title (giving Dunn due credit for his origination of the title and concept). Then Elizabeth Neilson founded the journal Health Values: Achieving High-Level Wellness (renamed the American Journal of Health Promotion in 1996), which was dedicated to Dunn and reprinted one of his papers in its first edition. Lastly, the publisher of Health Values, Charles B. Slack, Inc., published a reprint edition of Dunn's High-Level Wellness that achieved a wider distribution and impact.
References
|
https://en.wikipedia.org/wiki/Supersymmetry%20algebras%20in%201%20%2B%201%20dimensions
|
A two dimensional Minkowski space, i.e. a flat space with one time and one spatial dimension, has a two-dimensional Poincaré group IO(1,1) as its symmetry group. The respective Lie algebra is called the Poincaré algebra. It is possible to extend this algebra to a supersymmetry algebra, which is a -graded Lie superalgebra. The most common ways to do this are discussed below.
algebra
Let the Lie algebra of IO(1,1) be generated by the following generators:
is the generator of the time translation,
is the generator of the space translation,
is the generator of Lorentz boosts.
For the commutators between these generators, see Poincaré algebra.
The supersymmetry algebra over this space is a supersymmetric extension of this Lie algebra with the four additional generators (supercharges) , which are odd elements of the Lie superalgebra. Under Lorentz transformations the generators and transform as left-handed Weyl spinors, while and transform as right-handed Weyl spinors. The algebra is given by the Poincaré algebra plus
where all remaining commutators vanish, and and are complex central charges. The supercharges are related via . , , and are Hermitian.
Subalgebras of the algebra
The and subalgebras
The subalgebra is obtained from the algebra by removing the generators and . Thus its anti-commutation relations are given by
plus the commutation relations above that do not involve or . Both generators are left-handed Weyl spinors.
Similarly, the subalgebra is obtained by removing and and fulfills
Both supercharge generators are right-handed.
The subalgebra
The subalgebra is generated by two generators and given by
for two real numbers and .
By definition, both supercharges are real, i.e. . They transform as Majorana-Weyl spinors under Lorentz transformations. Their anti-commutation relations are given by
where is a real central charge.
The and subalgebras
These algebras can be obtained from the subalgebra by removing, respectively, one of its two generators.
See also
Supersymmetry
Super-Poincaré algebra (in 1+3 dimensions)
References
K. Schoutens, Supersymmetry and factorized scattering, Nucl.Phys. B344, 665–695, 1990
T.J. Hollowood, E. Mavrikis, The N = 1 supersymmetric bootstrap and Lie algebras, Nucl. Phys. B484, 631–652, 1997, arXiv:hep-th/9606116
Supersymmetry
Mathematical physics
Lie algebras
|
https://en.wikipedia.org/wiki/Center%20%28category%20theory%29
|
In category theory, a branch of mathematics, the center (or Drinfeld center, after Soviet-American mathematician Vladimir Drinfeld) is a variant of the notion of the center of a monoid, group, or ring to a category.
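For orientation, the classical center being generalized is easy to compute mechanically for a small monoid or group. The sketch below (illustrative code, not from the source) finds the center of the symmetric group S3, i.e., the elements commuting with everything; for S3 this is just the identity:

```python
from itertools import permutations

# Elements of S3 as tuples p, where p[i] is the image of i;
# composition is (p ∘ q)(i) = p[q[i]].
S3 = list(permutations(range(3)))

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

center = [a for a in S3 if all(compose(a, b) == compose(b, a) for b in S3)]
print(center)   # only the identity permutation commutes with everything
```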
Definition
The center of a monoidal category , denoted , is the category whose objects are pairs (A,u) consisting of an object A of and an isomorphism which is natural in satisfying
and
(this is actually a consequence of the first axiom).
An arrow from (A,u) to (B,v) in consists of an arrow in such that
.
This definition of the center appears in . Equivalently, the center may be defined as
i.e., the endofunctors of C which are compatible with the left and right action of C on itself given by the tensor product.
Braiding
The category becomes a braided monoidal category with the tensor product on objects defined as
where , and the obvious braiding.
Higher categorical version
The categorical center is particularly useful in the context of higher categories. This is illustrated by the following example: the center of the (abelian) category of R-modules, for a commutative ring R, is again. The center of a monoidal ∞-category C can be defined, analogously to the above, as
.
Now, in contrast to the above, the center of the derived category of R-modules (regarded as an ∞-category) is given by the derived category of modules over the cochain complex encoding the Hochschild cohomology, a complex whose degree 0 term is R (as in the abelian situation above), but includes higher terms such as (derived Hom).
The notion of a center in this generality is developed by . Extending the above-mentioned braiding on the center of an ordinary monoidal category, the center of a monoidal ∞-category becomes an -monoidal category. More generally, the center of a -monoidal category is an algebra object in -monoidal categories and therefore, by Dunn additivity, an -monoidal category.
Examples
has shown that the Drinfeld center of the category of sheaves on an orbifold X is the category of sheaves on the inertia orbifold of X. For X being the classifying space of a finite group G, the inertia orbifold is the stack quotient G/G, where G acts on itself by conjugation. For this special case, Hinich's result specializes to the assertion that the center of the category of G-representations (with respect to some ground field k) is equivalent to the category consisting of G-graded k-vector spaces, i.e., objects of the form
for some k-vector spaces, together with G-equivariant morphisms, where G acts on itself by conjugation.
In the same vein, have shown that Drinfeld center of the derived category of quasi-coherent sheaves on a perfect stack X is the derived category of sheaves on the loop stack of X.
Related notions
Centers of monoid objects
The center of a monoid and the Drinfeld center of a monoidal category are both instances of the following more general concept. Given a monoidal category C and a monoid object A in C, the center o
|
https://en.wikipedia.org/wiki/Noncentral%20hypergeometric%20distributions
|
In statistics, the hypergeometric distribution is the discrete probability distribution generated by picking colored balls at random from an urn without replacement.
Various generalizations to this distribution exist for cases where the picking of colored balls is biased so that balls of one color are more likely to be picked than balls of another color.
This can be illustrated by the following example. Assume that an opinion poll is conducted by calling random telephone numbers. Unemployed people are more likely to be home and answer the phone than employed people are. Therefore, unemployed respondents are likely to be over-represented in the sample. The probability distribution of employed versus unemployed respondents in a sample of n respondents can be described as a noncentral hypergeometric distribution.
The description of biased urn models is complicated by the fact that there is more than one noncentral hypergeometric distribution. Which distribution one gets depends on whether items (e.g., colored balls) are sampled one by one in a manner where they compete with each other, or sampled independently of one another. The name noncentral hypergeometric distribution has been used for both of these cases. The use of the same name for two different distributions came about because they were studied by two different groups of scientists with hardly any contact with each other.
Agner Fog (2007, 2008) suggested that the best way to avoid confusion is to use the name Wallenius' noncentral hypergeometric distribution for the distribution of a biased urn model in which a predetermined number of items are drawn one by one in a competitive manner and to use the name Fisher's noncentral hypergeometric distribution for one in which items are drawn independently of each other, so that the total number of items drawn is known only after the experiment. The names refer to Kenneth Ted Wallenius and R. A. Fisher, who were the first to describe the respective distributions.
Fisher's noncentral hypergeometric distribution had previously been given the name extended hypergeometric distribution, but this name is rarely used in the scientific literature, except in handbooks that need to distinguish between the two distributions.
Wallenius' noncentral hypergeometric distribution
Wallenius' distribution can be explained as follows.
Assume that an urn contains red balls and white balls, totalling balls. balls are drawn at random from the urn one by one without replacement. Each red ball has the weight , and each white ball has the weight . We assume that the probability of taking a particular ball is proportional to its weight. The physical property that determines the odds may be something else than weight, such as size or slipperiness or some other factor, but it is convenient to use the word weight for the odds parameter.
The probability that the first ball picked is red is equal to the weight fraction of red balls:
The probabili
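The sequential scheme described above is straightforward to simulate. The sketch below (illustrative code with made-up parameter values) draws balls one at a time with probability proportional to the total remaining weight of each color; a Monte Carlo estimate of the probability that the first ball drawn is red matches the weight fraction m1·ω/(m1·ω + m2):

```python
import random

def wallenius_draw(m1, m2, omega, n, rng):
    # Draw n balls without replacement; red balls have weight omega,
    # white balls weight 1. Returns the number of red balls drawn.
    red_left, white_left, red_drawn = m1, m2, 0
    for _ in range(n):
        p_red = red_left * omega / (red_left * omega + white_left)
        if rng.random() < p_red:
            red_left -= 1
            red_drawn += 1
        else:
            white_left -= 1
    return red_drawn

rng = random.Random(0)
trials = 20000
first_red = sum(wallenius_draw(6, 4, 2.0, 1, rng) for _ in range(trials))
print(first_red / trials)   # ≈ 6*2 / (6*2 + 4) = 0.75
```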
|
https://en.wikipedia.org/wiki/Siegel%27s%20lemma
|
In mathematics, specifically in transcendental number theory and Diophantine approximation, Siegel's lemma refers to bounds on the solutions of linear equations obtained by the construction of auxiliary functions. The existence of these polynomials was proven by Axel Thue; Thue's proof used Dirichlet's box principle. Carl Ludwig Siegel published his lemma in 1929. It is a pure existence theorem for a system of linear equations.
Siegel's lemma has been refined in recent years to produce sharper bounds on the estimates given by the lemma.
Statement
Suppose we are given a system of M linear equations in N unknowns such that N > M, say
where the coefficients are rational integers, not all 0, and bounded by B. The system then has a solution
with the Xs all rational integers, not all 0, and bounded by (NB)^(M/(N−M)).
Bombieri and Vaaler (1983) gave the following sharper bound for the Xs:
where D is the greatest common divisor of the M × M minors of the matrix A, and A^T is its transpose. Their proof involved replacing the pigeonhole principle by techniques from the geometry of numbers.
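Siegel's lemma is easy to illustrate for a single equation. The following sketch (an illustrative example, not from the source) takes M = 1, N = 3, coefficients bounded by B = 7, and finds by brute force a nonzero integer solution whose entries are bounded by floor((NB)^(M/(N−M))) = floor(sqrt(21)) = 4, as the lemma guarantees:

```python
from itertools import product
from math import floor

# One equation (M = 1) in N = 3 unknowns: 3*x1 - 7*x2 + 5*x3 = 0, so B = 7.
coeffs = (3, -7, 5)
N, M, B = 3, 1, 7
bound = floor((N * B) ** (M / (N - M)))   # floor(sqrt(21)) = 4

solutions = [v for v in product(range(-bound, bound + 1), repeat=N)
             if any(v) and sum(c * x for c, x in zip(coeffs, v)) == 0]
print(bound, solutions[0])   # a nonzero solution exists within the bound
```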
See also
Diophantine approximation
References
Wolfgang M. Schmidt. Diophantine approximation. Lecture Notes in Mathematics 785. Springer. (1980 [1996 with minor corrections]) (Pages 125-128 and 283-285)
Wolfgang M. Schmidt. "Chapter I: Siegel's Lemma and Heights" (pages 1–33). Diophantine approximations and Diophantine equations'', Lecture Notes in Mathematics, Springer Verlag 2000.
Lemmas
Diophantine approximation
Diophantine geometry
|
https://en.wikipedia.org/wiki/Richard%20Dudley
|
Richard Dudley may refer to:
Richard Dudley (1518–1593), miner
Richard M. Dudley (1938–2020), professor of mathematics
Richard Houston Dudley (1836–1914), American politician, Confederate soldier and businessman
Dick Dudley (1915–2000), American radio and television announcer
|
https://en.wikipedia.org/wiki/Tureia%20Airport
|
Tureia Airport is an airport on Tureia in French Polynesia. It was inaugurated in 1985.
Airlines and destinations
Passenger
No scheduled flights as of May 2019.
Statistics
References
Airports in French Polynesia
|
https://en.wikipedia.org/wiki/List%20of%20urban%20areas%20in%20the%20Nordic%20countries
|
This is a list of urban areas in the Nordic countries by population. Urban areas in the Nordic countries are measured at national level, independently by each country's statistical office. Statistics Sweden uses the term tätort (urban settlement), Statistics Finland also uses tätort in Swedish and taajama in Finnish, Statistics Denmark uses byområde (city), while Statistics Norway uses tettsted (urban settlement).
A common statistical definition between the Nordic countries was agreed in 1960, which defines an urban area as a contiguous built-up area with a population of at least 200 and where the maximum distance between dwellings is 200 metres, excluding roads, car parks, parks, sports grounds and cemeteries - regardless of the boundaries of the municipality, district or county. Despite the common definition, the different statistical offices have different approaches to carrying out these measurements, resulting in slight differences between countries.
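As a toy illustration of this definition (hypothetical coordinates and populations, not real data), dwellings within 200 metres of each other can be grouped with a union-find pass, keeping clusters with at least 200 residents as urban areas:

```python
import math

# ((x, y) in metres, residents) -- hypothetical dwellings
dwellings = [((0, 0), 120), ((150, 0), 90), ((10_000, 0), 50)]

parent = list(range(len(dwellings)))

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path compression
        i = parent[i]
    return i

def union(i, j):
    parent[find(i)] = find(j)

# Merge dwellings whose pairwise distance is at most 200 m.
for i in range(len(dwellings)):
    for j in range(i + 1, len(dwellings)):
        if math.dist(dwellings[i][0], dwellings[j][0]) <= 200:
            union(i, j)

# Sum residents per cluster; keep clusters with >= 200 residents.
pops = {}
for i, (_, residents) in enumerate(dwellings):
    r = find(i)
    pops[r] = pops.get(r, 0) + residents
urban_areas = [p for p in pops.values() if p >= 200]
print(urban_areas)   # the first two dwellings merge into one urban area
```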
The Nordic definition is unique to these countries and should not be confused with international concepts of metropolitan area or urban areas in general. In 2010, Finland (stat.fi) changed its definition. This means that, according to official statistics, the land area covered by urban areas is three times larger in Finland than in Norway, although the total urban population is about the same (ssb.no). It also means that the population of a Danish 'byområder' is usually less than half the population of the 'functional urban area' as defined by Eurostat, whereas the population of a Finnish 'tätort' is usually around 80% of the respective 'functional urban area' as defined by Eurostat. For example, in 2013 the 'functional urban area' of Aarhus had a population of 845,971, while the 'functional urban area' of Tampere had a population of 364,992. However, according to official statistics, the "tätort" Tampere is larger than the "byområde" Aarhus (eurostat.ec). This suggests that direct comparisons between Finland and the other Nordic countries may be problematic.
List
Note that the population numbers from the countries are from different years: while Statistics Finland, Statistics Norway and Statistics Denmark release the statistics yearly (albeit at different times of the year), Statistics Sweden releases the figures only every five years. The Norwegian data is from 2013 and 2018, the Danish data is from 2014, the Swedish from 2010 and the Finnish from 2017.
Also note that some of the figures have been updated since this note was first written, so some statistics may be from 2018, while others are from 2013, etc.
See also
Urban areas in the Nordic countries
List of the most populated municipalities in the Nordic countries
List of metropolitan areas in Sweden
List of urban areas in Sweden by population
List of urban areas in Denmark by population
List of urban areas in Norway by population
List of urban areas in Finland by population
List of cities in Iceland
List of cities in the Baltic states
Li
|
https://en.wikipedia.org/wiki/Scaling%20and%20root%20planing
|
Scaling and root planing, also known as conventional periodontal therapy, non-surgical periodontal therapy or deep cleaning, is a procedure involving removal of dental plaque and calculus (scaling or debridement) and then smoothing, or planing, of the (exposed) surfaces of the roots, removing cementum or dentine that is impregnated with calculus, toxins, or microorganisms, the agents that cause inflammation. It is a part of non-surgical periodontal therapy. This helps to establish a periodontium that is in remission of periodontal disease. Periodontal scalers and periodontal curettes are some of the tools involved.
A regular, non-deep teeth cleaning includes tooth scaling, tooth polishing, and debridement if too much tartar has accumulated, but does not include root planing.
Plaque
Plaque is a soft yellow-grayish substance that adheres to tooth surfaces, including removable and fixed restorations. It is an organised biofilm primarily composed of bacteria in a matrix of glycoproteins and extracellular polysaccharides. This matrix makes it impossible to remove the plaque by rinsing or using sprays. Materia alba is similar to plaque, but it lacks plaque's organized structure and hence is easily displaced with rinses and sprays.
Although everyone has a tendency to develop plaque and materia alba, through regular brushing and flossing these organized colonies of bacteria are disturbed and eliminated from the oral cavity. In general, the more effective one's brushing, flossing, and other oral homecare practices, the less plaque will accumulate on the teeth.
However, if, after 24 hours in the oral environment, biofilm remains undisturbed by brushing or flossing, it begins to absorb the mineral content of saliva. Through this absorption of calcium and phosphorus from the saliva, oral biofilm is transformed from the soft, easily removable form into a hard substance known as calculus. Commonly known as 'tartar', calculus provides a base for new layers of plaque biofilm to settle on and builds up over time. Calculus cannot be removed by brushing or flossing.
Plaque build up and bone loss
Plaque accumulation tends to be thickest along the gumline. Because of the proximity of this area to the gum tissue, the bacterial plaque begins to irritate and infect the gums. This infection of the gum causes the gum disease known as gingivitis, which literally means inflammation of the gingiva, or gums. Gingivitis is characterized by swelling, redness and bleeding gums. It is the first step in the decline of periodontal health, and the only step which can be fully reversed to restore one's oral health.
As the gingival tissue swells, it no longer provides an effective seal between the tooth and the outside environment. Vertical space is created between the tooth and the gum, allowing new bacterial plaque biofilm to begin to migrate into the sulcus, or space between the gum and the tooth. In healthy individuals, the sulcus is no more than 3 mm deep when
|
https://en.wikipedia.org/wiki/Auxiliary%20function
|
In mathematics, auxiliary functions are an important construction in transcendental number theory. They are functions that appear in most proofs in this area of mathematics and that have specific, desirable properties, such as taking the value zero for many arguments, or having a zero of high order at some point.
Definition
Auxiliary functions are not a rigorously defined kind of function; rather, they are functions which are either explicitly constructed or at least shown to exist and which provide a contradiction to some assumed hypothesis, or otherwise prove the result in question. Creating a function during the course of a proof in order to prove the result is not a technique exclusive to transcendence theory, but the term "auxiliary function" usually refers to the functions created in this area.
Explicit functions
Liouville's transcendence criterion
Because of the naming convention mentioned above, auxiliary functions can be dated back to their source simply by looking at the earliest results in transcendence theory. One of these first results was Liouville's proof that transcendental numbers exist when he showed that the so-called Liouville numbers were transcendental. He did this by discovering a transcendence criterion which these numbers satisfied. To derive this criterion he started with a general algebraic number α and found some property that this number would necessarily satisfy. The auxiliary function he used in the course of proving this criterion was simply the minimal polynomial of α, which is the irreducible polynomial f with integer coefficients such that f(α) = 0. This function can be used to estimate how well the algebraic number α can be estimated by rational numbers p/q. Specifically if α has degree d at least two then he showed that
and also, using the mean value theorem, that there is some constant depending on α, say c(α), such that
Combining these results gives a property that the algebraic number must satisfy; therefore any number not satisfying this criterion must be transcendental.
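The two displayed inequalities above were lost in extraction. A hedged reconstruction, following the standard presentation of Liouville's argument (matching the usual statement rather than the stripped display), reads:

```latex
% f has integer coefficients and degree d, and f(p/q) \ne 0, so
\left| f\!\left(\tfrac{p}{q}\right) \right| \ge \frac{1}{q^d},
% while the mean value theorem gives a constant c(\alpha) with
\left| f\!\left(\tfrac{p}{q}\right) \right|
  = \left| f\!\left(\tfrac{p}{q}\right) - f(\alpha) \right|
  \le c(\alpha) \left| \alpha - \tfrac{p}{q} \right| .
% Combining the two bounds yields Liouville's criterion:
\left| \alpha - \frac{p}{q} \right| \ge \frac{1}{c(\alpha)\, q^d}
```

Any number that can be approximated by rationals better than this bound allows (as with the Liouville numbers) therefore cannot be algebraic of any degree d, hence is transcendental.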
The auxiliary function in Liouville's work is very simple, merely a polynomial that vanishes at a given algebraic number. This kind of property is usually the one that auxiliary functions satisfy. They either vanish or become very small at particular points, which is usually combined with the assumption that they do not vanish or can't be too small to derive a result.
Fourier's proof of the irrationality of e
Another simple, early occurrence is in Fourier's proof of the irrationality of e, though the notation used usually disguises this fact. Fourier's proof used the power series of the exponential function: e^x = Σ_{n=0}^∞ x^n/n!.
By truncating this power series after, say, N + 1 terms we get a polynomial with rational coefficients of degree N which is in some sense "close" to the function e^x. Specifically if we look at the auxiliary function defined by the remainder R(x) = e^x − Σ_{n=0}^{N} x^n/n!,
then this function—an exponential polynomial—should take small values for
|
https://en.wikipedia.org/wiki/Menger%20curvature
|
In mathematics, the Menger curvature of a triple of points in n-dimensional Euclidean space Rn is the reciprocal of the radius of the circle that passes through the three points. It is named after the Austrian-American mathematician Karl Menger.
Definition
Let x, y and z be three points in Rn; for simplicity, assume for the moment that all three points are distinct and do not lie on a single straight line. Let Π ⊆ Rn be the Euclidean plane spanned by x, y and z and let C ⊆ Π be the unique Euclidean circle in Π that passes through x, y and z (the circumcircle of x, y and z). Let R be the radius of C. Then the Menger curvature c(x, y, z) of x, y and z is defined by c(x, y, z) = 1/R.
If the three points are collinear, R can be informally considered to be +∞, and it makes rigorous sense to define c(x, y, z) = 0. If any of the points x, y and z are coincident, again define c(x, y, z) = 0.
Using the well-known formula relating the side lengths of a triangle to its area, it follows that c(x, y, z) = 4A / (|x − y| |y − z| |z − x|), where A denotes the area of the triangle spanned by x, y and z.
Another way of computing Menger curvature is the identity c(x, y, z) = 2 sin(∠xyz) / |x − z|, where ∠xyz is the angle made at the y-corner of the triangle spanned by x, y, z.
Menger curvature may also be defined on a general metric space. If X is a metric space and x, y, and z are distinct points, let f be an isometry from {x, y, z} into R². Define the Menger curvature of these points to be cX(x, y, z) = c(f(x), f(y), f(z)).
Note that f need not be defined on all of X, just on {x,y,z}, and the value cX (x,y,z) is independent of the choice of f.
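The Euclidean definition can be checked numerically. Below is a minimal sketch (the function name and the use of Heron's formula are illustrative choices, not from the article) computing c(x, y, z) = 4A / (|x − y| |y − z| |z − x|) for points in Rn:

```python
import math

def menger_curvature(x, y, z):
    """Menger curvature of three points in R^n: the reciprocal of the
    circumradius of the triangle they span; 0 for collinear or
    coincident points, matching the convention in the text."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    a, b, c = dist(x, y), dist(y, z), dist(z, x)
    if a == 0 or b == 0 or c == 0:
        return 0.0  # coincident points
    # Heron's formula for the area A of the triangle
    s = (a + b + c) / 2
    area_sq = s * (s - a) * (s - b) * (s - c)
    if area_sq <= 0:
        return 0.0  # collinear points (R informally +infinity)
    return 4 * math.sqrt(area_sq) / (a * b * c)
```

For the right triangle (0,0), (1,0), (0,1) the circumradius is half the hypotenuse, √2/2, so the curvature is √2, which the formula reproduces.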
Integral Curvature Rectifiability
Menger curvature can be used to give quantitative conditions for when sets in may be rectifiable. For a Borel measure on a Euclidean space define
A Borel set is rectifiable if , where denotes one-dimensional Hausdorff measure restricted to the set .
The basic intuition behind the result is that Menger curvature measures how straight a given triple of points is (the smaller the curvature, the closer x, y, and z are to being collinear), and the finiteness of this integral quantity says that the set E is flat on most small scales. In particular, if the power in the integral is larger, our set is smoother than merely rectifiable.
Let , be a homeomorphism and . Then if .
If where , and , then is rectifiable in the sense that there are countably many curves such that . The result is not true for , and for .
In the opposite direction, there is a result of Peter Jones:
If , , and is rectifiable, then there is a positive Radon measure supported on satisfying for all and such that (in particular, this measure is the Frostman measure associated to E). Moreover, if for some constant C and all and r > 0, then . This last result follows from the Analyst's Traveling Salesman Theorem.
Analogous results hold in general metric spaces.
See also
Menger-Melnikov curvature of a measure
External links
References
Curvature (mathematics)
Multi-dimensional geometry
|
https://en.wikipedia.org/wiki/Bloch%27s%20theorem%20%28complex%20variables%29
|
In complex analysis, a branch of mathematics, Bloch's theorem describes the behaviour of holomorphic functions defined on the unit disk. It gives a lower bound on the size of a disk in which an inverse to a holomorphic function exists. It is named after André Bloch.
Statement
Let f be a holomorphic function in the unit disk |z| ≤ 1 for which f(0) = 0 and f′(0) = 1.
Bloch's Theorem states that there is a disk S ⊂ D on which f is biholomorphic and f(S) contains a disk with radius 1/72.
Landau's theorem
If f is a holomorphic function in the unit disk with the property |f′(0)| = 1, then let Lf be the radius of the largest disk contained in the image of f.
Landau's theorem states that there is a constant L defined as the infimum of Lf over all such functions f, and that L is at least Bloch's constant: L ≥ B.
This theorem is named after Edmund Landau.
Valiron's theorem
Bloch's theorem was inspired by the following theorem of Georges Valiron:
Theorem. If f is a non-constant entire function then there exist disks D of arbitrarily large radius and analytic functions φ in D such that f(φ(z)) = z for z in D.
Bloch's theorem corresponds to Valiron's theorem via the so-called Bloch's Principle.
Proof
Landau's theorem
We first prove the case when f(0) = 0, f′(0) = 1, and |f′(z)| ≤ 2 in the unit disk.
By Cauchy's integral formula, we have a bound
where γ is the counterclockwise circle of radius r around z, and 0 < r < 1 − |z|.
By Taylor's theorem, for each z in the unit disk, there exists 0 ≤ t ≤ 1 such that f(z) = z + z2f″(tz) / 2.
Thus, if |z| = 1/3 and |w| < 1/6, we have
By Rouché's theorem, the range of f contains the disk of radius 1/6 around 0.
Let D(z0, r) denote the open disk of radius r around z0. For an analytic function g : D(z0, r) → C such that g(z0) ≠ 0, the case above applied to (g(z0 + rz) − g(z0)) / (rg′(0)) implies that the range of g contains D(g(z0), |g′(0)|r / 6).
For the general case, let f be an analytic function in the unit disk such that |f′(0)| = 1, and z0 = 0.
If |f′(z)| ≤ 2|f′(z0)| for |z − z0| < 1/4, then by the first case, the range of f contains a disk of radius |f′(z0)| / 24 = 1/24.
Otherwise, there exists z1 such that |z1 − z0| < 1/4 and |f′(z1)| > 2|f′(z0)|.
If |f′(z)| ≤ 2|f′(z1)| for |z − z1| < 1/8, then by the first case, the range of f contains a disk of radius |f′(z1)| / 48 > |f′(z0)| / 24 = 1/24.
Otherwise, there exists z2 such that |z2 − z1| < 1/8 and |f′(z2)| > 2|f′(z1)|.
Repeating this argument, we either find a disk of radius at least 1/24 in the range of f, proving the theorem, or find an infinite sequence (zn) such that |zn − zn−1| < 1/2n+1 and |f′(zn)| > 2|f′(zn−1)|.
In the latter case the sequence is in D(0, 1/2), so f′ is unbounded in D(0, 1/2), a contradiction.
Bloch's Theorem
In the proof of Landau's Theorem above, Rouché's theorem implies that not only can we find a disk D of radius at least 1/24 in the range of f, but there is also a small disk D0 inside the unit disk such that for every w ∈ D there is
|
https://en.wikipedia.org/wiki/Ryszard%20Engelking
|
Ryszard Engelking (born 16 November 1935 in Sosnowiec) is a Polish mathematician working mainly in general topology and dimension theory. He is the author of several influential monographs in this field; the 1989 edition of his General Topology is now a standard reference for topology.
Scientific work
Apart from his books, Ryszard Engelking is known, among other things, for a generalization to an arbitrary topological space of the "Alexandroff double circle", for works on completely metrizable spaces, suborderable spaces and generalized ordered spaces. The Engelking–Karlowicz theorem, proved together with Monica Karlowicz, is a statement about the existence of a family of functions from to with topological and set-theoretical applications.
In addition to research papers authored just by himself, he also published jointly with Kazimierz Kuratowski, Roman Sikorski, Aleksander Pełczyński and others. He has published about 60 scientific works reviewed by MathSciNet and Zentralblatt.
Translation works
Apart from mathematics he is also interested in literature. He translated into Polish French authors: Flaubert's Madame Bovary, and works of Baudelaire, Gérard de Nerval, Auguste de Villiers de L'Isle-Adam, Nicolas Restif de la Bretonne. For these activities he was awarded by Literatura na Świecie (World Literature).
Bibliography
Notes
External links
1935 births
20th-century Polish mathematicians
Topologists
Living people
Translators of Charles Baudelaire
Translators of Gérard de Nerval
|
https://en.wikipedia.org/wiki/John%20ellipsoid
|
In mathematics, the John ellipsoid or Löwner-John ellipsoid E(K) associated to a convex body K in n-dimensional Euclidean space Rn can refer to the n-dimensional ellipsoid of maximal volume contained within K or the ellipsoid of minimal volume that contains K.
Often, the minimal volume ellipsoid is called the Löwner ellipsoid, and the maximal volume ellipsoid is called the John ellipsoid (although John worked with the minimal volume ellipsoid in his original paper). One can also refer to the minimal volume circumscribed ellipsoid as the outer Löwner-John ellipsoid, and the maximum volume inscribed ellipsoid as the inner Löwner-John ellipsoid.
Properties
The John ellipsoid is named after the German-American mathematician Fritz John, who proved in 1948 that each convex body in Rn is circumscribed by a unique ellipsoid of minimal volume, and that the dilation of this ellipsoid by a factor of 1/n is contained inside the convex body.
The inner Löwner-John ellipsoid E(K) of a convex body K ⊂ Rn is a closed unit ball B in Rn if and only if B ⊆ K and there exists an integer m ≥ n and, for i = 1, ..., m, real numbers ci > 0 and unit vectors ui ∈ Sn−1 ∩ ∂K such that
and, for all x ∈ Rn
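The two displayed conditions above were lost in extraction. John's characterization via a decomposition of the identity is standard, and a hedged reconstruction (matching the usual statement, not verified against this article's stripped display) is:

```latex
\sum_{i=1}^{m} c_i\, u_i = 0,
\qquad
x \;=\; \sum_{i=1}^{m} c_i\, \langle x, u_i \rangle\, u_i
\quad \text{for all } x \in \mathbb{R}^n .
```

The second condition says that the weighted contact vectors ui resolve the identity map, which forces the contact points between B and ∂K to be spread out enough that B is the unique maximal ellipsoid.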
Applications
The computation of Löwner-John ellipsoids (and, more generally, the computation of minimal-volume polynomial level sets enclosing a set) has found many applications in control and robotics. In particular, computing Löwner-John ellipsoids has applications in obstacle collision detection for robotic systems, where the distance between a robot and its surrounding environment is estimated using a best ellipsoid fit.
Löwner-John ellipsoids have also been used to approximate the optimal policy in portfolio optimization problems with transaction costs.
See also
Steiner inellipse, the special case of the inner Löwner-John ellipsoid for a triangle.
Fat object, related to radius of largest contained ball.
References
Convex geometry
Multi-dimensional geometry
Quadrics
|
https://en.wikipedia.org/wiki/Bonnet%20theorem
|
In the mathematical field of differential geometry, the fundamental theorem of surface theory deals with the problem of prescribing the geometric data of a submanifold of Euclidean space. Originally proved by Pierre Ossian Bonnet in 1867, it has since been extended to higher dimensions and non-Euclidean contexts.
Bonnet's theorem
Any surface in three-dimensional Euclidean space has a first and second fundamental form, which automatically are interrelated by the Gauss–Codazzi equations. Bonnet's theorem asserts a local converse to this result.
Given an open region in , let and be symmetric 2-tensors on , with additionally required to be positive-definite. If these are smooth and satisfy the Gauss–Codazzi equations, then Bonnet's theorem says that is covered by open sets which can be smoothly embedded into with first fundamental form and second fundamental form (relative to one of the two choices of unit normal vector field) . Furthermore, each of these embeddings is uniquely determined up to a rigid motion of .
Bonnet's theorem is a corollary of the Frobenius theorem, upon viewing the Gauss–Codazzi equations as a system of first-order partial differential equations for the two coordinate derivatives of the position vector of an embedding, together with the normal vector.
General formulations
Bonnet's theorem can be naturally formulated for hypersurfaces in a Euclidean space of any dimension, and the result remains true in this context. Furthermore, the theorem can be extended from Bonnet's local formulation to a global formulation, allowing to be any connected and simply-connected smooth manifold, with the result asserting the existence and uniqueness (up to a rigid motion) of a smooth immersion of as a hypersurface of Euclidean space with first fundamental form and second fundamental form . The idea of the proof is to use the existence theory from the local formulation to construct the immersion along arbitrary curves emanating from a single point. Simple-connectedness is used to say that any two such curves with a common endpoint are homotopic (through paths fixing the endpoints), and uniqueness from the local formulation implies that the value of the immersion at the endpoint must be fixed through the homotopy, so that an immersion results which is well-defined on the entire manifold.
In this global formulation, existence would not hold in general if the condition of simple-connectedness were removed. This can be seen from the nonexistence of a hypersurface immersion of the torus whose first fundamental form is flat and whose second fundamental form is zero.
The theorem can also be extended, beyond the context of hypersurfaces, to the theory of submanifolds of arbitrary codimension. This is more complicated to formulate, because in addition to the first and second fundamental forms, there is also the (generally nontrivial) connection in the normal bundle which must be taken into account. In this generality, the fundamental theor
|
https://en.wikipedia.org/wiki/Andreas%20Blass
|
Andreas Raphael Blass (born October 27, 1947) is a mathematician, currently a professor at the University of Michigan. He works in mathematical logic, particularly set theory, and theoretical computer science.
Blass graduated from the University of Detroit, where he was a Putnam Fellow in 1965, in 1966 with a B.S. in physics. He received his Ph.D. in 1970 from Harvard University, with a thesis on Orderings of Ultrafilters written under the supervision of Frank Wattenberg. Since 1970 he has been employed by the University of Michigan, first as a T.H. Hildebrandt Research Instructor (1970–72), then assistant professor (1972–76), associate professor (1976–84) and since 1984 he has been a full professor there.
In 2014, he became a Fellow of the American Mathematical Society.
Selected publications and results
In 1984 Blass proved that the existence of a basis for every vector space is equivalent to the axiom of choice. He made important contributions in the development of the set theory of the reals and forcing.
Blass was the first to point out connections between game semantics and linear logic.
He has authored more than 200 research articles in mathematical logic and theoretical computer science, including:
References
External links
Blass's page at UM
Living people
20th-century German mathematicians
21st-century American mathematicians
Set theorists
University of Detroit Mercy alumni
Harvard University alumni
University of Michigan faculty
Putnam Fellows
1947 births
Emigrants from West Germany to the United States
Fellows of the American Mathematical Society
|
https://en.wikipedia.org/wiki/Krieger%E2%80%93Nelson%20Prize
|
The Krieger–Nelson Prize is presented by the Canadian Mathematical Society in recognition of an outstanding woman in mathematics. It was first
awarded in 1995. The award is named after Cecilia Krieger and Evelyn Nelson, both known for their contributions to mathematics in Canada.
Recipients
While the award has largely been awarded to a female mathematician working at a Canadian University, it has also been awarded to Canadian-born or -educated women working outside of the country. For example, Cathleen Morawetz, past president of the American Mathematical Society, and a faculty member at the Courant Institute of Mathematical Sciences (a division of New York University) was awarded the Krieger–Nelson Prize in 1997. (Morawetz was educated at the University of Toronto in Toronto, Canada). According to the call for applications, the award winner should be a "member of the Canadian mathematical community".
The recipient of the Krieger–Nelson Prize delivers a lecture to the Canadian Mathematical Society, typically during its summer meeting.
1995 Nancy Reid
1996 Olga Kharlampovich
1997 Cathleen Synge Morawetz
1998 Catherine Sulem
1999 Nicole Tomczak-Jaegermann
2000 Kanta Gupta
2001 Lisa Jeffrey
2002 Cindy Greenwood
2003 Leah Keshet
2004 Not Awarded
2005 Barbara Keyfitz
2006 Penny Haxell
2007 Pauline van den Driessche
2008 Izabella Łaba
2009 Yael Karshon
2010 Lia Bronsard
2011 Rachel Kuske
2012 Ailana Fraser
2013 Chantal David
2014 Gail Wolkowicz
2015 Jane Ye
2016 Malabika Pramanik
2017 Stephanie van Willigenburg
2018 Megumi Harada
2019 Julia Gordon
2020 Sujatha Ramdorai
2021 Anita Layton
2022 Matilde Lalín
2023 Johanna G. Nešlehová
See also
List of mathematics awards
References
External links
Krieger–Nelson Prize, Canadian Mathematical Society.
Awards of the Canadian Mathematical Society
Science awards honoring women
Awards established in 1995
Lists of women scientists
Lists of mathematicians by award
Women in mathematics
|
https://en.wikipedia.org/wiki/Raman%20Parimala
|
Raman Parimala (born 21 November 1948) is an Indian mathematician known for her contributions to algebra. She is the Arts & Sciences Distinguished Professor of mathematics at Emory University. For many years, she was a professor at Tata Institute of Fundamental Research (TIFR), Mumbai. She has been on the Mathematical Sciences jury for the Infosys Prize from 2019 and is on the Abel prize selection Committee 2021/2022.
Background
Parimala was born and raised in Tamil Nadu, India. She studied in Saradha Vidyalaya Girls' High School and Stella Maris College at Chennai. She received her M.Sc. from Madras University (1970) and Ph.D. from the
University of Mumbai (1976); her advisor was R. Sridharan from TIFR.
Selected publications
Failure of a quadratic analogue of Serre's conjecture, Bulletin of the AMS, vol. 82, 1976, pp. 962–964
Quadratic spaces over polynomial extensions of regular rings of dimension 2, Mathematische Annalen, vol. 261, 1982, pp. 287–292
Galois cohomology of the classical groups over fields of cohomological dimension ≤ 2 (with E. Bayer-Fluckiger), Inventiones Mathematicae, 1995
Hermitian analogue of a theorem of Springer (with R. Sridharan and V. Suresh), Journal of Algebra, 2001
Classical groups and the Hasse principle (with E. Bayer-Fluckiger), Annals of Mathematics, 1998
Honors
On National Science Day in 2020, Smriti Irani, head of the Ministry of Women and Child Development of the Government of India, announced the establishment of chairs at institutes across India in the names of Raman Parimala and ten other Indian women scientists.
Parimala was an invited speaker at the International Congress of Mathematicians in Zurich in 1994, where she gave the talk Study of quadratic forms — some connections with geometry. She gave a plenary address, Arithmetic of linear algebraic groups over two dimensional fields, at the Congress in Hyderabad in 2010.
Fellow of the Indian Academy of Sciences
Fellow of Indian National Science Academy
Bhatnagar Award in 1987
Honorary doctorate from the University of Lausanne in 1999
Srinivasa Ramanujan Birth Centenary Award in 2003.
TWAS Prize for Mathematics (2005).
Fellow of the American Mathematical Society (2012)
Notes
External links
Home page at Emory
Parimala's biography in the Agnes Scott College database of women mathematicians
1948 births
Indian women mathematicians
Emory University faculty
Algebraists
Living people
Tamil scholars
Tata Institute of Fundamental Research alumni
Fellows of the American Mathematical Society
Scientists from Tamil Nadu
University of Madras alumni
Indian women science writers
Indian scientific authors
20th-century Indian women writers
20th-century Indian mathematicians
20th-century Indian women scientists
21st-century Indian mathematicians
21st-century Indian women scientists
21st-century Indian women writers
Women writers from Tamil Nadu
TWAS laureates
20th-century Indian non-fiction writers
21st-cen
|
https://en.wikipedia.org/wiki/Collision%20problem
|
The r-to-1 collision problem is an important theoretical problem in complexity theory, quantum computing, and computational mathematics. The collision problem most often refers to the 2-to-1 version: given an even integer n and a function f : {1, ..., n} → {1, ..., n}, we are promised that f is either 1-to-1 or 2-to-1. We are only allowed to make queries about the value of f(i) for any i. The problem then asks how many such queries we need to make to determine with certainty whether f is 1-to-1 or 2-to-1.
Classical solutions
Deterministic
Solving the 2-to-1 version deterministically requires n/2 + 1 queries, and in general distinguishing r-to-1 functions from 1-to-1 functions requires n/r + 1 queries.
This is a straightforward application of the pigeonhole principle: if a function is r-to-1, then after n/r + 1 queries we are guaranteed to have found a collision. If a function is 1-to-1, then no collision exists. Thus, n/r + 1 queries suffice. If we are unlucky, then the first n/r queries could return distinct answers, so n/r + 1 queries is also necessary.
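The pigeonhole argument for the 2-to-1 case can be sketched directly (a minimal illustration, not from the source; the oracle is modeled as a Python callable on {0, ..., n−1}):

```python
def distinguish(f, n):
    """Decide whether f on {0, ..., n-1} is 1-to-1 or 2-to-1, given the
    promise that it is one of the two, using at most n//2 + 1 queries."""
    seen = set()
    for i in range(n // 2 + 1):  # pigeonhole: n/2 + 1 queries suffice
        v = f(i)
        if v in seen:
            return "2-to-1"      # a collision certifies 2-to-1
        seen.add(v)
    return "1-to-1"              # n/2 + 1 distinct values rule out 2-to-1
```

For example, distinguish(lambda i: i, 6) returns "1-to-1", while distinguish(lambda i: i // 2, 6) returns "2-to-1".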
Randomized
If we allow randomness, the problem is easier. By the birthday paradox, if we choose (distinct) queries at random, then with high probability we find a collision in any fixed 2-to-1 function after Θ(√n) queries.
Quantum solution
The BHT algorithm, which uses Grover's algorithm, solves this problem optimally by only making O(n1/3) queries to f.
References
Algorithms
Polynomial-time problems
|
https://en.wikipedia.org/wiki/Arena%20Theatre%2C%20Wolverhampton
|
{
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {},
"geometry": {
"type": "Point",
"coordinates": [
-2.1269059181213383,
52.58727601762145
]
}
}
]
}
The Arena Theatre is situated on Wulfruna Street in Wolverhampton and is part of the University of Wolverhampton's city campus. The venue's main auditorium seats 150 people and is used for both professional touring shows and for local community groups.
History
In 1967, Philip Tilstone, the first lecturer in drama at the University of Wolverhampton (then the Wolverhampton College of Technology), wanted to establish the subject not just at the university but in Wolverhampton too. He was committed to providing a range of performance events for both students and the local community. Alongside his colleague, the late Dr. Percy Young, the director of music at the college, Tilstone gave the music students the opportunity to perform, and these performance events helped justify the provision of a fully equipped theatre/workshop venue, the Arena Theatre, with shared access for students and visiting performers. In 1989, Kevin O'Sullivan became the administrator for the Arena Theatre and then the theatre manager until his retirement in 2013. Student work was frequently performed at the Arena Theatre and local audiences continued to benefit from the range and quality of its professional programming.
The Arena Theatre continued to act as an essential resource for drama and a first class performance venue for the region. Students from surrounding colleges and schools, members of local drama groups and arts organisations all made extensive use of the theatre. As well as the success of the venue in the local community, the Arena Theatre became a hotspot for touring theatre. Numerous prestigious companies touring shows to the Arena during this period included Kneehigh Theatre, Royal National Theatre, Royal Shakespeare Company, People Show, Tara Arts, Shared Experience, Forced Entertainment, Volcano, Hull Truck Theatre, Gay Sweatshop, Cheek by Jowl, Market Theatre (Johannesburg), Trestle Theatre, Complicite, Kathakali Dance, Black Theatre Co-operative, Red Shift Theatre, ATC Theatre, Snarling Beasties and The Right Size. In addition to these, the Arena Theatre welcomed local professional touring companies from the West Midlands, Foursight Theatre, Theatre Foundry and Pentabus. As well as these, there were dance performances, live art and music concerts.
After 20 years, the theatre had outgrown its cramped and inaccessible home, so with investment from the University of Wolverhampton and a grant from the National Lottery, an ambitious £2 million refurbishment began. Architects Marsh and Grochoski made use of the space available and the old gym was transformed into the Tilstone Studio.
Current status
After 18 months of building work, the Arena Theatre re-opened in October 1999. With greatly improved
|
https://en.wikipedia.org/wiki/Halbert%20L.%20Dunn%20Award
|
The Halbert L. Dunn Award is the most prestigious award presented by the National Association for Public Health Statistics and Information Systems (NAPHSIS). The award has been presented since 1981 providing national recognition of outstanding and lasting contributions to the field of vital and health statistics at the national, state, or local level.
The award was established in honor of the late Halbert L. Dunn, M.D., Director of the National Office of Vital Statistics from 1936 to 1960. Dr. Dunn was highly instrumental in encouraging the states to establish state vital statistics associations and played a major role in developing NAPHSIS. The award is presented at the Hal Dunn Awards Luncheon during the association’s annual meeting.
The winners of the Halbert L. Dunn Award have been:
Source: NAPHSIS
1981 Deane Huxtable
1982 Loren Chancellor
1983 Vito Logrillo
1984 Carl Erhardt
1985 Irvin Franzen
1986 W. D. "Don" Carroll
1987 Margaret Shackelford
1988 John Brockert, State Registrar, Utah
1989 Margaret Watts
1990 John Patterson
1991 Patricia Potrzebowski, State Registrar, Pennsylvania
1992 Rose Trasatti, National Association for Public Health Statistics and Information Systems (NAPHSIS)
1993 Garland Land, State Registrar, Missouri
1994 George Van Amburg
1995 Jack Smith
1996 no award
1997 Ray Nashold
1998 Iwao Moriyama
1999 no award
2000 George Gay
2001 Dorothy Harshbarger, State Registrar, Alabama
2002 Lorne Phillips, State Registrar, Kansas
2003 Mary Anne Freedman, Director of the Division of Vital Statistics, NCHS
2004 no award
2005 Joe Carney
2006 Dan Friedman
2007 Harry Rosenberg, National Center for Health Statistics
2008 Alvin T. Onaka, Registrar, Hawaii
2009 Marshall Evans, National Center for Health Statistics
2010 Steven Schwartz, Registrar, New York City
2011 Charles Rothwell, Director, National Center for Health Statistics
2012 no award
2013 Stephanie Ventura, Director of Reproductive Statistics Branch, National Center for Health Statistics
2014 Bruce Cohen, Director of Research, MA Department of Health
2015 Isabelle Horon, State Registrar, Maryland
2016 Rose Trasatti Heim, NAPHSIS
2017 Jennifer Woodward, State Registrar, Oregon
2018 Glenn Copeland, State Registrar, Michigan
2019 Delton Atkinson, National Center for Health Statistics
2020 no award
2021 no award
2022 Jeff Duncan, State Registrar, Michigan
See also
List of mathematics awards
List of medicine awards
References
Vital statistics (government records)
Medicine awards
Statistical awards
Awards established in 1981
|
https://en.wikipedia.org/wiki/Shelling
|
Shelling may refer to:
Shell (projectile), explosive used in wars
Searching for seashells
Shelling (topology)
Wheelset deformation, which occurs when a wheel has worn out
Shelling (fishing), a fishing strategy used by dolphins.
See also
|
https://en.wikipedia.org/wiki/Approximate%20Bayesian%20computation
|
Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics that can be used to estimate the posterior distributions of model parameters.
In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate.
ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection.
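The likelihood-free idea can be made concrete with the basic ABC-rejection scheme described later in the article: draw a parameter from the prior, simulate data under it, and keep the draw when the simulation is close enough to the observation. The sketch below is illustrative only (the function names, the toy Normal model, and the choice of the sample mean as summary statistic are my own, not from the source):

```python
import random
import statistics

def abc_rejection(observed, simulate, prior_sample, distance, eps, n_draws=10000):
    """ABC rejection sketch: sample theta from the prior, simulate data,
    and accept theta when the simulated data lie within eps of the
    observed data under the chosen distance."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        sim = simulate(theta)
        if distance(sim, observed) <= eps:
            accepted.append(theta)
    return accepted  # approximate draws from the posterior

# Toy example: infer the mean mu of a Normal(mu, 1) from 50 observations,
# comparing datasets through their sample means.
random.seed(0)
observed = [random.gauss(2.0, 1.0) for _ in range(50)]
posterior = abc_rejection(
    observed,
    simulate=lambda mu: [random.gauss(mu, 1.0) for _ in range(50)],
    prior_sample=lambda: random.uniform(-5, 5),
    distance=lambda a, b: abs(statistics.mean(a) - statistics.mean(b)),
    eps=0.1,
)
```

Shrinking eps (and the informativeness of the summary statistic) controls how closely the accepted draws approximate the true posterior, at the cost of a lower acceptance rate.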
ABC has rapidly gained popularity in recent years, in particular for the analysis of complex problems arising in the biological sciences, e.g. in population genetics, ecology, epidemiology, systems biology, and in radio propagation.
History
The first ABC-related ideas date back to the 1980s. Donald Rubin, when discussing the interpretation of Bayesian statements in 1984, described a hypothetical sampling mechanism that yields a sample from the posterior distribution. This scheme was more of a conceptual thought experiment to demonstrate what type of manipulations are done when inferring the posterior distributions of parameters. The description of the sampling mechanism coincides exactly with that of the ABC-rejection scheme, and this article can be considered to be the first to describe approximate Bayesian computation. However, a two-stage quincunx was constructed by Francis Galton in the late 1800s that can be seen as a physical implementation of an ABC-rejection scheme for a single unknown (parameter) and a single observation. Another prescient point was made by Rubin when he argued that in Bayesian inference, applied statisticians should not settle for analytically tractable models only, but instead consider computational methods that allow them to estimate the posterior distribution of interest. This way, a wider range of models can be considered. These arguments are particularly relevant in the context of ABC.
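Rubin's hypothetical mechanism — draw a parameter from the prior, simulate data from the model, and keep the parameter only when the simulated data match the observation — is exactly the ABC-rejection scheme. A minimal Python sketch on a toy coin-bias problem (the example and its values are illustrative, not from the source):

```python
import random

random.seed(0)

observed_heads = 7   # observed data: 7 heads in 10 coin tosses
n_tosses = 10

def simulate(theta):
    """Simulate the model once: number of heads in n_tosses biased flips."""
    return sum(random.random() < theta for _ in range(n_tosses))

# ABC rejection: draw theta from the prior, simulate synthetic data, and
# accept theta only when the simulated data equal the observation exactly
# (tolerance zero; real applications compare summary statistics within a
# small tolerance instead).
accepted = []
while len(accepted) < 500:
    theta = random.random()          # uniform prior on [0, 1]
    if simulate(theta) == observed_heads:
        accepted.append(theta)

posterior_mean = sum(accepted) / len(accepted)
print(posterior_mean)   # near the exact Beta(8, 4) posterior mean of 2/3
```

The accepted parameter values are an exact sample from the posterior here because the data are discrete and the match is exact; with a nonzero tolerance the sample is only approximately posterior-distributed.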
In 1984, Peter Diggle and Richard Gratton suggested using a systematic simulation scheme to approximate the likelihood function in situations where its analytic form is intractable. Their method was based on defining a grid in the parameter space and using it to approximate the likelihood by running several simulations for each grid point. The approximation was then improved b
|
https://en.wikipedia.org/wiki/List%20of%20Sheffield%20United%20F.C.%20records%20and%20statistics
|
This article lists honours and records associated with Sheffield United F.C..
Club honours and best performances
Major domestic competitions
Leagues
First Division / Premier League (level 1)
Champions (1): 1897–98
Runners-up (2): 1896–97, 1899–1900
Second Division / First Division / Championship (level 2)
Champions (1): 1952–53
Runners-up (8): 1892–93, 1938–39, 1960–61, 1970–71, 1989–90, 2005–06, 2018–19, 2022–23 (as The Championship)
Third Division / Third Division South / Second Division / League One (level 3)
Champions (1): 2016–17 (as League One)
Runners-up (1): 1988–89
Promoted in third place (1): 1983–84
Fourth Division / Third Division / League Two (level 4)
Champions (1): 1981–82
Football League North
Champions (1): 1945–46
Cups
FA Cup
Winners (4): 1899, 1902, 1915, 1925
Runners-up (2): 1901, 1936
Football League Cup
Best performance: Semi-final – 2002–03, 2014–15
Football League Trophy
Best performance: North quarter-final – 2011–12, 2012–13, 2014–15, 2015–16
Club records
League
Record League Win: 10–0 away v Port Vale, Division Two, 10 December 1892 (Goals scored by Drummond, Wallace, Hammond (4), Watson (2) & Davies (2)) and 10–0 home v Burnley, Division One, 19 January 1929 (Goals scored by Harry Johnson 8th, 11th, 49th, 64th, Fred Tunstall 59th, 67th pen, Tom Phillipson 68th, 87th, Billy Gillespie 77th & Sid Gibson 86th)
Record League defeat: 0–8 home v Newcastle United, Premier League, 24 September 2023
Most League Points in a Season (2 points for a win): 60 in Division Two, 1952–53
Most League Points in a Season (3 points for a win): 100 in League One, 2016–17
Most League Goals: 102 in Division One, 1925–26
Most League Wins in a Season (46 games): 30 in League One, 2016–17
Most League Wins in a Season (42 games): 26 in Division Two, 1960–61
Highest Percentage of League Wins in a Season: 72.7% (16 wins from 22 games) in Division Two, 1892–93
Most Home League Wins in a Season (46 games): 17 in League One, 2016–17
Most Home League Wins in a Season (42 games): 16 in Division Two, 1936–37, 1958–59, 1960–61 & 1969–70
Highest Percentage of Home League Wins in a Season: 90.9% (10 wins from 11 games) in Division Two, 1892–93
Most Away League Wins in a Season (46 games): 13 in League One, 2016–17
Highest Percentage of Away League Wins in a Season: 56.5% (13 wins from 23 games) in League One, 2016–17
Most Home League Goals in a Season (46 games): 57 in Division Three, 1988–89
Most Away League Goals in a Season (46 games): 50 in League One, 2016–17
Highest Percentage of League Doubles in a Season: 47.8% (11 doubles from 23 opponents) in League One, 2016–17
Successive League Wins: 8 in 1892–93; 1903–04; 1945–46; 1957–58; 1960–61; 2005–06 & 2016–17 / 2017–18
Successive Home League Wins: 11 in Division Two, 1960–61
Successive Away League Wins: 6 in Division Two, 1892–93
Successive League Games Without Defeat: 22 in Division One, 1899–1900
Successive Home League Games Without Defeat: 26 in Division Four and
|
https://en.wikipedia.org/wiki/Algebraic%20Logic%20Functional%20programming%20language
|
Algebraic Logic Functional (ALF) programming language combines functional and logic programming techniques. Its foundation is Horn clause logic with equality, which consists of predicates and Horn clauses for logic programming, and functions and equations for functional programming.
ALF was designed to be a genuine integration of both programming paradigms, and thus any functional expression can be used in a goal literal and arbitrary predicates can occur in conditions of equations. ALF's operational semantics is based on the resolution rule to solve literals and narrowing to evaluate functional expressions. To reduce the number of possible narrowing steps, a leftmost-innermost basic narrowing strategy is used which, it is claimed, can be efficiently implemented. Terms are simplified by rewriting before a narrowing step is applied, and equations are rejected if the two sides have different constructors at the top. Rewriting and rejection are supposed to result in a large reduction of the search tree and produce an operational semantics that is more efficient than Prolog's resolution strategy. Similarly to Prolog, ALF uses a backtracking strategy corresponding to a depth-first search in the derivation tree.
The ALF system was designed to be an efficient implementation of the combination of resolution, narrowing, rewriting, and rejection. ALF programs are compiled into instructions of an abstract machine, which is based on the Warren Abstract Machine (WAM) with several extensions to implement narrowing and rewriting. In the current ALF implementation programs of this abstract machine are executed by an emulator written in C.
In the Carnegie Mellon University Artificial Intelligence Repository, ALF is included as an AI programming language, more specifically as a functional/logic programming language Prolog implementation. A user manual describing the language and the use of the system is available. The ALF System runs on Unix and is available under a custom proprietary software license that grants the right to use for "evaluation, research and teaching purposes" but not commercial or military use.
References
External links
Publications of Michael Hanus, including many articles relevant to the design and theory of ALF
Information about getting and installing the ALF system
Functional logic programming languages
compilers and interpreters
Logic programming languages
Programming languages created in the 1990s
|
https://en.wikipedia.org/wiki/Octacube%20%28sculpture%29
|
The Octacube is a large, stainless steel sculpture displayed in the mathematics department of Pennsylvania State University in State College, PA. The sculpture represents a mathematical object called the 24-cell or "octacube". Because a real 24-cell is four-dimensional, the artwork is actually a projection into the three-dimensional world.
Octacube has very high intrinsic symmetry, which matches features in chemistry (molecular symmetry) and physics (quantum field theory).
The sculpture was designed by Adrian Ocneanu, a mathematics professor at Pennsylvania State University. The university's machine shop spent over a year completing the intricate metal-work. Octacube was funded by an alumna in memory of her husband, Kermit Anderson, who died in the September 11 attacks.
Artwork
The Octacube's metal skeleton measures about in all three dimensions. It is a complex arrangement of unpainted, tri-cornered flanges. The base is a high granite block, with some engraving.
The artwork was designed by Adrian Ocneanu, a Penn State mathematics professor. He supplied the specifications for the sculpture's 96 triangular pieces of stainless steel and for their assembly. Fabrication was done by Penn State's machine shop, led by Jerry Anderson. The work took over a year, involving bending and welding as well as cutting. Discussing the construction, Ocneanu said: It's very hard to make 12 steel sheets meet perfectly—and conformally—at each of the 23 vertices, with no trace of welding left. The people who built it are really world-class experts and perfectionists—artists in steel.
Because of the reflective metal at different angles, the appearance is pleasantly strange. In some cases, the mirror-like surfaces create an illusion of transparency by showing reflections from unexpected sides of the structure. The sculpture's mathematician creator commented: When I saw the actual sculpture, I had quite a shock. I never imagined the play of light on the surfaces. There are subtle optical effects that you can feel but can't quite put your finger on.
Interpretation
Regular shapes
The Platonic solids are three-dimensional shapes with special, high, symmetry. They are the next step up in dimension from the two-dimensional regular polygons (squares, equilateral triangles, etc.). The five Platonic solids are the tetrahedron (4 faces), cube (6 faces), octahedron (8 faces), dodecahedron (12 faces), and icosahedron (20 faces). They have been known since the time of the Ancient Greeks and valued for their aesthetic appeal and philosophical, even mystical, import. (See also the Timaeus, a dialogue of Plato.)
In higher dimensions, the counterparts of the Platonic solids are the regular polytopes. These shapes were first described in the mid-19th century by a Swiss mathematician, Ludwig Schläfli. In four dimensions, there are six of them: the pentachoron (5-cell), tesseract (8-cell), hexadecachoron (16-cell), octacube (24-cell), hecatonicosachoron (120-cell), and the hexacosichoron (6
|
https://en.wikipedia.org/wiki/Alexander%20MacAuley
|
Alexander MacAuley may refer to:
Alexander McAulay (1863–1931), professor of mathematics and physics at the University of Tasmania
Alexander MacAuley (footballer), Scottish footballer
|
https://en.wikipedia.org/wiki/State%20Statistics%20Service%20of%20Ukraine
|
State Statistics Committee of Ukraine (, Derzhavnyi Komitet Statystyky Ukrainy) is the government agency responsible for collection and dissemination of statistics in Ukraine. For brevity, it was also referred to as Derzhkomstat. In 2010, the committee was transformed into the State Service of Statistics under the Ministry of Economic Development and Trade.
Institutions
Science and Research Institute of Statistics, keeps track of the Classification of objects of the administrative-territorial system of Ukraine
See also
Ukrainian Census (2001), Censuses in Ukraine
External links
Official website (Ukrainian, Russian, English)
2001 Ukraine Census
Presidential decree #1085/2010 "On optimization of the system of central bodies of executive power" (Ukrainian)
Statistics
Ukraine
Ministry of Economic Development, Trade and Agriculture
Central executive bodies of Ukraine
|
https://en.wikipedia.org/wiki/Number%20sentence
|
In mathematics education, a number sentence is an equation or inequality expressed using numbers and mathematical symbols. The term is used in primary level mathematics teaching in the US, Canada, UK, Australia, New Zealand and South Africa.
Usage
The term is used as a means of asking students to write down equations using simple mathematical symbols (numerals, the four basic arithmetic operators, and the equality symbol). Sometimes boxes or shapes are used to indicate unknown values. As such, number sentences are used to introduce students to notions of structure and elementary algebra prior to a more formal treatment of these concepts.
A number sentence without unknowns is equivalent to a logical proposition expressed using the notation of arithmetic.
Examples
A valid number sentence that is true: 83 + 19 = 102.
A valid number sentence that is false: 1 + 1 = 3.
A valid number sentence using a 'less than' symbol: 3 + 6 < 10.
A valid number sentence using a 'more than' symbol: 3 + 9 > 11.
An example from a lesson plan:
Some students will use a direct computational approach. They will carry out the addition 26 + 39 = 65, put 65 = 26 + □, and then find that □ = 39.
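The examples above can be checked mechanically; a small Python sketch (the helper name is illustrative, not standard terminology):

```python
def is_true(lhs, op, rhs):
    """Evaluate a number sentence such as '83 + 19 = 102'."""
    return {"=": lhs == rhs, "<": lhs < rhs, ">": lhs > rhs}[op]

print(is_true(83 + 19, "=", 102))   # True
print(is_true(1 + 1, "=", 3))       # False
print(is_true(3 + 6, "<", 10))      # True

# Solving 65 = 26 + [box] by undoing the addition, as in the lesson plan:
box = 65 - 26
print(box)                          # 39
```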
See also
Expression (mathematics)
Equation
Inequality (mathematics)
Open sentence
Sentence (mathematical logic)
References
Mathematics education
|
https://en.wikipedia.org/wiki/Thomas%20J.R.%20Hughes
|
Thomas Joseph Robert Hughes (born 1943) is a Professor of Aerospace Engineering and Engineering Mechanics and currently holds the Computational and Applied Mathematics Chair (III) at the Oden Institute at The University of Texas at Austin.
Hughes has been listed as an ISI Highly Cited Author in Engineering by the ISI Web of Knowledge, Thomson Scientific Company.
A leading expert in computational mechanics, Hughes has received numerous academic distinctions and awards for his work. He is a research fellow of the National Academy of Sciences, National Academy of Engineering, American Academy of Arts & Sciences, the American Academy of Mechanics, the American Society of Mechanical Engineers (ASME), the U.S. Association for Computational Mechanics (USACM), the International Association for Computational Mechanics (IACM), the American Association for the Advancement of Science, and has been elected as a foreign member of The Royal Society. He is a founder and past President of USACM and IACM, and past chairman of the Applied Mechanics Division of ASME.
Career
Hughes began his career as a mechanical design engineer at Grumman Aerospace, subsequently joining General Dynamics as a research and development engineer. After receiving his Ph.D. from University of California, Berkeley, he joined the Berkeley faculty, eventually moving to California Institute of Technology. He then moved to Stanford University before joining The University of Texas at Austin. At Stanford, he served as chairman of the Division of Applied Mechanics, chairman of the Department of Mechanical Engineering, and chairman of the Division of Mechanics and Computation, where Hughes occupied the Mary and Gordon Crary Chair of Engineering. While at Stanford, he served as a member of International Advisory Committee, ICTACEM (2001).
Hughes has developed computational methods for understanding solid, structural and fluid mechanics. He recently has applied this expertise to develop customized models of blood flow for patients using their individual imaging records such as CT scans and MRIs.
Hughes was elected to the National Academy of Engineering in 1995 for contributions to the development of finite element methods for solid-structural and fluid mechanics.
Books
Thomas J. R. Hughes and Jerrold E. Marsden, A Short Course in Fluid Mechanics, Mathematics lecture series, v. 6, Boston: Publish or Perish, 1976.
Thomas J. R. Hughes, D. Gartling, Robert L. Spilker, Applied Mechanics Division, Vol. 44: New Concepts in Finite Element Analysis, ASME, 1981.
Thomas J. R. Hughes, A. Pifko, A. Jay, Applied Mechanics Division, Vol. 48: Nonlinear Finite Element Analysis of Plates and Shells, ASME, 1981.
Thomas J. R. Hughes, Stress-point algorithm for a pressure-sensitive multiple-yield-surface plasticity theory, Unknown Binding, Available from National Technical Information Service, 1982.
Computational methods for transient analysis, edited by Ted Belytschko and Thomas J.R. Hughes, Computational m
|
https://en.wikipedia.org/wiki/Biometrics%20%28journal%29
|
Biometrics is a journal that publishes articles on the application of statistics and mathematics to the biological sciences. It is published by the International Biometric Society (IBS). Originally published in 1945 under the title Biometrics Bulletin, the journal adopted the shorter title in 1947. A notable contributor to the journal was R.A. Fisher, for whom a memorial edition was published in 1964. In a survey of statistics researchers' opinions, it was ranked fifth overall among 40 statistics journals, and it was second only to the Journal of the American Statistical Association in the ranking provided by biometrics specialists.
References
External links
Publisher website (Wiley)
International Biometric Society (IBS)
Biostatistics journals
Academic journals established in 1945
Wiley-Blackwell academic journals
English-language journals
Quarterly journals
Academic journals associated with international learned and professional societies
|
https://en.wikipedia.org/wiki/Eugeniu%20Cebotaru
|
Eugeniu Cebotaru (born 16 October 1984) is a Moldovan professional football coach and a former player. He serves as an assistant coach for Liga I club Petrolul Ploiești.
Career statistics
International stats
International goals
Scores and results list Moldova's goal tally first.
Honours
Zimbru Chișinău
Moldovan Cup: 2003–04
Ceahlăul Piatra Neamț
Liga II: 2008–09, 2010–11
Petrolul Ploiești
Liga II: 2021–22
References
External links
1984 births
Living people
Footballers from Chișinău
Moldovan people of Romanian descent
Moldovan men's footballers
Moldova men's international footballers
Men's association football midfielders
Moldovan Super Liga players
FC Zimbru Chișinău players
Liga I players
CSM Ceahlăul Piatra Neamț players
LPS HD Clinceni players
Liga II players
FC Petrolul Ploiești players
Russian Premier League players
PFC Spartak Nalchik players
FC Sibir Novosibirsk players
Moldovan expatriate men's footballers
Moldovan expatriate sportspeople in Romania
Expatriate men's footballers in Romania
Moldovan expatriate sportspeople in Russia
Expatriate men's footballers in Russia
|
https://en.wikipedia.org/wiki/Polyad
|
In mathematics, a polyad is a concept of category theory introduced by Jean Bénabou in generalising monads. A polyad in a bicategory D is a bicategory morphism Φ : C → D from a locally punctual bicategory C to D. (A bicategory C is called locally punctual if all hom-categories C(X,Y) consist of one object and one morphism only.) Monads are polyads where C has only one object.
Notes
Bibliography
Category theory
|
https://en.wikipedia.org/wiki/Giac
|
Giac or GIAC may refer to:
Global Information Assurance Certification, an information security certification entity.
Giac (software), a C++ library that is part of the Xcas computer algebra system
|
https://en.wikipedia.org/wiki/Wallenius%27%20noncentral%20hypergeometric%20distribution
|
In probability theory and statistics, Wallenius' noncentral hypergeometric distribution (named after Kenneth Ted Wallenius) is a generalization of the hypergeometric distribution where items are sampled with bias.
This distribution can be illustrated as an urn model with bias. Assume, for example, that an urn contains m1 red balls and m2 white balls, totalling N = m1 + m2 balls. Each red ball has the weight ω1 and each white ball has the weight ω2. We will say that the odds ratio is ω = ω1 / ω2. Now we are taking n balls, one by one, in such a way that the probability of taking a particular ball at a particular draw is equal to its proportion of the total weight of all balls that lie in the urn at that moment. The number of red balls x1 that we get in this experiment is a random variable with Wallenius' noncentral hypergeometric distribution.
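The urn scheme above can be simulated directly: balls are drawn one at a time, each with probability proportional to the weight still remaining in the urn. A minimal Python sketch (parameter values are illustrative):

```python
import random

random.seed(1)

def wallenius_draws(m1, m2, w1, w2, n):
    """Draw n balls one by one without replacement; each remaining ball's
    chance of being taken equals its weight divided by the total weight
    still in the urn. Returns the number of red balls drawn."""
    reds = 0
    r, w = m1, m2                     # red and white balls remaining
    for _ in range(n):
        p_red = r * w1 / (r * w1 + w * w2)
        if random.random() < p_red:
            reds += 1
            r -= 1
        else:
            w -= 1
    return reds

# Monte Carlo estimate of the mean number of red balls drawn when the
# red balls are twice as heavy as the white ones.
samples = [wallenius_draws(m1=6, m2=6, w1=2.0, w2=1.0, n=6) for _ in range(20000)]
mean_reds = sum(samples) / len(samples)
print(mean_reds)   # biased above the unweighted expectation of 3.0
```

With equal weights the scheme reduces to ordinary sampling without replacement, i.e. the central hypergeometric distribution.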
The matter is complicated by the fact that there is more than one noncentral hypergeometric distribution. Wallenius' noncentral hypergeometric distribution is obtained if balls are sampled one by one in such a way that there is competition between the balls. Fisher's noncentral hypergeometric distribution is obtained if the balls are sampled simultaneously or independently of each other. Unfortunately, both distributions are known in the literature as "the" noncentral hypergeometric distribution. It is important to be specific about which distribution is meant when using this name.
The two distributions are both equal to the (central) hypergeometric distribution when the odds ratio is 1.
The difference between these two probability distributions is subtle. See the Wikipedia entry on noncentral hypergeometric distributions for a more detailed explanation.
Univariate distribution
Wallenius' distribution is particularly complicated because each ball has a probability of being taken that depends not only on its weight, but also on the total weight of its competitors, and the weight of the competing balls depends on the outcomes of all preceding draws.
This recursive dependency gives rise to a difference equation with a solution that is given in open form by the integral in the expression of the probability mass function in the table above.
Closed form expressions for the probability mass function exist (Lyons, 1980), but they are not very useful for practical calculations because of extreme numerical instability, except in degenerate cases.
Several other calculation methods are used, including recursion, Taylor expansion and numerical integration (Fog, 2007, 2008).
The most reliable calculation method is recursive calculation of f(x,n) from f(x,n-1) and f(x-1,n-1) using the recursion formula given below under properties. The probabilities of all (x,n) combinations on all possible trajectories leading to the desired point are calculated, starting with f(0,0) = 1 as shown on the figure to the right. The total number of probabilities to calculate is n(x+1) − x². Other calculation methods must be used when n
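The recursive scheme can be sketched in Python. The recursion f(x, n) = f(x−1, n−1)·P(next ball red) + f(x, n−1)·P(next ball white) follows directly from the urn description; since ω = 1 removes the bias, the result can be checked against the central hypergeometric distribution:

```python
def wallenius_pmf(m1, m2, w1, w2, n):
    """Probability f(x, n) of drawing x red balls in n weighted draws,
    computed by the recursion
        f(x, n) = f(x-1, n-1) * P(red next) + f(x, n-1) * P(white next),
    starting from f(0, 0) = 1."""
    f = {(0, 0): 1.0}
    for k in range(1, n + 1):
        for x in range(max(0, k - m2), min(k, m1) + 1):
            p = 0.0
            if x >= 1 and (x - 1, k - 1) in f:
                r, w = m1 - (x - 1), m2 - (k - x)      # urn before a red draw
                if r > 0:
                    p += f[(x - 1, k - 1)] * r * w1 / (r * w1 + w * w2)
            if (x, k - 1) in f:
                r, w = m1 - x, m2 - (k - 1 - x)        # urn before a white draw
                if w > 0:
                    p += f[(x, k - 1)] * w * w2 / (r * w1 + w * w2)
            f[(x, k)] = p
    return [f.get((x, n), 0.0) for x in range(n + 1)]

pmf = wallenius_pmf(m1=6, m2=6, w1=2.0, w2=1.0, n=6)
print(round(sum(pmf), 6))   # probabilities sum to 1.0
```

This illustrates only the recursion; production implementations (e.g. Fog, 2008) add scaling and switch to other methods for large n.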
|
https://en.wikipedia.org/wiki/Mazur%27s%20lemma
|
In mathematics, Mazur's lemma is a result in the theory of normed vector spaces. It shows that any weakly convergent sequence in a normed space has a sequence of convex combinations of its members that converges strongly to the same limit, and is used in the proof of Tonelli's theorem.
Statement of the lemma
Let $(X, \|\cdot\|)$ be a normed vector space and let $(x_n)_{n \in \mathbb{N}}$ be a sequence in $X$ that converges weakly to some $x_0$ in $X$:
$$x_n \rightharpoonup x_0 \text{ as } n \to \infty.$$
That is, $f(x_n) \to f(x_0)$ for every continuous linear functional $f$ in $X^{*}$, the continuous dual space of $X$.
Then there exists a function $N : \mathbb{N} \to \mathbb{N}$ and a sequence of sets of real numbers
$$\{ \alpha(n)_k \mid k = n, \dots, N(n) \}$$
such that $\alpha(n)_k \geq 0$ and $\sum_{k=n}^{N(n)} \alpha(n)_k = 1$
such that the sequence $(y_n)_{n \in \mathbb{N}}$ defined by the convex combination
$$y_n = \sum_{k=n}^{N(n)} \alpha(n)_k x_k$$
converges strongly in $X$ to $x_0$; that is, $\| y_n - x_0 \| \to 0$ as $n \to \infty$.
See also
References
Banach spaces
Theorems involving convexity
Theorems in functional analysis
Lemmas in analysis
Compactness theorems
|
https://en.wikipedia.org/wiki/Tonelli%27s%20theorem%20%28functional%20analysis%29
|
In mathematics, Tonelli's theorem in functional analysis is a fundamental result on the weak lower semicontinuity of nonlinear functionals on Lp spaces. As such, it has major implications for functional analysis and the calculus of variations. Roughly, it shows that weak lower semicontinuity for integral functionals is equivalent to convexity of the integral kernel. The result is attributed to the Italian mathematician Leonida Tonelli.
Statement of the theorem
Let $\Omega$ be a bounded domain in $n$-dimensional Euclidean space $\mathbb{R}^n$ and let $f : \mathbb{R}^m \to \mathbb{R} \cup \{\pm\infty\}$ be a continuous extended real-valued function. Define a nonlinear functional $F$ on functions $u : \Omega \to \mathbb{R}^m$ by
$$F[u] = \int_{\Omega} f(u(x)) \, \mathrm{d}x.$$
Then $F$ is sequentially weakly lower semicontinuous on the space $L^p(\Omega; \mathbb{R}^m)$ for $1 < p < +\infty$ and weakly-∗ lower semicontinuous on $L^\infty(\Omega; \mathbb{R}^m)$ if and only if the function $f$
is convex.
See also
References
(Theorem 10.16)
Calculus of variations
Convex analysis
Function spaces
Measure theory
Theorems in functional analysis
Variational analysis
|
https://en.wikipedia.org/wiki/Polyconvex%20function
|
In mathematics, the notion of polyconvexity is a generalization of the notion of convexity for functions defined on spaces of matrices. Let Mm×n(K) denote the space of all m × n matrices over the field K, which may be either the real numbers R, or the complex numbers C. A function f : Mm×n(K) → R ∪ {±∞} is said to be polyconvex if
$f(A)$ can be written as a convex function of the p × p subdeterminants (minors) of $A$, for 1 ≤ p ≤ min{m, n}.
Polyconvexity is a weaker property than convexity. For example, the function $f$ on M2×2(R) given by
$$f(A) = \det A$$
is polyconvex but not convex: it is itself a 2 × 2 subdeterminant, hence a linear (and so convex) function of the minors of $A$, yet it is not a convex function of the entries of $A$.
References
(Definition 10.25)
Convex analysis
Matrices
Types of functions
|
https://en.wikipedia.org/wiki/Pseudo-monotone%20operator
|
In mathematics, a pseudo-monotone operator from a reflexive Banach space into its continuous dual space is one that is, in some sense, almost as well-behaved as a monotone operator. Many problems in the calculus of variations can be expressed using operators that are pseudo-monotone, and pseudo-monotonicity in turn implies the existence of solutions to these problems.
Definition
Let (X, || ||) be a reflexive Banach space. A map T : X → X∗ from X into its continuous dual space X∗ is said to be pseudo-monotone if T is a bounded operator (not necessarily continuous) and if whenever
$$u_j \rightharpoonup u \text{ in } X$$
(i.e. $u_j$ converges weakly to $u$) and
$$\limsup_{j \to \infty} \langle T(u_j), u_j - u \rangle \leq 0,$$
it follows that, for all $v \in X$,
$$\liminf_{j \to \infty} \langle T(u_j), u_j - v \rangle \geq \langle T(u), u - v \rangle.$$
Properties of pseudo-monotone operators
Using a very similar proof to that of the Browder–Minty theorem, one can show the following:
Let (X, || ||) be a real, reflexive Banach space and suppose that T : X → X∗ is bounded, coercive and pseudo-monotone. Then, for each continuous linear functional g ∈ X∗, there exists a solution u ∈ X of the equation T(u) = g.
References
(Definition 9.56, Theorem 9.57)
Banach spaces
Calculus of variations
Operator theory
|
https://en.wikipedia.org/wiki/Browder%E2%80%93Minty%20theorem
|
In mathematics, the Browder–Minty theorem (sometimes called the Minty–Browder theorem) states that a bounded, continuous, coercive and monotone function T from a real, separable reflexive Banach space X into its continuous dual space X∗ is automatically surjective. That is, for each continuous linear functional g ∈ X∗, there exists a solution u ∈ X of the equation T(u) = g. (Note that T itself is not required to be a linear map.)
The theorem is named in honor of Felix Browder and George J. Minty, who independently proved it.
See also
Pseudo-monotone operator; pseudo-monotone operators obey a near-exact analogue of the Browder–Minty theorem.
References
(Theorem 10.49)
Banach spaces
Theorems in functional analysis
Operator theory
|
https://en.wikipedia.org/wiki/Timeline%20of%20algebra
|
The following is a timeline of key developments of algebra:
See also
History of algebra – Historical development of algebra
References
History of algebra
Algebra
|
https://en.wikipedia.org/wiki/Earth%20mover%27s%20distance
|
In computer science, the earth mover's distance (EMD) is a distance-like measure of dissimilarity between two frequency distributions, densities, or measures over a region D.
For probability distributions and normalized histograms, it reduces to the Wasserstein metric .
Informally, if the distributions are interpreted as two different ways of piling up earth (dirt) over the region D, the EMD captures the minimum cost of building the smaller pile using dirt taken from the larger, where cost is defined as the amount of dirt moved multiplied by the ground distance over which it is moved.
Theory
Assume that we have a set of points in $\mathbb{R}^d$ (dimension $d$). Instead of assigning one distribution to the set of points, we can cluster them and represent the point set in terms of the clusters. Thus, each cluster is a single point in $\mathbb{R}^d$ and the weight of the cluster is decided by the fraction of the distribution present in that cluster. This representation of a distribution by a set of clusters is called the signature. Two signatures can have different sizes: for example, a bimodal distribution has a shorter signature (2 clusters) than complex ones. One cluster representative (mean or mode in $\mathbb{R}^d$) can be thought of as a single feature in a signature. The distance between features is called the ground distance.
The Earth Mover's Distance can be formulated and solved as a transportation problem. Suppose that several suppliers, each with a given amount of goods, are required to supply several consumers, each with a given limited capacity. For each supplier-consumer pair, the cost of transporting a single unit of goods is given. The transportation problem is then to find a least-expensive flow of goods from the suppliers to the consumers that satisfies the consumers' demand. Similarly, here the problem is transforming one signature ($P$) into another ($Q$) with minimum work done.
Assume that signature $P = \{(p_1, w_{p_1}), \dots, (p_m, w_{p_m})\}$ has $m$ clusters, where $p_i$ is the cluster representative and $w_{p_i}$ is the weight of the cluster. Similarly, another signature $Q = \{(q_1, w_{q_1}), \dots, (q_n, w_{q_n})\}$ has $n$ clusters. Let $d_{i,j}$ be the ground distance between clusters $p_i$ and $q_j$.
We want to find a flow $F = [f_{i,j}]$, with $f_{i,j}$ the flow between $p_i$ and $q_j$, that minimizes the overall cost
$$\min_{F} \sum_{i=1}^{m} \sum_{j=1}^{n} f_{i,j} \, d_{i,j}$$
subject to the constraints:
$$f_{i,j} \geq 0, \qquad \sum_{j=1}^{n} f_{i,j} \leq w_{p_i}, \qquad \sum_{i=1}^{m} f_{i,j} \leq w_{q_j}, \qquad \sum_{i=1}^{m} \sum_{j=1}^{n} f_{i,j} = \min\Big\{ \sum_{i=1}^{m} w_{p_i}, \; \sum_{j=1}^{n} w_{q_j} \Big\}.$$
The optimal flow $F^{*}$ is found by solving this linear optimization problem. The earth mover's distance is defined as the work normalized by the total flow:
$$\mathrm{EMD}(P, Q) = \frac{\sum_{i,j} f^{*}_{i,j} \, d_{i,j}}{\sum_{i,j} f^{*}_{i,j}}.$$
On probability distributions
Suppose $P$ and $Q$ represent probability distributions, i.e. they both have total weight 1. In this case, the flow can be interpreted as a joint probability distribution, the total flow is also 1, and the EMD equals the 1-Wasserstein distance:
$$\mathrm{EMD}(P, Q) = W_1(P, Q) = \inf_{\pi \in \Pi(P, Q)} \int d(x, y) \, \mathrm{d}\pi(x, y),$$
where $\Pi(P, Q)$ is the set of all joint distributions whose marginals are $P$ and $Q$.
By Kantorovich–Rubinstein duality, this can also be expressed as:
$$\mathrm{EMD}(P, Q) = \sup_{f} \Big( \mathbb{E}_{x \sim P}[f(x)] - \mathbb{E}_{y \sim Q}[f(y)] \Big),$$
where the supremum is taken over all 1-Lipschitz continuous functions $f$, i.e. those with $|f(x) - f(y)| \leq d(x, y)$ for all $x, y$.
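As an illustration of the reduction to the Wasserstein metric: for one-dimensional histograms on a common grid, with unit ground distance between adjacent bins and equal total mass, the EMD equals the L1 distance between the cumulative sums. A pure-Python sketch (the function name is illustrative):

```python
def emd_1d(p, q):
    """EMD between two 1-D histograms of equal total mass defined on the
    same bins, with unit ground distance between adjacent bins. Equals
    the L1 distance between the two cumulative distributions."""
    assert abs(sum(p) - sum(q)) < 1e-12, "histograms must have equal mass"
    total, cum = 0.0, 0.0
    for pi, qi in zip(p, q):
        cum += pi - qi          # surplus earth carried past this bin
        total += abs(cum)       # cost of moving it one bin further
    return total

# Moving all mass one bin to the right costs 1 unit of work per unit mass.
print(emd_1d([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))   # 1.0
```

In higher dimensions no such closed form exists and the transportation linear program must be solved directly.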
General case
Let $w_P$ be the total weight of $P$, and $w_Q$ be the total weight of $Q$. We have:
where is the set of all measures whose projections are
|
https://en.wikipedia.org/wiki/Mary%20P.%20Dolciani
|
Mary P. Dolciani (1923–1985) was an American mathematician, known for her work with secondary-school mathematics teachers.
Education and career
Dolciani earned her Bachelor of Arts degree (B.A.) at Hunter College in New York City, and she completed her doctor of philosophy (Ph.D.) at Cornell University in 1947 with B. W. Jones as thesis advisor. She taught briefly at Vassar College before returning to Hunter, where she spent the next forty years. Dolciani taught mathematics there, and at times, she also served as a Dean or the Provost.
Contributions
Beginning in the 1960s, Mary Dolciani wrote a series of high school mathematics textbooks, Structure and Method, which in 2000–2010 experienced a resurgence of popularity.
Shortly before her death in 1985, Dolciani also co-wrote (along with two other mathematics educators) Pre-Algebra: An Accelerated Course. This textbook was widely used in the later 1980s through the 1990s. In addition to teaching the pure mathematics, it emphasized the usefulness of algebra in various practical applications.
Although Dolciani is not well known by the general public, she was influential in developing the basic modern method used for teaching basic algebra in the United States (called "Dolciani algebra", which teaches it on the basis of drill, like arithmetic, rather than on the basis of proofs as in Euclidean geometry). Dolciani also popularized the short-form names of the properties that are familiar to many high school algebra students, e.g. the "Zero Property".
Legacy
The American Mathematical Society publishes a series of mathematical books named for her: The Dolciani Mathematical Expositions. Also, the Mathematical Association of America's headquarters building in Washington D.C. is named The Dolciani Mathematical Center in her honor. The Mathematical Association of America has given the Mary P. Dolciani Award annually since 2012 for distinguished contributions to teaching, and the American Mathematical Society has given a different award, the Mary P. Dolciani Prize for Excellence in Research, every other year beginning in 2019.
In 1982, Dr. Mary P. Dolciani Halloran, with her husband James J. Halloran and Eugene J. Callahan as Trustees, established the Mary P. Dolciani Halloran Foundation to further the study of mathematics and mathematics education.
References
External links
Hunter College alumni
American women mathematicians
20th-century American mathematicians
Cornell University alumni
Vassar College faculty
Hunter College faculty
1985 deaths
1923 births
20th-century women mathematicians
20th-century American women
|
https://en.wikipedia.org/wiki/Rod%20calculus
|
Rod calculus or rod calculation was the mechanical method of algorithmic computation with counting rods in China from the Warring States period to the Ming dynasty, before the counting rods were increasingly replaced by the more convenient and faster abacus. Rod calculus played a key role in the development of Chinese mathematics to its height in the Song and Yuan dynasties, culminating in the solution of polynomial equations in up to four unknowns in the work of Zhu Shijie.
Hardware
The basic equipment for carrying out rod calculus is a bundle of counting rods and a counting board. The counting rods are usually made of bamboo sticks, about 12–15 cm in length and 2–4 mm in diameter, sometimes of animal bone, or of ivory and jade (for well-heeled merchants). A counting board could be a table top, a wooden board with or without a grid, the floor, or sand.
In 1971 Chinese archaeologists unearthed a bundle of well-preserved animal-bone counting rods, stored in a silk pouch, from a tomb in Qian Yang county in Shaanxi province, dating back to the first half of the Han dynasty (206 BC – 8 AD). In 1975 a bundle of bamboo counting rods was unearthed.
The use of counting rods for rod calculus flourished in the Warring States period, although no archaeological artefacts have been found earlier than the Western Han dynasty (the first half of the Han dynasty). Archaeologists did, however, unearth "software" artefacts of rod calculus dating back to the Warring States; since the rod calculus software must have gone along with rod calculus hardware, rod calculus was evidently already flourishing during the Warring States, more than 2,200 years ago.
Software
The key software required for rod calculus was a simple 45-phrase positional decimal multiplication table used in China since antiquity, called the nine-nine table, which was learned by heart by pupils, merchants, government officials and mathematicians alike.
Rod numerals
Displaying numbers
Rod numerals are the only numeral system that uses different placements of a single type of symbol to convey any number or fraction in the decimal system. For numbers in the units place, every vertical rod represents 1. Two vertical rods represent 2, and so on, up to 5 vertical rods, which represent 5. For numbers between 6 and 9, a biquinary system is used, in which a horizontal bar on top of the vertical bars represents 5. The first row shows the numbers 1 to 9 in rod numerals, and the second row shows the same numbers in horizontal form.
For numbers larger than 9, a decimal system is used. Rods placed one place to the left of the units place represent 10 times that number. For the hundreds place, another set of rods is placed to the left which represents 100 times of that number, and so on. As shown in the adjacent image, the number 231 is represented in rod numerals in the top row, with one rod in the units place representing 1, three rods in the tens place representing 30, and two rods in the hundreds place representing 200, with a s
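The digit encoding described above can be sketched in a few lines of Python. This is an editorial illustration, not from the original text; representing each digit as a (five-rods, unit-rods) pair is an arbitrary encoding choice.

```python
def rod_digit(d):
    """Rod counts (five_rods, unit_rods) for a single digit 0-9.
    Digits 1-5 use only vertical unit rods; 6-9 add one horizontal
    five-rod on top (the biquinary part)."""
    return (d // 5, d % 5)

def rod_number(n):
    """Encode a non-negative integer digit by digit, most significant
    first, mirroring the decimal place-value layout on the board."""
    return [rod_digit(int(c)) for c in str(n)]

print(rod_digit(7))      # (1, 2): one five-rod and two unit rods
print(rod_number(231))   # [(0, 2), (0, 3), (0, 1)]
```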
|
https://en.wikipedia.org/wiki/Primitive%20part%20and%20content
|
In algebra, the content of a nonzero polynomial with integer coefficients (or, more generally, with coefficients in a unique factorization domain) is the greatest common divisor of its coefficients. The primitive part of such a polynomial is the quotient of the polynomial by its content. Thus a polynomial is the product of its primitive part and its content, and this factorization is unique up to the multiplication of the content by a unit of the ring of the coefficients (and the multiplication of the primitive part by the inverse of the unit).
A polynomial is primitive if its content equals 1. Thus the primitive part of a polynomial is a primitive polynomial.
Gauss's lemma for polynomials states that the product of primitive polynomials (with coefficients in the same unique factorization domain) also is primitive. This implies that the content and the primitive part of the product of two polynomials are, respectively, the product of the contents and the product of the primitive parts.
As the computation of greatest common divisors is generally much easier than polynomial factorization, the first step of a polynomial factorization algorithm is generally the computation of the primitive part–content factorization. The factorization problem is thus reduced to factoring the content and the primitive part separately.
Content and primitive part may be generalized to polynomials over the rational numbers, and, more generally, to polynomials over the field of fractions of a unique factorization domain. This makes essentially equivalent the problems of computing greatest common divisors and factorization of polynomials over the integers and of polynomials over the rational numbers.
Over the integers
For a polynomial with integer coefficients, the content may be either the greatest common divisor of the coefficients or its additive inverse. The choice is arbitrary, and may depend on a further convention, which is commonly that the leading coefficient of the primitive part be positive.
For example, a polynomial with coefficients −12, 30, and −20 has content either 2 or −2, since 2 is the greatest common divisor of −12, 30, and −20. If one chooses 2 as the content, the primitive part of this polynomial is
and thus the primitive-part-content factorization is
For aesthetic reasons, one often prefers choosing a negative content, here −2, giving the primitive-part-content factorization
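A short Python sketch of the primitive part–content factorization over the integers, following the sign convention above (the content carries the sign so that the primitive part gets a positive leading coefficient). The coefficient-list representation and the sample polynomial are editorial assumptions for illustration.

```python
from math import gcd
from functools import reduce

def content(coeffs):
    """Content of a nonzero integer polynomial, given as a list of
    coefficients (leading term first): the gcd of the coefficients,
    signed so that the primitive part has a positive leading
    coefficient."""
    g = reduce(gcd, (abs(c) for c in coeffs))
    leading = next(c for c in coeffs if c != 0)
    return -g if leading < 0 else g

def primitive_part(coeffs):
    """Quotient of the polynomial by its content."""
    c = content(coeffs)
    return [a // c for a in coeffs]

p = [-12, 0, 30, -20]        # coefficients with gcd 2 and negative leading term
print(content(p))            # -2
print(primitive_part(p))     # [6, 0, -15, 10]
```

With this convention the content is −2 and the primitive part has positive leading coefficient 6, matching the "negative content" choice discussed above.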
Properties
In the remainder of this article, we consider polynomials over a unique factorization domain , which can typically be the ring of integers, or a polynomial ring over a field. In , greatest common divisors are well defined, and are unique up to multiplication by a unit of .
The content of a polynomial with coefficients in is the greatest common divisor of its coefficients, and, as such, is defined up to multiplication by a unit. The primitive part of is the quotient of by its content; it is a polynomial with coefficients in , which is unique up to multiplication by a unit. If the
|
https://en.wikipedia.org/wiki/Natural%20topology
|
In any domain of mathematics, a space has a natural topology if there is a topology on the space which is "best adapted" to its study within the domain in question. In many cases this imprecise definition means little more than the assertion that the topology in question arises naturally or canonically (see mathematical jargon) in the given context.
Note that in some cases multiple topologies seem "natural". For example, if Y is a subset of a totally ordered set X, then the induced order topology, i.e. the order topology of the totally ordered Y, where this order is inherited from X, is coarser than the subspace topology of the order topology of X.
"Natural topology" does quite often have a more specific meaning, at least given some prior contextual information: the natural topology is a topology which makes a natural map or collection of maps continuous. This is still imprecise, even once one has specified what the natural maps are, because there may be many topologies with the required property. However, there is often a finest or coarsest topology which makes the given maps continuous, in which case these are obvious candidates for the natural topology.
The simplest cases (which nevertheless cover many examples) are the initial topology and the final topology (Willard (1970)). The initial topology is the coarsest topology on a space X which makes a given collection of maps from X to topological spaces Xi continuous. The final topology is the finest topology on a space X which makes a given collection of maps from topological spaces Xi to X continuous.
Two of the simplest examples are the natural topologies of subspaces and quotient spaces.
The natural topology on a subset of a topological space is the subspace topology. This is the coarsest topology which makes the inclusion map continuous.
The natural topology on a quotient of a topological space is the quotient topology. This is the finest topology which makes the quotient map continuous.
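For a finite example, the subspace topology can be computed directly by intersecting each ambient open set with the subset. This Python sketch is an editorial illustration; the names and the sample topology are not from the source.

```python
def subspace_topology(topology, subset):
    """Subspace (induced) topology on `subset`: the intersections of
    the ambient open sets with the subset."""
    return {U & subset for U in topology}

# X = {1, 2, 3} with a non-discrete topology; Y = {2, 3}
X_top = [frozenset(s) for s in [set(), {1}, {1, 2}, {1, 2, 3}]]
Y = frozenset({2, 3})

print(sorted(sorted(U) for U in subspace_topology(X_top, Y)))
# [[], [2], [2, 3]]
```

Note that {2} is open in Y (as the trace of the ambient open set {1, 2}) even though it is not open in X.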
Another example is that any metric space has a natural topology induced by its metric.
See also
Induced topology
References
(Recent edition published by Dover (2004) .)
Mathematical structures
Topology
|
https://en.wikipedia.org/wiki/Cartan%E2%80%93Hadamard%20theorem
|
In mathematics, the Cartan–Hadamard theorem is a statement in Riemannian geometry concerning the structure of complete Riemannian manifolds of non-positive sectional curvature. The theorem states that the universal cover of such a manifold is diffeomorphic to a Euclidean space via the exponential map at any point. It was first proved by Hans Carl Friedrich von Mangoldt for surfaces in 1881, and independently by Jacques Hadamard in 1898. Élie Cartan generalized the theorem to Riemannian manifolds in 1928 (; ; ). The theorem was further generalized to a wide class of metric spaces by Mikhail Gromov in 1987; detailed proofs were published by for metric spaces of non-positive curvature and by for general locally convex metric spaces.
Riemannian geometry
The Cartan–Hadamard theorem in conventional Riemannian geometry asserts that the universal covering space of a connected complete Riemannian manifold of non-positive sectional curvature is diffeomorphic to Rn. In fact, for complete manifolds of non-positive curvature, the exponential map based at any point of the manifold is a covering map.
The theorem holds also for Hilbert manifolds in the sense that the exponential map of a non-positively curved geodesically complete connected manifold is a covering map (; ). Completeness here is understood in the sense that the exponential map is defined on the whole tangent space of a point.
Metric geometry
In metric geometry, the Cartan–Hadamard theorem is the statement that the universal cover of a connected non-positively curved complete metric space X is a Hadamard space. In particular, if X is simply connected then it is a geodesic space in the sense that any two points are connected by a unique minimizing geodesic, and hence contractible.
A metric space X is said to be non-positively curved if every point p has a neighborhood U in which any two points are joined by a geodesic, and for any point z in U and constant speed geodesic γ in U, one has

d(z, γ(1/2))² ≤ ½ d(z, γ(0))² + ½ d(z, γ(1))² − ¼ d(γ(0), γ(1))².
This inequality may be usefully thought of in terms of a geodesic triangle Δ = zγ(0)γ(1). The left-hand side is the square distance from the vertex z to the midpoint of the opposite side. The right-hand side represents the square distance from the vertex to the midpoint of the opposite side in a Euclidean triangle having the same side lengths as Δ. This condition, called the CAT(0) condition is an abstract form of Toponogov's triangle comparison theorem.
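As a numerical sanity check (an editorial addition, not part of the source), the midpoint comparison d(z, m)² ≤ ½ d(z, p)² + ½ d(z, q)² − ¼ d(p, q)² holds with equality in Euclidean space, which the following Python sketch verifies:

```python
import math
import random

def cat0_gap(z, p, q):
    """Right-hand side minus left-hand side of the CAT(0) midpoint
    comparison, with m the Euclidean midpoint of p and q.  In flat
    space the gap is identically zero (the median length formula);
    non-positive curvature makes it non-negative."""
    m = tuple((pi + qi) / 2 for pi, qi in zip(p, q))
    lhs = math.dist(z, m) ** 2
    rhs = (0.5 * math.dist(z, p) ** 2 + 0.5 * math.dist(z, q) ** 2
           - 0.25 * math.dist(p, q) ** 2)
    return rhs - lhs

random.seed(0)
z, p, q = (tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(3))
print(abs(cat0_gap(z, p, q)) < 1e-12)   # True: equality in Euclidean space
```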
Generalization to locally convex spaces
The assumption of non-positive curvature can be weakened , although with a correspondingly weaker conclusion. Call a metric space X convex if, for any two constant speed minimizing geodesics a(t) and b(t), the function
is a convex function of t. A metric space is then locally convex if every point has a neighborhood that is convex in this sense. The Cartan–Hadamard theorem for locally convex spaces states:
If X is a locally convex complete connected metric space, then the universal cover of X is a conve
|
https://en.wikipedia.org/wiki/Telegraph%20process
|
In probability theory, the telegraph process is a memoryless continuous-time stochastic process that takes two distinct values. It models burst noise (also called popcorn noise or random telegraph signal). Writing the two possible values that the random variable can take as a and b, the process can be described by the following master equations:

∂P(a, t | x, t₀)/∂t = −λ₁ P(a, t | x, t₀) + λ₂ P(b, t | x, t₀)

and

∂P(b, t | x, t₀)/∂t = λ₁ P(a, t | x, t₀) − λ₂ P(b, t | x, t₀),

where λ₁ is the transition rate for going from state a to state b and λ₂ is the transition rate for going from state b to state a. The process is also known under the names Kac process (after the mathematician Mark Kac) and dichotomous random process.
Solution
The master equation is compactly written in a matrix form by introducing a vector ,
where
is the transition rate matrix. The formal solution is constructed from the initial condition (that defines that at , the state is ) by
.
It can be shown that
where is the identity matrix and is the average transition rate. As t → ∞, the solution approaches a stationary distribution given by
Properties
Knowledge of an initial state decays exponentially. Therefore, at large times, the process approaches the following stationary values, denoted by subscript s:
Mean:
Variance:
One can also calculate a correlation function:
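The stationary behaviour can be checked by simulation. In this editorial Python sketch (the concrete values and rates are illustrative), holding times in each state are exponential with the corresponding transition rate, and the long-run time average approaches the stationary mean (a·λ₂ + b·λ₁)/(λ₁ + λ₂), where λ₁ is the rate out of state a and λ₂ the rate out of state b.

```python
import random

def telegraph_time_average(a, b, lam1, lam2, t_max, rng):
    """Simulate one path of the telegraph process up to time t_max and
    return the time average of the state.  lam1 is the rate of jumps
    a -> b, lam2 the rate of jumps b -> a."""
    t, state, integral = 0.0, a, 0.0
    while t < t_max:
        rate = lam1 if state == a else lam2
        dwell = min(rng.expovariate(rate), t_max - t)  # exponential holding time
        integral += state * dwell
        t += dwell
        state = b if state == a else a                 # flip to the other state
    return integral / t_max

rng = random.Random(42)
a, b, lam1, lam2 = 1.0, -1.0, 2.0, 1.0
avg = telegraph_time_average(a, b, lam1, lam2, 50_000, rng)
print(avg)   # close to the stationary mean (a*lam2 + b*lam1)/(lam1 + lam2) = -1/3
```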
Application
This random process finds wide application in model building:
In physics, spin systems and fluorescence intermittency show dichotomous properties. In single-molecule experiments, however, probability distributions featuring algebraic tails are used instead of the exponential distribution implied in all the formulas above.
In finance for describing stock prices
In biology for describing transcription factor binding and unbinding.
See also
Markov chain
List of stochastic processes topics
Random telegraph signal
References
Stochastic differential equations
|
https://en.wikipedia.org/wiki/The%20New%20York%20Journal%20of%20Mathematics
|
The New York Journal of Mathematics is a peer-reviewed journal focusing on algebra, analysis, geometry and topology. Its editorial board consists of 17 university-affiliated scholars in addition to the editor-in-chief. Articles in the New York Journal of Mathematics are published entirely electronically (on the World Wide Web). The journal uses the diamond open access model; that is, its full content is available to anyone via the Internet, without a subscription or fee.
History
The journal was founded in 1994 by Mark Steinberger who cited a 1993 letter by John Franks as inspiration. At the time of its launch, the New York Journal of Mathematics was the "first electronic general mathematics journal", predating the online versions of both Zentralblatt MATH and the journals in Mathematical Reviews. It was published by the State University of New York at Albany where Steinberger had been a professor since 1987.
Steinberger justified the stylistic choices of the journal by writing, "Some proponents of electronic publication have urged changes in style, citing the low price of disk space as a rationale for publishing articles more loquacious than those commonly acceptable in a print medium. We decided to eschew this route, on the grounds that the perceived quality of our publications would be reduced. We feel it is important to follow the standards of consensus in the field. If these standards change in the future, we will change
with them."
When the New York Journal of Mathematics was first published, it was made available via FTP and Gopher for users without a web browser. The papers, typeset in TeX, were originally downloadable in the PostScript format. PDF support was added in 1996. To incorporate hyperlinks within documents, the journal leveraged software that had been developed for the arXiv preprint server.
In 1998, the journal began including links to relevant reviews on MathSciNet with its published articles. It is listed in the Journals section of The Electronic Library of Mathematics. Articles from 2010 and later are available on Web of Science.
A paper on the greater male variability hypothesis by Theodore Hill and Sergei Tabachnikov was accepted but not published by The Mathematical Intelligencer; a later version authored by Hill alone was accepted by The New York Journal of Mathematics and retracted after publication. There was some controversy over the mathematical model and over the retraction of a paper that had passed peer review. This paper was accepted and republished in 2020 by the Journal of Interdisciplinary Mathematics.
Reception
In 2017, the journal had a Mathematical Citation Quotient of 0.56.
In a professional conference presentation, Renzo Piccinini said "An example of what I consider a good electronic journal is the New York Journal of Mathematics; this is a refereed journal--with referees not in the editor's board—with high quality papers and very fast publication time; last, but not least, it is free!"
See also
|
https://en.wikipedia.org/wiki/The%20Mathematics%20Educator
|
The Mathematics Educator (TME) is peer-reviewed journal within the field of mathematics education. TME is produced by students, and it is published by the Mathematics Education Student Association (MESA) in the Department of Mathematics Education at the University of Georgia. MESA is an affiliate of the National Council of Teachers of Mathematics (NCTM).
The journal first appeared in 1990, and it has appeared one or two times a year since then. It welcomes different types of manuscripts, like research reports, commentaries, literature reviews, theoretical articles, and critiques.
See also
List of scientific journals in mathematics education
References/Endnotes
Mathematics education journals
English-language journals
Academic journals established in 1990
|
https://en.wikipedia.org/wiki/2%CF%80%20theorem
|
In mathematics, the 2π theorem of Gromov and Thurston states a sufficient condition for Dehn filling on a cusped hyperbolic 3-manifold to result in a negatively curved 3-manifold.
Let M be a cusped hyperbolic 3-manifold. Disjoint horoball neighborhoods of each cusp can be selected. The boundaries of these neighborhoods are quotients of horospheres and thus have Euclidean metrics. A slope, i.e. an unoriented isotopy class of simple closed curves on these boundaries, thus has a well-defined length, obtained by taking the minimal Euclidean length over all curves in the isotopy class. The theorem states: a Dehn filling of M with each filling slope greater than 2π results in a 3-manifold with a complete metric of negative sectional curvature. In fact, this metric can be selected to be identical to the original hyperbolic metric outside the horoball neighborhoods.
The basic idea of the proof is to explicitly construct a negatively curved metric inside each horoball neighborhood that matches the metric near the horospherical boundary. This construction, using cylindrical coordinates, works when the filling slope is greater than 2π. See the references for complete details.
According to the geometrization conjecture, these negatively curved 3-manifolds must actually admit a complete hyperbolic metric. A horoball packing argument due to Thurston shows that there are at most 48 slopes to avoid on each cusp to get a hyperbolic 3-manifold. For one-cusped hyperbolic 3-manifolds, an improvement due to Colin Adams gives 24 exceptional slopes.
This result was later improved independently by Ian Agol and Marc Lackenby with the 6 theorem. The "6 theorem" states that Dehn filling along slopes of length greater than 6 results in a hyperbolike 3-manifold, i.e. an irreducible, atoroidal, non-Seifert-fibered 3-manifold with infinite, word-hyperbolic fundamental group. Yet again assuming the geometrization conjecture, these manifolds have a complete hyperbolic metric. An argument of Agol's shows that there are at most 12 exceptional slopes.
References
.
.
.
3-manifolds
Theorems in geometry
|
https://en.wikipedia.org/wiki/Pincherle
|
Pincherle is a surname. Notable people with the surname include:
Salvatore Pincherle (1853–1936), Italian mathematician
Pincherle derivative, in mathematics
Marc Pincherle (1888–1974), French musicologist, music critic
Alberto Pincherle (1907–1990), Italian novelist, better known by his pen name Alberto Moravia
Italian-language surnames
Surnames of Sephardic origin
|
https://en.wikipedia.org/wiki/List%20of%20career%20achievements%20by%20Gary%20Gait
|
This page details statistics, records, and other achievements pertaining to Gary Gait.
Professional career statistics and achievements
National Lacrosse League
Source: NLL.com
Major League Lacrosse
Source: majorleaguelacrosse.com
National Lacrosse League Achievements
7-time regular season leader, total goals (1995–99, 2003, 2004)
2-time regular season leader, total assists (1991, 1997)
7-time regular season leader, total points (1991, 1995, 1997–2000, 2004)
Championship Game
played in 7 championship games (1 Detroit, 4 Philadelphia and 2 Baltimore)
does not hold any Championship Game records
Rank among NLL Championship Game leaders in other stats:
3rd, goals, career (21)
4th, assists, career (15)
Playoffs
Holds NLL Playoff Records for:
goals, career (65)
All-Star Game
selected 4 times
does not hold any All-Star Game records
Rank among NLL All-Star Game leaders in other stats:
2nd, goals, game (5)
Behind Mark Steenhuis (6)
2nd, points, game (8)
scored 5 goals and 3 assists for 8 points (1991 All-Star game)
Behind Paul Cantabene (10)
5th, goals, career (6)
Tied with Gavin Prout
Regular season
Holds NLL regular season records for:
MVP honors (6)
The only other player to win multiple MVP Awards is John Tavares (3)
consecutive MVP honors (5)
All-Pro Team honors (15)
consecutive All-Pro Team honors (15)
All-Pro First Team honors (14)
consecutive All-Pro First Team honors (14)
goals per game, career (3.425)
goals, game (10)
Set vs. the Toronto Rock on January 9, 1999
Shared with his brother, Paul
Rank among NLL regular season leaders in other stats:
2nd, goals, career (596)
3rd, assists, career (495)
7th, assists per game, career (2.845)
2nd, points, career (1,091)
3rd, points per game, career (6.270)
5th, loose balls, career (1076)
2nd, goals, season (61)
Also holds 3rd and 4th for this record
9th, assists, season (62)
3rd, points, season (112)
2nd, shots on goal, season (253)
Also holds 3rd place for this record
6th, loose balls, season (120)
3rd, points, game (14)
Philadelphia Wings franchise records
Holds Philadelphia Wings records for:
goals, 10-game season (43)
assists, 10-game season (32)
points, 10-game season (72)
points, 8-game season (48)
shots on goal, 10-game season (126)
shots on goal, 8-game season (132)
Rank among Philadelphia Wings leaders in other stats:
5th, goals, career (150)
7th, assists, career (102)
7th, points, career (252)
6th, goals, season (43)
Also holds 10th place for this record
Colorado Mammoth franchise records
Holds Colorado Mammoth records for:
games played, career (113)
goals, career (387)
assists, career (335)
points, career (722)
goals, season (61)
Also holds 2nd, 3rd, 5th, 6th, 7th and 10th place for this record
shots on goal, season (244)
Also holds 2nd, 4th, 6th and 7th place for this record
goals, game (10)*
goals, 16-game season (61)
goals, 14-game season (43)
goals, 12-game season (57)
assists, 14-game season (47)
points, 14-game season (90)
po
|
https://en.wikipedia.org/wiki/Mpack%2C%20Senegal
|
Mpack (also spelt Mpak) is a village in Niaguis Arrondissement, Ziguinchor Department, Ziguinchor Region in southern Senegal. Government statistics classified it as a rural community and recorded its population as 518 people in 72 households. It is located about seven kilometres from the regional capital of Ziguinchor. It is one of the endpoints of the 90-km long Oussouye-Kabrousse-Cap Skirring-Ziguinchor-Mpack road, which is being rebuilt with 17 billion CFA francs of funding from the European Union. The village used to be on the front lines of the Casamance Conflict between the Senegalese government and the Movement of Democratic Forces of Casamance.
The town contains the only border checkpoint between Senegal and Guinea-Bissau with an asphalt road; its counterpart on the Guinea-Bissau side is Sao Domingos. During the 1998 Guinea-Bissau Civil War, up to 100 refugees an hour passed through the checkpoint and the village as they fled the fighting. Later, as the Casamance Conflict intensified, the checkpoint was frequently closed, as MDFC members were believed to be taking refuge in Guinea-Bissau. The area was also heavily mined during the fighting; local NGOs made efforts to clear the mines in 2002 and 2003, rehabilitating over 100 houses in the village and its surrounding area, following which the Senegalese military declared the area safe; however, casualties due to exploding mines continued to occur in 2004. A camp was set up in the Bourgadié neighbourhood there in March 2006 to receive Senegalese refugees fleeing Guinea-Bissau after the October 2004 army mutiny left the country in disarray.
References
Populated places in the Ziguinchor Department
Guinea-Bissau–Senegal border crossings
|
https://en.wikipedia.org/wiki/Blaschke%20selection%20theorem
|
The Blaschke selection theorem is a result in topology and convex geometry about sequences of convex sets. Specifically, given a sequence of convex sets contained in a bounded set, the theorem guarantees the existence of a subsequence and a convex set such that converges to in the Hausdorff metric. The theorem is named for Wilhelm Blaschke.
Alternate statements
A succinct statement of the theorem is that the metric space of convex bodies is locally compact.
Using the Hausdorff metric on sets, every infinite collection of compact subsets of the unit ball has a limit point (and that limit point is itself a compact set).
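The Hausdorff metric in which this convergence takes place can be illustrated for finite point sets. This Python sketch is an editorial addition, not from the source; it computes the larger of the two directed distances.

```python
import math

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets: the maximum
    of the two directed distances max_{a in A} min_{b in B} |a - b|."""
    def directed(P, Q):
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

unit_square = [(0, 0), (1, 0), (0, 1), (1, 1)]
shifted = [(x + 0.1, y) for x, y in unit_square]
print(hausdorff(unit_square, shifted))   # ~0.1: the sets are translates by 0.1
```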
Application
As an example of its use, the isoperimetric problem can be shown to have a solution. That is, there exists a curve of fixed length that encloses the maximum area possible. Other problems likewise can be shown to have a solution:
Lebesgue's universal covering problem for a convex universal cover of minimal size for the collection of all sets in the plane of unit diameter,
the maximum inclusion problem,
and the Moser's worm problem for a convex universal cover of minimal size for the collection of planar curves of unit length.
Notes
References
Geometric topology
Compactness theorems
ru:Теорема выбора Бляшке
|
https://en.wikipedia.org/wiki/Mathematics%20Presents%20Wu-Tang%20Clan%20%26%20Friends%20Unreleased
|
Mathematics Presents - Wu-Tang Clan & Friends Unreleased is a compilation album produced by rap producer Mathematics. It was released on February 6, 2007 on the label Nature Sounds. It contains unreleased songs by Wu-Tang Clan and their affiliates.
Later, in 2010, Mathematics produced Return of the Wu and Friends.
Track listing
Reception
Impose described the collection as "a 20-tuned compilation of remixes, B-sides and obscure Wu material", with a "continuous swinging baseline", a "soulful musical setting which provide a fine contrast to the rappers lyrical grit", and "sounds that stem from the comforting to the intimidating". Nevertheless, the magazine also described the collection as "appeal[ing] more to the Wu-head than the casual Wu-Tang fan".
Complex called one song from the collection, Maxine (Remix), "told over a heavy-duty 1970s experience" but "smooth[ed ]out considerably with an easy-like-Sunday-morning groove".
HipHopDX praised the compilation as a "testament to the knob twisting skills of Mathematics", but also stated that "the Clan's core members don’t make enough unheard appearances on the disc".
Notes
References
Hip hop compilation albums
Albums produced by Mathematics
2007 compilation albums
Nature Sounds compilation albums
|
https://en.wikipedia.org/wiki/Kolmogorov%27s%20theorem
|
Kolmogorov's theorem is any of several different results by Andrey Kolmogorov:
In statistics
Kolmogorov–Smirnov test
In probability theory
Hahn–Kolmogorov theorem
Kolmogorov extension theorem
Kolmogorov continuity theorem
Kolmogorov's three-series theorem
Kolmogorov's zero–one law
Chapman–Kolmogorov equations
Kolmogorov inequalities
Kolmogorov's inequality
Kolmogorov's inequality for positive submartingales
In functional analysis
Landau–Kolmogorov inequality
Fréchet–Kolmogorov theorem
|
https://en.wikipedia.org/wiki/Fisher%27s%20noncentral%20hypergeometric%20distribution
|
In probability theory and statistics, Fisher's noncentral hypergeometric distribution is a generalization of the hypergeometric distribution where sampling probabilities are modified by weight factors. It can also be defined as the conditional distribution of two or more binomially distributed variables dependent upon their fixed sum.
The distribution may be illustrated by the following urn model. Assume, for example, that an urn contains m1 red balls and m2 white balls, totalling N = m1 + m2 balls. Each red ball has the weight ω1 and each white ball has the weight ω2. We will say that the odds ratio is ω = ω1 / ω2. Now we are taking balls randomly in such a way that the probability of taking a particular ball is proportional to its weight, but independent of what happens to the other balls. The number of balls taken of a particular color follows the binomial distribution. If the total number n of balls taken is known then the conditional distribution of the number of taken red balls for given n is Fisher's noncentral hypergeometric distribution. To generate this distribution experimentally, we have to repeat the experiment until it happens to give n balls.
If we want to fix the value of n prior to the experiment then we have to take the balls one by one until we have n balls. The balls are therefore no longer independent. This gives a slightly different distribution known as Wallenius' noncentral hypergeometric distribution. It is far from obvious why these two distributions are different. See the entry for noncentral hypergeometric distributions for an explanation of the difference between these two distributions and a discussion of which distribution to use in various situations.
The two distributions are both equal to the (central) hypergeometric distribution when the odds ratio is 1.
Unfortunately, both distributions are known in the literature as "the" noncentral hypergeometric distribution. It is important to be specific about which distribution is meant when using this name.
Fisher's noncentral hypergeometric distribution was first given the name extended hypergeometric distribution (Harkness, 1965), and some authors still use this name today.
Univariate distribution
The probability function, mean and variance are given in the adjacent table.
An alternative expression of the distribution has both the number of balls taken of each color and the number of balls not taken as random variables, whereby the expression for the probability becomes symmetric.
The calculation time for the probability function can be high when the sum in P0 has many terms. The calculation time can be reduced by calculating the terms in the sum recursively relative to the term for y = x and ignoring negligible terms in the tails (Liao and Rosen, 2001).
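A direct Python sketch of the univariate probability function (the normalizing constant is just the sum of the unnormalized weights over the support; the function name is an editorial choice):

```python
from math import comb

def fisher_nchg_pmf(x, m1, m2, n, omega):
    """P(X = x) for Fisher's noncentral hypergeometric distribution:
    proportional to C(m1, x) * C(m2, n - x) * omega**x on the support
    max(0, n - m2) <= x <= min(n, m1)."""
    lo, hi = max(0, n - m2), min(n, m1)
    weights = {y: comb(m1, y) * comb(m2, n - y) * omega ** y
               for y in range(lo, hi + 1)}
    return weights[x] / sum(weights.values())

# With omega = 1 this reduces to the central hypergeometric pmf.
m1, m2, n = 6, 4, 5
central = comb(m1, 3) * comb(m2, 2) / comb(m1 + m2, n)
assert abs(fisher_nchg_pmf(3, m1, m2, n, 1.0) - central) < 1e-12
print(fisher_nchg_pmf(3, m1, m2, n, 2.0))
```

Direct summation is fine for small supports; as noted above, recursive evaluation of the terms is preferable when the sum has many terms.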
The mean can be approximated by:
,
where , , .
The variance can be approximated by:
.
Better approximations to the mean and variance are given by Levin (1984, 1990), McCullagh and Nelder (1989), Liao (19
|
https://en.wikipedia.org/wiki/Caristi%20fixed-point%20theorem
|
In mathematics, the Caristi fixed-point theorem (also known as the Caristi–Kirk fixed-point theorem) generalizes the Banach fixed-point theorem for maps of a complete metric space into itself. Caristi's fixed-point theorem modifies the ε-variational principle of Ekeland (1974, 1979). The conclusion of Caristi's theorem is equivalent to metric completeness, as proved by Weston (1977).
The original result is due to the mathematicians James Caristi and William Arthur Kirk.
Caristi fixed-point theorem can be applied to derive other classical fixed-point results, and also to prove the existence of bounded solutions of a functional equation.
Statement of the theorem
Let (X, d) be a complete metric space, let T : X → X, and let f : X → [0, ∞) be a lower semicontinuous function from X into the non-negative real numbers. Suppose that, for all points x in X,

d(x, T(x)) ≤ f(x) − f(T(x)).

Then T has a fixed point in X; that is, a point x₀ such that T(x₀) = x₀. The proof of this result utilizes Zorn's lemma to guarantee the existence of a minimal element which turns out to be a desired fixed point.
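As a worked check (an editorial addition, with the auxiliary function f chosen for illustration), a Banach contraction T with d(T(x), T(y)) ≤ q d(x, y) for some 0 ≤ q < 1 satisfies Caristi's hypothesis with f(x) = d(x, T(x))/(1 − q):

```latex
f(x) - f(T(x))
  = \frac{d(x, T(x)) - d(T(x), T^{2}(x))}{1 - q}
  \ge \frac{d(x, T(x)) - q\, d(x, T(x))}{1 - q}
  = d(x, T(x)),
```

so Caristi's theorem recovers the existence part of the Banach fixed-point theorem.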
References
Fixed-point theorems
Metric geometry
Theorems in real analysis
|
https://en.wikipedia.org/wiki/William%20Arthur%20Kirk
|
William Arthur ("Art") Kirk was an American mathematician. His research interests included nonlinear functional analysis, the geometry of Banach spaces and metric spaces. In particular, he made notable contributions to the fixed point theory of metric spaces; for example, he is one of the two namesakes of the Caristi-Kirk fixed point theorem of 1976. He is also known for the Kirk theorem of 1964.
He completed his PhD, entitled "Metrization of Surface Curvature", at the University of Missouri in August 1962 under the supervision of Leonard Blumenthal. He was then an assistant professor of mathematics at the University of California, Riverside from 1962 to 1967. From 1967 he worked at the University of Iowa, as a full professor of mathematics from 1971 and as department chair from 1985 to 1991.
He holds an honorary doctorate from Maria Curie-Skłodowska University, an institution which was an early centre of study for the fixed point theory of metric spaces.
External links
William A. Kirk's personal webpage at the University of Iowa
Honorary doctorate from Maria Curie-Skłodowska University:
Kazimierz Goebel's introduction
William A. Kirk's acceptance speech
20th-century American mathematicians
21st-century American mathematicians
University of Iowa faculty
University of Missouri alumni
Living people
Year of birth missing (living people)
|
https://en.wikipedia.org/wiki/Graph%20partition
|
In mathematics, a graph partition is the reduction of a graph to a smaller graph by partitioning its set of nodes into mutually exclusive groups. Edges of the original graph that cross between the groups will produce edges in the partitioned graph. If the number of resulting edges is small compared to the original graph, then the partitioned graph may be better suited for analysis and problem-solving than the original. Finding a partition that simplifies graph analysis is a hard problem, but one that has applications to scientific computing, VLSI circuit design, and task scheduling in multiprocessor computers, among others. Recently, the graph partition problem has gained importance due to its applications to clustering and the detection of cliques in social, pathological and biological networks; recent surveys cover trends in computational methods and applications.
Two common examples of graph partitioning are minimum cut and maximum cut problems.
Problem complexity
Typically, graph partition problems fall under the category of NP-hard problems. Solutions to these problems are generally derived using heuristics and approximation algorithms. However, uniform graph partitioning or a balanced graph partition problem can be shown to be NP-complete to approximate within any finite factor. Even for special graph classes such as trees and grids, no reasonable approximation algorithms exist, unless P=NP. Grids are a particularly interesting case since they model the graphs resulting from Finite Element Model (FEM) simulations. When not only the number of edges between the components is approximated, but also the sizes of the components, it can be shown that no reasonable fully polynomial algorithms exist for these graphs.
Problem
Consider a graph G = (V, E), where V denotes the set of n vertices and E the set of edges. For a (k,v) balanced partition problem, the objective is to partition G into k components of at most size v · (n/k), while minimizing the capacity of the edges between separate components. Also, given G and an integer k > 1, partition V into k parts (subsets) V1, V2, ..., Vk such that the parts are disjoint and have equal size, and the number of edges with endpoints in different parts is minimized. Such partition problems have been discussed in the literature as bicriteria-approximation or resource augmentation approaches. A common extension is to hypergraphs, where an edge can connect more than two vertices. A hyperedge is not cut if all vertices are in one partition, and cut exactly once otherwise, no matter how many vertices are on each side. This usage is common in electronic design automation.
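The two quantities being traded off, the cut size and the balance constraint, are easy to state in code. A small Python sketch (the 4-cycle instance is an assumed example):

```python
from collections import Counter

def cut_size(edges, parts):
    """Number of edges whose endpoints lie in different parts;
    `parts` maps each vertex to its part index."""
    return sum(1 for u, v in edges if parts[u] != parts[v])

def is_balanced(parts, k, n, v=1.0):
    """Check the (k, v) balance constraint: each part holds at most v*(n/k) vertices."""
    sizes = Counter(parts.values())
    return len(sizes) <= k and all(s <= v * n / k for s in sizes.values())

# A 4-cycle split into two opposite pairs: perfectly balanced, but every edge is cut.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
parts = {0: 0, 2: 0, 1: 1, 3: 1}
print(cut_size(edges, parts), is_balanced(parts, k=2, n=4))   # → 4 True
```

Splitting the same cycle into the pairs {0,1} and {2,3} instead would cut only 2 edges, which is what a partitioning heuristic searches for.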
Analysis
For a specific (k, 1 + ε) balanced partition problem, we seek to find a minimum cost partition of G into k components with each component containing a maximum of (1 + ε)·(n/k) nodes. We compare the cost of this approximation algorithm to the cost of a (k,1) cut, wherein each of the k components must have the same size of (n/k) nodes
|
https://en.wikipedia.org/wiki/Interclass%20correlation
|
In statistics, the interclass correlation (or interclass correlation coefficient) is a measure of a relation between two variables of different classes (types), such as the weights of 10-year-old sons and of their 40-year-old fathers. Deviations of a variable are measured from the mean of the data for that class – a son's weight minus the mean of all the sons' weights, or a father's weight minus the mean of all the fathers' weights.
The Pearson correlation coefficient is the most commonly used measure of interclass correlation.
The interclass correlation contrasts with the intraclass correlation between variables of the same class, such as the weights of women and of their identical twins; here deviations are measured from the mean of all members of the single class, in this example of all women in the set of identical twins.
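The Pearson coefficient is computed directly from the two classes' deviations from their own means, as described above. A Python sketch (the weight data is an assumed sample, not from the source):

```python
def interclass_r(xs, ys):
    """Pearson correlation; each deviation is taken from that class's own mean."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

fathers = [70, 75, 80, 85, 90]   # fathers' weights (kg), an assumed sample
sons    = [30, 33, 34, 38, 40]   # their sons' weights (kg)
print(round(interclass_r(fathers, sons), 3))   # → 0.988
```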
References
Covariance and correlation
Inter-rater reliability
|
https://en.wikipedia.org/wiki/Control%20plane
|
In network routing, the control plane is the part of the router architecture that is concerned with drawing the network topology, or the information in a routing table that defines what to do with incoming packets. Control plane functions, such as participating in routing protocols, run in the architectural control element. In most cases, the routing table contains a list of destination addresses and the outgoing interface(s) associated with each. Control plane logic also can identify certain packets to be discarded, as well as preferential treatment of certain packets for which a high quality of service is defined by such mechanisms as differentiated services.
Depending on the specific router implementation, there may be a separate forwarding information base that is populated by the control plane, but used by the high-speed forwarding plane to look up packets and decide how to handle them.
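Once the control plane has populated such a table, looking up a packet amounts to selecting the most specific (longest) matching prefix. A Python sketch of that selection rule; the toy table and interface names are assumed examples:

```python
import ipaddress

# A toy routing table populated by the control plane: prefix -> outgoing interface.
table = {
    ipaddress.ip_network("0.0.0.0/0"):   "eth0",   # default route
    ipaddress.ip_network("10.0.0.0/8"):  "eth1",
    ipaddress.ip_network("10.1.0.0/16"): "eth2",
}

def lookup(dst):
    """Select the most specific (longest-prefix) route matching dst."""
    dst = ipaddress.ip_address(dst)
    matches = [net for net in table if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return table[best]

print(lookup("10.1.2.3"))   # → eth2 (most specific match)
print(lookup("10.2.0.1"))   # → eth1
print(lookup("8.8.8.8"))    # → eth0 (default route)
```

Real forwarding planes implement this with specialized data structures (tries, TCAMs) rather than a linear scan, but the selection semantics are the same.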
In computing, the control plane is the part of the software that configures and shuts down the data plane. By contrast, the data plane is the part of the software that processes the data requests. The data plane is also sometimes referred to as the forwarding plane.
The distinction has proven useful in the networking field where it originated, as it separates the concerns: the data plane is optimized for speed of processing, and for simplicity and regularity. The control plane is optimized for customizability, handling policies, handling exceptional situations, and in general facilitating and simplifying the data plane processing.
The conceptual separation of the data plane from the control plane has been done for years. An early example is Unix, where the basic file operations are open and close for the control plane, and read and write for the data plane.
Building the unicast routing table
A major function of the control plane is deciding which routes go into the main routing table. "Main" refers to the table that holds the unicast routes that are active. Multicast routing may require an additional routing table for multicast routes. Several routing protocols, e.g. IS-IS, OSPF and BGP, maintain internal databases of candidate routes, which are promoted when a route fails or when a routing policy is changed.
Several different information sources may provide information about a route to a given destination, but the router must select the "best" route to install into the routing table. In some cases, there may be multiple routes of equal "quality", and the router may install all of them and load-share across them.
Sources of routing information
There are three general sources of routing information:
Information on the status of directly connected hardware and software-defined interfaces
Manually configured static routes
Information from (dynamic) routing protocols
Local interface information
Routers forward traffic that enters on an input interface and leaves on an output interface, subject to filtering and other local rules. While routers usually forward fro
|
https://en.wikipedia.org/wiki/Finitely%20generated%20algebra
|
In mathematics, a finitely generated algebra (also called an algebra of finite type) is a commutative associative algebra A over a field K where there exists a finite set of elements a1,...,an of A such that every element of A can be expressed as a polynomial in a1,...,an, with coefficients in K.
Equivalently, there exist elements a1,...,an ∈ A such that the evaluation homomorphism at a = (a1,...,an),

φa : K[x1,...,xn] → A, p ↦ p(a1,...,an),

is surjective; thus, by applying the first isomorphism theorem, A ≅ K[x1,...,xn]/ker(φa).
Conversely, for any ideal I ⊆ K[x1,...,xn], the quotient K[x1,...,xn]/I is a K-algebra of finite type; indeed, any element of K[x1,...,xn]/I is a polynomial in the cosets ai = xi + I with coefficients in K. Therefore, we obtain the following characterisation of finitely generated K-algebras:
A is a finitely generated K-algebra if and only if it is isomorphic to a quotient ring of the type K[x1,...,xn]/I by an ideal I ⊆ K[x1,...,xn].
If it is necessary to emphasize the field K then the algebra is said to be finitely generated over K . Algebras that are not finitely generated are called infinitely generated.
Examples
The polynomial algebra K[x1,...,xn ] is finitely generated. The polynomial algebra in countably infinitely many generators is infinitely generated.
The field E = K(t) of rational functions in one variable over an infinite field K is not a finitely generated algebra over K. On the other hand, E is generated over K by a single element, t, as a field.
If E/F is a finite field extension then it follows from the definitions that E is a finitely generated algebra over F.
Conversely, if E/F is a field extension and E is a finitely generated algebra over F then the field extension is finite. This is called Zariski's lemma. See also integral extension.
If G is a finitely generated group then the group algebra KG is a finitely generated algebra over K.
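As an illustration of the quotient-ring characterisation, ℝ[x]/(x² + 1) ≅ ℂ is an ℝ-algebra generated by the single coset of x. A small Python sketch; the pair encoding (a, b) for a + b·x is an assumed representation chosen for illustration:

```python
# Represent elements of R[x]/(x^2 + 1) by pairs (a, b), standing for a + b*x.
def mul(p, q):
    a, b = p
    c, d = q
    # (a + b x)(c + d x) = ac + (ad + bc) x + bd x^2, with x^2 = -1 in the quotient
    return (a * c - b * d, a * d + b * c)

x = (0, 1)                      # the coset of x generates the whole algebra
print(mul(x, x))                # → (-1, 0), i.e. x^2 = -1
print(mul((1, 2), (3, 4)))      # → (-5, 10), which is complex multiplication
```

Every element is a polynomial in the single generator x with real coefficients, so the algebra is finitely (indeed singly) generated.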
Properties
A homomorphic image of a finitely generated algebra is itself finitely generated. However, a similar property for subalgebras does not hold in general.
Hilbert's basis theorem: if A is a finitely generated commutative algebra over a Noetherian ring then every ideal of A is finitely generated, or equivalently, A is a Noetherian ring.
Relation with affine varieties
Finitely generated reduced commutative algebras are basic objects of consideration in modern algebraic geometry, where they correspond to affine algebraic varieties; for this reason, these algebras are also referred to as (commutative) affine algebras. More precisely, given an affine algebraic set V we can associate a finitely generated K-algebra

Γ(V) := K[x1,...,xn]/I(V),

called the affine coordinate ring of V; moreover, if φ : V → W is a regular map between the affine algebraic sets V and W, we can define a homomorphism of K-algebras

Γ(φ) ≡ φ* : Γ(W) → Γ(V), φ*(f) = f ∘ φ;

then, Γ is a contravariant functor from the category of affine algebraic sets with regular maps to the category of reduced finitely generated K-algebras: this functor turns out to be an equivalence of categories, and, restricting to affine varieties (i.e. irreducible affine algebraic sets), it restricts to an equivalence with the category of integral finitely generated K-algebras.
Finite algebras vs algebras of finite type
We recall that a commutative R-algebra is a ring h
|
https://en.wikipedia.org/wiki/Affine%20algebra
|
Affine algebra may refer to:
Affine Lie algebra, a type of Kac–Moody algebra
The Lie algebra of the affine group
Finitely-generated algebra
Affine Hecke algebra
|
https://en.wikipedia.org/wiki/Opial%20property
|
In mathematics, the Opial property is an abstract property of Banach spaces that plays an important role in the study of weak convergence of iterates of mappings of Banach spaces, and of the asymptotic behaviour of nonlinear semigroups. The property is named after the Polish mathematician Zdzisław Opial.
Definitions
Let (X, || ||) be a Banach space. X is said to have the Opial property if, whenever (xn)n∈N is a sequence in X converging weakly to some x0 ∈ X and x ≠ x0, it follows that

lim infn→∞ ||xn − x0|| < lim infn→∞ ||xn − x||.

Alternatively, using the contrapositive, this condition may be written as

lim infn→∞ ||xn − x|| ≤ lim infn→∞ ||xn − x0||  implies  x = x0.

If X is the continuous dual space of some other Banach space Y, then X is said to have the weak-∗ Opial property if, whenever (xn)n∈N is a sequence in X converging weakly-∗ to some x0 ∈ X and x ≠ x0, it follows that

lim infn→∞ ||xn − x0|| < lim infn→∞ ||xn − x||,

or, as above,

lim infn→∞ ||xn − x|| ≤ lim infn→∞ ||xn − x0||  implies  x = x0.

A (dual) Banach space X is said to have the uniform (weak-∗) Opial property if, for every c > 0, there exists an r > 0 such that

lim infn→∞ ||x + xn|| ≥ 1 + r

for every x ∈ X with ||x|| ≥ c and every sequence (xn)n∈N in X converging weakly (weakly-∗) to 0 and with

lim infn→∞ ||xn|| ≥ 1.
Examples
Opial's theorem (1967): Every Hilbert space has the Opial property.
The sequence spaces ℓp, 1 ≤ p < ∞, have the Opial property.
Van Dulst theorem (1982): for every separable Banach space there is an equivalent norm that endows it with the Opial property.
For uniformly convex Banach spaces, Opial property holds if and only if Delta-convergence coincides with weak convergence.
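In a Hilbert space the Opial property is visible directly: if xn converges weakly to 0, then ||xn − x||² = ||xn||² − 2⟨xn, x⟩ + ||x||², and the inner-product term vanishes in the limit. A numeric sketch in ℓ2 with the standard basis sequence en, which converges weakly to 0; the particular vector x is an assumed example:

```python
import math

# In l2, the standard basis vectors e_n converge weakly to 0. For a fixed
# finitely supported x, ||e_n - x||^2 = 1 - 2*x[n] + ||x||^2, which tends to
# 1 + ||x||^2 > 1 = lim ||e_n - 0||: the Opial inequality.
x = [0.5, -0.25, 0.75]                     # an assumed sample vector
x_norm2 = sum(t * t for t in x)

def dist_en(v, n):
    """||e_n - v|| where e_n is the n-th standard basis vector of l2."""
    vn = v[n] if n < len(v) else 0.0
    return math.sqrt(sum(t * t for t in v) - 2 * vn + 1)

n = 10**4                                  # far beyond the support of x
liminf_to_0 = dist_en([0.0], n)            # = 1
liminf_to_x = dist_en(x, n)                # = sqrt(1 + ||x||^2)
print(liminf_to_0 < liminf_to_x)           # → True
```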
References
Banach spaces
|
https://en.wikipedia.org/wiki/Scott%20information%20system
|
In domain theory, a branch of mathematics and computer science, a Scott information system is a primitive kind of logical deductive system often used as an alternative way of presenting Scott domains.
Definition
A Scott information system, A, is an ordered triple (T, Con, ⊢), where T is a set of tokens, Con ⊆ Fin(T) is a family of finite "consistent" subsets of T, and ⊢ ⊆ (Con ∖ {∅}) × T is an entailment relation,
satisfying
if a ∈ X ∈ Con, then X ⊢ a;
if X ⊢ Y and Y ⊢ a, then X ⊢ a;
if X ⊢ a, then X ∪ {a} ∈ Con;
for all a ∈ T, {a} ∈ Con;
if X ∈ Con and X′ ⊆ X, then X′ ∈ Con.
Here X ⊢ Y means X ⊢ a for every a ∈ Y.
Examples
Natural numbers
The return value of a partial recursive function, which either returns a natural number or goes into an infinite recursion, can be expressed as a simple Scott information system as follows: take T = ℕ, Con = {∅} ∪ {{n} : n ∈ ℕ}, and X ⊢ a iff a ∈ X.
That is, the result can either be a natural number, represented by the singleton set {n}, or "infinite recursion," represented by ∅.
Of course, the same construction can be carried out with any other set instead of ℕ.
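This natural-numbers system is small enough to model directly. A Python sketch (the frozenset encoding is an assumed representation):

```python
# Tokens are natural numbers; a set of tokens is consistent when it does not
# assert two different return values, i.e. Con = {{}} ∪ {{n} : n in N}.
def consistent(X):
    return len(X) <= 1 and all(isinstance(a, int) and a >= 0 for a in X)

def entails(X, a):
    # X |- a iff X is consistent, nonempty, and already contains a
    return consistent(X) and a in X

print(consistent(frozenset()), consistent(frozenset({3})))   # → True True
print(consistent(frozenset({1, 2})))                         # → False
print(entails(frozenset({5}), 5))                            # → True
```

Two distinct results conflict, so {1, 2} is inconsistent; the empty set is the "infinite recursion" point.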
Propositional calculus
The propositional calculus gives us a very simple Scott information system as follows: the tokens are the satisfiable propositional assertions, a finite set of tokens is consistent when it is jointly satisfiable, and X ⊢ a when X semantically entails a.
Scott domains
Let D be a Scott domain. Then we may define an information system as follows:
T = the set of compact elements of D;
Con = the finite subsets of T that have an upper bound in D;
X ⊢ d iff d ⊑ ⨆X.
Let I be the mapping that takes us from a Scott domain, D, to the information system defined above.
Information systems and Scott domains
Given an information system, A = (T, Con, ⊢), we can build a Scott domain as follows.
Definition: x ⊆ T is a point if and only if every finite subset of x is in Con, and whenever X ⊆ x and X ⊢ a, then a ∈ x.
Let D(A) denote the set of points of A with the subset ordering. D(A) will be a countably based Scott domain when T is countable. In general, for any Scott domain D and information system A,

D ≅ D(I(D)) and A ≈ I(D(A)),

where the second congruence is given by approximable mappings.
See also
Scott domain
Domain theory
References
Glynn Winskel: "The Formal Semantics of Programming Languages: An Introduction", MIT Press, 1993 (chapter 12)
Models of computation
Domain theory
|
https://en.wikipedia.org/wiki/Star-shaped%20polygon
|
In geometry, a star-shaped polygon is a polygonal region in the plane that is a star domain, that is, a polygon that contains a point from which the entire polygon boundary is visible.
Formally, a polygon P is star-shaped if there exists a point z such that for each point p of P the segment zp lies entirely within P. The set of all points z with this property (that is, the set of points from which all of P is visible) is called the kernel of P.
If a star-shaped polygon is convex, the link distance between any two of its points (the minimum number of sequential line segments sufficient to connect those points) is 1, and so the polygon's link diameter (the maximum link distance over all pairs of points) is 1. If a star-shaped polygon is not convex, the link distance between a point in the kernel and any other point in the polygon is 1, while the link distance between any two points that are in the polygon but outside the kernel is either 1 or 2; in this case the maximum link distance is 2.
Examples
Convex polygons are star shaped, and a convex polygon coincides with its own kernel.
Regular star polygons are star-shaped, with their center always in the kernel.
Antiparallelograms and self-intersecting Lemoine hexagons are star-shaped, with the kernel consisting of a single point.
Visibility polygons are star-shaped as every point within them must be visible to the center by definition.
Algorithms
Testing whether a polygon is star-shaped, and finding a single point in the kernel, may be solved in linear time by formulating the problem as a linear program and applying techniques for low-dimensional linear programming (see http://www.inf.ethz.ch/personal/emo/PublFiles/SubexLinProg_ALG16_96.pdf, page 16).
Each edge of a polygon defines an interior half-plane, the half-plane whose boundary lies on the line containing the edge and that contains the points of the polygon in a neighborhood of any interior point of the edge. The kernel of a polygon is the intersection of all its interior half-planes. The intersection of an arbitrary set of N half-planes may be found in Θ(N log N) time using the divide and conquer approach. However, for the case of kernels of polygons, a faster method is possible: presented an algorithm to construct the kernel in linear time.
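The half-plane characterization gives a simple membership test: a point lies in the kernel exactly when it is on the interior side of every edge line. A Python sketch for a polygon given in counter-clockwise order (the sample polygon is an assumed example):

```python
def in_kernel(z, poly):
    """True if z lies in the kernel of the simple polygon `poly`
    (vertices in counter-clockwise order): z must lie in the interior
    half-plane of every edge."""
    n = len(poly)
    for i in range(n):
        (px, py), (qx, qy) = poly[i], poly[(i + 1) % n]
        # cross < 0 means z is strictly to the right of the directed edge p->q
        if (qx - px) * (z[1] - py) - (qy - py) * (z[0] - px) < 0:
            return False
    return True

# A non-convex "arrowhead" with a notch at (2, 1).
poly = [(0, 0), (4, 0), (4, 4), (2, 1), (0, 4)]
print(in_kernel((2, 0.5), poly))   # → True: the whole boundary is visible
print(in_kernel((0.5, 3), poly))   # → False: inside the polygon, outside the kernel
```

Testing many candidate points this way is quadratic overall; the linear-time kernel algorithm mentioned above avoids the explicit half-plane intersection.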
See also
Monotone polygon
References
Types of polygons
Geometric algorithms
|
https://en.wikipedia.org/wiki/Loewner%27s%20torus%20inequality
|
In differential geometry, Loewner's torus inequality is an inequality due to Charles Loewner. It relates the systole and the area of an arbitrary Riemannian metric on the 2-torus.
Statement
In 1949 Charles Loewner proved that every metric on the 2-torus satisfies the optimal inequality

sys² ≤ (2/√3) · area(g),

where "sys" is its systole, i.e. the least length of a noncontractible loop. The constant appearing on the right hand side is the Hermite constant γ2 in dimension 2, so that Loewner's torus inequality can be rewritten as

sys² ≤ γ2 · area(g).
The inequality was first mentioned in the literature in .
Case of equality
The boundary case of equality is attained if and only if the metric is flat and homothetic to the so-called equilateral torus, i.e. the torus whose group of deck transformations is precisely the hexagonal lattice spanned by the cube roots of unity in ℂ.
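The equality case can be checked numerically: for the flat torus ℂ/Λ, with Λ the hexagonal lattice (generated, e.g., by 1 and e^{iπ/3}, which span the same lattice as the cube roots of unity), the systole is the length of the shortest nonzero lattice vector, and sys²/area equals 2/√3 exactly. A Python sketch:

```python
import math
from itertools import product

# Hexagonal (Eisenstein) lattice, generated by 1 and exp(i*pi/3) in C ~ R^2.
b1 = (1.0, 0.0)
b2 = (0.5, math.sqrt(3) / 2)

area = abs(b1[0] * b2[1] - b1[1] * b2[0])   # area of a fundamental domain

# Systole of the flat torus C/Lambda = length of the shortest nonzero lattice vector.
systole = min(
    math.hypot(m * b1[0] + n * b2[0], m * b1[1] + n * b2[1])
    for m, n in product(range(-3, 4), repeat=2)
    if (m, n) != (0, 0)
)

print(round(systole**2 / area, 9), round(2 / math.sqrt(3), 9))  # equal values
```

For the square lattice the same computation gives sys²/area = 1 < 2/√3, consistent with the hexagonal torus being the extremal case.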
Alternative formulation
Given a doubly periodic metric on ℝ² (e.g. an imbedding in ℝ³ which is invariant by a ℤ² isometric action), there is a nonzero element g ∈ ℤ² and a point p ∈ ℝ² such that dist(p, g·p)² ≤ (2/√3) area(F), where F is a fundamental domain for the action, while dist is the Riemannian distance, namely the least length of a path joining p and g·p.
Proof of Loewner's torus inequality
Loewner's torus inequality can be proved most easily by using the computational formula for the variance,

var(X) = E(X²) − (E(X))².

Namely, the formula is applied to the probability measure defined by the measure of the unit area flat torus in the conformal class of the given torus. For the random variable X, one takes the conformal factor of the given metric with respect to the flat one. Then the expected value E(X 2) of X 2 expresses the total area of the given metric. Meanwhile, the expected value E(X) of X can be related to the systole by using Fubini's theorem. The variance of X can then be thought of as the isosystolic defect, analogous to the isoperimetric defect of Bonnesen's inequality. This approach therefore produces the following version of Loewner's torus inequality with isosystolic defect:

area − (√3/2) · sys² ≥ var(f),

where f is the conformal factor of the metric with respect to a unit area flat metric in its conformal class.
Higher genus
Whether or not the inequality

sys² ≤ (2/√3) · area(g)

is satisfied by all surfaces of nonpositive Euler characteristic is unknown. For orientable surfaces of genus 2 and of genus 20 and above, the answer is affirmative; see the work by Katz and Sabourau below.
See also
Pu's inequality for the real projective plane
Gromov's systolic inequality for essential manifolds
Gromov's inequality for complex projective space
Eisenstein integer (an example of a hexagonal lattice)
Systoles of surfaces
References
Riemannian geometry
Differential geometry
Geometric inequalities
Differential geometry of surfaces
Systolic geometry
|
https://en.wikipedia.org/wiki/Visibility%20polygon
|
In computational geometry, the visibility polygon or visibility region for a point in the plane among obstacles is the possibly unbounded polygonal region of all points of the plane visible from . The visibility polygon can also be defined for visibility from a segment, or a polygon. Visibility polygons are useful in robotics, video games, and in various optimization problems such as the facility location problem and the art gallery problem.
If the visibility polygon is bounded then it is a star-shaped polygon. A visibility polygon is bounded if all rays shooting from the point eventually terminate in some obstacle. This is the case, e.g., if the obstacles are the edges of a simple polygon and is inside the polygon. In the latter case the visibility polygon may be found in linear time.
Definitions
Formally, we can define the planar visibility polygon problem as such. Let S be a set of obstacles (either segments, or polygons) in ℝ². Let p be a point in ℝ² that is not within an obstacle. Then, the point visibility polygon V is the set of points in ℝ² such that for every point q in V, the segment pq does not intersect any obstacle in S.
Likewise, the segment visibility polygon or edge visibility polygon is the portion visible to any point along a line segment.
Applications
Visibility polygons are useful in robotics. For example, in robot localization, a robot using sensors such as a lidar will detect obstacles that it can see, which is similar to a visibility polygon.
They are also useful in video games, with numerous online tutorials explaining simple algorithms for implementing it.
Algorithms for point visibility polygons
Numerous algorithms have been proposed for computing the point visibility polygon. For different variants of the problem (e.g. different types of obstacles), algorithms vary in time complexity.
Naive algorithms
Naive algorithms are easy to understand and implement, but they are not optimal, since they can be much slower than other algorithms.
Uniform ray casting "physical" algorithm
In real life, a glowing point illuminates the region visible to it because it emits light in every direction. This can be simulated by shooting rays in many directions. Suppose that the point is and the set of obstacles is . Then, the pseudocode may be expressed in the following way:
algorithm naive_bad_algorithm(p, S) is
    V := empty list
    for θ = 0, …, 2π:
        // shoot a ray with angle θ
        r := ∞
        for each obstacle in S:
            r := min(r, distance from p to the obstacle in this direction)
        add vertex (θ, r) to V
    return V
Now, if it were possible to try all the infinitely many angles, the result would be correct. Unfortunately, it is impossible to really try every single direction due to the limitations of computers. An approximation can be created by casting many, say, 50 rays spaced uniformly apart. However, this is not an exact solution, since small obstacles might be missed by two adjacent rays entirely. Furthermore, it is very slow,
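The uniform ray-casting approximation described above can be sketched in a few lines of Python for segment obstacles; the square "room" is an assumed example, and a real implementation would cast rays toward obstacle vertices instead of uniformly:

```python
import math

def ray_segment_distance(p, theta, seg):
    """Distance from p along direction theta to segment seg, or inf if missed."""
    (x1, y1), (x2, y2) = seg
    dx, dy = math.cos(theta), math.sin(theta)
    ex, ey = x2 - x1, y2 - y1
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:
        return math.inf                       # ray parallel to the segment
    # solve p + t*(dx,dy) = (x1,y1) + u*(ex,ey) for t (ray) and u (segment)
    t = ((x1 - p[0]) * ey - (y1 - p[1]) * ex) / denom
    u = ((x1 - p[0]) * dy - (y1 - p[1]) * dx) / denom
    return t if t >= 0 and 0 <= u <= 1 else math.inf

def approx_visibility(p, obstacles, rays=50):
    """Approximate the visibility polygon by casting `rays` uniform rays from p."""
    verts = []
    for i in range(rays):
        theta = 2 * math.pi * i / rays
        r = min(ray_segment_distance(p, theta, s) for s in obstacles)
        verts.append((p[0] + r * math.cos(theta), p[1] + r * math.sin(theta)))
    return verts

# Point at the center of a unit square room: every ray hits a wall.
room = [((0, 0), (1, 0)), ((1, 0), (1, 1)), ((1, 1), (0, 1)), ((0, 1), (0, 0))]
vs = approx_visibility((0.5, 0.5), room, rays=8)
print(all(math.isfinite(a) and math.isfinite(b) for a, b in vs))   # → True
```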
|
https://en.wikipedia.org/wiki/Monotone%20polygon
|
In geometry, a polygon P in the plane is called monotone with respect to a straight line L, if every line orthogonal to L intersects the boundary of P at most twice.
Similarly, a polygonal chain C is called monotone with respect to a straight line L, if every line orthogonal to L intersects C at most once.
For many practical purposes this definition may be extended to allow cases when some edges of P are orthogonal to L, and a simple polygon may be called monotone if a line segment that connects two points in P and is orthogonal to L lies completely in P.
Following the terminology for monotone functions, the former definition describes polygons strictly monotone with respect to L.
Properties
Assume that L coincides with the x-axis. Then the leftmost and rightmost vertices of a monotone polygon decompose its boundary into two monotone polygonal chains such that when the vertices of any chain are being traversed in their natural order, their X-coordinates are monotonically increasing or decreasing. In fact, this property may be taken for the definition of monotone polygon and it gives the polygon its name.
A convex polygon is monotone with respect to any straight line and a polygon which is monotone with respect to every straight line is convex.
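Monotonicity with respect to the x-axis can be tested by checking that the x-coordinate changes direction exactly twice while traversing the boundary cyclically, once at the leftmost and once at the rightmost vertex, which is exactly the chain decomposition property above. A Python sketch (the sample polygons are assumed examples):

```python
def is_x_monotone(poly):
    """True if the polygon is monotone with respect to the x-axis: while
    traversing the boundary cyclically, the x-coordinate switches direction
    exactly twice (ignoring vertical edges)."""
    n = len(poly)
    dirs = []
    for i in range(n):
        dx = poly[(i + 1) % n][0] - poly[i][0]
        if dx != 0:
            dirs.append(1 if dx > 0 else -1)
    switches = sum(1 for a, b in zip(dirs, dirs[1:] + dirs[:1]) if a != b)
    return switches == 2

print(is_x_monotone([(0, 0), (2, -1), (4, 0), (2, 1)]))         # → True (convex)
print(is_x_monotone([(0, 0), (4, 0), (1, 1), (3, 2), (0, 2)]))  # → False
```

This runs in linear time for a fixed direction; the algorithm mentioned above reports all directions of monotonicity in linear time as well.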
A linear time algorithm is known to report all directions in which a given simple polygon is monotone. It was generalized to report all ways to decompose a simple polygon into two monotone chains (possibly monotone in different directions.)
Point in polygon queries with respect to a monotone polygon may be answered in logarithmic time after linear time preprocessing (to find the leftmost and rightmost vertices).
A monotone polygon may be easily triangulated in linear time.
For a given set of points in the plane, a bitonic tour is a monotone polygon that connects the points. The minimum perimeter bitonic tour for a given point set with respect to a fixed direction may be found in polynomial time using dynamic programming. It is easily shown that such a minimal bitonic tour is a simple polygon: a pair of crossing edges may be replaced with a shorter non-crossing pair while preserving the bitonicity of the new tour.
A simple polygon may be easily cut into monotone polygons in O(n log n) time. However, since a triangle is a monotone polygon, polygon triangulation is in fact cutting a polygon into monotone ones, and it may be performed for simple polygons in O(n) time with a complex algorithm. A simpler randomized algorithm with linear expected time is also known.
Cutting a simple polygon into the minimal number of uniformly monotone polygons (i.e., monotone with respect to the same line) can be performed in polynomial time.
In the context of motion planning, two nonintersecting monotone polygons are separable by a single translation (i.e., there exists a translation of one polygon such that the two become separated by a straight line into different halfplanes) and this separation may be found in li
|
https://en.wikipedia.org/wiki/Pu%27s%20inequality
|
In differential geometry, Pu's inequality, proved by Pao Ming Pu, relates the area of an arbitrary Riemannian surface homeomorphic to the real projective plane with the lengths of the closed curves contained in it.
Statement
A student of Charles Loewner, Pu proved in his 1950 thesis that every Riemannian surface M homeomorphic to the real projective plane satisfies the inequality

Area(M) ≥ (2/π) · Sys(M)²,

where Sys(M) is the systole of M.
The equality is attained precisely when the metric has constant Gaussian curvature.
In other words, if all noncontractible loops in M have length at least L, then Area(M) ≥ (2/π)L², and the equality holds if and only if M is obtained from a Euclidean sphere of radius r = L/π by identifying each point with its antipodal.
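The equality case is easy to verify numerically: the round projective plane obtained from a sphere of radius r has area 2πr² and systole πr (half of a great circle), so Area/Sys² = 2/π. A Python sketch:

```python
import math

# Round real projective plane obtained from a Euclidean sphere of radius r
# by identifying antipodal points.
r = 3.0
area = 2 * math.pi * r**2        # half of the sphere's area 4*pi*r^2
systole = math.pi * r            # shortest noncontractible loop: half a great circle

print(round(area / systole**2, 9), round(2 / math.pi, 9))   # equality case of Pu
```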
Pu's paper also stated for the first time Loewner's inequality, a similar result for Riemannian metrics on the torus.
Proof
Pu's original proof relies on the uniformization theorem and employs an averaging argument, as follows.
By uniformization, the Riemannian surface (M, g) is conformally diffeomorphic to a round projective plane. This means that we may assume that the surface M is obtained from the Euclidean unit sphere S² by identifying antipodal points, and the Riemannian length element at each point x is

ds = f(x) ds_E,

where ds_E is the Euclidean length element and the function f, called the conformal factor, satisfies f(−x) = f(x).
More precisely, the universal cover of M is S², a loop γ ⊆ M is noncontractible if and only if its lift goes from one point to its opposite, and the length of each curve γ is

Length(γ) = ∫ f ds_E over the lift of γ.

Subject to the restriction that each of these lengths is at least π, we want to find an f that minimizes the area

Area(M) = ∫_{S²₊} f² dA_E,

where S²₊ is the upper half of the sphere.
A key observation is that if we average several different f_i that satisfy the length restriction and have the same area A, then we obtain a better conformal factor f̄ = (1/n) Σ f_i, that also satisfies the length restriction and has

Area(f̄) ≤ A,

and the inequality is strict unless the functions f_i are equal.
A way to improve any non-constant f is to obtain the different functions f_i from f using rotations of the sphere, defining f_i(x) = f(R_i(x)). If we average over all possible rotations, then we get an f̄ that is constant over all the sphere. We can further reduce this constant to the minimum value allowed by the length restriction. Then we obtain the unique metric that attains the minimum area 2π.
Reformulation
Alternatively, every metric on the sphere S² invariant under the antipodal map admits a pair of opposite points p, −p at Riemannian distance d = d(p, −p) satisfying

d² ≤ (π/4) · area(g).
A more detailed explanation of this viewpoint may be found at the page Introduction to systolic geometry.
Filling area conjecture
An alternative formulation of Pu's inequality is the following. Of all possible fillings of the Riemannian circle of length 2π by a 2-dimensional disk with the strongly isometric property, the round hemisphere has the least area.
To explain this formulation, we start with the observation that the equatorial circle of the unit 2-sphere S² ⊆ ℝ³ is a Riemannian circle S¹ of length 2π. More precisely, the Riemannian
|
https://en.wikipedia.org/wiki/Calibrated%20probability%20assessment
|
Calibrated probability assessments are subjective probabilities assigned by individuals who have been trained to assess probabilities in a way that historically represents their uncertainty. For example, when a calibrated person says they are "80% confident" in each of 100 predictions they made, they will get about 80% of them correct. Likewise, they will be right 90% of the time they say they are 90% certain, and so on.
Calibration training improves subjective probabilities because most people are either "overconfident" or "under-confident" (usually the former). By practicing with a series of trivia questions, it is possible for subjects to fine-tune their ability to assess probabilities. For example, a subject may be asked:
True or False: "A hockey puck fits in a golf hole"
Confidence: Choose the probability that best represents your chance of getting this question right...
50% 60% 70% 80% 90% 100%
If a person has no idea whatsoever, they will say they are only 50% confident. If they are absolutely certain they are correct, they will say 100%. But most people will answer somewhere in between. If a calibrated person is asked a large number of such questions, they will get about as many correct as they expected. An uncalibrated person who is systematically overconfident may say they are 90% confident in a large number of questions where they only get 70% of them correct. On the other hand, an uncalibrated person who is systematically underconfident may say they are 50% confident in a large number of questions where they actually get 70% of them correct.
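Calibration over a batch of answered questions can be measured by grouping answers by stated confidence and comparing each group's stated confidence with its observed hit rate. A Python sketch; the sample data is an assumed example of the overconfident pattern described above:

```python
from collections import defaultdict

def calibration_table(predictions):
    """Group (stated confidence, was_correct) pairs and report the observed
    hit rate for each confidence level."""
    buckets = defaultdict(list)
    for conf, correct in predictions:
        buckets[conf].append(correct)
    return {c: sum(v) / len(v) for c, v in sorted(buckets.items())}

# An overconfident assessor: says 90% but is right only 70% of the time.
preds = ([(0.9, True)] * 7 + [(0.9, False)] * 3
         + [(0.6, True)] * 6 + [(0.6, False)] * 4)
print(calibration_table(preds))   # → {0.6: 0.6, 0.9: 0.7}
```

A perfectly calibrated assessor's table would have each observed hit rate equal to the stated confidence; here the 0.9 group reveals overconfidence.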
Alternatively, the trainee will be asked to provide a numeric range for a question like, "In what year did Napoleon invade Russia?", with the instruction that the provided range is to represent a 90% confidence interval. That is, the test-taker should be 90% confident that the range contains the correct answer.
Calibration training generally involves taking a battery of such tests. Feedback is provided between tests and the subjects refine their probabilities. Calibration training may also involve learning other techniques that help to compensate for consistent over- or under-confidence. Since subjects are better at placing odds when they pretend to bet money, subjects are taught how to convert calibration questions into a type of betting game which is shown to improve their subjective probabilities. Various collaborative methods have been developed, such as prediction market, so that subjective estimates from multiple individuals can be taken into account.
Stochastic modeling methods such as the Monte Carlo method often use subjective estimates from "subject matter experts". Research shows that such experts are very likely to be statistically overconfident and as such, the model will tend to underestimate uncertainty and risk. Calibration training is used to increase a person’s ability to provide accurate estimates for stochastic methods. Research found that most people
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.