source stringlengths 31 168 | text stringlengths 51 3k |
|---|---|
https://en.wikipedia.org/wiki/Napkin%20ring%20problem | In geometry, the napkin-ring problem involves finding the volume of a "band" of specified height around a sphere, i.e. the part that remains after a hole in the shape of a circular cylinder is drilled through the center of the sphere. It is a counterintuitive fact that this volume does not depend on the original sphere's radius but only on the resulting band's height.
The problem is so called because after removing a cylinder from the sphere, the remaining band resembles the shape of a napkin ring.
Statement
Suppose that the axis of a right circular cylinder passes through the center of a sphere of radius R and that h represents the height (defined as the distance in a direction parallel to the axis) of the part of the cylinder that is inside the sphere. The "band" is the part of the sphere that is outside the cylinder. The volume of the band depends on h but not on R:
V = πh³/6.
As the radius R of the sphere shrinks, the diameter of the cylinder must also shrink in order that h can remain the same. The band gets thicker, which would increase its volume, but it also gets shorter in circumference, which would decrease its volume. The two effects exactly cancel each other out. In the extreme case of the smallest possible sphere, the cylinder vanishes (its radius becomes zero) and the height h equals the diameter of the sphere. In this case the volume of the band is the volume of the whole sphere, which matches the formula given above.
An early study of this problem was written by the 17th-century Japanese mathematician Seki Kōwa, who called this solid an arc-ring, or in Japanese kokan or kokwan.
Proof
Suppose the radius of the sphere is R and the length of the cylinder (or the tunnel) is h.
By the Pythagorean theorem, the radius of the cylinder is
√(R² − (h/2)²),   (1)
and the radius of the horizontal cross-section of the sphere at height y above the "equator" is
√(R² − y²).   (2)
The cross-section of the band with the plane at height y is the region inside the larger circle of radius given by (2) and outside the smaller circle of radius given by (1). The cross-section's area is therefore the area of the larger circle minus the area of the smaller circle:
π(R² − y²) − π(R² − (h/2)²) = π((h/2)² − y²).
The radius R does not appear in the last quantity. Therefore, the area of the horizontal cross-section at height y does not depend on R, as long as |y| ≤ h/2. The volume of the band is
∫ from −h/2 to h/2 of π((h/2)² − y²) dy = πh³/6,
and that does not depend on R.
This is an application of Cavalieri's principle: volumes with equal-sized corresponding cross-sections are equal. Indeed, the area of the cross-section is the same as that of the corresponding cross-section of a sphere of radius h/2, which has volume
(4/3)π(h/2)³ = πh³/6.
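The cancellation of R can be checked symbolically; a minimal sketch using SymPy, with symbol names following the proof above:

```python
import sympy as sp

R, y, h = sp.symbols('R y h', positive=True)

# Cross-sectional area of the band at height y:
# area inside the circle of radius sqrt(R^2 - y^2), outside the
# circle of radius sqrt(R^2 - (h/2)^2)
area = sp.pi * (R**2 - y**2) - sp.pi * (R**2 - (h/2)**2)

# Integrate over -h/2 <= y <= h/2 to get the band volume
volume = sp.integrate(area, (y, -h/2, h/2))
print(sp.simplify(volume))  # pi*h**3/6 -- the sphere radius R has cancelled
```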
See also
Visual calculus, an intuitive way to solve this type of problem, originally applied to finding the area of an annulus, given only its chord length
String girdling Earth, another problem where the radius of a sphere or circle is counter-intuitively irrelevant
References
Problem 132 asks for the volume of a sphere with a cylindrical hole drilled through it, b |
https://en.wikipedia.org/wiki/Cavalieri%27s%20principle | In geometry, Cavalieri's principle, a modern implementation of the method of indivisibles, named after Bonaventura Cavalieri, is as follows:
2-dimensional case: Suppose two regions in a plane are included between two parallel lines in that plane. If every line parallel to these two lines intersects both regions in line segments of equal length, then the two regions have equal areas.
3-dimensional case: Suppose two regions in three-space (solids) are included between two parallel planes. If every plane parallel to these two planes intersects both regions in cross-sections of equal area, then the two regions have equal volumes.
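The 3-dimensional case can be verified symbolically on the classical comparison of a hemisphere with a cylinder from which an inverted cone has been removed: their cross-sections agree at every height, so their volumes agree. A sketch using SymPy:

```python
import sympy as sp

r, y = sp.symbols('r y', positive=True)

# Cross-section of a hemisphere of radius r, at height y above its base
hemisphere = sp.pi * (r**2 - y**2)
# Cross-section of a cylinder of radius r with an inverted cone removed
cylinder_minus_cone = sp.pi * r**2 - sp.pi * y**2

# The cross-sections are equal at every height y ...
assert sp.simplify(hemisphere - cylinder_minus_cone) == 0

# ... so by Cavalieri's principle the volumes agree
vol = sp.integrate(cylinder_minus_cone, (y, 0, r))
print(sp.simplify(vol))  # 2*pi*r**3/3, the volume of the hemisphere
```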
Today Cavalieri's principle is seen as an early step towards integral calculus, and while it is used in some forms, such as its generalization in Fubini's theorem and layer cake representation, results using Cavalieri's principle can often be shown more directly via integration. In the other direction, Cavalieri's principle grew out of the ancient Greek method of exhaustion, which used limits but did not use infinitesimals.
History
Cavalieri's principle was originally called the method of indivisibles, the name it was known by in Renaissance Europe. Cavalieri developed a complete theory of indivisibles, elaborated in his Geometria indivisibilibus continuorum nova quadam ratione promota (Geometry, advanced in a new way by the indivisibles of the continua, 1635) and his Exercitationes geometricae sex (Six geometrical exercises, 1647). While Cavalieri's work established the principle, in his publications he denied that the continuum was composed of indivisibles in an effort to avoid the associated paradoxes and religious controversies, and he did not use it to find previously unknown results.
In the 3rd century BC, Archimedes, using a method resembling Cavalieri's principle, was able to find the volume of a sphere given the volumes of a cone and cylinder in his work The Method of Mechanical Theorems. In the 5th century AD, Zu Chongzhi and his son Zu Gengzhi established a similar method to find a sphere's volume. The transition from Cavalieri's indivisibles to Evangelista Torricelli's and John Wallis's infinitesimals was a major advance in the history of calculus. The indivisibles were entities of codimension 1, so that a plane figure was thought of as made up of an infinite number of 1-dimensional lines. Meanwhile, infinitesimals were entities of the same dimension as the figure they make up; thus, a plane figure would be made out of "parallelograms" of infinitesimal width. Applying the formula for the sum of an arithmetic progression, Wallis computed the area of a triangle by partitioning it into infinitesimal parallelograms of width 1/∞.
2-dimensional
Cycloids
N. Reed has shown how to find the area bounded by a cycloid by using Cavalieri's principle. A circle of radius r can roll in a clockwise direction upon a line below it, or in a counterclockwise direction upon a line above it. A point on the circle thereby trac |
https://en.wikipedia.org/wiki/Crepant%20resolution | In algebraic geometry, a crepant resolution of a singularity is a resolution that does not affect the canonical class of the manifold. The term "crepant" was coined by removing the prefix "dis" from the word "discrepant", to indicate that the resolutions have no discrepancy in the canonical class.
The crepant resolution conjecture states that the orbifold cohomology of a Gorenstein orbifold is isomorphic to a semiclassical limit of the quantum cohomology of a crepant resolution.
In 2 dimensions, crepant resolutions of complex Gorenstein quotient singularities (du Val singularities) always exist and are unique, in 3 dimensions they exist but need not be unique as they can be related by flops, and in dimensions greater than 3 they need not exist.
A substitute for crepant resolutions which always exists is a terminal model. Namely, for every variety X over a field of characteristic zero such that X has canonical singularities (for example, rational Gorenstein singularities), there is a variety Y with Q-factorial terminal singularities and a birational projective morphism f: Y → X which is crepant in the sense that KY = f*KX.
Notes
References
Algebraic geometry
Singularity theory |
https://en.wikipedia.org/wiki/Depressor%20weight | A depressor weight is a device used on tow lines in nautical applications. A depressor weight can be attached to a floating or otherwise stiff line to effectively produce a catenary geometry better able to mitigate shocks associated with towing and wave impacts.
References
Nautical terminology |
https://en.wikipedia.org/wiki/Polyphase%20sequence | In mathematics, a polyphase sequence is a sequence whose terms are complex roots of unity:
a_n = e^(2πi·x_n/q), where x_n is an integer and q is a fixed positive integer.
Polyphase sequences are an important class of sequences and play important roles in synchronizing sequence design.
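The Zadoff–Chu sequences listed under "See also" are a well-known polyphase family with ideal periodic autocorrelation, which is what makes them useful for synchronization. A small NumPy sketch (the root index and length below are illustrative):

```python
import numpy as np

def zadoff_chu(u, N):
    # Zadoff-Chu sequence of odd length N with root index u coprime to N
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

seq = zadoff_chu(u=1, N=7)

# Every term has unit magnitude: the terms are complex roots of unity
print(np.allclose(np.abs(seq), 1.0))  # True

# Ideal periodic autocorrelation: zero at every nonzero cyclic lag
corr = [np.vdot(seq, np.roll(seq, k)) for k in range(1, 7)]
print(np.allclose(corr, 0))  # True
```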
See also
Zadoff–Chu sequence
References
Sequences and series |
https://en.wikipedia.org/wiki/2007%E2%80%9308%20FC%20O%C8%9Belul%20Gala%C8%9Bi%20season |
Match results
Friendlies
UEFA Intertoto
UEFA Cup
Liga I
League table
Results by round
Results summary
Matches
Cupa României
Players
Squad statistics
Transfers
In
Out
References
ASC Oțelul Galați seasons
Otelul Galati |
https://en.wikipedia.org/wiki/Lucrezia%20Gonzaga | Lucrezia Gonzaga di Gazzuolo (1522 – 11 February 1576) was an Italian noblewoman known for her literary talents, and her association with Matteo Bandello. Bandello taught her mathematics, astronomy, rhetoric and logic, and wrote poetry in her honour, during his stay in Castel Goffredo at the court of Luigi Gonzaga. A volume of her letters was published in Venice in 1552, but some people believe Ortensio Lando was the author and not just the editor.
She was born in Bozzolo to Pirro Gonzaga, lord of Gazzuolo, member of a secondary branch of the Gonzaga family, and Camilla Bentivoglio. At the age of 14 she married Paolo Manfrone, and is sometimes known as Lucrezia Gonzaga Manfrona. She died in 1576 in Mantua.
Works
Lettere della molto illustre sig. la s.ra donna Lucretia Gonzaga da Gazuolo con gran diligentia raccolte, & a gloria del sesso feminile nuouamente in luce poste. Venice, 1552 (Collected by Ortensio Lando?)
Lucrezia Gonzaga, Lettere. Vita quotidiana e sensibilità religiosa nel Polesine di metà ‘500, a cura di Renzo Bragantini e Primo Griguolo, Minelliana, Rovigo, 2009.
Sources
Ginevra Canonici Fachini, Prospetto biografico delle donne italiane rinomate in letteratura, 1824
Mary Hays, Female Biography; or Memoirs of Illustrious and Celebrated Women of all Ages and Countries (volume 4), 1803
Giuseppe Maffei, Storia della letteratura italiana, 1834
1522 births
1576 deaths
Lucrezia
Nobility of Mantua
Gonzaga, Lucrezia
16th-century Italian women writers |
https://en.wikipedia.org/wiki/2006%E2%80%9307%20FC%20O%C8%9Belul%20Gala%C8%9Bi%20season |
Competitions
Friendlies
Liga I
League table
Results by round
Results summary
Matches
Cupa României
Players
Squad statistics
Transfers
In
Out
References
See also
ASC Oțelul Galați seasons
Otelul Galati |
https://en.wikipedia.org/wiki/Graph%20algebra | In mathematics, especially in the fields of universal algebra and graph theory, a graph algebra is a way of giving a directed graph an algebraic structure. It was introduced by McNulty and Shallon, and has seen many uses in the field of universal algebra since then.
Definition
Let D = (V, E) be a directed graph, and let 0 be an element not in V. The graph algebra associated with D has underlying set V ∪ {0}, and is equipped with a multiplication defined by the rules
x·y = x if x, y ∈ V and (x, y) ∈ E,
x·y = 0 if x, y ∈ V ∪ {0} and (x, y) ∉ E.
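This multiplication can be sketched directly in Python, assuming the graph is given by its vertex set V and edge set E (the vertex names below are illustrative):

```python
# A small directed graph: vertices V, edges E, plus an adjoined zero
V = {'a', 'b', 'c'}
E = {('a', 'b'), ('b', 'c'), ('a', 'a')}
ZERO = 0  # the element adjoined to V

def mul(x, y):
    # x * y = x when (x, y) is an edge of the graph; 0 otherwise
    if x in V and y in V and (x, y) in E:
        return x
    return ZERO

print(mul('a', 'b'))   # 'a' -- (a, b) is an edge
print(mul('b', 'a'))   # 0  -- (b, a) is not an edge
print(mul('a', ZERO))  # 0  -- products involving 0 are 0
```

Note that the operation is generally neither commutative nor associative, which is what makes graph algebras a rich source of examples in universal algebra.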
Applications
This notion has made it possible to use the methods of graph theory in universal algebra and several other areas of discrete mathematics and computer science. Graph algebras have been used, for example, in constructions concerning dualities, equational theories, flatness, groupoid rings, topologies, varieties, finite-state machines, tree languages and tree automata, etc.
See also
Group algebra (disambiguation)
Incidence algebra
Path algebra
Citations
Works cited
Further reading
Graph theory
Universal algebra |
https://en.wikipedia.org/wiki/Microdata%20%28statistics%29 | In the study of survey and census data, microdata is information at the level of individual respondents. For instance, a national census might collect age, home address, educational level, employment status, and many other variables, recorded separately for every person who responds; this is microdata.
Advantages
Survey/census results are most commonly published as aggregates (e.g. a regional-level employment rate), both for privacy reasons and because of the large quantities of data involved; microdata for one census can easily contain millions of records, each with several dozen data items.
However, summarizing results to an aggregate level results in information loss. For instance, if statistics for education and employment are aggregated separately, they cannot be used to explore a relationship between these two variables. Access to microdata allows researchers much more freedom to investigate such interactions and perform detailed analysis.
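The information loss can be illustrated with a toy example (all values hypothetical): the two separately published aggregates below cannot recover the cross-tabulation, while the microdata can.

```python
# Illustrative microdata: one record per respondent
records = [
    {"education": "degree", "employed": True},
    {"education": "degree", "employed": True},
    {"education": "none",   "employed": False},
    {"education": "none",   "employed": True},
]

# Separately published aggregates, as in a typical statistical release
share_degree = sum(r["education"] == "degree" for r in records) / len(records)
share_employed = sum(r["employed"] for r in records) / len(records)
print(share_degree, share_employed)  # 0.5 0.75

# The conditional rate below requires record-level data; it cannot be
# reconstructed from the two aggregates above.
employed_given_degree = (
    sum(r["employed"] for r in records if r["education"] == "degree")
    / sum(r["education"] == "degree" for r in records)
)
print(employed_given_degree)  # 1.0
```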
Availability
For this reason, some statistical organizations allow access to microdata for research purposes. Controls are generally imposed to limit the risk that this data may be abused or lead to loss of privacy. For example, the Integrated Public Use Microdata Series requires researchers to implement security measures, avoid redistribution of microdata, use microdata only for noncommercial research/education purposes, and not make any attempt to identify the individuals recorded. Names and fine-level geographical data are removed, some data items are altered as necessary to make it impossible to identify individuals, and small ethnic categories are merged.
The International Household Survey Network has developed tools and guidelines to help interested statistical agencies improve their microdata management practices. The Microdata Management Toolkit is a DDI metadata editor which is now used in about 80 countries, with the support of the Accelerated Data Program, implemented by the PARIS21 Secretariat, the World Bank, and other partners, in the context of the Marrakech Action Plan for Statistics.
References
Sampling (statistics)
Censuses |
https://en.wikipedia.org/wiki/Laves%20phase | Laves phases are intermetallic phases that have composition AB2 and are named for Fritz Laves who first described them. The phases are classified on the basis of geometry alone. While the problem of packing spheres of equal size has been well-studied since Gauss, Laves phases are the result of his investigations into packing spheres of two sizes. Laves phases fall into three Strukturbericht types: cubic MgCu2 (C15), hexagonal MgZn2 (C14), and hexagonal MgNi2 (C36). The latter two classes are unique forms of the hexagonal arrangement, but share the same basic structure. In general, the A atoms are ordered as in diamond, hexagonal diamond, or a related structure, and the B atoms form tetrahedra around the A atoms for the AB2 structure.
Laves phases are of particular interest in modern metallurgy research because of their unusual physical and chemical properties. Many potential applications have been proposed, but little practical knowledge has so far emerged from the study of Laves phases.
A characteristic feature is the almost perfect electrical conductivity, but they are not plastically deformable at room temperature.
In each of the three classes of Laves phase, if the two types of atoms were perfect spheres with a size ratio of √(3/2) ≈ 1.225, the structure would be topologically tetrahedrally close-packed. At this size ratio, the structure has an overall packing volume density of 0.710. Compounds found in Laves phases typically have an atomic size ratio between 1.05 and 1.67. Analogues of Laves phases can be formed by the self-assembly of a colloidal dispersion of two sizes of sphere.
References
Intermetallics
Crystal structure types |
https://en.wikipedia.org/wiki/Generalized%20normal%20distribution | The generalized normal distribution or generalized Gaussian distribution (GGD) is either of two families of parametric continuous probability distributions on the real line. Both families add a shape parameter to the normal distribution. To distinguish the two families, they are referred to below as "symmetric" and "asymmetric"; however, this is not a standard nomenclature.
Symmetric version
The symmetric generalized normal distribution, also known as the exponential power distribution or the generalized error distribution, is a parametric family of symmetric distributions. It includes all normal and Laplace distributions, and as limiting cases it includes all continuous uniform distributions on bounded intervals of the real line.
This family includes the normal distribution when β = 2 (with mean μ and variance α²/2) and it includes the Laplace distribution when β = 1. As β → ∞, the density converges pointwise to a uniform density on (μ − α, μ + α).
This family allows for tails that are either heavier than normal (when β < 2) or lighter than normal (when β > 2). It is a useful way to parametrize a continuum of symmetric, platykurtic densities spanning from the normal (β = 2) to the uniform density (β → ∞), and a continuum of symmetric, leptokurtic densities spanning from the Laplace (β = 1) to the normal density (β = 2).
The shape parameter β also controls the peakedness in addition to the tails.
Parameter estimation
Parameter estimation via maximum likelihood and the method of moments has been studied. The estimates do not have a closed form and must be obtained numerically. Estimators that do not require numerical calculation have also been proposed.
The generalized normal log-likelihood function has infinitely many continuous derivatives (i.e. it belongs to the class C∞ of smooth functions) only if β is a positive, even integer. Otherwise, the function has ⌊β⌋ continuous derivatives. As a result, the standard results for consistency and asymptotic normality of maximum likelihood estimates of μ only apply when β ≥ 2.
Maximum likelihood estimator
It is possible to fit the generalized normal distribution adopting an approximate maximum likelihood method. With μ initially set to the sample first moment m₁, the shape parameter β is estimated by using a Newton–Raphson iterative procedure, starting from an initial guess β = β₀,
where
m₁ is the first statistical moment of the absolute values and m₂ is the second statistical moment. The iteration is
βᵢ₊₁ = βᵢ − g(βᵢ)/g′(βᵢ),
where g is the estimating function derived from the likelihood equations,
and where ψ and ψ′ are the digamma function and trigamma function.
Given a value for β, it is possible to estimate μ by finding the minimum of Σᵢ |xᵢ − μ|^β.
Finally, α is evaluated as α = ((β/N) Σᵢ |xᵢ − μ|^β)^(1/β).
For β = 1, the median is a more appropriate estimator of μ. Once β is estimated, μ and α can be estimated as described above.
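Since the estimates have no closed form, in practice the fit is done numerically. For instance, SciPy's gennorm implements the symmetric family (with shape parameter beta), and its fit method performs the numerical maximum-likelihood optimization. A sketch with illustrative parameter values:

```python
import numpy as np
from scipy.stats import gennorm

# Draw a sample from a generalized normal with shape beta = 1.5,
# location 2, and scale 3 (illustrative values)
data = gennorm.rvs(1.5, loc=2, scale=3, size=20_000, random_state=42)

# Numerical maximum-likelihood fit; no closed form exists in general
beta_hat, loc_hat, scale_hat = gennorm.fit(data)
print(f"beta={beta_hat:.2f}, loc={loc_hat:.2f}, scale={scale_hat:.2f}")
```

With a sample this large, the fitted parameters should land close to the values used to generate the data.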
Applications
The symmetric generalized normal distribution has been used in modeling when the concentration of values around the mean and the tail behavior are of particular interest. Other families of distributions can be used if the focus is on other deviations from normality. If the symmetry of t |
https://en.wikipedia.org/wiki/Ardill%2C%20Saskatchewan | Ardill is a hamlet in RM of Lake Johnston No. 102, Saskatchewan, Canada. Listed as a designated place by Statistics Canada, the hamlet had a listed population of 0 in the Canada 2006 Census.
All that currently remains is the bar which was issued liquor licence #1. Ardill is located between Assiniboia and Moose Jaw, south of Old Wives Lake and at the northern end of Lake of the Rivers.
Demographics
Ardill, like many other small communities throughout Saskatchewan, has struggled to maintain its population and is now a ghost town with no residents. Previously, Ardill was incorporated as a village, but it was restructured as a hamlet under the jurisdiction of the Rural Municipality of Lake Johnston.
In 2001, Ardill had a population of 0, the same as in 1996. The village had a land area of .
Infrastructure
The former Saskatchewan Transportation Company provided intercity bus service to Ardill.
See also
List of communities in Saskatchewan
Hamlets of Saskatchewan
References
Former designated places in Saskatchewan
Former villages in Saskatchewan
Hamlets in Saskatchewan
Ghost towns in Saskatchewan
Lake Johnston No. 102, Saskatchewan
Division No. 3, Saskatchewan |
https://en.wikipedia.org/wiki/Arlington%20Beach%2C%20Saskatchewan | Arlington Beach is a hamlet in the Canadian province of Saskatchewan. It is located on the eastern shore of Last Mountain Lake, north-west of Regina. Listed as a designated place by Statistics Canada, the hamlet had a population of 39 in the Canada 2006 Census.
Arlington Beach is home to a large camp and conference centre that hosts groups from all over Western Canada. Its main complex is the Kinney Memorial Lodge which features multiple meeting rooms, guest rooms, and a large dining hall. The hamlet is also home to the historic Arlington Beach House.
Right beside the Beach House is a mini-golf course and the local marina. This serves not only Arlington Beach but also allows for sheltered access and boat launches to Last Mountain Lake for the surrounding communities.
History
The Arlington Beach House was built at Arlington Beach in 1910 where it was the centre of activity for the William Pearson Land Company. Customers were brought from all over Last Mountain Lake by early steam ships that steamed up and down the lake before the railroad came to the region in the early part of the 20th century.
Business was boosted by the early steam ship, the Welcome, which started business on Last Mountain Lake in 1905. In 1907, the Qu'Appelle, an eight-crew steamer that could accommodate 200 people, made the run up and down the lake until 1913.
Demographics
In the 2021 Census of Population conducted by Statistics Canada, Arlington Beach had a population of 38 living in 22 of its 72 total private dwellings, a change of from its 2016 population of 20. With a land area of , it had a population density of in 2021.
Arlington Beach House
Arlington Beach House was very popular and was declared by visitors to be one of the coziest resort hotels in Saskatchewan. Visitors from Eastern Canada compared it favourably to their favourite watering holes back home and Arlington Beach became known as a great place to swim, fish, and boat.
As the area matured and became more populated during the 1920s and 1930s, it became a favourite spot for community picnics and activities.
As the CP Railway changed transportation patterns across the region, Arlington Beach was bought by the Canadian Sunday School Mission in 1942. At this time there were a few public buildings. One building was a dressing room for swimmers while another was a boat house. There were also a band shell and a small round building that was used for ticket sales to local sporting events.
After Arlington Beach was sold to the Canadian Sunday School Mission, buildings were moved and converted into dorms, and a large tabernacle was built for 300 people. The hotel had started to fall into disrepair and as an interim step, the fireplace was removed from the Arlington Beach House.
After two years of renting the camp, in 1960, the Free Methodist Church in Canada bought Arlington Beach House from the Canadian Sunday School Mission for $14,000.00.
In 1965 and 1966, the tabernacle from Moose Jaw Camp Grounds was |
https://en.wikipedia.org/wiki/Balone%20Beach | Balone Beach is a hamlet in the Canadian province of Saskatchewan on the northern shore of Wakaw Lake.
Demographics
In the 2021 Census of Population conducted by Statistics Canada, Balone Beach had a population of 18 living in 10 of its 32 total private dwellings, a change of from its 2016 population of 5. With a land area of , it had a population density of in 2021.
See also
List of communities in Saskatchewan
References
Designated places in Saskatchewan
Hoodoo No. 401, Saskatchewan
Organized hamlets in Saskatchewan
Division No. 15, Saskatchewan |
https://en.wikipedia.org/wiki/Barrier%20Ford%2C%20Saskatchewan | Barrier Ford is a hamlet in the Canadian province of Saskatchewan.
Demographics
In the 2021 Census of Population conducted by Statistics Canada, Barrier Ford had a population of 30 living in 19 of its 125 total private dwellings, a change of from its 2016 population of 20. With a land area of , it had a population density of in 2021.
References
Bjorkdale No. 426, Saskatchewan
Designated places in Saskatchewan
Organized hamlets in Saskatchewan |
https://en.wikipedia.org/wiki/Beaver%20Creek%2C%20Saskatchewan | Beaver Creek is a hamlet in the Rural Municipality of Dundurn No. 314, Saskatchewan, Canada. Listed as a designated place by Statistics Canada, the hamlet had a population of 107 in the Canada 2016 Census.
Demographics
In the 2021 Census of Population conducted by Statistics Canada, Beaver Creek had a population of 111 living in 42 of its 42 total private dwellings, a change of from its 2016 population of 107. With a land area of , it had a population density of in 2021.
See also
List of communities in Saskatchewan
List of hamlets in Saskatchewan
Beaver Creek Conservation Area
Designated place
References
Designated places in Saskatchewan
Dundurn No. 314, Saskatchewan
Organized hamlets in Saskatchewan |
https://en.wikipedia.org/wiki/Cedar%20Villa%20Estates%2C%20Saskatchewan | Cedar Villa Estates is a hamlet in the Canadian province of Saskatchewan, located southwest of Saskatoon.
Demographics
In the 2021 Census of Population conducted by Statistics Canada, Cedar Villa Estates had a population of 113 living in 38 of its 39 total private dwellings, a change of from its 2016 population of 104. With a land area of , it had a population density of in 2021.
References
Corman Park No. 344, Saskatchewan
Designated places in Saskatchewan
Organized hamlets in Saskatchewan |
https://en.wikipedia.org/wiki/Chortitz%2C%20Saskatchewan | Chortitz is a hamlet in Coulee Rural Municipality No. 136, Saskatchewan, Canada. Listed as a designated place by Statistics Canada, the hamlet had a population of 26 in the Canada 2006 Census. The hamlet is located on Highway 379, about 25 km south of Swift Current.
Demographics
In the 2021 Census of Population conducted by Statistics Canada, Chortitz had a population of 15 living in 7 of its 7 total private dwellings, a change of from its 2016 population of 19. With a land area of , it had a population density of in 2021.
See also
List of communities in Saskatchewan
Hamlets of Saskatchewan
References
Designated places in Saskatchewan
Unincorporated communities in Saskatchewan
Coulee No. 136, Saskatchewan |
https://en.wikipedia.org/wiki/Colesdale%20Park%2C%20Saskatchewan | Colesdale Park is a hamlet in the Canadian province of Saskatchewan.
Demographics
In the 2021 Census of Population conducted by Statistics Canada, Colesdale Park had a population of 25 living in 14 of its 55 total private dwellings, a change of from its 2016 population of 20. With a land area of , it had a population density of in 2021.
References
Designated places in Saskatchewan
McKillop No. 220, Saskatchewan
Organized hamlets in Saskatchewan
Division No. 6, Saskatchewan |
https://en.wikipedia.org/wiki/Congress%2C%20Saskatchewan | Congress is a hamlet in the Canadian province of Saskatchewan.
Demographics
In the 2021 Census of Population conducted by Statistics Canada, Congress had a population of 20 living in 9 of its 10 total private dwellings, a change of from its 2016 population of 20. With a land area of , it had a population density of in 2021.
References
Designated places in Saskatchewan
Organized hamlets in Saskatchewan
Stonehenge No. 73, Saskatchewan
Division No. 3, Saskatchewan |
https://en.wikipedia.org/wiki/Courval | Courval is an unincorporated community in the Rural Municipality of Rodgers No. 133 in the Canadian province of Saskatchewan. Recognized as a designated place by Statistics Canada, Courval had a population of 5 in the Canada 2006 Census.
Demographics
Climate
See also
List of communities in Saskatchewan
List of ghost towns in Saskatchewan
References
Former designated places in Saskatchewan
Unincorporated communities in Saskatchewan
Rodgers No. 133, Saskatchewan |
https://en.wikipedia.org/wiki/Crane%20Valley%2C%20Saskatchewan | Crane Valley is a hamlet in the Canadian province of Saskatchewan.
Demographics
In the 2021 Census of Population conducted by Statistics Canada, Crane Valley had a population of 20 living in 8 of its 9 total private dwellings, a change of from its 2016 population of 15. With a land area of , it had a population density of in 2021.
References
Designated places in Saskatchewan
Excel No. 71, Saskatchewan
Hamlets in Saskatchewan
Division No. 3, Saskatchewan |
https://en.wikipedia.org/wiki/Crooked%20River%2C%20Saskatchewan | Crooked River is a special service area in the Canadian province of Saskatchewan.
Demographics
In the 2021 Census of Population conducted by Statistics Canada, Crooked River had a population of 49 living in 20 of its 25 total private dwellings, a change of from its 2016 population of 32. With a land area of , it had a population density of in 2021.
References
Bjorkdale No. 426, Saskatchewan
Designated places in Saskatchewan
Special service areas in Saskatchewan
Division No. 14, Saskatchewan |
https://en.wikipedia.org/wiki/Crystal%20Bay-Sunset | Crystal Bay-Sunset is a hamlet in the Canadian province of Saskatchewan.
Demographics
In the 2021 Census of Population conducted by Statistics Canada, Crystal Bay-Sunset had a population of 39 living in 22 of its 94 total private dwellings, a change of from its 2016 population of 15. With a land area of , it had a population density of in 2021.
References
Designated places in Saskatchewan
Mervin No. 499, Saskatchewan
Organized hamlets in Saskatchewan
Division No. 17, Saskatchewan |
https://en.wikipedia.org/wiki/Reinhold%20Wosab | Reinhold Wosab (born 25 February 1938) is a German former professional footballer. He spent ten seasons in the Bundesliga with Borussia Dortmund and VfL Bochum.
Career statistics
Honours
UEFA Cup Winners' Cup winner: 1965–66.
West German football championship winner: 1963.
Bundesliga runner-up: 1965–66.
DFB-Pokal winner: 1964–65.
DFB-Pokal finalist: 1962–63.
External links
1938 births
Living people
People from Marl, North Rhine-Westphalia
Footballers from Münster (region)
German men's footballers
West German men's footballers
Men's association football midfielders
Bundesliga players
Borussia Dortmund players
VfL Bochum players |
https://en.wikipedia.org/wiki/Theorem%20of%20Bertini | In mathematics, the theorem of Bertini is an existence and genericity theorem for smooth connected hyperplane sections for smooth projective varieties over algebraically closed fields, introduced by Eugenio Bertini. This is the simplest and broadest of the "Bertini theorems" applying to a linear system of divisors; simplest because there is no restriction on the characteristic of the underlying field, while the extensions require characteristic 0.
Statement for hyperplane sections of smooth varieties
Let X be a smooth quasi-projective variety over an algebraically closed field, embedded in a projective space .
Let |H| denote the complete system of hyperplane divisors in the projective space. Recall that it is the dual projective space and is isomorphic to a projective space of the same dimension.
The theorem of Bertini states that the set of hyperplanes not containing X and with smooth intersection with X contains an open dense subset of the total system of divisors |H|. The set itself is open if X is projective. If dim X ≥ 2, then these intersections (called hyperplane sections of X) are connected, hence irreducible.
The theorem hence asserts that a general hyperplane section not equal to X is smooth, that is: the property of smoothness is generic.
Over an arbitrary field k, there is a dense open subset of the dual space whose rational points define hyperplanes with smooth hyperplane sections of X. When k is infinite, this open subset has infinitely many rational points, and so there are infinitely many smooth hyperplane sections of X.
Over a finite field, the above open subset may not contain rational points, and in general there may be no hyperplane with smooth intersection with X. However, if we take hypersurfaces of sufficiently large degree, then the theorem of Bertini holds.
Outline of a proof
We consider the subfibration of the product variety X × |H| whose fiber above a point x ∈ X is the linear system of hyperplanes that intersect X non-transversally at x.
The rank of the fibration in the product is one less than the codimension of the fiber, so that the total space has smaller dimension than |H| and so its projection is contained in a divisor of the complete system |H|.
General statement
Over any infinite field of characteristic 0, if X is a smooth quasi-projective variety, a general member of a linear system of divisors on X is smooth away from the base locus of the system. For clarification, this means that given a linear system corresponding to a map f from X to a projective space, the preimage f⁻¹(H) of a hyperplane H is smooth, outside the base locus of f, for all hyperplanes H in some dense open subset of the dual projective space. This theorem also holds in characteristic p > 0 when the linear system f is unramified.
Generalizations
The theorem of Bertini has been generalized in various ways. For example, a result due to Steven Kleiman asserts the following (cf. Kleiman's theorem): for a connected algebraic group G, and any homogeneous G-variety X, and two varieties Y and Z mapping to X, let Yσ be the variety obtained by letting σ ∈ G act on Y. Then, there is an open dense subscheme H of G such |
https://en.wikipedia.org/wiki/2008%20Puerto%20Rico%20Islanders%20season | The 2008 season was the Puerto Rico Islanders' 5th season in the USL First Division. This article shows player statistics and all matches (official and friendly) that the club played during the 2008 season. It also includes matches played in 2008 for the 2008–09 CONCACAF Champions League.
Club
Management
Kit
|
|
|
Competitions
Overall
USL 1
Results summary
Results by round (regular season)
Result by brackets (Playoffs)
Teams will be re-seeded for semifinal matchups
2008
Puerto Rico Islanders
Islanders |
https://en.wikipedia.org/wiki/2009%20Cruzeiro%20Esporte%20Clube%20season |
Squad
Final Squad
Junior players with first team experience
Transfers
Out on loan
In
Out
Statistics
Top scorers
Overall
{|class="wikitable"
|-
|Games played || 72 (17 Campeonato Mineiro, 14 Copa Libertadores, 38 Campeonato Brasileiro, 3 Friendly match )
|-
|Games won || 42 (12 Campeonato Mineiro, 9 Copa Libertadores, 18 Campeonato Brasileiro, 3 Friendly match)
|-
|Games drawn || 16 (5 Campeonato Mineiro, 3 Copa Libertadores, 8 Campeonato Brasileiro, 0 Friendly match)
|-
|Games lost || 14 (0 Campeonato Mineiro, 2 Copa Libertadores, 12 Campeonato Brasileiro, 0 Friendly match)
|-
|Goals scored || 141
|-
|Goals conceded || 82
|-
|Goal difference || +59
|-
|Best result || 7-0 (H) v Democrata – Campeonato Mineiro – 2009.3.25
|-
|Worst result || 0-4 (A) v Estudiantes – Copa Libertadores – 2009.4.8
|-
|Top scorer || Wellington Paulista (26 goals)
|}
Pre-season
Copa Bimbo
Bracket
Semi-final
Final
Juan Pablo Sorín farewell match
Campeonato Mineiro
League table
Matches
Bracket
Quarter-finals
Semi-finals
Finals
Copa Libertadores
Group stage
Knockout stage
Round of 16
Quarter-finals
Semi-finals
Finals
Campeonato Brasileiro
League table
Results summary
Pld = Matches played; W = Matches won; D = Matches drawn; L = Matches lost;
Results by round
Matches
See also
Cruzeiro Esporte Clube
References
External links
official website
Cruzeiro
Cruzeiro Esporte Clube seasons |
https://en.wikipedia.org/wiki/Decision%20rule | In decision theory, a decision rule is a function which maps an observation to an appropriate action. Decision rules play an important role in the theory of statistics and economics, and are closely related to the concept of a strategy in game theory.
In order to evaluate the usefulness of a decision rule, it is necessary to have a loss function detailing the outcome of each action under different states.
Formal definition
Given an observable random variable X over the probability space , determined by a parameter θ ∈ Θ, and a set A of possible actions, a (deterministic) decision rule is a function δ : → A.
Examples of decision rules
An estimator is a decision rule used for estimating a parameter. In this case the set of actions is the parameter space, and a loss function details the cost of the discrepancy between the true value of the parameter and the estimated value. For example, in a linear model with a single scalar parameter , the domain of may extend over (all real numbers). An associated decision rule for estimating from some observed data might be, "choose the value of the parameter, say , that minimizes the sum of squared errors between the observed responses and the responses predicted from the corresponding covariates given that choice." Thus, the cost function is the sum of squared errors, and one would aim to minimize this cost. Once the cost function is defined, the parameter could be chosen, for instance, using some optimization algorithm.
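As a concrete instance of such a rule, consider the no-intercept model y ≈ bx, where the sum-of-squared-errors criterion has a closed-form minimizer b = Σxᵢyᵢ / Σxᵢ². A minimal sketch with invented data:

```python
def least_squares_rule(xs, ys):
    """Decision rule delta: data -> action (an estimate of the scalar b).

    For the no-intercept linear model y ~ b*x, minimizing the sum of
    squared errors gives the closed form b = sum(x*y) / sum(x*x).
    """
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / sxx

# Illustrative data generated from y = 2x with no noise:
b_hat = least_squares_rule([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
# b_hat is exactly 2.0 here
```

The function itself is the decision rule: it maps an observation (the data) to an action (a point in the parameter space).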
Out of sample prediction in regression and classification models.
See also
Admissible decision rule
Bayes estimator
Classification rule
Scoring rule
Decision theory |
https://en.wikipedia.org/wiki/Tracy%E2%80%93Widom%20distribution | The Tracy–Widom distribution is a probability distribution from random matrix theory introduced by . It is the distribution of the normalized largest eigenvalue of a random Hermitian matrix. The distribution is defined as a Fredholm determinant.
In practical terms, Tracy–Widom is the crossover function between the two phases of weakly versus strongly coupled components in a system.
It also appears in the distribution of the length of the longest increasing subsequence of random permutations, as large-scale statistics in the Kardar–Parisi–Zhang equation, in current fluctuations of the asymmetric simple exclusion process (ASEP) with step initial condition, and in simplified mathematical models of the behavior of the longest common subsequence problem on random inputs. See and for experimental testing (and verifying) that the interface fluctuations of a growing droplet (or substrate) are described by the TW distribution (or ) as predicted by .
The distribution is of particular interest in multivariate statistics. For a discussion of the universality of , , see . For an application of to inferring population structure from genetic data see .
In 2017 it was proved that the distribution F is not infinitely divisible.
Definition as a law of large numbers
Let denote the cumulative distribution function of the Tracy–Widom distribution with given . It can be defined as a law of large numbers, similar to the central limit theorem.
There are typically three Tracy–Widom distributions, , with . They correspond to the three Gaussian ensembles: orthogonal (), unitary (), and symplectic ().
In general, consider a Gaussian ensemble with beta value , with its diagonal entries having variance 1 and off-diagonal entries having variance , and let be the probability that an matrix sampled from the ensemble has maximal eigenvalue ; then define where denotes the largest eigenvalue of the random matrix. The shift by centers the distribution, since in the limit the eigenvalue distribution converges to the semicircular distribution with radius . The multiplication by is used because the standard deviation of the distribution scales as (first derived in ).
For example:
where the matrix is sampled from the Gaussian unitary ensemble with off-diagonal variance .
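The centering and scaling just described can be explored by direct simulation. A minimal Monte Carlo sketch using NumPy, with the GUE normalized so off-diagonal entries have variance 1 (the matrix size and sample count are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def tw2_sample(n, rng):
    """One draw of the normalized largest-eigenvalue statistic for the
    Gaussian unitary ensemble (Hermitian, off-diagonal variance 1)."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    h = (a + a.conj().T) / 2                 # GUE matrix
    lam_max = np.linalg.eigvalsh(h)[-1]      # largest eigenvalue
    # Center at the semicircle edge 2*sqrt(n), rescale by n**(1/6):
    return n ** (1 / 6) * (lam_max - 2 * np.sqrt(n))

# The histogram of these draws approaches the beta = 2 Tracy-Widom
# density (mean about -1.77) as n grows:
samples = [tw2_sample(100, rng) for _ in range(100)]
```

Note the slow n^(1/6) scale of the fluctuations: the largest eigenvalue concentrates near 2√n, and Tracy–Widom describes the edge fluctuations around that value.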
The definition of the Tracy–Widom distributions may be extended to all (Slide 56 in , ).
One may naturally ask for the limit distribution of second-largest eigenvalues, third-largest eigenvalues, etc. They are known.
Functional forms
Fredholm determinant
can be given as the Fredholm determinant
of the kernel ("Airy kernel") on square integrable functions on the half line , given in terms of Airy functions Ai by
Painlevé transcendents
can also be given as an integral
in terms of a solution of a Painlevé equation of type II
with boundary condition . This function is a Painlevé transcendent.
Other distributions are also expressible in terms of the same :
Functional equations
Define then |
https://en.wikipedia.org/wiki/Hyperprior | In Bayesian statistics, a hyperprior is a prior distribution on a hyperparameter, that is, on a parameter of a prior distribution.
As with the term hyperparameter, the use of hyper is to distinguish it from a prior distribution of a parameter of the model for the underlying system. They arise particularly in the use of hierarchical models.
For example, if one is using a beta distribution to model the distribution of the parameter p of a Bernoulli distribution, then:
The Bernoulli distribution (with parameter p) is the model of the underlying system;
p is a parameter of the underlying system (Bernoulli distribution);
The beta distribution (with parameters α and β) is the prior distribution of p;
α and β are parameters of the prior distribution (beta distribution), hence hyperparameters;
A prior distribution of α and β is thus a hyperprior.
In principle, one can iterate the above: if the hyperprior itself has hyperparameters, these may be called hyperhyperparameters, and so forth.
One can analogously call the posterior distribution on the hyperparameter the hyperposterior, and, if these are in the same family, call them conjugate hyperdistributions or a conjugate hyperprior. However, this rapidly becomes very abstract and removed from the original problem.
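Read as a generative model, the hierarchy above is just nested sampling. A minimal sketch using only Python's standard library; the Exponential(1) hyperprior on α and β is an arbitrary illustrative choice, not one prescribed by the article:

```python
import random

random.seed(1)

def sample_hierarchy():
    """Draw one observation through the full hierarchy:
    hyperprior -> (alpha, beta) -> prior Beta(alpha, beta) -> p
    -> Bernoulli(p) -> x."""
    alpha = random.expovariate(1.0)        # hyperprior draw (illustrative)
    beta = random.expovariate(1.0)         # hyperprior draw (illustrative)
    p = random.betavariate(alpha, beta)    # prior draw given hyperparameters
    x = 1 if random.random() < p else 0    # Bernoulli observation
    return alpha, beta, p, x

draws = [sample_hierarchy() for _ in range(1000)]
```

Marginalizing out α and β (i.e., looking only at the p values across draws) gives the mixture prior discussed in the "Mixture distribution" section below: a weighted average of Beta distributions, weighted by the hyperprior.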
Purpose
Hyperpriors, like conjugate priors, are a computational convenience – they do not change the process of Bayesian inference, but simply allow one to more easily describe and compute with the prior.
Uncertainty
Firstly, use of a hyperprior allows one to express uncertainty in a hyperparameter: taking a fixed prior is an assumption, varying a hyperparameter of the prior allows one to do sensitivity analysis on this assumption, and taking a distribution on this hyperparameter allows one to express uncertainty in this assumption: "assume that the prior is of this form (this parametric family), but that we are uncertain as to precisely what the values of the parameters should be".
Mixture distribution
More abstractly, if one uses a hyperprior, then the prior distribution (on the parameter of the underlying model) itself is a mixture density: it is the weighted average of the various prior distributions (over different hyperparameters), with the hyperprior being the weighting. This adds additional possible distributions (beyond the parametric family one is using), because parametric families of distributions are generally not convex sets – as a mixture density is a convex combination of distributions, it will in general lie outside the family.
For instance, the mixture of two normal distributions is not a normal distribution: if one takes two normals with different means (sufficiently distant) and mixes 50% of each, one obtains a bimodal distribution, which is thus not normal. In fact, the convex hull of the normal distributions is dense in the set of all distributions, so in some cases one can arbitrarily closely approximate a given prior by using a family with a suitable hyperprior.
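The bimodality claim can be verified numerically. A small sketch, with component means at ±3 chosen arbitrarily as "sufficiently distant":

```python
import math

def normal_pdf(x, mu, sigma=1.0):
    """Density of N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x, mu1=-3.0, mu2=3.0, w=0.5):
    """50/50 mixture of two unit-variance normals: a convex
    combination of densities, as in the text."""
    return w * normal_pdf(x, mu1) + (1 - w) * normal_pdf(x, mu2)

# The density peaks near each component mean and dips between them,
# so the mixture is bimodal and hence not normal:
assert mixture_pdf(0.0) < mixture_pdf(-3.0)
assert mixture_pdf(0.0) < mixture_pdf(3.0)
```

No single normal density can have two local maxima, which is the quickest way to see that the mixture falls outside the parametric family.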
What makes this approach p |
https://en.wikipedia.org/wiki/Statistical%20regions%20of%20Serbia | The statistical regions of Serbia () are regulated by the Law of the Regional Development and the Law of the Official Statistics. Serbia is divided into five statistical regions which are chiefly used for statistical purposes, such as census data. The regions encompass one or multiple districts each.
Introduction
In 2009, the National Assembly of Serbia adopted the Law on Equal Territorial Development that formed seven statistical regions in the territory of Serbia. The Law was amended on 7 April 2010, reducing the number of regions to five. The previously formed region of Eastern Serbia was merged with Southern Serbia, and the region of Šumadija was merged with Western Serbia.
The five statistical regions of Serbia are:
Vojvodina
Belgrade
Šumadija and Western Serbia
Southern and Eastern Serbia
Kosovo and Metohija
Statistical regional classification
In a bylaw from 2010, the Government of Serbia specified a nomenclature of statistic territorial units in the country. The act was an attempt to synchronize the existing statistical division of the country with the Nomenclature of Territorial Units for Statistics of the European Union. According to the act, an additional top level of grouping was introduced, with the territory of Serbia divided into two NUTS 1 regions:
Serbia-North, comprising
Vojvodina
Belgrade, and
Serbia-South, comprising
Šumadija and Western Serbia
Southern and Eastern Serbia
Kosovo and Metohija
The five statistical regions would therefore become NUTS level 2 regions, while the Districts of Serbia would correspond with NUTS level 3. However, the classification has remained largely in internal, and limited, use within Serbia, and it has not been sanctioned by the European Union. According to a 2011 whitepaper by ESPON, which discusses the possibility of including Albania, Serbia, Montenegro and Bosnia and Herzegovina in the NUTS nomenclature, "the statistical NUTS1 and NUTS2 regions created by the government in order to meet the NUTS criteria as well as the requirements of the EU regional policy, do not have actually a considerable administrative power; also, they are not self-governed entities. The political criterion prevailed for their creation."
Officially, NUTS regions only exist for EU Member States. For EFTA, EU candidate and potential candidate countries, the European Commission agrees with the countries concerned on a nomenclature referred to as "Statistical regions".
See also
Administrative divisions of Serbia
List of Serbian regions by Human Development Index
References
External links
Administrative divisions of Serbia |
https://en.wikipedia.org/wiki/Andr%C3%A1s%20Hajnal | András Hajnal (May 13, 1931 – July 30, 2016) was a professor of mathematics at Rutgers University and a member of the Hungarian Academy of Sciences known for his work in set theory and combinatorics.
Biography
Hajnal was born on 13 May 1931, in Budapest, Hungary.
He received his university diploma (M.Sc. degree) in 1953 from the Eötvös Loránd University, his Candidate of Mathematical Science degree (roughly equivalent to Ph.D.) in 1957, under the supervision of László Kalmár, and his Doctor of Mathematical Science degree in 1962. From 1956 to 1995 he was a faculty member at the Eötvös Loránd University; in 1994, he moved to Rutgers University to become the director of DIMACS, and he remained there as a professor until his retirement in 2004. He became a member of the Hungarian Academy of Sciences in 1982, and directed its mathematical institute from 1982 to 1992. He was general secretary of the János Bolyai Mathematical Society from 1980 to 1990, and president of the society from 1990 to 1994. Starting in 1981, he was an advisory editor of the journal Combinatorica. Hajnal was also one of the honorary presidents of the European Set Theory Society.
Hajnal was an avid chess player.
Hajnal was the father of Peter Hajnal, the co-dean of the European College of Liberal Arts.
Research and publications
Hajnal was the author of over 150 publications. Among the many co-authors of Paul Erdős, he had the second largest number of joint papers, 56.
With Peter Hamburger, he wrote a textbook, Set Theory (Cambridge University Press, 1999, ). Some of his more well-cited research papers include
A paper on circuit complexity with Maass, Pudlák, Szegedy, and György Turán, showing exponential lower bounds on the size of bounded-depth circuits with weighted majority gates that solve the problem of computing the parity of inner products.
The Hajnal–Szemerédi theorem on equitable coloring, proving a 1964 conjecture of Erdős: let Δ denote the maximum degree of a vertex in a finite graph G. Then G can be colored with Δ + 1 colors in such a way that the sizes of the color classes differ by at most one. Several authors have subsequently published simplifications and generalizations of this result.
A paper with Erdős and J. W. Moon on graphs that avoid having any k-cliques. Turán's theorem characterizes the graphs of this type with the maximum number of edges; Erdős, Hajnal and Moon find a similar characterization of the smallest maximal k-clique-free graphs, showing that they take the form of certain split graphs. This paper also proves a conjecture of Erdős and Gallai on the number of edges in a critical graph for domination.
A paper with Erdős on graph coloring problems for infinite graphs and hypergraphs. This paper extends greedy coloring methods from finite to infinite graphs: if the vertices of a graph can be well-ordered so that each vertex has few earlier neighbors, it has low chromatic number. When every finite subgraph has an ordering of this type in which the |
https://en.wikipedia.org/wiki/2005%E2%80%9306%20FC%20O%C8%9Belul%20Gala%C8%9Bi%20season |
Competitions
Friendlies
Divizia A
League table
Results by round
Results summary
Matches
Cupa României
Players
Squad statistics
Transfers
In
Out
References
ASC Oțelul Galați seasons
Otelul Galati |
https://en.wikipedia.org/wiki/Cioffi | Cioffi may refer to :
Claudio Cioffi (born 7 May 1951), also Claudio Cioffi-Revilla, is an Italian-American scientist and inventor, best known for his work in applied mathematics and computational social science.
Charles Cioffi (born 31 October 1935), also credited as Charles M. Cioffi, is an American television actor.
Felippe Cioffi (fl. 1828–1846) was trombone soloist in the 19th century.
Frank Cioffi, an American philosopher
Gabriele Cioffi (born 30 September 1975 in Florence) is an Italian footballer who plays for Serie B team Ascoli in the role of a defender.
John Cioffi (born 7 November 1956), also credited as John M. Cioffi, is an American electrical engineer and prolific inventor best known for his work in DSL technology.
Landulf Cioffi was a knight crusader and lord of Salerno, Italy, after the fall of Acre.
Ralph Cioffi was an executive at the U.S. investment bank Bear Stearns and managed two Bear-branded hedge funds.
Sandy Cioffi is a Seattle-based film and video artist.
Mauro Cioffi, Italian footballer |
https://en.wikipedia.org/wiki/2007%E2%80%9308%20Saudi%20Premier%20League | The 2007–08 season of the Saudi Premier League was the 32nd season of top-tier football in Saudi Arabia.
Teams and venues
League standings
Season statistics
Top scorers
References
2007–08 in Asian association football leagues
2007–08 in Saudi Arabian football
Saudi Premier League seasons |
https://en.wikipedia.org/wiki/Egon%20Milder | Egon Milder (22 April 1942 – 18 October 1975) was a German footballer who played as a defender or midfielder. He spent four seasons in the Bundesliga with Borussia Mönchengladbach.
Career statistics
1 1964–65 includes the Regionalliga promotion playoffs.
References
External links
1942 births
1975 deaths
German men's footballers
Men's association football defenders
Men's association football midfielders
Bundesliga players
VfL Bochum players
Borussia Mönchengladbach players
FC Luzern players
SC Kriens players |
https://en.wikipedia.org/wiki/American%20Book%20Company%20%281996%29 | American Book Company is a textbook and software publishing company. Its main focus is on standardized test preparation materials. It offers books covering language arts, mathematics, science, and social studies tests. The company also produces transparencies, basic review books, and ACT and SAT preparation books.
It currently publishes nearly 200 materials, most of them specifically developed for Alabama, Arizona, California, Florida, Georgia, Indiana, Louisiana, Maine, Maryland, Minnesota, Mississippi, Nevada, New Jersey, North Carolina, Ohio, South Carolina, Tennessee, and Texas.
Company history
American Book Company was founded in 1996 by Dr. Frank Pintozzi, a professor of reading and English as a Second Language at Kennesaw State University, and Colleen Pintozzi, a math teacher and supervisor of General Educational Development programs.
They began writing textbooks for nearby South Carolina. After finding success, they created more books for states across the country.
Book Features
American Book Company books have frequent references to standards provided by state departments of education. Each book is based around multiple practice tests modeled after the relevant state test.
Many of the company's books include links to online modules curated by the National Science Teachers Association's sciLINKS program.
Validation Study
In 2006, publishing consultant Lois Eskin Associates studied the effects of using American Book Company books to prepare for standardized tests. The goal of the study was to find if students were more likely to pass standardized tests after studying with American Book Company books. The study included schools in Alabama, California, Florida, Georgia, Indiana, Louisiana, South Carolina, and Texas.
Results were positive, with every school showing success attributed to American Book Company. In particular, Tranquillity High School increased its pass rate on the CAHSEE by 135%.
Books Adopted by State Boards of Education
As of 2009, the following fourteen American Book Company books have been officially adopted by the Alabama and Georgia departments of education:
Mastering the AL Direct Assessment in Writing: Grade 10
Mastering the GA CRCT 6th in Language Arts
Mastering the GA CRCT 6th in Reading
Mastering the GA CRCT 7th in Language Arts
Mastering the GA CRCT 7th in Reading
Passing the 7th Grade ARMT in Reading
Passing the 8th Grade ARMT in Reading
Passing the GA CRCT 3rd in Reading
Passing the GA CRCT 5th in Reading
Passing the GA CRCT 8th in Language Arts
Passing the GA CRCT 8th in Reading
Passing the GA Grade 8 Writing Assessment
Passing the New AL High School Graduation Exam in Language
Passing the New AL High School Graduation Exam in Reading
American Book Company books are also listed as a resource by the New Jersey Department of Education and schools in many states, including Minnesota, California, and Alabama.
Other Ventures
American Book Company offers online practice testin |
https://en.wikipedia.org/wiki/WBIC | WBIC may refer to:
WBIC, Widely applicable Bayesian information criterion in statistics
WBIC-LP, a low-power radio station (97.3 FM) licensed to serve Wilson, North Carolina, United States
WYZI, a radio station (810 AM) licensed to serve Royston, Georgia, United States, which held the call sign WBIC from 1990 to 2009
WBWD (AM), a radio station (540 AM) licensed to serve Islip, New York, United States, which held the call sign WBIC from 1959 to 1967
Wolfson Brain Imaging Centre |
https://en.wikipedia.org/wiki/Equatorial%20Rossby%20wave | Equatorial Rossby waves, often called planetary waves, are very long, low-frequency water waves found near the equator and are derived using the equatorial beta plane approximation.
Mathematics
Using the equatorial beta plane approximation, f = βy, where β = 2Ω/a is the variation of the Coriolis parameter with latitude (with Ω the Earth's angular rotation rate and a its radius). With this approximation, the primitive equations become the following:
the continuity equation (accounting for the effects of horizontal convergence and divergence and written with geopotential height):
the U-momentum equation (zonal component):
the V-momentum equation (meridional component):
In order to fully linearize the primitive equations, one must assume the following solution:
Upon linearization, the primitive equations yield the following dispersion relation:
, where c is the phase speed of an equatorial Kelvin wave (c = √(gH), with H the equivalent depth). Their frequencies are much lower than those of gravity waves and represent motion that occurs as a result of the undisturbed potential vorticity varying (not constant) with latitude on the curved surface of the earth. For very long waves (as the zonal wavenumber approaches zero), the non-dispersive phase speed is approximately:
ω/k ≈ −c/(2n + 1), where n = 1, 2, 3, … is the meridional mode number, which indicates that these long equatorial Rossby waves move in the opposite direction (westward) to Kelvin waves (which move eastward), with speeds reduced by factors of 3, 5, 7, etc. To illustrate, suppose c = 2.8 m/s for the first baroclinic mode in the Pacific; then the n = 1 Rossby wave speed would correspond to ~0.9 m/s, requiring a 6-month time frame to cross the Pacific basin from east to west. For very short waves (as the zonal wavenumber increases), the group velocity (energy packet) is eastward and opposite to the phase speed, both of which are given by the following relations:
Frequency relation:
Group velocity:
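The long-wave limit can be reproduced numerically: the text's "speeds reduced by factors of 3, 5, 7" correspond to c/(2n + 1) for meridional modes n = 1, 2, 3. In this sketch, c = 2.8 m/s is the value from the text, while the 15,000 km Pacific basin width is an assumed round figure, not a value from the article:

```python
# Kelvin speed for the first baroclinic mode (from the text):
c_kelvin = 2.8  # m/s

# Long equatorial Rossby phase speeds, westward, for modes n = 1, 2, 3:
rossby_speeds = [c_kelvin / (2 * n + 1) for n in (1, 2, 3)]
# about 0.93, 0.56, 0.40 m/s

# Crossing time for the n = 1 mode; basin width is an assumed
# rough figure for the equatorial Pacific (not from the article):
basin_width_m = 15_000e3
crossing_days = basin_width_m / rossby_speeds[0] / 86_400
# roughly half a year, matching the "6-month time frame" in the text
```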
Thus, the phase and group speeds are equal in magnitude but opposite in direction (the phase speed is westward and the group velocity eastward); note that it is often useful to use potential vorticity as a tracer for these planetary waves, due to its invertibility (especially in the quasi-geostrophic framework). Therefore, the physical mechanism responsible for the propagation of these equatorial Rossby waves is none other than the conservation of potential vorticity:
Thus, as a fluid parcel moves equatorward (βy approaches zero), the relative vorticity must increase and become more cyclonic in nature. Conversely, if the same fluid parcel moves poleward, (βy becomes larger), the relative vorticity must decrease and become more anticyclonic in nature.
As a side note, these equatorial Rossby waves can also be vertically-propagating waves when the Brunt–Vaisala frequency (buoyancy frequency) is held constant, ultimately resulting in solutions proportional to , where m is the vertical wavenumber and k is the zonal wavenumber.
Equatorial Rossby waves can also adjust to equilibrium under gravity in the tropics; because the planetary waves have frequen |
https://en.wikipedia.org/wiki/Adjusted%20mutual%20information | In probability theory and information theory, adjusted mutual information (AMI), a variation of mutual information, may be used for comparing clusterings. It corrects for the effect of agreement due solely to chance between clusterings, similar to the way the adjusted Rand index corrects the Rand index. It is closely related to variation of information: when a similar adjustment is made to the VI index, it becomes equivalent to the AMI. The adjusted measure, however, is no longer metrical.
Mutual information of two partitions
Given a set S of N elements , consider two partitions of S, namely with R clusters, and with C clusters. It is presumed here that the partitions are so-called hard clusterings; the clusters are pairwise disjoint:
for all , and complete:
The overlap between U and V can be summarized in the form of an R×C contingency table , where denotes the number of objects common to clusters and . That is,
Suppose an object is picked at random from S; the probability that the object falls into cluster is:
The entropy associated with the partitioning U is:
H(U) is non-negative and takes the value 0 only when there is no uncertainty determining an object's cluster membership, i.e., when there is only one cluster. Similarly, the entropy of the clustering V can be calculated as:
where . The mutual information (MI) between two partitions:
where denotes the probability that a point belongs to both the cluster in U and cluster in V:
MI is a non-negative quantity upper bounded by the entropies H(U) and H(V). It quantifies the information shared by the two clusterings and thus can be employed as a clustering similarity measure.
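These quantities can be computed directly from two label vectors. A minimal sketch using only the standard library, with entropies and MI in nats:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in nats) of a partition given as a label list."""
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def mutual_information(u, v):
    """MI between two hard partitions of the same set, computed from
    the contingency counts n_ij as in the text."""
    n = len(u)
    joint = Counter(zip(u, v))   # n_ij: co-occurrence counts
    pu, pv = Counter(u), Counter(v)
    mi = 0.0
    for (i, j), nij in joint.items():
        p_ij = nij / n
        mi += p_ij * math.log(p_ij / ((pu[i] / n) * (pv[j] / n)))
    return mi

u = [0, 0, 1, 1, 2, 2]
v = [0, 0, 0, 1, 1, 1]
# mutual_information(u, v) is non-negative and bounded above by
# min(entropy(u), entropy(v)), as stated in the text.
```

When the two partitions are identical, the MI equals the entropy of either one, its maximum possible value.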
Adjustment for chance
Like the Rand index, the baseline value of mutual information between two random clusterings does not take on a constant value, and tends to be larger when the two partitions have a larger number of clusters (with a fixed number of set elements N).
By adopting a hypergeometric model of randomness, it can be shown that the expected mutual information between two random clusterings is:
where
denotes . The variables and are partial sums of the contingency table; that is,
and
The adjusted measure for the mutual information may then be defined to be:
.
The AMI takes a value of 1 when the two partitions are identical and 0 when the MI between two partitions equals the value expected due to chance alone.
References
External links
Matlab code for computing the adjusted mutual information
R code for fast and parallel calculation of adjusted mutual information
Python code for computing the adjusted mutual information
Information theory
Clustering criteria |
https://en.wikipedia.org/wiki/List%20of%20Newport%20County%20A.F.C.%20records%20and%20statistics | This article details records and statistics of Newport County Football Club.
Honours
League
Football League Third Division South champions 1939.
Conference National play-off winners 2013.
Conference South champions 2010.
Southern League Midland Division champions 1995; runners-up 1999.
Hellenic League champions 1990.
Welsh Football League winners 1928, 1937, 1955, 1975, 1980 (Reserve team).
League Performance
Tier 2: 1939–47 (Second Division, 1 full season)
Tier 3: 1920–21, 1958–62, 1980–87 (Third Division, 12 seasons); 1921–31, 1932–39, 1947–58 (Third Division South, 28 seasons)
Tier 4: 1962–80, 1987–88 (Fourth Division, 19 seasons); 2013– (League Two, 10 seasons)
Tier 5: 2010–13 (Conference, 3 full seasons)
Cups
Welsh Cup winners 1980; runners-up 1963, 1987.
FA Trophy runners-up 2012.
FAW Premier Cup winners 2008; runners-up 2003, 2007.
Southern League Merit Cup joint holders 1995, 1999.
Hellenic League Cup winners 1990.
Gloucestershire Senior Challenge Cup winners 1994.
Herefordshire Senior Cup winners 2000.
Welsh Football League Cup winners 1937, 1953, 1958, 1977, 1978; runners-up 1948, 1956, 1973, 1980 (Reserve team)
Royal Gwent Cup winners 1923, 1925.
Monmouthshire/Gwent Senior Cup winners 1921, 1922, 1923, 1924, 1926, 1928, 1932, 1936, 1954, 1958, 1959, 1965, 1968, 1969, 1970, 1972, 1973, 1974, 1997, 1998, 1999, 2000, 2001, 2002, 2004, 2005, 2011, 2012
Club records
Highest league finish: Joint 9th in Football League Second Division 1939–40 (abandoned season), 22nd in Football League Second Division 1946–47.
Best FA Cup run: 5th round (last 16)
1948–49; Defeat after extra time vs. Portsmouth, 12 February 1949 (in front of a then-record attendance of 48,581 at Fratton Park).
2018–19; Defeat by Manchester City, the eventual FA Cup winners that season, 16 February 2019.
Best League Cup run: 4th round (last 16)
2020–21; Defeat by penalties after drawing 1–1 in normal time against Newcastle United, 30 September 2020.
Best EFL Trophy run: Semi-final (last 4)
1984–85; Defeat by Brentford, 17 May 1985.
2019–20; Defeat by penalties after drawing 0–0 in normal time against Salford City, 19 February 2020.
Match records
Newport County scores are shown first in every match
Firsts
First Football League match: 0–1 vs. Reading, 28 August 1920 (Football League Third Division).
First FA Cup match: 6–1 vs. Mond Nickel Works, 1st Qualifying round, 27 Sep 1913.
First League Cup match: 2–2 vs. Southampton, 10 October 1960.
First Welsh Cup match: 1–0 vs. Mardy, Extra preliminary round, 1912.
First European match: 4–0 vs. Crusaders 16 September 1980 (European Cup Winners' Cup).
Record results
Record League victory: 10–0 vs. Merthyr Town, 10 April 1930 (Football League Third Division South).
Record FA Cup victory: 7–0 vs. Woking, 24 November 1928 (FA Cup round 1).
Record League Cup victory: 6–0 (8–1 aggregate) vs. Exeter City, 14 September 1982 (League Cup round 1, 2nd leg).
Record European Cup Winners' Cup victory: 6–0 vs. Haugar (Norway), 4 November |
https://en.wikipedia.org/wiki/Geoffrey%20K.%20Martin | Geoffrey K. Martin is a mathematician currently working in the field of mathematical physics. Martin is an associate professor and chair of the mathematics department at the University of Toledo. His fields of study include differential geometry, relativity, and the foundations of physics. Martin earned his Ph.D. at Stony Brook University in 1983. He is the son of horticulturists Joy Lee Martin and Ernest Martin, who owned Logee's Greenhouses.
External links
University of Toledo, Department of Mathematics, Faculty Directory
University of Toledo, Department of Mathematics, Faculty Research Interests
Year of birth missing (living people)
Living people
20th-century American mathematicians
21st-century American mathematicians
Differential geometers
University of Toledo faculty
Stony Brook University alumni
Mathematicians from Ohio |
https://en.wikipedia.org/wiki/Equitable%20coloring | In graph theory, an area of mathematics, an equitable coloring is an assignment of colors to the vertices of an undirected graph, in such a way that
No two adjacent vertices have the same color, and
The numbers of vertices in any two color classes differ by at most one.
That is, the partition of vertices among the different colors is as uniform as possible. For instance, giving each vertex a distinct color would be equitable, but would typically use many more colors than are necessary in an optimal equitable coloring. An equivalent way of defining an equitable coloring is that it is an embedding of the given graph as a subgraph of a Turán graph with the same set of vertices. There are two kinds of chromatic number associated with equitable coloring. The equitable chromatic number of a graph G is the smallest number k such that G has an equitable coloring with k colors. But G might not have equitable colorings for some larger numbers of colors; the equitable chromatic threshold of G is the smallest k such that G has equitable colorings for any number of colors greater than or equal to k.
The Hajnal–Szemerédi theorem, posed as a conjecture by Paul Erdős and proven by András Hajnal and Endre Szemerédi, states that any graph with maximum degree Δ has an equitable coloring with Δ + 1 colors. Several related conjectures remain open. Polynomial time algorithms are also known for finding a coloring matching this bound, and for finding optimal colorings of special classes of graphs, but the more general problem of deciding whether an arbitrary graph has an equitable coloring with a given number of colors is NP-complete.
Examples
The star K1,5 shown in the illustration is a complete bipartite graph, and therefore may be colored with two colors. However, the resulting coloring has one vertex in one color class and five in another, and is therefore not equitable. The smallest number of colors in an equitable coloring of this graph is four, as shown in the illustration: the central vertex must be the only vertex in its color class, so the other five vertices must be split among at least three color classes in order to ensure that the other color classes all have at most two vertices. More generally, any star K1,n needs 1 + ⌈n/2⌉ colors in any equitable coloring; thus, the chromatic number of a graph may differ from its equitable coloring number by a factor as large as n/4. Because K1,5 has maximum degree five, the number of colors guaranteed for it by the Hajnal–Szemerédi theorem is six, achieved by giving each vertex a distinct color.
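These small cases can be verified by brute force; a minimal sketch (the graph encoding and function names are ours, not from any library):

```python
from itertools import product

def is_equitable(edges, coloring, k):
    """Proper coloring whose k color-class sizes differ by at most one."""
    if any(coloring[u] == coloring[v] for u, v in edges):
        return False
    sizes = [coloring.count(c) for c in range(k)]
    return max(sizes) - min(sizes) <= 1

def has_equitable_coloring(edges, n, k):
    """Brute-force search over all k^n colorings of n vertices."""
    return any(is_equitable(edges, c, k) for c in product(range(k), repeat=n))

# Star K_{1,5}: vertex 0 is the center, vertices 1..5 are the leaves.
star = [(0, i) for i in range(1, 6)]
print([k for k in range(1, 7) if has_equitable_coloring(star, 6, k)])  # [4, 5, 6]
```

The output shows that the smallest equitable coloring of K1,5 uses four colors, and that equitable colorings also exist with five and six colors (six being the Hajnal–Szemerédi guarantee).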
Another interesting phenomenon is exhibited by a different complete bipartite graph, K2n + 1,2n + 1. This graph has an equitable 2-coloring, given by its bipartition. However, it does not have an equitable (2n + 1)-coloring: any equitable partition of the vertices into that many color classes would have to have exactly two vertices per class, but the two sides of the bipartition cannot each be partitioned into pairs because they have an odd number of vertices. |
https://en.wikipedia.org/wiki/Mean%20percentage%20error | In statistics, the mean percentage error (MPE) is the computed average of percentage errors by which forecasts of a model differ from actual values of the quantity being forecast.
The formula for the mean percentage error is:
MPE = (100% / n) Σ_{t=1}^{n} (a_t − f_t) / a_t
where a_t is the actual value of the quantity being forecast, f_t is the forecast, and n is the number of different times for which the variable is forecast.
Because actual rather than absolute values of the forecast errors are used in the formula, positive and negative forecast errors can offset each other; as a result, the formula can be used as a measure of the bias in the forecasts.
A disadvantage of this measure is that it is undefined whenever a single actual value is zero.
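The formula translates directly into code; a minimal sketch (the function name is ours):

```python
def mean_percentage_error(actual, forecast):
    """MPE = (100 / n) * sum of (a_t - f_t) / a_t over all n forecasts."""
    if any(a == 0 for a in actual):
        raise ValueError("MPE is undefined when any actual value is zero")
    n = len(actual)
    return (100.0 / n) * sum((a - f) / a for a, f in zip(actual, forecast))

# A +10% error and a -10% error offset each other, so the measured bias is zero:
print(mean_percentage_error([100, 100], [90, 110]))  # 0.0
```

Note how the two errors cancel, illustrating that MPE measures bias rather than accuracy.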
See also
Percentage error
Mean absolute percentage error
Mean squared error
Mean squared prediction error
Minimum mean-square error
Squared deviations
Peak signal-to-noise ratio
Root mean square deviation
Errors and residuals in statistics
References
Summary statistics |
https://en.wikipedia.org/wiki/Valery%20Goppa | Valery Denisovich Goppa (; born 1939) is a Soviet and Russian mathematician.
He discovered a relation between algebraic geometry and codes, utilizing the Riemann-Roch theorem. Today these codes are called algebraic geometry codes. In 1981 he presented his discovery at the algebra seminar of the Moscow State University.
He also constructed other classes of codes in his career, and in 1972 he won the best paper award of the IEEE Information Theory Society for his paper "A new class of linear correcting codes". It is this class of codes that bears the name "Goppa code".
Selected publications
References
Russian mathematicians
Living people
Year of birth missing (living people)
Russian scientists |
https://en.wikipedia.org/wiki/Pantelides%20algorithm | In mathematics, the Pantelides algorithm is a systematic method for reducing high-index systems of differential-algebraic equations to lower index. This is accomplished by selectively adding differentiated forms of the equations already present in the system. The algorithm can fail in some instances.
Pantelides algorithm is implemented in several significant equation-based simulation programs such as gPROMS, Modelica and EMSO.
References
Numerical differential equations |
https://en.wikipedia.org/wiki/Chike%20Obi | Chike Obi (April 17, 1921 – March 13, 2008) was a Nigerian politician, mathematician and professor.
The African Mathematics Union suggests that he was the first Nigerian to hold a doctorate in mathematics. Obi's early research dealt mainly with the question of the existence of periodic solutions of non-linear ordinary differential equations. He successfully used the perturbation technique, and several of his publications greatly helped to stimulate research interest in this subject throughout the world and have become classics in the literature.
Obi is the author of several books and journals on mathematics and Nigerian politics.
Early life and education
Obi was educated in various parts of Nigeria before reading mathematics as an external student of the University of London. Immediately after his first degree, he won a scholarship for research study at Pembroke College, Cambridge, followed by doctoral studies at the Massachusetts Institute of Technology in Cambridge, Massachusetts, United States, becoming in 1950 the first Nigerian to receive a PhD in mathematics.
Career as mathematician
Obi returned to lecture at the premier Nigerian University of Ibadan. He was soon diverted from this by political activities. After the war, he returned to lecture in 1970 at the University of Lagos where he quickly rose to the senior academic role of a professor.
He left Lagos to return to his root in the city of Onitsha, establishing the Nanna Institute for Scientific Studies.
Obi won the Sigvard Eklund Prize for original work in differential equations from the International Centre for Theoretical Physics. He was a university teacher until his retirement as an Emeritus Professor in 1985.
In 1997, Obi claimed to be the third person to solve Fermat’s Last Theorem after Andrew Wiles and Richard Taylor in 1994.
He also claimed to have found an elementary proof to Fermat’s Last Theorem. This work was carried out at his Nanna Institute for Scientific Studies in Onitsha, Eastern Nigeria and published in Algebras, Groups and Geometries. However, a review of this proof published in Mathematical Reviews indicates that it was a false proof.
Career in politics and activism
In Ibadan, Obi began to give lectures about his political philosophy, Kemalism and how best he felt the country should be managed. He helped form the Dynamic Party of Nigeria, of which he served as its first secretary-general. Through the party, he stood in as a candidate in a parliamentary election in Ibadan in 1951 but lost.
The party later entered into alliances with the larger National Council of Nigerian and Cameroon and also the Action Group. Obi was elected as part of the Nigerian delegation that negotiated the country’s path to self-rule at two London conferences in 1957 and 1958.
After Nigeria's independence from Britain in 1960, Obi was elected a legislator in the Eastern House of Assembly in 1960, he refused to vacate his seat in the national legislature in Lagos, the Speaker of |
https://en.wikipedia.org/wiki/Griesmer%20bound | In the mathematics of coding theory, the Griesmer bound, named after James Hugo Griesmer, is a bound on the length of linear binary codes of dimension k and minimum distance d.
There is also a very similar version for non-binary codes.
Statement of the bound
For a binary linear code of length n, dimension k, and minimum distance d, the Griesmer bound is:
n ≥ Σ_{i=0}^{k−1} ⌈d / 2^i⌉.
Proof
Let N(k,d) denote the minimum length of a binary code of dimension k and distance d. Let C be such a code. We want to show that
N(k,d) ≥ Σ_{i=0}^{k−1} ⌈d / 2^i⌉.
Let G be a generator matrix of C. We can always suppose that the first row of G is of the form r = (1, ..., 1, 0, ..., 0) with weight d.
Deleting the first row of G and the first d columns leaves a matrix G′, which generates a code C′, called the residual code of C. C′ obviously has dimension k′ = k − 1 and length n′ = N(k,d) − d. C′ has a distance d′, but we don't know it. Let u ∈ C′ be such that w(u) = d′. There exists a vector v ∈ F_2^d such that the concatenation (v | u) lies in C. Then
w(v) + w(u) = w(v | u) ≥ d.
On the other hand, (v | u) + r also lies in C, since r ∈ C and C is linear:
w((v | u) + r) ≥ d.
But
w((v | u) + r) = d − w(v) + w(u),
so this becomes d − w(v) + d′ ≥ d. By summing this with w(v) + d′ ≥ d we obtain 2d′ ≥ d. As d′ is integral, we get d′ ≥ ⌈d/2⌉. This implies
N(k − 1, ⌈d/2⌉) ≤ n′ = N(k,d) − d,
so that
N(k,d) ≥ d + N(k − 1, ⌈d/2⌉).
By induction over k we will eventually get
N(k,d) ≥ Σ_{i=0}^{k−1} ⌈d / 2^i⌉.
Note that at any step the dimension decreases by 1 and the distance is halved, and we use the identity
⌈⌈a / 2^(k−1)⌉ / 2⌉ = ⌈a / 2^k⌉
for any integer a and positive integer k.
The bound for the general case
For a linear code over F_q, the Griesmer bound becomes:
n ≥ Σ_{i=0}^{k−1} ⌈d / q^i⌉.
The proof is similar to the binary case and so it is omitted.
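The bound is easy to evaluate; a minimal sketch (the function name is ours), using integer arithmetic for the ceilings:

```python
def griesmer_bound(k, d, q=2):
    """Lower bound on the length n of an [n, k, d] linear code over F_q:
    n >= sum over i = 0 .. k-1 of ceil(d / q^i)."""
    return sum(-(-d // q**i) for i in range(k))  # -(-a // b) == ceil(a / b)

# The binary Hamming code [7, 4, 3] and the simplex code [7, 3, 4]
# both meet the bound with equality:
print(griesmer_bound(4, 3), griesmer_bound(3, 4))  # 7 7
```

The integer ceiling `-(-a // b)` avoids floating-point rounding for large parameters.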
See also
Singleton bound
Hamming bound
Gilbert–Varshamov bound
Johnson bound
Plotkin bound
Elias–Bassalygo bound
References
J. H. Griesmer, "A bound for error-correcting codes," IBM Journal of Res. and Dev., vol. 4, no. 5, pp. 532-542, 1960.
Coding theory
Articles containing proofs |
https://en.wikipedia.org/wiki/2002%20Kansas%20City%20Wizards%20season |
Squad
Competitions
Major League Soccer
U.S. Open Cup
MLS Cup Playoffs
CONCACAF Champions' Cup
Squad statistics
Final Statistics
References
Kansas City Wizards
Kansas City
Sporting Kansas City seasons |
https://en.wikipedia.org/wiki/2003%20Kansas%20City%20Wizards%20season |
Squad
Competitions
Major League Soccer
U.S. Open Cup
MLS Cup Playoffs
Squad statistics
Final Statistics
References
Sporting Kansas City seasons
Kansas City Wizards
Kansas City Wizards
Kansas City Wizards |
https://en.wikipedia.org/wiki/The%20Hidden%20Game%20of%20Football | The Hidden Game of Football: A Revolutionary Approach to the Game and Its Statistics is a book on American football statistics published in 1988 and written by Bob Carroll, John Thorn, and Pete Palmer. It was the first systematic statistical approach to analyzing American football in a book.
Original publication
The original edition was published in 1988. The purpose of the book is to look at the statistics of football and how they affect the game through coaching decisions and player selection.
Updated edition
In 1998, a new version titled The Hidden Game of Football: The Next Edition was published by the authors. The new version is updated and includes more commentary on past statistics. A 2023 edition, with a foreword by Aaron Schatz, is being published by the University of Chicago Press.
Reception
The book received mixed reviews at the time of its original publication, but has been assessed more positively by retrospective reviews. Christopher Lehmann-Haupt, in a 1988 review for The New York Times, found the application of statistics to football "cumbersome." By contrast, Shane Richmond of Pigskin Books wrote that "it’s likely that the book changed the way teams themselves think about the game; it certainly changed how the smarter sportswriters and analysts looked at it." Rustin Dodd, in a retrospective article in The Athletic, described the book as "a seminal work of football analytics" in 2022.
The book inspired Aaron Schatz to found Football Outsiders, and develop the Defense-adjusted Value Over Average (DVOA) statistic.
See also
Advanced statistics in basketball, statistical analysis of basketball.
Analytics (ice hockey), statistical analysis of ice hockey.
Advanced Football Analytics, website dedicated to the statistical analysis of the NFL.
Ernie Adams, a Phillips Academy classmate of Bill Belichick whose role with the New England Patriots is compared to Bill James's role with the Boston Red Sox.
Football Outsiders, a website dedicated to the statistical analysis of football based on some of the concepts from The Hidden Game of Football.
Sabermetrics, statistical analysis of baseball.
Sean Lahman, creator of the Pro Football Prospectus series, author of The Pro Football Historical Abstract, and editor of ESPN Pro Football Encyclopedia.
Sports rating system
Win probability
References
American football strategy |
https://en.wikipedia.org/wiki/Win%20probability | Win probability is a statistical tool which suggests a sports team's chances of winning at any given point in a game, based on the performance of historical teams in the same situation. The art of estimating win probability involves choosing which pieces of context matter. Baseball win probability estimates often include whether a team is home or away, inning, number of outs, which bases are occupied, and the score difference. Because baseball proceeds batter by batter, each new batter introduces a discrete state. There are a limited number of possible states, and so baseball win probability tools usually have enough data to make an informed estimate.
American football win probability estimates often include whether a team is home or away, the down and distance, score difference, time remaining, and field position. American football has many more possible states than baseball with far fewer games, so football estimates have a greater margin of error. The first win probability analysis was done in 1971 by Robert E. Machol and former NFL quarterback Virgil Carter.
As a brief example, guessing that each team playing at home will win is based on home advantage. This guess uses a single contextual factor and involves a very large number of games. But with only one factor, the accuracy of this guess is limited to home advantage itself (about 55–70% across sports) and does not change within the game based on in-game factors.
Win probability added is the change in win probability, often how a play or team member affected the probable outcome of the game.
Current research
Current research work involves measuring the accuracy of win probability estimates, as well as quantifying the uncertainty in individual estimates. That is, if a tool estimates a 24% win probability because 24% of previous teams in that situation won their games, do future teams win at the same 24% rate? Answering this from held-out data uses testing tools like cross-validation.
While many models involve frequency analysis of past events, other models use Bayesian processes.
Some models include a measure of teams' strength coming into the game, while others assume every team is average. Including strength estimates increases the number of possible states, and therefore decreases an estimate's power while possibly increasing its accuracy.
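A frequency-based estimator of the kind described above can be sketched as follows (the state encoding and names are hypothetical, purely for illustration):

```python
from collections import defaultdict

def fit_win_probability(history):
    """Estimate P(win | state) as the fraction of historical teams in that
    exact state that went on to win. `history` is a list of (state, won)
    pairs, where `state` is any hashable context, e.g. (inning, score lead)."""
    wins, totals = defaultdict(int), defaultdict(int)
    for state, won in history:
        totals[state] += 1
        wins[state] += int(won)
    return {s: wins[s] / totals[s] for s in totals}

# Toy history: three of four teams leading by one run in the 9th inning won.
history = [(("9th", 1), True), (("9th", 1), True),
           (("9th", 1), True), (("9th", 1), False)]
wp = fit_win_probability(history)
print(wp[("9th", 1)])  # 0.75
```

Adding more contextual factors to `state` splits the same historical games over more states, which is exactly the power-versus-accuracy trade-off discussed above.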
References
External links
doi:10.7910/DVN/25502
Sports technology
Terminology used in multiple sports
Baseball statistics
Baseball strategy
1971 introductions |
https://en.wikipedia.org/wiki/Leopoldt%27s%20conjecture | In algebraic number theory, Leopoldt's conjecture, introduced by Heinrich-Wolfgang Leopoldt (1962), states that the p-adic regulator of a number field does not vanish. The p-adic regulator is an analogue of the usual regulator, defined using p-adic logarithms instead of the usual logarithms.
Leopoldt proposed a definition of a p-adic regulator Rp attached to K and a prime number p. The definition of Rp uses an appropriate determinant with entries the p-adic logarithm of a generating set of units of K (up to torsion), in the manner of the usual regulator. The conjecture, which for general K is still open , then comes out as the statement that Rp is not zero.
Formulation
Let K be a number field and for each prime P of K above some fixed rational prime p, let UP denote the local units at P and let U1,P denote the subgroup of principal units in UP. Set
E = ∏_{P | p} UP and U1 = ∏_{P | p} U1,P.
Then let E1 denote the set of global units ε that map to U1 via the diagonal embedding of the global units in E.
Since E1 is a finite-index subgroup of the global units, it is an abelian group of rank r1 + r2 − 1, where r1 is the number of real embeddings of K and r2 the number of pairs of complex embeddings. Leopoldt's conjecture states that the Zp-module rank of the closure of E1 embedded diagonally in U1 is also r1 + r2 − 1.
Leopoldt's conjecture is known in the special case where K is an abelian extension of Q or an abelian extension of an imaginary quadratic number field: Ax (1965) reduced the abelian case to a p-adic version of Baker's theorem, which was proved shortly afterwards by Brumer (1967).
Mihăilescu (2009) has announced a proof of Leopoldt's conjecture for all CM-extensions of Q.
Colmez (1988) expressed the residue of the p-adic Dedekind zeta function of a totally real field at s = 1 in terms of the p-adic regulator. As a consequence, Leopoldt's conjecture for those fields is equivalent to their p-adic Dedekind zeta functions having a simple pole at s = 1.
References
.
.
Algebraic number theory
Conjectures
Unsolved problems in number theory |
https://en.wikipedia.org/wiki/Quadratic%20integer | In number theory, quadratic integers are a generalization of the usual integers to quadratic fields. Quadratic integers are algebraic integers of degree two, that is, solutions of equations of the form
x² + bx + c = 0
with b and c (usual) integers. When algebraic integers are considered, the usual integers are often called rational integers.
Common examples of quadratic integers are the square roots of rational integers, such as √2, and the complex number i = √−1, which generates the Gaussian integers. Another common example is the non-real cubic root of unity (−1 + √−3)/2, which generates the Eisenstein integers.
Quadratic integers occur in the solutions of many Diophantine equations, such as Pell's equations, and other questions related to integral quadratic forms. The study of rings of quadratic integers is basic for many questions of algebraic number theory.
History
Medieval Indian mathematicians had already discovered a multiplication of quadratic integers of the same , which allowed them to solve some cases of Pell's equation.
The characterization of the quadratic integers, given below under Explicit representation, was first given by Richard Dedekind in 1871.
Definition
A quadratic integer is an algebraic integer of degree two. More explicitly, it is a complex number x which solves an equation of the form x² + bx + c = 0, with b and c integers. Each quadratic integer that is not an integer is not rational – namely, it's a real irrational number if b² − 4c > 0 and non-real if b² − 4c < 0 – and lies in a uniquely determined quadratic field Q(√D), the extension of Q generated by the square root of the unique square-free integer D that satisfies b² − 4c = De² for some integer e. If D is positive, the quadratic integer is real. If D < 0, it is imaginary (that is, complex and non-real).
The quadratic integers (including the ordinary integers) that belong to a quadratic field form an integral domain called the ring of integers of
Although the quadratic integers belonging to a given quadratic field form a ring, the set of all quadratic integers is not a ring because it is not closed under addition or multiplication. For example, √2 and √3 are quadratic integers, but √2 + √3 and √2·(1 + √3) are not, as their minimal polynomials have degree four.
Explicit representation
Here and in the following, the quadratic integers that are considered belong to a quadratic field Q(√D), where D is a square-free integer. This does not restrict the generality, as the equality √(a²D) = a√D (for any positive integer a) implies Q(√(a²D)) = Q(√D).
An element x of Q(√D) is a quadratic integer if and only if there are two integers a and b such that either
x = a + b√D,
or, if D − 1 is a multiple of 4,
x = (a + b√D)/2, with a and b both odd.
In other words, every quadratic integer may be written a + bω, where a and b are integers, and where ω is defined by
ω = √D if D ≡ 2, 3 (mod 4), and ω = (1 + √D)/2 if D ≡ 1 (mod 4)
(as D has been supposed square-free, the case D ≡ 0 (mod 4) is impossible, since it would imply that D is divisible by the square 4).
Norm and conjugation
A quadratic integer in Q(√D) may be written
a + b√D,
where a and b are either both integers, or, only if D ≡ 1 (mod 4), both halves of odd integers. The norm of such a quadratic integer is
N(a + b√D) = a² − Db².
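As a quick numeric illustration (the function names are ours), the norm N(a + b√D) = a² − Db² is multiplicative, N(xy) = N(x)N(y):

```python
def norm(a, b, D):
    """Norm of the quadratic integer a + b*sqrt(D)."""
    return a * a - D * b * b

def multiply(x, y, D):
    """Product (a1 + b1*sqrt(D)) * (a2 + b2*sqrt(D)), as a coefficient pair."""
    (a1, b1), (a2, b2) = x, y
    return (a1 * a2 + D * b1 * b2, a1 * b2 + a2 * b1)

# In the Gaussian integers (D = -1): N(1 + 2i) = 5, N(3 + i) = 10,
# and their product 1 + 7i has norm 50 = 5 * 10.
x, y, D = (1, 2), (3, 1), -1
print(norm(*multiply(x, y, D), D) == norm(*x, D) * norm(*y, D))  # True
```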
The norm of a quadratic integer is a |
https://en.wikipedia.org/wiki/P-constrained%20group | In mathematics, a p-constrained group is a finite group resembling the centralizer of an element of prime order p in a group of Lie type over a finite field of characteristic p. They were introduced by in order to extend some of Thompson's results about odd groups to groups with dihedral Sylow 2-subgroups.
Definition
If a group has trivial p′-core Op′(G), then it is defined to be p-constrained if the p-core Op(G) contains its centralizer, or in other words if its generalized Fitting subgroup is a p-group. More generally, if Op′(G) is non-trivial, then G is called p-constrained if G/Op′(G) is p-constrained.
All p-solvable groups are p-constrained.
See also
p-stable group
The ZJ theorem has p-constraint as one of its conditions.
References
Finite groups
Properties of groups |
https://en.wikipedia.org/wiki/Normal%20p-complement | In mathematical group theory, a normal p-complement of a finite group for a prime p is a normal subgroup of order coprime to p and index a power of p. In other words the group is a semidirect product of the normal p-complement and any Sylow p-subgroup. A group is called p-nilpotent if it has a normal p-complement.
Cayley normal 2-complement theorem
Cayley showed that if the Sylow 2-subgroup of a group G is cyclic then the group has a normal 2-complement, which shows that the Sylow 2-subgroup of a simple group of even order cannot be cyclic.
Burnside normal p-complement theorem
showed that if a Sylow p-subgroup of a group G is in the center of its normalizer then G has a normal p-complement. This implies that if p is the smallest prime dividing the order of a group G and the Sylow p-subgroup is cyclic, then G has a normal p-complement.
Frobenius normal p-complement theorem
The Frobenius normal p-complement theorem is a strengthening of the Burnside normal p-complement theorem, that states that if the normalizer of every non-trivial subgroup of a Sylow p-subgroup of G has a normal p-complement, then so does G. More precisely, the following conditions are equivalent:
G has a normal p-complement
The normalizer of every non-trivial p-subgroup has a normal p-complement
For every p-subgroup Q, the group NG(Q)/CG(Q) is a p-group.
Thompson normal p-complement theorem
The Frobenius normal p-complement theorem shows that if every normalizer of a non-trivial subgroup of a Sylow p-subgroup has a normal p-complement then so does G. For applications it is often useful to have a stronger version where instead of using all non-trivial subgroups of a Sylow p-subgroup, one uses only the non-trivial characteristic subgroups. For odd primes p Thompson found such a strengthened criterion: in fact he did not need all characteristic subgroups, but only two special ones.
showed that if p is an odd prime and the groups N(J(P)) and C(Z(P)) both have normal p-complements for a Sylow p-subgroup P of G, then G has a normal p-complement.
In particular if the normalizer of every nontrivial characteristic subgroup of P has a normal p-complement, then so does G. This consequence is sufficient for many applications.
The result fails for p = 2 as the simple group PSL2(F7) of order 168 is a counterexample.
gave a weaker version of this theorem.
Glauberman normal p-complement theorem
Thompson's normal p-complement theorem used conditions on two particular characteristic subgroups of a Sylow p-subgroup. Glauberman improved this further by showing that one only needs to use one characteristic subgroup: the center of the Thompson subgroup.
used his ZJ theorem to prove a normal p-complement theorem, that if p is an odd prime and the normalizer of Z(J(P)) has a normal p-complement, for P a Sylow p-subgroup of G, then so does G. Here Z stands for the center of a group and J for the Thompson subgroup.
The result fails for p = 2 as the simple group PSL2(F7) of orde |
https://en.wikipedia.org/wiki/Genius%20%28mathematics%20software%29 | Genius (also known as the Genius Math Tool) is a free open-source numerical computing environment and programming language, similar in some aspects to MATLAB, GNU Octave, Mathematica and Maple. Genius is aimed at mathematical experimentation rather than computationally intensive tasks. It is also useful simply as a calculator. The programming language is called GEL and aims to have a mathematically friendly syntax. The software comes with a command-line interface and a GUI, which uses the GTK+ libraries. The graphical version supports both 2D and 3D plotting. The graphical version includes a set of tutorials originally aimed at in-class demonstrations.
History
Genius was the original calculator for the GNOME project started in 1997, but was split into a separate project soon after the 0.13 release of GNOME in 1998. Because of this ancestry, it was also known as Genius Calculator or GNOME Genius. There was an attempt to merge Genius and the Dr. Geo interactive geometry software, but this merge never materialized. Version 1.0 was released in 2007 almost 10 years after the initial release.
Example GEL source code
Here is a sample definition of a function calculating the factorial recursively:
function f(x) = (
  if x <= 1 then
    1
  else
    (f(x-1)*x)
)
GEL contains primitives for writing the product iteratively, and hence we can get the following iterative version:
function f(x) = prod k=1 to x do k
See also
Comparison of numerical analysis software
Notes
Array programming languages
Free educational software
Free mathematics software
Free software programmed in C
Numerical analysis software for Linux
Numerical analysis software for macOS
Numerical programming languages
Science software that uses GTK
Unix programming tools |
https://en.wikipedia.org/wiki/List%20of%20Huddersfield%20Town%20A.F.C.%20managers | This is a list of the records of all the managers of Huddersfield Town since the club's inception in 1908.
Statistics
Information correct as of matches played up to and including 1 April 2023. Only competitive matches are counted.
Managers with honours
References
99 Years & Counting – Stats & Stories – Huddersfield Town History
Managers
Huddersfield Town |
https://en.wikipedia.org/wiki/Hadwiger%20conjecture%20%28combinatorial%20geometry%29 | In combinatorial geometry, the Hadwiger conjecture states that any convex body in n-dimensional Euclidean space can be covered by 2^n or fewer smaller bodies homothetic with the original body, and that furthermore, the upper bound of 2^n is necessary if and only if the body is a parallelepiped. There also exists an equivalent formulation in terms of the number of floodlights needed to illuminate the body.
The Hadwiger conjecture is named after Hugo Hadwiger, who included it on a list of unsolved problems in 1957; it was, however, previously studied by Levi (1955) and, independently, by Gohberg & Markus (1960). Additionally, there is a different Hadwiger conjecture concerning graph coloring, and in some sources the geometric Hadwiger conjecture is also called the Levi–Hadwiger conjecture or the Hadwiger–Levi covering problem.
The conjecture remains unsolved even in three dimensions, though the two-dimensional case was resolved by Levi (1955).
Formal statement
Formally, the Hadwiger conjecture is: If K is any bounded convex set in the n-dimensional Euclidean space R^n, then there exists a set of 2^n scalars si and a set of 2^n translation vectors vi such that all si lie in the range 0 < si < 1, and
K ⊆ ⋃_{i=1}^{2^n} (si K + vi).
Furthermore, the upper bound is necessary if and only if K is a parallelepiped, in which case all 2^n of the scalars may be chosen to be equal to 1/2.
Alternate formulation with illumination
As shown by Boltyansky, the problem is equivalent to one of illumination: how many floodlights must be placed outside of an opaque convex body in order to completely illuminate its exterior? For the purposes of this problem, a body is only considered to be illuminated if for each point of the boundary of the body, there is at least one floodlight that is separated from the body by all of the tangent planes intersecting the body on this point; thus, although the faces of a cube may be lit by only two floodlights, the planes tangent to its vertices and edges cause it to need many more lights in order for it to be fully illuminated. For any convex body, the number of floodlights needed to completely illuminate it turns out to equal the number of smaller copies of the body that are needed to cover it.
Examples
As shown in the illustration, a triangle may be covered by three smaller copies of itself, and more generally in any dimension a simplex may be covered by n + 1 copies of itself, scaled by a factor of n/(n + 1). However, covering a square by smaller squares (with sides parallel to the original) requires four smaller squares, as each one can cover only one of the larger square's four corners. In higher dimensions, covering a hypercube or more generally a parallelepiped by smaller homothetic copies of the same shape requires a separate copy for each of the vertices of the original hypercube or parallelepiped; because these shapes have 2^n vertices, 2^n smaller copies are necessary. This number is also sufficient: a cube or parallelepiped may be covered by 2^n copies, scaled by a factor of 1/2. Hadwiger's conjecture is that pa |
https://en.wikipedia.org/wiki/Why%20Beauty%20Is%20Truth | Why Beauty Is Truth: A History of Symmetry is a 2007 book by Ian Stewart.
Overview
Following the life and work of famous mathematicians from antiquity to the present, Stewart traces mathematics' developing handling of the concept of symmetry. One of the first takeaways, established in the book's preface, is that symmetry did not originate in geometry, even though that is often the first context in which the term is introduced. Rather, as the book's chapters establish, its origins lie in algebra, more specifically in group theory.
Contents
The topics covered are:
Chapter 1: The Scribes of Babylon
The earliest records of solving quadratic equations.
Chapter 2: The Household Name
Euclid's influence on geometry in general and on regular polygons in particular.
Chapter 3: The Persian Poet
Omar Khayyám's solution to the cubic equation, which makes use of conic sections.
Chapter 4: The Gambling Scholar
Niccolò Fontana Tartaglia found the first algebraic solutions to special cubic equations.
Gerolamo Cardano used algebra to solve the cubic and quartic equation.
Chapter 5: The Cunning Fox
Carl Friedrich Gauss proved that the regular 17-gon can be constructed using only compass and straightedge, and extended the field of real numbers to the complex numbers.
Chapter 6: The Frustrated Doctor and the Sickly Genius
Joseph Louis Lagrange understood that all approaches to solve algebraic equations could be understood as symmetry transformations of such equations.
Alexandre-Théophile Vandermonde used symmetric functions as an ansatz to solve general algebraic equations, which would lead to the development of Galois theory.
Paolo Ruffini developed a first (incomplete) proof that the quintic equation cannot be solved analytically.
Niels Abel formalized group theory, the indispensable tool in describing symmetries.
Chapter 7: The Luckless Revolutionary
Évariste Galois laid the foundations to what is today known as Galois theory.
Chapter 8: The Mediocre Engineer and the Transcendent Professor
Pierre Laurent Wantzel proved that it is impossible to double the cube, trisect the angle, or construct certain regular polygons using only compass and straightedge.
Ferdinand von Lindemann proved the transcendence of pi, and by implication that it is impossible to square the circle using only compass and straightedge.
Chapter 9: The Drunken Vandal
William Rowan Hamilton extended the field of complex numbers to the quaternions.
Chapter 10: The Would-Be Soldier and the Weakly Bookworm
Marius Sophus Lie formalized Lie groups and Lie algebras.
Wilhelm Killing classified all simple Lie algebras (in what Ian Stewart calls the "greatest mathematical paper of all time").
Chapter 11: The Clerk from the Patent Office
Albert Einstein developed in his theory of general relativity a symmetry of space and time.
Chapter 12: A Quantum Quintet
Max Planck, Erwin Schrödinger, Werner Heisenberg, Paul Dirac, Eugene Wigner were major contributors to the early de |
https://en.wikipedia.org/wiki/Lune%20of%20Hippocrates | In geometry, the lune of Hippocrates, named after Hippocrates of Chios, is a lune bounded by arcs of two circles, the smaller of which has as its diameter a chord spanning a right angle on the larger circle. Equivalently, it is a non-convex plane region bounded by one 180-degree circular arc and one 90-degree circular arc. It was the first curved figure to have its exact area calculated mathematically.
History
Hippocrates wanted to solve the classic problem of squaring the circle, i.e. constructing a square by means of straightedge and compass, having the same area as a given circle. He proved that the lune bounded by the arcs labeled E and F in the figure has the same area as triangle ABO. This afforded some hope of solving the circle-squaring problem, since the lune is bounded only by arcs of circles. Heath concludes that, in proving his result, Hippocrates was also the first to prove that the area of a circle is proportional to the square of its diameter.
Hippocrates' book on geometry in which this result appears, Elements, has been lost, but may have formed the model for Euclid's Elements. Hippocrates' proof was preserved through the History of Geometry compiled by Eudemus of Rhodes, which has also not survived, but which was excerpted by Simplicius of Cilicia in his commentary on Aristotle's Physics.
Not until 1882, with Ferdinand von Lindemann's proof of the transcendence of π, was squaring the circle proved to be impossible.
Proof
Hippocrates' result can be proved as follows: The center of the circle on which the arc lies is the point , which is the midpoint of the hypotenuse of the isosceles right triangle . Therefore, the diameter of the larger circle is √2 times the diameter of the smaller circle on which the arc lies. Consequently, the smaller circle has half the area of the larger circle, and therefore the quarter circle is equal in area to the semicircle . Subtracting the crescent-shaped area from the quarter circle gives the triangle, and subtracting the same crescent from the semicircle gives the lune. Since the triangle and lune are both formed by subtracting equal areas from equal areas, they are themselves equal in area.
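The area bookkeeping above can be checked numerically. The following sketch (not part of the original proof) places the right angle at the center of a unit circle, so the triangle has two legs of length 1, and verifies that the lune's area equals the triangle's:

```python
import math

R = 1.0                                # radius of the larger circle
triangle = 0.5 * R * R                 # isosceles right triangle with legs R

chord = math.sqrt(2) * R               # chord spanning a 90-degree arc
small_r = chord / 2                    # radius of the circle on that chord
semicircle = math.pi * small_r**2 / 2  # semicircle on the chord (outer arc of the lune)

quarter = math.pi * R**2 / 4           # quarter of the larger circle
crescent = quarter - triangle          # circular segment between chord and larger arc

lune = semicircle - crescent           # area of the lune of Hippocrates
assert abs(lune - triangle) < 1e-12    # lune area equals triangle area
print(lune, triangle)
```

With R = 1 both areas come out to exactly 1/2, matching the quadrature.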
Generalizations
Using a similar proof to the one above, the Arab mathematician Hasan Ibn al-Haytham (Latinized name Alhazen, c. 965 – c. 1040) showed that where two lunes are formed, on the two sides of a right triangle, whose outer boundaries are semicircles and whose inner boundaries are formed by the circumcircle of the triangle, then the areas of these two lunes added together are equal to the area of the triangle. The lunes formed in this way from a right triangle are known as the lunes of Alhazen. The quadrature of the lune of Hippocrates is the special case of this result for an isosceles right triangle.
All lunes constructable by compass and straight-edge can be specified by the two angles formed by the inner and outer arcs on their respective circles; in this notation, for instance, the lune |
https://en.wikipedia.org/wiki/Bogolyubov%20Prize%20%28NASU%29 | The Bogoliubov Prize is an award offered by the National Academy of Sciences of Ukraine for scientists with outstanding contribution to theoretical physics and applied mathematics. The award is issued in the memory of theoretical physicist and mathematician Nikolay Bogoliubov.
The award was founded in 1992.
Laureates
2004 — Anton Grigorievich Naumovets
2002 — Leonid A. Pastur, for a series of works on field theory and the theory of disordered systems
2002 — Sergiy Peletminsky, for the set of works "Field theory and the theory of disordered systems".
1998 — A. V. Pogorelov, for a series of works "Creation and support of advanced mathematical methods for solving problems in physics and mathematics"
1997 — Vasiliy S. Vladimirov
1996 — Vladimir A. Marchenko, for a series of works "Functional-algebraic methods in mathematical physics"
1993 — Oleksandr Sharkovsky, for a series of his works "The theory of scattering of quantum systems and one-dimensional dynamical systems".
1992 — Yurii Mitropolskiy, for a series of his works "Method of averaging and its applications to mathematical and theoretical physics".
See also
List of physics awards
References
Physics awards
Mathematics awards |
https://en.wikipedia.org/wiki/Projectivization | In mathematics, projectivization is a procedure which associates with a non-zero vector space V a projective space P(V), whose elements are one-dimensional subspaces of V. More generally, any subset S of V closed under scalar multiplication defines a subset of P(V) formed by the lines contained in S, which is called the projectivization of S.
Properties
Projectivization is a special case of the factorization by a group action: the projective space P(V) is the quotient of the open set V \ {0} of nonzero vectors by the action of the multiplicative group of the base field by scalar transformations. The dimension of P(V) in the sense of algebraic geometry is one less than the dimension of the vector space V.
Projectivization is functorial with respect to injective linear maps: if
f : V → W
is a linear map with trivial kernel then f defines an algebraic map of the corresponding projective spaces,
P(f) : P(V) → P(W).
In particular, the general linear group GL(V) acts on the projective space P(V) by automorphisms.
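The quotient construction can be made concrete over a finite field. The following sketch (an illustration, not drawn from the article) represents each point of P(V) for V = F₃³ by the unique scalar multiple whose first nonzero coordinate is 1, and counts the resulting points:

```python
from itertools import product

p, n = 3, 3  # work in V = F_p^n with p = 3, n = 3

def canonical(v):
    """Scale v so its first nonzero coordinate is 1 (multiply by the inverse mod p)."""
    lead = next(x for x in v if x != 0)
    inv = pow(lead, p - 2, p)          # Fermat inverse of the leading entry
    return tuple((inv * x) % p for x in v)

# Each line through 0 in V corresponds to exactly one canonical representative.
points = {canonical(v) for v in product(range(p), repeat=n) if any(v)}
assert len(points) == (p**n - 1) // (p - 1)   # = 13 points in P^2(F_3)
print(len(points))
```

The count (pⁿ − 1)/(p − 1) is exactly the number of one-dimensional subspaces of Fₚⁿ.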
Projective completion
A related procedure embeds a vector space V over a field K into the projective space P(V ⊕ K) of the same dimension. To every vector v of V, it associates the line spanned by the vector (v, 1) of V ⊕ K.
Generalization
In algebraic geometry, there is a procedure that associates a projective variety Proj S with a graded commutative algebra S (under some technical restrictions on S). If S is the algebra of polynomials on a vector space V then Proj S is P(V). This Proj construction gives rise to a contravariant functor from the category of graded commutative rings and surjective graded maps to the category of projective schemes.
Projective geometry
Linear algebra |
https://en.wikipedia.org/wiki/Bras%20d%27Or%2C%20Nova%20Scotia | Bras d'Or is a community in the Canadian province of Nova Scotia, located in the Cape Breton Regional Municipality.
Demographics
In the 2021 Census of Population conducted by Statistics Canada, Bras D'Or had a population of 96 living in 50 of its 55 total private dwellings, a change of from its 2016 population of 104. With a land area of , it had a population density of in 2021.
References
Bras d'Or on Destination Nova Scotia
Communities in the Cape Breton Regional Municipality
Designated places in Nova Scotia
General Service Areas in Nova Scotia |
https://en.wikipedia.org/wiki/Dod%20%28surname%29 | Dod is the surname of:
Albert Baldwin Dod (1805–1845), American Presbyterian theologian and professor of mathematics
Charles Dod, Irish journalist and writer, known for his reference works, including Dod's Parliamentary Companion
Daniel Dod (1788–1823), American mathematician and mechanical engineer who fabricated the engine for the first steamboat to cross the Atlantic Ocean
Donald Dungan Dod (1912–2008), American missionary and orchidologist
John Dod (c.1549–1645), non-conforming English clergyman
Lottie Dod (1871–1960), English sportswoman, youngest winner of Wimbledon Ladies' Singles Championship
Pierce Dod (1683–1754), British physician
Thaddeus Dod (1740–1793), Presbyterian preacher and educator
William Dod (1867–1954), British Olympic archer
See also
Dodd (surname)
Dodds (surname)
Dods (disambiguation), including people with the surname |
https://en.wikipedia.org/wiki/Mahler%20volume | In convex geometry, the Mahler volume of a centrally symmetric convex body is a dimensionless quantity that is associated with the body and is invariant under linear transformations. It is named after German-English mathematician Kurt Mahler. It is known that the shapes with the largest possible Mahler volume are the balls and solid ellipsoids; this is now known as the Blaschke–Santaló inequality. The still-unsolved Mahler conjecture states that the minimum possible Mahler volume is attained by a hypercube.
Definition
A convex body in Euclidean space is defined as a compact convex set with non-empty interior. If K is a centrally symmetric convex body in n-dimensional Euclidean space, the polar body K° is another centrally symmetric body in the same space, defined as the set
K° = { y : x · y ≤ 1 for all x in K }.
The Mahler volume of K is the product of the volumes of K and K°.
If T is an invertible linear transformation, then (TK)° = (T⁻¹)ᵀK°. Applying T to K multiplies its volume by |det T| and multiplies the volume of K° by 1/|det T|. As these determinants are multiplicative inverses, the overall Mahler volume of K is preserved by linear transformations.
Examples
The polar body of an n-dimensional unit sphere is itself another unit sphere. Thus, its Mahler volume is just the square of its volume,
M(B) = (π^(n/2) / Γ(n/2 + 1))²,
where Γ is the Gamma function.
By affine invariance, any ellipsoid has the same Mahler volume.
The polar body of a polyhedron or polytope is its dual polyhedron or dual polytope. In particular, the polar body of a cube or hypercube is an octahedron or cross polytope. Its Mahler volume can be calculated as
M(C) = 4ⁿ / n!.
The Mahler volume of the sphere is larger than the Mahler volume of the hypercube by a factor of approximately (π/2)ⁿ.
Extreme shapes
The Blaschke–Santaló inequality states that the shapes with maximum Mahler volume are the spheres and ellipsoids. The three-dimensional case of this result was proven by Wilhelm Blaschke; the full result was proven much later by using a technique known as Steiner symmetrization by which any centrally symmetric convex body can be replaced with a more sphere-like body without decreasing its Mahler volume.
The shapes with the minimum known Mahler volume are hypercubes, cross polytopes, and more generally the Hanner polytopes which include these two types of shapes, as well as their affine transformations. The Mahler conjecture states that the Mahler volume of these shapes is the smallest of any n-dimensional symmetric convex body; it remains unsolved in dimensions n ≥ 4. As Terry Tao writes:
prove that the Mahler volume is bounded below by times the volume of a sphere for some absolute constant , matching the scaling behavior of the hypercube volume but with a smaller constant. proves that, more concretely, one can take in this bound. A result of this type is known as a reverse Santaló inequality.
Partial results
The 2-dimensional case of the Mahler conjecture has been solved by and the 3-dimensional case by .
proved that the unit cube is a strict local minimizer for the Mahler volume in the class of or |
https://en.wikipedia.org/wiki/Ax%E2%80%93Grothendieck%20theorem | In mathematics, the Ax–Grothendieck theorem is a result about injectivity and surjectivity of polynomials that was proved independently by James Ax and Alexander Grothendieck.
The theorem is often given as this special case: If P is an injective polynomial function from an n-dimensional complex vector space to itself then P is bijective. That is, if P always maps distinct arguments to distinct values, then the values of P cover all of Cn.
The full theorem generalizes to any algebraic variety over an algebraically closed field.
Proof via finite fields
Grothendieck's proof of the theorem is based on proving the analogous theorem for finite fields and their algebraic closures. That is, for any field F that is itself finite or that is the closure of a finite field, if a polynomial P from Fn to itself is injective then it is bijective.
If F is a finite field, then Fn is finite. In this case the theorem is true for trivial reasons having nothing to do with the representation of the function as a polynomial: any injection of a finite set to itself is a bijection. When F is the algebraic closure of a finite field, the result follows from Hilbert's Nullstellensatz. The Ax–Grothendieck theorem for complex numbers can therefore be proven by showing that a counterexample over C would translate into a counterexample in some algebraic extension of a finite field.
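For a single variable over a small finite field, the trivial direction can be checked by brute force. A sketch (the map x ↦ x³ on F₅ is just an illustrative choice of an injective polynomial):

```python
p = 5                                   # the finite field F_5
P = lambda x: (x ** 3) % p              # polynomial map x -> x^3 on F_5

image = [P(x) for x in range(p)]
injective = len(set(image)) == p        # distinct inputs give distinct values
surjective = set(image) == set(range(p))

# Over a finite field the theorem is immediate: any injection of a
# finite set into itself is automatically a bijection.
assert injective and surjective
print(sorted(set(image)))
```

The force of the Ax–Grothendieck theorem is that this counting argument can be transported, via model theory or the Nullstellensatz, to the infinite field C.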
This method of proof is noteworthy in that it is an example of the idea that finitistic algebraic relations in fields of characteristic 0 translate into algebraic relations over finite fields with large characteristic. Thus, one can use the arithmetic of finite fields to prove a statement about C even though there is no homomorphism from any finite field to C. The proof thus uses model-theoretic principles such as the compactness theorem to prove an elementary statement about polynomials. The proof for the general case uses a similar method.
Other proofs
There are other proofs of the theorem. Armand Borel gave a proof using topology. The case of n = 1 and field C follows since C is algebraically closed and can also be thought of as a special case of the result that for any analytic function f on C, injectivity of f implies surjectivity of f. This is a corollary of Picard's theorem.
Related results
Another example of reducing theorems about morphisms of finite type to finite fields can be found in EGA IV: There, it is proved that a radicial S-endomorphism of a scheme X of finite type over S is bijective (10.4.11), and that if X/S is of finite presentation, and the endomorphism is a monomorphism, then it is an automorphism (17.9.6). Therefore, a scheme of finite presentation over a base S is a cohopfian object in the category of S-schemes.
The Ax–Grothendieck theorem may also be used to prove the Garden of Eden theorem, a result that like the Ax–Grothendieck theorem relates injectivity with surjectivity but in cellular automata rather than in algebraic fields. Although direct proofs of this theo |
https://en.wikipedia.org/wiki/Inoue%20surface | In complex geometry, an Inoue surface is any of several complex surfaces of Kodaira class VII. They are named after Masahisa Inoue, who gave the first non-trivial examples of Kodaira class VII surfaces in 1974.
The Inoue surfaces are not Kähler manifolds.
Inoue surfaces with b2 = 0
Inoue introduced three families of surfaces, S0, S+ and S−, which are compact quotients
of C × H (the product of a complex plane and a half-plane). These Inoue surfaces are solvmanifolds. They are obtained as quotients of C × H by a solvable discrete group which acts holomorphically on C × H.
The solvmanifold surfaces constructed by Inoue all have second Betti number b2 = 0. These surfaces are of Kodaira class VII, which means that they have b1 = 1 and Kodaira dimension −∞. It was proven by Bogomolov, Li–Yau and Teleman that any surface of class VII with b2 = 0 is a Hopf surface or an Inoue-type solvmanifold.
These surfaces have no meromorphic functions and no curves.
K. Hasegawa gives a list of all complex 2-dimensional solvmanifolds; these are complex torus, hyperelliptic surface, Kodaira surface and Inoue surfaces S0, S+ and S−.
The Inoue surfaces are constructed explicitly as follows.
Of type S0
Let φ be an integer 3 × 3 matrix, with two complex eigenvalues and a real eigenvalue c > 1, with . Then φ is invertible over the integers, and defines an action of the group of integers, on . Let This group is a lattice in solvable Lie group
acting on with the -part acting by translations and the -part as
We extend this action to by setting , where t is the parameter of the -part of and acting trivially with the factor on . This action is clearly holomorphic, and the quotient is called Inoue surface of type
The Inoue surface of type S0 is determined by the choice of an integer matrix φ, constrained as above. There is a countable number of such surfaces.
Of type S+
Let n be a positive integer, and be the group of upper triangular matrices
The quotient of by its center C is . Let φ be an automorphism of , we assume that φ acts on as a matrix with two positive real eigenvalues a, b, and ab = 1. Consider the solvable group with acting on as φ. Identifying the group of upper triangular matrices with we obtain an action of on Define an action of on with acting trivially on the -part and the acting as The same argument as for Inoue surfaces of type shows that this action is holomorphic. The quotient is called Inoue surface of type
Of type S−
Inoue surfaces of type are defined in the same way as those of type S+, but the two eigenvalues a, b of φ acting on have opposite signs and satisfy ab = −1. Since the square of such an endomorphism defines an Inoue surface of type S+, an Inoue surface of type S− has an unramified double cover of type S+.
Parabolic and hyperbolic Inoue surfaces
Parabolic and hyperbolic Inoue surfaces are Kodaira class VII surfaces defined by Iku Nakamura in 1984. They are not solvmanifolds. These surfaces have positive second Betti number. They have spherical shells, and can be |
https://en.wikipedia.org/wiki/Dyson%27s%20transform | Dyson's transform is a fundamental technique in additive number theory. It was developed by Freeman Dyson as part of his proof of Mann's theorem, is used to prove such fundamental results of additive number theory as the Cauchy–Davenport theorem, and was used by Olivier Ramaré in his work on the Goldbach conjecture that proved that every even integer is the sum of at most 6 primes. The term Dyson's transform for this technique is used by Ramaré. Halberstam and Roth call it the τ-transformation.
This formulation of the transform is from Ramaré. Let A be a sequence of natural numbers, and x be any real number. Write A(x) for the number of elements of A which lie in [1, x]. Suppose A and B are two sequences of natural numbers. We write A + B for the sumset, that is, the set of all elements a + b where a is in A and b is in B; and similarly A − B for the set of differences a − b. For any element e in A, Dyson's transform consists in forming the sequences A′ = A ∪ (B + e) and B′ = B ∩ (A − e). The transformed sequences have the properties:
A′ + B′ ⊆ A + B,
e + B′ ⊆ A′,
A′(x) + B′(x − e) = A(x) + B(x − e).
Other closely related transforms are sometimes referred to as Dyson transforms. This includes the transform defined by , , , for sets in a (not necessarily abelian) group. This transformation has the property that
,
It can be used to prove a generalisation of the Cauchy-Davenport theorem.
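The basic transform is easy to experiment with on finite sets. A sketch using A′ = A ∪ (B + e) and B′ = B ∩ (A − e); the particular sets chosen here are arbitrary illustrations:

```python
A, B, e = {1, 2, 4}, {1, 3}, 1          # e must be an element of A
assert e in A

A1 = A | {b + e for b in B}             # A' = A ∪ (B + e)
B1 = B & {a - e for a in A}             # B' = B ∩ (A − e)

sumset = lambda X, Y: {x + y for x in X for y in Y}
assert sumset(A1, B1) <= sumset(A, B)   # A' + B' ⊆ A + B
assert {e + b for b in B1} <= A1        # e + B' ⊆ A'

# counting identity: 1_{A'}(n) + 1_{B'}(n - e) = 1_A(n) + 1_B(n - e) for all n,
# so the transform never loses elements overall
for n in range(0, 10):
    assert (n in A1) + (n - e in B1) == (n in A) + (n - e in B)
print(sorted(A1), sorted(B1))
```

The counting identity is what makes the transform useful in density arguments such as the proof of Mann's theorem: elements moved out of B′ reappear in A′.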
References
Sumsets
Freeman Dyson |
https://en.wikipedia.org/wiki/Hironori%20Saruta | is a Japanese retired football player.
Saruta made 10 appearances in the J2 League with Ehime FC during 2006.
Club statistics
Honours
Bangkok Glass
Singapore Cup Winner (1): 2010
Thai FA Cup Runner-up (1): 2013
Queen's Cup Winner (1): 2010
Thai Super Cup Runner-up (1): 2009
References
External links
1982 births
Living people
Takushoku University alumni
Association football people from Hiroshima Prefecture
Japanese men's footballers
J2 League players
Japan Football League players
Ehime FC players
Kataller Toyama players
Japanese expatriate men's footballers
Japanese expatriate sportspeople in Thailand
Expatriate men's footballers in Thailand
Hironori Saruta
Men's association football forwards |
https://en.wikipedia.org/wiki/Quasireversibility | In queueing theory, a discipline within the mathematical theory of probability, quasireversibility (sometimes QR) is a property of some queues. The concept was first identified by Richard R. Muntz and further developed by Frank Kelly. Quasireversibility differs from reversibility in that a stronger condition is imposed on arrival rates and a weaker condition is applied on probability fluxes. For example, an M/M/1 queue with state-dependent arrival rates and state-dependent service times is reversible, but not quasireversible.
A network of queues, such that each individual queue when considered in isolation is quasireversible, always has a product form stationary distribution. Quasireversibility had been conjectured to be a necessary condition for a product form solution in a queueing network, but this was shown not to be the case. Chao et al. exhibited a product form network where quasireversibility was not satisfied.
Definition
A queue with stationary distribution π is quasireversible if its state at time t, x(t), is independent of
the arrival times for each class of customer subsequent to time t,
the departure times for each class of customer prior to time t
for all classes of customer.
Partial balance formulation
Quasireversibility is equivalent to a particular form of partial balance. First, define the reversed rates q′(x, x′) by
q′(x, x′) = π(x′) q(x′, x) / π(x),
then considering just customers of a particular class, the arrival and departure processes are the same Poisson process (with parameter α), so
Σ over x′ in Mx of q(x, x′) = Σ over x′ in Mx of q′(x, x′) = α,
where Mx is a set such that means the state x' represents a single arrival of the particular class of customer to state x.
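For a concrete check, take an M/M/1 queue with arrival rate λ < μ, whose stationary distribution is π(n) = (1 − ρ)ρⁿ with ρ = λ/μ. The following sketch computes the reversed rates q′(x, x′) = π(x′)q(x′, x)/π(x) and verifies that they reproduce the forward rates, consistent with Burke's theorem:

```python
lam, mu = 1.0, 2.0                        # arrival and service rates, lam < mu
rho = lam / mu
pi = lambda n: (1 - rho) * rho ** n       # stationary distribution of M/M/1

def q(x, y):                              # forward transition rates
    if y == x + 1:
        return lam                        # an arrival
    if y == x - 1 and x > 0:
        return mu                         # a service completion
    return 0.0

def q_rev(x, y):                          # reversed rates q'(x,y) = pi(y) q(y,x) / pi(x)
    return pi(y) * q(y, x) / pi(x)

# The M/M/1 queue (with constant rates) is reversible: the reversed rates
# equal the forward rates, so departures form a Poisson process of rate lam.
for n in range(10):
    assert abs(q_rev(n, n + 1) - lam) < 1e-12
    assert abs(q_rev(n + 1, n) - mu) < 1e-12
print("reversed rates match:", q_rev(0, 1), q_rev(1, 0))
```

With state-dependent rates the same computation still yields a valid reversed chain, but the arrival-rate condition of quasireversibility can fail, as noted in the introduction.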
Examples
Burke's theorem shows that an M/M/m queueing system is quasireversible.
Kelly showed that each station of a BCMP network is quasireversible when viewed in isolation.
G-queues in G-networks are quasireversible.
See also
Time reversibility
References
Queueing theory |
https://en.wikipedia.org/wiki/Multilinear%20polynomial | In algebra, a multilinear polynomial is a multivariate polynomial that is linear (meaning affine) in each of its variables separately, but not necessarily simultaneously. It is a polynomial in which no variable occurs to a power of 2 or higher; that is, each monomial is a constant times a product of distinct variables. For example, f(x,y,z) = 3xy + 2.5y − 7z is a multilinear polynomial of degree 2 (because of the monomial 3xy) whereas f(x,y,z) = x² + 4y is not. The degree of a multilinear polynomial is the maximum number of distinct variables occurring in any monomial.
Definition
Multilinear polynomials can be understood as a multilinear map (specifically, a multilinear form) applied to the vectors [1 x], [1 y], etc. The general form can be written as a tensor contraction of a coefficient tensor with these vectors; equivalently,
f(x1, …, xn) = Σ over subsets S of {1, …, n} of aS Π over i in S of xi.
For example, in two variables:
f(x, y) = a0 + a1x + a2y + a3xy.
Properties
A multilinear polynomial is linear (affine) when varying only one variable, xk:
f(x) = xk a(x1, …, xk−1, xk+1, …, xn) + b(x1, …, xk−1, xk+1, …, xn),
where a and b do not depend on xk. Note that b is generally not zero, so f is linear in the "shaped like a line" sense, but not in the "directly proportional" sense of a multilinear map.
All repeated second partial derivatives are zero:
∂²f/∂xk² = 0 for every variable xk.
In other words, its Hessian matrix is a symmetric hollow matrix.
In particular, the Laplacian ∇²f = 0, so f is a harmonic function. This implies f has maxima and minima only on the boundary of the domain.
More generally, every restriction of f to a subset of its coordinates is also multilinear, so ∇²f = 0 still holds when one or more variables are fixed. In other words, f is harmonic on every "slice" of the domain along coordinate axes.
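The vanishing of repeated second partials can be observed with exact second differences: for a multilinear polynomial, the symmetric second difference in any single variable is identically zero. A sketch using an integer-coefficient variant of the example above (coefficients chosen so the arithmetic is exact):

```python
def f(x, y, z):
    # multilinear: no variable appears to a power of 2 or higher
    return 3 * x * y + 5 * y - 7 * z

h = 2
for (x, y, z) in [(0, 0, 0), (1, 2, 3), (-4, 5, 6)]:
    # symmetric second difference in each variable separately is exactly zero
    assert f(x + h, y, z) - 2 * f(x, y, z) + f(x - h, y, z) == 0
    assert f(x, y + h, z) - 2 * f(x, y, z) + f(x, y - h, z) == 0
    assert f(x, y, z + h) - 2 * f(x, y, z) + f(x, y, z - h) == 0
print("all second differences vanish")
```

A non-multilinear term such as x² would make the first assertion fail with the nonzero value 2h².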
On a rectangular domain
When the domain is rectangular in the coordinate axes (e.g. a hypercube), f will have maxima and minima only on the vertices of the domain, i.e. the finite set of points with minimal and maximal coordinate values. The value of the function on these points completely determines the function, since the value on the edges of the boundary can be found by linear interpolation, and the value on the rest of the boundary and the interior is fixed by Laplace's equation, ∇²f = 0.
The value of the polynomial at an arbitrary point can be found by repeated linear interpolation along each coordinate axis. Equivalently, it is a weighted mean of the vertex values, where the weights are the Lagrange interpolation polynomials. These weights also constitute a set of generalized barycentric coordinates for the hyperrectangle. Geometrically, the point divides the domain into smaller hyperrectangles, and the weight of each vertex is the (fractional) volume of the hyperrectangle opposite it.
Algebraically, the multilinear interpolant on the hyperrectangle [l1, u1] × ⋯ × [ln, un] is
f(x) = Σ over vertices v of f(v) Π over k of wk(v), with wk(v) = (xk − lk)/(uk − lk) if vk = uk and (uk − xk)/(uk − lk) if vk = lk,
where the sum is taken over the vertices v. Equivalently,
f(x) = (1/V) Σ over vertices v of f(v) Π over k of |xk − opp(v)k|,
where V is the volume of the hyperrectangle and opp(v) is the vertex opposite v.
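A bilinear sketch on the unit square illustrates these properties: the interpolant reproduces the vertex values, the weight of each vertex is the area of the rectangle opposite it, and repeated one-dimensional linear interpolation gives the same answer. The vertex values used here are arbitrary:

```python
def bilinear(x, y, f00, f10, f01, f11):
    # weight of each vertex of [0,1]^2 is the area of the rectangle opposite it
    return (f00 * (1 - x) * (1 - y) + f10 * x * (1 - y)
            + f01 * (1 - x) * y + f11 * x * y)

vals = dict(f00=1.0, f10=2.0, f01=4.0, f11=8.0)   # arbitrary vertex values

# reproduces the vertex values
assert bilinear(0, 0, **vals) == 1.0
assert bilinear(1, 1, **vals) == 8.0

# center equals the arithmetic mean of the four vertex values
assert bilinear(0.5, 0.5, **vals) == (1 + 2 + 4 + 8) / 4

# interpolating first along x, then along y, gives the same result
gx0 = (1 - 0.25) * 1.0 + 0.25 * 2.0     # along the bottom edge at x = 0.25
gx1 = (1 - 0.25) * 4.0 + 0.25 * 8.0     # along the top edge at x = 0.25
assert abs(((1 - 0.7) * gx0 + 0.7 * gx1) - bilinear(0.25, 0.7, **vals)) < 1e-12
print(bilinear(0.5, 0.5, **vals))
```

The same nesting of one-dimensional interpolations extends to any number of dimensions, which is how trilinear and higher interpolation is usually implemented.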
The value at the center is the arithmetic mean of the value at the vertices, which is also the mean over the domain boundary, and the mean over the interior. The components of the gradient at the center are proportional to the balance of the vertex values a |
https://en.wikipedia.org/wiki/Nuisance%20variable | In the theory of stochastic processes in probability theory and statistics, a nuisance variable is a random variable that is fundamental to the probabilistic model, but that is of no particular interest in itself or is no longer of any interest: one such usage arises for the Chapman–Kolmogorov equation. For example, a model for a stochastic process may be defined conceptually using intermediate variables that are not observed in practice. If the problem is to derive the theoretical properties, such as the mean, variance and covariances of quantities that would be observed, then the intermediate variables are nuisance variables.
The related term nuisance factor has been used in the context of block experiments, where the terms in the model representing block-means, often called "factors", are of no interest. Many approaches to the analysis of such experiments, particularly where the experimental design is subject to randomization, treat these factors as random variables. More recently, "nuisance variable" has been used in the same context.
"Nuisance variable" has been used in the context of statistical surveys to refer to information that is not of direct interest but which needs to be taken into account in an analysis.
In the context of stochastic models, the treatment of nuisance variables does not necessarily involve working with the full joint distribution of all the random variables involved, although this is one approach. Instead, an analysis may proceed directly to the quantities of interest.
The term nuisance variable is sometimes also used in more general contexts, simply to designate those variables that are marginalized over when finding a marginal distribution.
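Marginalizing over a nuisance variable is mechanical once the joint distribution is tabulated. A minimal sketch with an invented joint distribution of an observed variable X and a nuisance variable N:

```python
# joint probabilities P(X = x, N = n) for an observed X and a nuisance N
joint = {
    ("a", 0): 0.10, ("a", 1): 0.30,
    ("b", 0): 0.25, ("b", 1): 0.35,
}

# marginal distribution of X: sum the nuisance variable out
marginal = {}
for (x, n), p in joint.items():
    marginal[x] = marginal.get(x, 0.0) + p

assert abs(sum(marginal.values()) - 1.0) < 1e-12   # still a distribution
assert abs(marginal["a"] - 0.40) < 1e-12
assert abs(marginal["b"] - 0.60) < 1e-12
print(marginal)
```

As the article notes, in practice an analysis need not construct the full joint table; this is simply the most direct formalization of "marginalized over".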
References
Stochastic processes
Latent variable models |
https://en.wikipedia.org/wiki/Aerospace%20Education%20Services%20Project | Aerospace Education Services Project (AESP) is a NASA education project which delivers science, technology, engineering, and mathematics (STEM) professional development to K-12, pre-service, and informal educators providing classroom demonstrations, distance learning events, in-service training for educators and pre-service training for college students. Through utilization of NASA products and materials, AESP helps students understand how STEM content is relevant to them by using real-world and engaging materials in their classroom and encourages them to pursue a career in NASA or other STEM careers. The project has education specialists working at all of the NASA centers across the U.S. These educators work with schools and other organizations in order to deliver professional learning opportunities through both face-to-face and virtual venues. The project is managed by Kyle Peck, Principal Investigator, Peggy Maher, Director and Dan Cherry, NASA Project Manager at the Langley Research Center.
Programs
Robots on the Road (ROTR) is an educational program run by traveling NASA specialists in middle schools across the country. Students in grades 5-8 work in groups in order to determine what their robot is designed to do, and how it uses its motors and sensors to achieve those goals. The robots used in this program are made from Lego Mindstorms kits, and are analogous to existing NASA robots.
References
AESP Homepage
NASA's Aerospace Education Services Project
NASA groups, organizations, and centers |
https://en.wikipedia.org/wiki/Polynomial%20regression | In statistics, polynomial regression is a form of regression analysis in which the relationship between the independent variable x and the dependent variable y is modelled as an nth degree polynomial in x. Polynomial regression fits a nonlinear relationship between the value of x and the corresponding conditional mean of y, denoted E(y |x). Although polynomial regression fits a nonlinear model to the data, as a statistical estimation problem it is linear, in the sense that the regression function E(y | x) is linear in the unknown parameters that are estimated from the data. For this reason, polynomial regression is considered to be a special case of multiple linear regression.
The explanatory (independent) variables resulting from the polynomial expansion of the "baseline" variables are known as higher-degree terms. Such variables are also used in classification settings.
History
Polynomial regression models are usually fit using the method of least squares. The least-squares method minimizes the variance of the unbiased estimators of the coefficients, under the conditions of the Gauss–Markov theorem. The least-squares method was published in 1805 by Legendre and in 1809 by Gauss. The first design of an experiment for polynomial regression appeared in an 1815 paper of Gergonne. In the twentieth century, polynomial regression played an important role in the development of regression analysis, with a greater emphasis on issues of design and inference. More recently, the use of polynomial models has been complemented by other methods, with non-polynomial models having advantages for some classes of problems.
Definition and example
The goal of regression analysis is to model the expected value of a dependent variable y in terms of the value of an independent variable (or vector of independent variables) x. In simple linear regression, the model
y = β0 + β1x + ε
is used, where ε is an unobserved random error with mean zero conditioned on a scalar variable x. In this model, for each unit increase in the value of x, the conditional expectation of y increases by β1 units.
In many settings, such a linear relationship may not hold. For example, if we are modeling the yield of a chemical synthesis in terms of the temperature at which the synthesis takes place, we may find that the yield improves by increasing amounts for each unit increase in temperature. In this case, we might propose a quadratic model of the form
y = β0 + β1x + β2x² + ε.
In this model, when the temperature is increased from x to x + 1 units, the expected yield changes by β1 + β2(2x + 1). (This can be seen by replacing x in this equation with x + 1 and subtracting the equation in x from the equation in x + 1.) For infinitesimal changes in x, the effect on y is given by the total derivative with respect to x: β1 + 2β2x. The fact that the change in yield depends on x is what makes the relationship between x and y nonlinear even though the model is linear in the parameters to be estimated.
In general, we can model the expected value of y as an n |
https://en.wikipedia.org/wiki/Logarithmic%20differentiation | In calculus, logarithmic differentiation or differentiation by taking logarithms is a method used to differentiate functions by employing the logarithmic derivative of a function f,
(ln f)′ = f′/f.
The technique is often performed in cases where it is easier to differentiate the logarithm of a function rather than the function itself. This usually occurs in cases where the function of interest is composed of a product of a number of parts, so that a logarithmic transformation will turn it into a sum of separate parts (which is much easier to differentiate). It can also be useful when applied to functions raised to the power of variables or functions. Logarithmic differentiation relies on the chain rule as well as properties of logarithms (in particular, the natural logarithm, or the logarithm to the base e) to transform products into sums and divisions into subtractions. The principle can be implemented, at least in part, in the differentiation of almost all differentiable functions, providing that these functions are non-zero.
Overview
The method is used because the properties of logarithms provide avenues to quickly simplify complicated functions to be differentiated. These properties can be manipulated after the taking of natural logarithms on both sides and before the preliminary differentiation. The most commonly used logarithm laws are
ln(ab) = ln a + ln b,  ln(a/b) = ln a − ln b,  ln(aⁿ) = n ln a.
Higher order derivatives
Using Faà di Bruno's formula, the n-th order logarithmic derivative is,
Using this, the first four derivatives are,
Applications
Products
A natural logarithm is applied to a product of two functions
f(x) = g(x)h(x)
to transform the product into a sum
ln f(x) = ln g(x) + ln h(x).
Differentiating by applying the chain and the sum rules yields
f′(x)/f(x) = g′(x)/g(x) + h′(x)/h(x),
and, after rearranging, yields
f′(x) = f(x) (g′(x)/g(x) + h′(x)/h(x)) = g′(x)h(x) + g(x)h′(x),
Quotients
A natural logarithm is applied to a quotient of two functions
f(x) = g(x)/h(x)
to transform the division into a subtraction
ln f(x) = ln g(x) − ln h(x).
Differentiating by applying the chain and the sum rules yields
f′(x)/f(x) = g′(x)/g(x) − h′(x)/h(x),
and, after rearranging, yields
f′(x) = f(x) (g′(x)/g(x) − h′(x)/h(x)) = (g′(x)h(x) − g(x)h′(x))/h(x)²,
Functional exponents
For a function of the form
f(x) = g(x)^h(x),
the natural logarithm transforms the exponentiation into a product
ln f(x) = h(x) ln g(x).
Differentiating by applying the chain and the product rules yields
f′(x)/f(x) = h′(x) ln g(x) + h(x) g′(x)/g(x),
and, after rearranging, yields
f′(x) = g(x)^h(x) (h′(x) ln g(x) + h(x) g′(x)/g(x)).
The same result can be obtained by rewriting f in terms of exp and applying the chain rule.
General case
Using capital pi notation, let
f(x) = Π_i g_i(x)^(α_i(x))
be a finite product of functions with functional exponents.
The application of natural logarithms results in (with capital sigma notation)
ln f(x) = Σ_i α_i(x) ln g_i(x),
and after differentiation,
f′(x)/f(x) = Σ_i [α_i′(x) ln g_i(x) + α_i(x)g_i′(x)/g_i(x)].
Rearrange to get the derivative of the original function,
f′(x) = f(x) Σ_i [α_i′(x) ln g_i(x) + α_i(x)g_i′(x)/g_i(x)].
See also
Notes
Differential calculus
Logarithms |
https://en.wikipedia.org/wiki/Peter%20Randall-Page | Peter Randall-Page RA (born 1954) is a British artist and sculptor, known for his stone sculpture work, inspired by geometric patterns from nature. In his words "geometry is the theme on which nature plays her infinite variations, fundamental mathematical principle become a kind of pattern book from which nature constructs the most complex and sophisticated structures".
Biography
Randall-Page was born in Essex and spent his childhood in Sussex, before studying at the Bath Academy of Art from 1973 to 1977, after which he worked with the sculptor Barry Flanagan. After working on a conservation project at Wells Cathedral, Randall-Page went to Italy to study stone carving at the Carrara quarries. Returning to Britain, he was a visiting lecturer at Brighton Polytechnic throughout the 1980s and established a studio at Drewsteignton in Devon. From there he undertook a number of significant public sculpture commissions, often featuring fruit and organic forms. These included works for the regeneration of Castle Park in Bristol and for the Eden Project in Cornwall. For the Eden Project he was a member of the design team for the Education Resource Centre (The Core), influencing the overall design of the building and incorporating an enormous granite sculpture, Seed, at its heart. A major retrospective of his work was held in 1992 at the Leeds City Art Gallery and the Yorkshire Sculpture Park. During 1994 Randall-Page held an artist-in-residence post at the Tasmanian School of Art and undertook a lecture tour of Australia, supported by the Arts Council England.
In 1980 he was taken on by the Anne Berthoud Gallery in London's Covent Garden. Randall-Page's work is held in numerous public and private collections throughout the world including Japan, South Korea, Australia, United States, Ireland, Germany and the Netherlands. His public sculptures can be found in London, Edinburgh, Manchester, Bristol and Newbury. His work is represented in the permanent collections of the Tate Gallery and the British Museum.
Randall-Page was elected to the Royal Academy in 2015. In 1999, he was awarded an Honorary Doctorate of Arts from the University of Plymouth and from 2002 to 2005 was an Associate Research Fellow at Dartington College of Arts.
Portraits of Randall-Page
The National Portrait Gallery collection holds bromide photographic prints of Randall-Page taken in 2003 and 2011.
Public collections
Arnolfini Collection Trust, Bristol
The British Council.
The British Embassy, Dublin
The British Museum
Burghley Sculpture Garden
Castle Museum and Art Gallery, Nottingham
The Contemporary Art Society, London
The Creasy Collection of Contemporary Art, Salisbury
Derby Arboretum
University of Exeter
Leeds City Art Galleries
Lincoln City Council
Milton Keynes Community NHS Trust
The National Trust Foundation for Art
Nottinghamshire City Council
University of Nottingham
Prior's Court School for Autistic Children, Thatcham
University of Tasmania
Tate Collection; 'Where the |
https://en.wikipedia.org/wiki/Jordao%20Pattinama | Jordao Pattinama (born 1 March 1989) is a Dutch footballer who plays as a midfielder for Tweede Klasse club HBSS.
Career statistics
(As of 1 March 2009)
Personal life
He is the son of former footballer Ton Pattinama. His twin brother Edinho plays for NAC Breda.
References
1989 births
Living people
Dutch men's footballers
Feyenoord players
Excelsior Rotterdam players
Eredivisie players
Eerste Divisie players
Dutch people of Indonesian descent
Dutch people of Moluccan descent
Dutch twins
People from Spijkenisse
SC Feyenoord players
Men's association football midfielders
Footballers from South Holland |
https://en.wikipedia.org/wiki/IPAM | IPAM may refer to:
Indolepropionamide, a chemical compound
Institute for Pure and Applied Mathematics, an American mathematics institute
Institute of Public Administration and Management, an institute of the University of Sierra Leone
IP address management, software for computer network management
, the Institute for Amazon Environmental Research of Brazil |
https://en.wikipedia.org/wiki/2009%20Women%27s%20Cricket%20World%20Cup%20statistics |
Team totals
Highest team total
Batting
Most runs in the tournament
Note: Only the top ten players are shown.
Highest individual scores
Highest partnerships of the tournament
The partnership of Bates/Tiffen is the highest partnership in any Women's Cricket World Cup
Highest partnerships of the tournament by wicket
Bowling
Most Wickets
Note: Only top ten are shown. Sorted by wickets then bowling average.
Best bowling
Fielding
Most Catches
Wicketkeeping
Most Dismissals
References
stats |
https://en.wikipedia.org/wiki/Danny%20Calegari | Danny Matthew Cornelius Calegari is a mathematician and, , a professor of mathematics at the University of Chicago. His research interests include geometry, dynamical systems, low-dimensional topology, and geometric group theory.
Education and career
In 1994, Calegari received a B.A. in Mathematics from the University of Melbourne with honors. He received his Ph.D. in 2000 from the University of California, Berkeley under the joint supervision of Andrew Casson and William Thurston; his dissertation concerned foliations of three-dimensional manifolds.
From 2000–2002 he was Benjamin Peirce Assistant Professor at Harvard University, after which he joined the California Institute of Technology faculty; he became Merkin Professor in 2007. He was a University Professor of Pure Mathematics at the University of Cambridge in 2011–2012, and has been a Professor of Mathematics at the University of Chicago since 2012.
Calegari is also an author of short fiction, published in Quadrant, Southerly, and Overland. His story A Green Light won a 1992 The Age Short Story Award.
Awards
Calegari was one of the recipients of the 2009 Clay Research Award for his solution to the Marden Tameness Conjecture and the Ahlfors Measure Conjecture. In 2011 he was awarded a Royal Society Wolfson Research Merit Award, and in 2012, he became a Fellow of the American Mathematical Society. In 2012 he delivered the Namboodiri Lectures at the University of Chicago, and in 2013 he delivered the Blumenthal Lectures at Tel Aviv University.
Selected works
Personal life
Mathematician Frank Calegari is Danny Calegari's brother.
References
External links
Living people
20th-century American mathematicians
Topologists
University of Melbourne alumni
University of California, Berkeley alumni
California Institute of Technology faculty
Institute for Advanced Study visiting scholars
Harvard University Department of Mathematics faculty
Harvard University faculty
Clay Research Award recipients
Fellows of the American Mathematical Society
1972 births
21st-century American mathematicians
20th-century Australian mathematicians |
https://en.wikipedia.org/wiki/Cartan%E2%80%93Brauer%E2%80%93Hua%20theorem | In abstract algebra, the Cartan–Brauer–Hua theorem (named after Richard Brauer, Élie Cartan, and Hua Luogeng) is a theorem pertaining to division rings. It says that given two division rings such that xKx−1 is contained in K for every x not equal to 0 in D, either K is contained in the center of D, or . In other words, if the unit group of K is a normal subgroup of the unit group of D, then either or K is central .
References
Theorems in ring theory |
https://en.wikipedia.org/wiki/First-level%20NUTS%20of%20the%20European%20Union | The Classification of Territorial Units for Statistics (NUTS, for the French ) is a geocode standard for referencing the administrative divisions of countries for statistical purposes. The standard was developed by the European Union.
There are three levels of NUTS defined, with two levels of local administrative units (LAUs). Depending on their size, not all countries have every level of division. One of the most extreme cases is Luxembourg, which has only LAUs; the three NUTS divisions each correspond to the entire country itself.
There are 92 first-level NUTS regions of the European Union, and 240 second-level NUTS regions.
Former member states
Below are the first-level NUTS regions of former member states of the European Union.
EFTA member states
Below are the first-level NUTS regions of EFTA.
EU candidates
Below are the first-level NUTS regions of candidates of the European Union.
See also
Local government
Regional policy of the European Union
Region (Europe)
External links
Europa – Eurostat – Regions
Overview maps of the NUTS and Statistical Regions of Europe – Overview map of EU Countries – NUTS level 1 [Archive]
The 104 NUTS-1 EU Regions of 2016 to present
1
Types of geographical division |
https://en.wikipedia.org/wiki/Wilkinson%20matrix | In linear algebra, Wilkinson matrices are symmetric, tridiagonal, order-N matrices with pairs of nearly, but not exactly, equal eigenvalues. It is named after the British mathematician James H. Wilkinson. For N = 7, the Wilkinson matrix is given by
Wilkinson matrices have applications in many fields, including scientific computing, numerical linear algebra, and signal processing.
References
Matrices
Numerical linear algebra |
https://en.wikipedia.org/wiki/Teichm%C3%BCller%E2%80%93Tukey%20lemma | In mathematics, the Teichmüller–Tukey lemma (sometimes named just Tukey's lemma), named after John Tukey and Oswald Teichmüller, is a lemma that states that every nonempty collection of finite character has a maximal element with respect to inclusion. Over Zermelo–Fraenkel set theory, the Teichmüller–Tukey lemma is equivalent to the axiom of choice, and therefore to the well-ordering theorem, Zorn's lemma, and the Hausdorff maximal principle.
Definitions
A family of sets F is of finite character provided it has the following properties:
For each A ∈ F, every finite subset of A belongs to F.
If every finite subset of a given set A belongs to F, then A belongs to F.
Statement of the lemma
Let A be a set and let F be a family of subsets of A. If F is of finite character and X ∈ F, then there is a maximal Y ∈ F (according to the inclusion relation) such that X ⊆ Y.
Applications
In linear algebra, the lemma may be used to show the existence of a basis. Let V be a vector space. Consider the collection of linearly independent sets of vectors. This is a collection of finite character. Thus, a maximal set exists, which must then span V and be a basis for V.
Notes
References
Brillinger, David R. "John Wilder Tukey"
Families of sets
Order theory
Axiom of choice
Lemmas in set theory |
https://en.wikipedia.org/wiki/Local%20Fields | Corps Locaux by Jean-Pierre Serre, originally published in 1962 and translated into English as Local Fields by Marvin Jay Greenberg in 1979, is a seminal graduate-level algebraic number theory text covering local fields, ramification, group cohomology, and local class field theory. The book's end goal is to present local class field theory from the cohomological point of view. This theory concerns extensions of "local" (i.e., complete for a discrete valuation) fields with finite residue field.
Contents
Part I, Local Fields (Basic Facts): Discrete valuation rings, Dedekind domains, and Completion.
Part II, Ramification: Discriminant & Different, Ramification Groups, The Norm, and Artin Representation.
Part III, Group Cohomology: Abelian & Nonabelian Cohomology, Cohomology of Finite Groups, Theorems of Tate and Nakayama, Galois Cohomology, Class Formations, and Computation of Cup Products.
Part IV, Local Class Field Theory: Brauer Group of a Local Field, Local Class Field Theory, Local Symbols and Existence Theorem, and Ramification.
References
Algebraic number theory
Class field theory
Graduate Texts in Mathematics |
https://en.wikipedia.org/wiki/Heritage%20High%20School%2C%20Clowne | Heritage High School (formerly known as Heritage Community School) is a co-educational secondary school located in Clowne in the English county of Derbyshire.
It held Mathematics and Computing specialist college status until 2015. It is also the fastest-growing 11-16 school in Derbyshire. There are currently just over 800 pupils on roll.
The school holds the Careers Mark, a Sport England Award, The Princess Diana Memorial Award and Derbyshire ABC Award.
The school was in a consortium with The Bolsover School, Shirebrook Academy and Springwell Community College that formed "Aspire Sixth Form", a sixth form provision that operated across all the school sites from September 2014. However, due to a lack of students willing to enrol and poor performance, Aspire Sixth Form ceased to operate for the 2016-17 A-Level students.
Previously a community school administered by Derbyshire County Council, in April 2017 Heritage High School converted to academy status. The school is now sponsored by The Two Counties Trust.
Notable former pupils
Matthew Lowton, footballer
References
External links
Heritage High School official website
Secondary schools in Derbyshire
Academies in Derbyshire |
https://en.wikipedia.org/wiki/Innovation%20%28signal%20processing%29 | In time series analysis (or forecasting) — as conducted in statistics, signal processing, and many other fields — the innovation is the difference between the observed value of a variable at time t and the optimal forecast of that value based on information available prior to time t. If the forecasting method is working correctly, successive innovations are uncorrelated with each other, i.e., constitute a white noise time series. Thus it can be said that the innovation time series is obtained from the measurement time series by a process of 'whitening', or removing the predictable component. The use of the term innovation in the sense described here is due to Hendrik Bode and Claude Shannon (1950) in their discussion of the Wiener filter problem, although the notion was already implicit in the work of Kolmogorov.
In contrast, the residual is the difference between the observed value of a variable at time t and the optimal updated estimate of that value based on information available up to and including time t.
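A small simulation (entirely our own construction) illustrates the whitening property: for an AR(1) process the optimal one-step forecast is φ·x_{t−1}, so the innovations recover the driving white noise and are essentially uncorrelated, while the raw series remains strongly autocorrelated.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, n = 0.8, 5000

# Simulate an AR(1) process x_t = phi * x_{t-1} + w_t driven by white noise w_t.
w = rng.standard_normal(n)
x = np.empty(n)
x[0] = w[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + w[t]

# Innovation = observation minus the optimal one-step forecast from the past;
# here this recovers the white-noise shocks w_t.
innovations = x[1:] - phi * x[:-1]

def lag1_autocorr(s):
    # Sample lag-1 autocorrelation of a series.
    s = s - s.mean()
    return float(s[1:] @ s[:-1] / (s @ s))
```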
See also
Kalman filter
Filtering problem (stochastic processes)
Errors and residuals in statistics
Innovation butterfly
References
Statistical signal processing |
https://en.wikipedia.org/wiki/C.%20L.%20E.%20Moore%20instructor | The job title of C. L. E. Moore instructor is given by the Math Department at Massachusetts Institute of Technology to recent math Ph.D.s hired for their promise in pure mathematics research. The instructors are expected to do both teaching and research. Past C. L. E. Moore instructors include John Nash, Walter Rudin, Elias Stein, as well as four Fields medal winners: Paul Cohen, Daniel Quillen, Curtis T. McMullen and Akshay Venkatesh.
The instructorships are named after Clarence Lemuel Elisha Moore (1876–1931), who was a mathematics professor, specializing in geometry, at MIT from 1904 until his death.
Past holders of the position include Dan Abramovich,
Tom Apostol,
Sheldon Axler,
Patricia E. Bauman,
Alexander Braverman,
Egbert Brieskorn,
Felix Browder,
Paul Cohen,
Charles C. Conley,
Caterina Consani,
Nils Dencker,
George Duff,
Lawrence Ein,
Daniel S. Freed,
Harry Furstenberg,
John Garnett,
Mark Goresky,
Helen G. Grundman,
Joe Harris,
Sigurður Helgason,
Lars Hesselholt,
Eleny Ionel,
Vadim Kaloshin,
Yael Karshon,
Alexander Kechris,
Anthony Knapp,
Nancy Kopell,
Irwin Kra,
Kefeng Liu,
Matilde Marcolli,
Kevin McCrimmon,
Curtis McMullen,
William Messing,
Emmy Murphy,
John Forbes Nash Jr.,
Irena Peeva,
Daniel Quillen,
Douglas Ravenel,
Daniel G. Rider,
Walter Rudin,
Robert Rumely,
James Serrin,
William Shaw,
Joseph H. Silverman,
James Simons,
Isadore M. Singer,
Hart F. Smith,
Karen E. Smith,
George Springer,
Richard P. Stanley,
James D. Stasheff,
Elias Stein,
Gilbert Strang,
Robert Strichartz,
Alessandro Figà Talamanca,
Shang-Hua Teng,
Robert Thomason,
Edward Thorp,
Douglas Ulmer,
Akshay Venkatesh,
Chelsea Walton,
Gerard Washnitzer,
Alan Weinstein, and
Zhiwei Yun.
See also
Mathematics education in the United States
External links
Current C.L.E. Moore instructors
Moore Instructors since 1949
Massachusetts Institute of Technology School of Science faculty
Educational institutions in the United States with year of establishment missing |
https://en.wikipedia.org/wiki/Undergraduate%20Ambassadors%20Scheme | The Undergraduate Ambassadors Scheme (UAS) is a program in the United Kingdom devised to encourage students enrolled in science, technology, engineering and mathematics (STEM) programs to enter teaching by awarding them with degree course credits.
History
Noting the declining enrollment in STEM subjects at UK universities, a team including author Simon Singh devised the idea with three aims:
to encourage undergraduates in those fields to go into teaching,
to support teachers and
to provide role models for school students who might otherwise never meet a young person who had chosen to study a STEM subject.
UAS was set up to provide a structure to get undergraduates into the classroom, based on a model pioneered at Imperial College London, but adding the incentive of academic credit for program participants.
After receiving approval to pilot UAS from the University of Surrey, Singh backed a launch of the program with his own money, with the assistance of Ravi Kapur and others. Student interest in the program was high. Singh indicated that in the pilot year of the program 10 of 13 math undergraduates who participated at the University of Southampton subsequently entered teacher training. By the midpoint of its second year, in February 2004, the program was being described by the Times Educational Supplement (TES) as a success, with nine universities on board and an additional 30 expressing interest. In October 2005, Singh wrote in The Guardian that UAS was established in "over 50 university departments, mainly mathematics, science and engineering, with more coming on board each year." In the 2007-2008 academic year, involvement had risen to 107 university departments, with 750 undergraduate participants.
Function
According to TES, undergraduates involved first participate in a one-day program to give them basic information on instructing students in math and science. After this training, they observe a local classroom and then put together a project for the students in a class. The UAS website indicates that the program, available in the last two years of a student's undergraduate career, carries ten to 30 credits for ten weeks of work in the classroom alongside the classroom's regular teacher, who helps evaluate the undergraduate's performance.
References
External links
Official site
Education enrollment
Engineering education in the United Kingdom
Higher education organisations based in the United Kingdom
Mathematics education in the United Kingdom
Science education in the United Kingdom
Teacher training programs
Teaching in the United Kingdom
University of Surrey |
https://en.wikipedia.org/wiki/Hammersley%E2%80%93Clifford%20theorem | The Hammersley–Clifford theorem is a result in probability theory, mathematical statistics and statistical mechanics that gives necessary and sufficient conditions under which a strictly positive probability distribution (of events in a probability space) can be represented as events generated by a Markov network (also known as a Markov random field). It is the fundamental theorem of random fields. It states that a probability distribution that has a strictly positive mass or density satisfies one of the Markov properties with respect to an undirected graph G if and only if it is a Gibbs random field, that is, its density can be factorized over the cliques (or complete subgraphs) of the graph.
The relationship between Markov and Gibbs random fields was initiated by Roland Dobrushin and Frank Spitzer in the context of statistical mechanics. The theorem is named after John Hammersley and Peter Clifford, who proved the equivalence in an unpublished paper in 1971. Simpler proofs using the inclusion–exclusion principle were given independently by Geoffrey Grimmett, Preston and Sherman in 1973, with a further proof by Julian Besag in 1974.
Proof outline
It is a trivial matter to show that a Gibbs random field satisfies every Markov property. As an example of this fact, see the following:
In the image to the right, a Gibbs random field over the provided graph has the form . If variables and are fixed, then the global Markov property requires that: (see conditional independence), since forms a barrier between and .
With and constant, where and . This implies that .
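A minimal numerical illustration of this argument (a toy chain graph A–B–C with binary variables; the clique factor values are arbitrary positive numbers invented for the example): because the joint density factorizes over the cliques {A,B} and {B,C}, conditioning on the separating variable B makes A and C independent.

```python
import numpy as np

# Positive clique factors for the chain A - B - C (values chosen arbitrarily).
phi_ab = np.array([[1.0, 2.0], [3.0, 1.0]])   # phi_AB[a, b]
phi_bc = np.array([[2.0, 1.0], [1.0, 4.0]])   # phi_BC[b, c]

# Gibbs joint: p(a, b, c) proportional to phi_AB[a, b] * phi_BC[b, c].
joint = phi_ab[:, :, None] * phi_bc[None, :, :]
joint /= joint.sum()

# Global Markov property: A and C are conditionally independent given B,
# since B forms a barrier between A and C in the graph.
for b in range(2):
    cond = joint[:, b, :] / joint[:, b, :].sum()   # p(a, c | B = b)
    p_a = cond.sum(axis=1)                          # p(a | b)
    p_c = cond.sum(axis=0)                          # p(c | b)
    assert np.allclose(cond, np.outer(p_a, p_c))
```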
To establish that every positive probability distribution that satisfies the local Markov property is also a Gibbs random field, the following lemma, which provides a means for combining different factorizations, needs to be proved:
Lemma 1
Let denote the set of all random variables under consideration, and let and denote arbitrary sets of variables. (Here, given an arbitrary set of variables , will also denote an arbitrary assignment to the variables from .)
If
for functions and , then there exist functions and such that
In other words, provides a template for further factorization of .
In order to use as a template to further factorize , all variables outside of need to be fixed. To this end, let be an arbitrary fixed assignment to the variables from (the variables not in ). For an arbitrary set of variables , let denote the assignment restricted to the variables from (the variables from , excluding the variables from ).
Moreover, to factorize only , the other factors need to be rendered moot for the variables from . To do this, the factorization
will be re-expressed as
For each : is where all variables outside of have been fixed to the values prescribed by .
Let
and
for each so
What is most important is that when the values assigned to do not conflict with the values prescribed by , making "disappear" when all variables not in are |
https://en.wikipedia.org/wiki/2004%20FC%20Seoul%20season |
Pre-season
Pre-season match results
Competitions
Overview
K League
FA Cup
League Cup
Match reports and match highlights
Fixtures and Results at FC Seoul Official Website
Season statistics
K League records
The 2004 season's league position was decided by aggregate points, because the season was split into a first stage and a second stage.
All competitions records
Attendance records
Season total attendance is the aggregate of K League Regular Season, League Cup, FA Cup and AFC Champions League attendance; friendly match attendance is not included.
K League season total attendance is the aggregate of K League Regular Season and League Cup attendance.
Squad statistics
Goals
Assists
Coaching staff
Players
Team squad
All players registered for the 2004 season are listed.
(Out)
(Out)
(In)
(Conscripted)
(Out)
(Conscripted)
(Out)
(Conscripted)
(Out)
Out on loan & military service
In: Transferred from other teams in the middle of season.
Out: Transferred to other teams in the middle of season.
Discharged: Transferred from Gwangju Sangmu and Police FC for military service after end of season. (Not registered in 2004 season.)
Conscripted: Transferred to Gwangju Sangmu and Police FC for military service after end of season.
Transfers
In
Rookie Free Agent
Out
Loan & Military service
Tactics
Tactical analysis
Starting eleven and formation
This section shows the most used players for each position considering a 3-4-3 formation.
Substitutes
See also
FC Seoul
References
FC Seoul 2004 Matchday Magazines
External links
FC Seoul Official Website
2004
Seoul |
https://en.wikipedia.org/wiki/List%20of%20Suwon%20Samsung%20Bluewings%20records%20and%20statistics | This is a list of club records for K-League side Suwon Samsung Bluewings. It details records at club and player level since the team's league debut in 1996.
Club records
Correct as of December 31, 2008.
K-League records
Record for consecutive wins: 8 – 1999.7.29 ~ 1999.8.29 / 2008.3.19 ~ 2008.4.26
Record for consecutive ties: 4 – 1997.3.22 ~ 1997.4.2 / 2004.8.1 ~ 2004.8.11
Record for consecutive losses: 3 – 2001.3.31 ~ 2001.4.8 / 2006.04.23 ~ 2006.05.05
Record for matches without a loss: 18 – 2008.03.09 ~ 2008.06.28
Record for matches without a win: 13 – 2006.04.23 ~ 2006.07.15
Asia records
AFC Champions League records
Asian Cup Winners Cup records
Asian Super Cup records
A3 Champions Cup records
Pan Pacific Championship
Player records
Correct as of December 31, 2008. Figures include all competitive matches.
Top league goalscorers by season
All time goalscorers
See also
K-League - All time records
Records
South Korean football club records and statistics |
https://en.wikipedia.org/wiki/Jean%20de%20Fontaney | Jean de Fontaney (1643–1710) was a French Jesuit who led a mission to China in 1687.
Jean de Fontaney had been a teacher of mathematics and astronomy at the College Louis le Grand. He was asked by king Louis XIV to set up a mission to China, following a request by Ferdinand Verbiest, in order to spread French and Catholic influence at the Chinese court with the pretext of transmitting scientific knowledge. Jean de Fontaney assembled a group of five other Jesuits to accompany him, all highly skilled in sciences, namely Joachim Bouvet, Jean-François Gerbillon, Louis-Daniel Lecomte, Guy Tachard, and Claude de Visdelou.
Guy Tachard remained in Siam where he was to have a major role, while Jean de Fontaney led the four remaining Fathers to China, where they arrived in February 1688. Upon their arrival in Beijing they were received by the Kangxi Emperor, who was favorably impressed by them and retained Jean-François Gerbillon and Joachim Bouvet at the court.
Jean de Fontaney returned to Europe in 1702, where he became Rector of the Collège Royal Henry-Le-Grand in La Flèche until his death there in 1710.
See also
Jesuit China missions
Notes
References
David E. Mungello, Curious Land: Jesuit Accommodation and the Origins of Sinology, University of Hawaii Press, 1989.
17th-century French Jesuits
1643 births
1710 deaths
18th-century French Jesuits
French Roman Catholic missionaries
Jesuit missionaries in China
French expatriates in China |
https://en.wikipedia.org/wiki/Faradaic%20current | In electrochemistry, the faradaic current is the electric current generated by the reduction or oxidation of some chemical substance at an electrode. The net faradaic current is the algebraic sum of all the faradaic currents flowing through an indicator electrode or working electrode.
Limiting current
The limiting current in electrochemistry is the limiting value of a faradaic current that is approached as the rate of charge transfer to an electrode is increased. The limiting current can be approached, for example, by increasing the electric potential or decreasing the rate of mass transfer to the electrode. It is independent of the applied potential over a finite range, and is usually evaluated by subtracting the appropriate residual current from the measured total current. A limiting current can have the character of an adsorption, catalytic, diffusion, or kinetic current, and may include a migration current.
Migration current
The migration current is the difference between the current that is actually obtained, at any particular value of the potential of the indicator or working electrode, for the reduction or oxidation of an ionic electroactive substance and the current that would be obtained, at the same potential, if there were no transport of that substance due to the electric field between the electrodes. The sign convention regarding current is such that the migration current is negative for the reduction of a cation or for the oxidation of an anion, and positive for the oxidation of a cation or the reduction of an anion. Hence the migration current may tend to either increase or decrease the total current observed. In any event the migration current approaches zero as the transport number of the electroactive substance is decreased by increasing the concentration of the supporting electrolyte, and hence the conductivity.
See also
Butler–Volmer equation
Gas diffusion electrode
References
Electrochemistry |
https://en.wikipedia.org/wiki/%28g%2CK%29-module | In mathematics, more specifically in the representation theory of reductive Lie groups, a -module is an algebraic object, first introduced by Harish-Chandra, used to deal with continuous infinite-dimensional representations using algebraic techniques. Harish-Chandra showed that the study of irreducible unitary representations of a real reductive Lie group, G, could be reduced to the study of irreducible -modules, where is the Lie algebra of G and K is a maximal compact subgroup of G.
Definition
Let G be a real Lie group. Let 𝔤 be its Lie algebra, and K a maximal compact subgroup with Lie algebra 𝔨. A (𝔤, K)-module is defined as follows: it is a vector space V that is both a Lie algebra representation of 𝔤 and a group representation of K (without regard to the topology of K) satisfying the following three conditions
1. k ⋅ (X ⋅ v) = (Ad(k)X) ⋅ (k ⋅ v) for any v ∈ V, k ∈ K, and X ∈ 𝔤
2. for any v ∈ V, Kv spans a finite-dimensional subspace of V on which the action of K is continuous
3. Y ⋅ v = (d/dt)(exp(tY) ⋅ v)|t=0 for any v ∈ V and Y ∈ 𝔨
In the above, the dot, ⋅, denotes both the action of 𝔤 on V and that of K. The notation Ad(k) denotes the adjoint action of G on 𝔤, and Kv is the set of vectors k ⋅ v as k varies over all of K.
The first condition can be understood as follows: if G is the general linear group GL(n, R), then 𝔤 is the algebra of all n by n matrices, and the adjoint action of k on X is kXk−1; condition 1 can then be read as k ⋅ (X ⋅ v) = (kXk−1) ⋅ (k ⋅ v)
In other words, it is a compatibility requirement among the actions of K on V, 𝔤 on V, and K on 𝔤. The third condition is also a compatibility condition, this time between the action of 𝔨 on V viewed as a sub-Lie algebra of 𝔤 and its action viewed as the differential of the action of K on V.
Notes
References
Representation theory of Lie groups |
https://en.wikipedia.org/wiki/Gaussian%20process%20emulator | In statistics, Gaussian process emulator is one name for a general type of statistical model that has been used in contexts where the problem is to make maximum use of the outputs of a complicated (often non-random) computer-based simulation model. Each run of the simulation model is computationally expensive and each run is based on many different controlling inputs. The variation of the outputs of the simulation model is expected to vary reasonably smoothly with the inputs, but in an unknown way.
The overall analysis involves two models: the simulation model, or "simulator", and the statistical model, or "emulator", which notionally emulates the unknown outputs from the simulator.
The Gaussian process emulator model treats the problem from the viewpoint of Bayesian statistics. In this approach, even though the output of the simulation model is fixed for any given set of inputs, the actual outputs are unknown unless the computer model is run and hence can be made the subject of a Bayesian analysis. The main element of the Gaussian process emulator model is that it models the outputs as a Gaussian process on a space that is defined by the model inputs. The model includes a description of the correlation or covariance of the outputs, which enables the model to encompass the idea that differences in the output will be small if there are only small differences in the inputs.
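The following is a bare-bones sketch of the idea in NumPy (the toy simulator, design points, and squared-exponential covariance are placeholder choices of ours, not part of any particular emulator package): a zero-mean Gaussian process is conditioned on a handful of expensive runs, and its posterior mean then serves as the cheap emulator.

```python
import numpy as np

def simulator(x):
    # Stand-in for an expensive deterministic computer model (hypothetical).
    return np.sin(3 * x) + x

# A small number of simulator runs at chosen design inputs.
X = np.linspace(0.0, 2.0, 8)
y = simulator(X)

def rbf(a, b, length=0.3):
    # Squared-exponential covariance: nearby inputs give similar outputs,
    # encoding the assumption that the simulator varies smoothly.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

# Condition the GP on the runs; with a deterministic simulator the
# posterior mean interpolates them (tiny jitter for numerical stability).
K = rbf(X, X) + 1e-10 * np.eye(len(X))
alpha = np.linalg.solve(K, y)

def emulator(x_new):
    # Cheap surrogate: GP posterior mean at new inputs.
    return rbf(x_new, X) @ alpha
```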
See also
Kriging
Computer experiment
References
Currin, C., Mitchell, T., Morris, M., and Ylvisaker, D. (1991) "Bayesian Prediction of Deterministic Functions, with Applications to the Design and Analysis of Computer Experiments," Journal of the American Statistical Association, 86, 953–963.
Kimeldorf, G. S. and Wahba, G. (1970) "A correspondence between Bayesian estimation on stochastic processes and smoothing by splines," The Annals of Mathematical Statistics, 41, 495–502.
O'Hagan, A. (1978) "Curve fitting and optimal design for predictions," Journal of the Royal Statistical Society B, 40, 1–42.
O'Hagan, A. (2006) "Bayesian analysis of computer code outputs: A tutorial," Reliability Engineering & System Safety, 91, 1290–1300.
Sacks, J., Welch, W. J., Mitchell, T. J., and Wynn, H. P. (1989) "Design and Analysis of Computer Experiments," Statistical Science, 4, 409–423.
Ensemble learning
Statistical randomness
Statistics articles needing expert attention
Bayesian statistics |
https://en.wikipedia.org/wiki/Soliton%20distribution | A soliton distribution is a type of discrete probability distribution that arises in the theory of erasure correcting codes, which use information redundancy to compensate for transmission errors manifesting as missing (erased) data. A paper by Luby introduced two forms of such distributions, the ideal soliton distribution and the robust soliton distribution.
Ideal distribution
The ideal soliton distribution is a probability distribution on the integers from 1 to K, where K is the single parameter of the distribution. The probability mass function is given by
p(1) = 1/K and p(i) = 1/(i(i − 1)) for i = 2, 3, ..., K.
Robust distribution
The robust form of distribution is defined by adding an extra set of values t(i) to the elements of mass function of the ideal soliton distribution and then normalizing so that the values add up to 1. The extra set of values, t(i), are defined in terms of an additional real-valued parameter δ (which is interpreted as a failure probability) and c, a constant parameter. Define R as R = c ln(K/δ). Then the values added to p(i), before the final normalization, are
t(i) = R/(iK) for i = 1, ..., (K/R) − 1, t(i) = R ln(R/δ)/K for i = K/R, and t(i) = 0 for i > K/R.
While the ideal soliton distribution has a mode (or spike) at 2, the effect of the extra component in the robust distribution is to add an additional spike at the value K/R.
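Both distributions are straightforward to tabulate. In the sketch below (function names are ours), R is computed as c·ln(K/δ)·√K, the constant used in Luby's paper; the code works for any R with 1 < K/R ≤ K.

```python
import numpy as np

def ideal_soliton(K):
    # p(1) = 1/K and p(i) = 1/(i*(i-1)) for i = 2..K; the sum telescopes to 1.
    p = np.empty(K)
    p[0] = 1.0 / K
    i = np.arange(2, K + 1)
    p[1:] = 1.0 / (i * (i - 1))
    return p

def robust_soliton(K, c=0.1, delta=0.05):
    # Extra component t(i): R/(i*K) for i below the spike, a spike of size
    # R*ln(R/delta)/K at i = K/R, and 0 above it.
    R = c * np.log(K / delta) * np.sqrt(K)   # Luby's choice of R (assumption)
    spike = int(round(K / R))
    t = np.zeros(K)
    i = np.arange(1, spike)
    t[: spike - 1] = R / (i * K)
    t[spike - 1] = R * np.log(R / delta) / K
    p = ideal_soliton(K) + t
    return p / p.sum()   # normalize so the probabilities sum to 1
```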
See also
Luby transform code
References
Discrete distributions
Coding theory |