https://en.wikipedia.org/wiki/Table%20of%20mathematical%20symbols%20by%20introduction%20date
|
The following table lists many specialized symbols commonly used in modern mathematics, ordered by their introduction date. The table can also be ordered alphabetically by clicking on the relevant header title.
See also
History of mathematical notation
History of the Hindu–Arabic numeral system
Glossary of mathematical symbols
List of mathematical symbols by subject
Mathematical notation
Mathematical operators and symbols in Unicode
Sources
External links
RapidTables: Math Symbols List
Jeff Miller: Earliest Uses of Various Mathematical Symbols
Symbols by introduction date
Symbols
|
https://en.wikipedia.org/wiki/Noncommutative%20standard%20model
|
In theoretical particle physics, the noncommutative Standard Model (also known as the Spectral Standard Model) is a model based on noncommutative geometry that unifies a modified form of general relativity with the Standard Model (extended with right-handed neutrinos).
The model postulates that space-time is the product of a 4-dimensional compact spin manifold and a finite space. The full Lagrangian (in Euclidean signature) of the Standard Model minimally coupled to gravity is obtained as pure gravity over that product space. It is therefore close in spirit to Kaluza–Klein theory, but without the problem of a massive tower of states.
The parameters of the model live at unification scale and physical predictions are obtained by running the parameters down through renormalization.
It is worth stressing that it is more than a simple reformulation of the Standard Model. For example, the scalar sector and the fermion representations are more constrained than in effective field theory.
Motivation
Following ideas from Kaluza–Klein and Albert Einstein, the spectral approach seeks unification by expressing all forces as pure gravity on a space .
The group of invariance of such a space should combine the group of invariance of general relativity with , the group of maps from to the standard model gauge group .
acts on by permutations and the full group of symmetries of is the semi-direct product:
Note that the group of invariance of is not a simple group, as it always contains the normal subgroup . It was proved by Mather and Thurston that for ordinary (commutative) manifolds, the connected component of the identity in is always a simple group, so no ordinary manifold can have this semi-direct product structure.
It is nevertheless possible to find such a space by enlarging the notion of space.
In noncommutative geometry, spaces are specified in algebraic terms. The algebraic object corresponding to a diffeomorphism is an automorphism of the algebra of coordinates. If the algebra is taken to be non-commutative, it has non-trivial automorphisms (so-called inner automorphisms). These inner automorphisms form a normal subgroup of the group of automorphisms and provide the correct group structure.
Picking different algebras then gives rise to different symmetries. The Spectral Standard Model takes as input the algebra , where is the algebra of differentiable functions encoding the 4-dimensional manifold and is a finite-dimensional algebra encoding the symmetries of the Standard Model.
History
The first ideas to apply noncommutative geometry to particle physics appeared in 1988–89, and were formalized a couple of years later by Alain Connes and John Lott in what is known as the Connes–Lott model. The Connes–Lott model did not incorporate the gravitational field.
In 1997, Ali Chamseddine and Alain Connes published a new action principle, the Spectral Action, that made it possible to incorporate the gravitational field into the model. Nevertheless, it was
|
https://en.wikipedia.org/wiki/Turkish%20Statistical%20Institute
|
The Turkish Statistical Institute (commonly known as TurkStat or TÜİK) is the Turkish government agency commissioned with producing official statistics on Turkey, its population, resources, economy, society, and culture. It was founded in 1926 and is headquartered in Ankara. Formerly named the State Institute of Statistics (Devlet İstatistik Enstitüsü, DİE), the institute was renamed the Turkish Statistical Institute on November 18, 2005.
See also
List of Turkish provinces by life expectancy
References
External links
Official website of the institute
National statistical services
Statistical
Organizations established in 1926
Organizations based in Ankara
|
https://en.wikipedia.org/wiki/Hilbert%27s%20arithmetic%20of%20ends
|
In mathematics, specifically in the area of hyperbolic geometry, Hilbert's arithmetic of ends is a method for endowing a geometric set, the set of ideal points or "ends" of a hyperbolic plane, with an algebraic structure as a field.
It was introduced by German mathematician David Hilbert.
Definitions
Ends
In a hyperbolic plane, one can define an ideal point or end to be an equivalence class of limiting parallel rays. The set of ends can then be topologized in a natural way and forms a circle. This usage of end is not canonical; in particular the concept it indicates is different from that of a topological end (see End (topology) and End (graph theory)).
In the Poincaré disk model or Klein model of hyperbolic geometry, every ray intersects the boundary circle (also called the circle at infinity or line at infinity) in a unique point, and the ends may be identified with these points. However, the points of the boundary circle are not considered to be points of the hyperbolic plane itself. Every hyperbolic line has exactly two distinct ends, and every two distinct ends are the ends of a unique line. For the purpose of Hilbert's arithmetic, it is expedient to denote a line by the ordered pair (a, b) of its ends.
Hilbert's arithmetic fixes arbitrarily three distinct ends, and labels them as 0, 1, and ∞. The set H on which Hilbert defines a field structure is the set of all ends other than ∞; the set of all ends, including ∞, is then H ∪ {∞}.
Addition
Hilbert defines the addition of ends using hyperbolic reflections. For every end x in H, its negation −x is defined by constructing the hyperbolic reflection of line (x,∞) across the line (0,∞), and choosing −x to be the end of the reflected line.
The composition of any three hyperbolic reflections whose axes of symmetry all share a common end is itself another reflection, across another line with the same end. Based on this "three reflections theorem", given any two ends x and y in H, Hilbert defines the sum x + y to be the non-infinite end of the symmetry axis of the composition of the three reflections through the lines (x,∞), (0,∞), and (y,∞).
It follows from the properties of reflections that these operations have the properties required of the negation and addition operations in the algebra of fields: they form the inverse and addition operations of an additive abelian group.
Multiplication
The multiplication operation in the arithmetic of ends is defined (for nonzero elements x and y of H) by considering the lines (1,−1), (x,−x), and (y,−y). Because of the way −1, −x, and −y are defined by reflection across the line (0,∞), each of the three lines (1,−1), (x,−x), and (y,−y) is perpendicular to (0,∞).
From these three lines, a fourth line can be determined, the axis of symmetry of the composition of the reflections through (x,−x), (1,−1), and (y,−y). This line is also perpendicular to (0,∞), and so takes the form (z,−z) for some end z. Alternatively, the intersection of this line with th
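The addition and multiplication constructions can be checked concretely in the Poincaré half-plane model, where the ends are the real numbers together with ∞: a reflection across a line (x, ∞) acts on the boundary as t ↦ 2x − t, and a reflection across a line (x, −x), perpendicular to (0, ∞), acts as the inversion t ↦ x²/t. The model, the function names, and the probe point t below are illustrative assumptions, not part of Hilbert's axiomatic treatment:

```python
def reflect_vertical(x, t):
    """Boundary action of the hyperbolic reflection across the line (x, ∞)
    (a vertical line in the half-plane model)."""
    return 2 * x - t

def reflect_circle(x, t):
    """Boundary action of the reflection across the line (x, -x)
    (the semicircle of radius |x| centred at 0, perpendicular to (0, ∞))."""
    return x * x / t

def end_sum(x, y, t=0.37):
    """x + y as the non-infinite end of the axis of the composition of the
    reflections through (x, ∞), (0, ∞), (y, ∞)."""
    image = reflect_vertical(x, reflect_vertical(0, reflect_vertical(y, t)))
    # The composite acts as t -> 2s - t for the axis end s, so recover s.
    return (t + image) / 2

def end_product(x, y, t=0.37):
    """x * y (for positive x, y; the axis end z is determined up to sign)
    from the composition of the reflections through (x,-x), (1,-1), (y,-y)."""
    image = reflect_circle(x, reflect_circle(1, reflect_circle(y, t)))
    # The composite acts as t -> z**2 / t for the axis (z, -z), so recover z.
    return (t * image) ** 0.5

print(end_sum(2.0, 3.0))      # ≈ 5.0
print(end_product(2.0, 3.0))  # ≈ 6.0
```

In this model the field of ends reproduces ordinary real arithmetic, which is exactly what Hilbert's construction is designed to capture synthetically.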
|
https://en.wikipedia.org/wiki/Classification%20of%20Individual%20Consumption%20According%20to%20Purpose
|
Classification of Individual Consumption According to Purpose (COICOP) is a Reference Classification published by the United Nations Statistics Division that classifies the purposes of individual consumption expenditures incurred by three institutional sectors, namely households, non-profit institutions serving households, and general government.
Categories in COICOP generally correspond to categories in the UN's CPC. Division 14 of COICOP corresponds to the Classification of the Purposes of Non-Profit Institutions Serving Households (COPNI); Division 15 of COICOP corresponds to the Classification of the Functions of Government (COFOG).
The classification units are transactions.
COICOP was revised in 2018 and is now available as COICOP 2018.
Structure
Structure levels
Structure Level 1: Divisions (two-digit)
Structure Level 2: Groups (three-digit)
Structure Level 3: Classes (four-digit)
Structure Level 4: Subclasses (five-digit)
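As a small illustration of the four-level hierarchy, the following sketch splits a dotted code of the form used in this article into its level prefixes. The dotted format and the function name are assumptions for illustration, not an official COICOP encoding:

```python
def coicop_levels(code):
    """Map a dotted COICOP-style code (e.g. '04.5.1.2') to its
    Division / Group / Class / Subclass prefixes."""
    names = ["Division", "Group", "Class", "Subclass"]
    parts = code.split(".")
    return {names[i]: ".".join(parts[: i + 1]) for i in range(len(parts))}

print(coicop_levels("01.1"))
# {'Division': '01', 'Group': '01.1'}
```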
Broad structure
01 Food and non-alcoholic beverages
01.1 Food
01.2 Non-alcoholic beverages
01.3 Services for processing primary goods for food and non-alcoholic beverages
02 Alcoholic beverages, tobacco and narcotics
02.1 Alcoholic beverages
02.2 Alcohol production services
02.3 Tobacco
02.4 Narcotics
03 Clothing and footwear
03.1 Clothing
03.2 Footwear
04 Housing, water, electricity, gas and other fuels
04.1 Actual rentals for housing
04.2 Imputed rentals for housing
04.3 Maintenance, repair and security of the dwelling
04.4 Water supply and miscellaneous services relating to the dwelling
04.5 Electricity, gas and other fuels
05 Furnishings, household equipment and routine household maintenance
05.1 Furniture, furnishings, and loose carpets
05.2 Household textiles
05.3 Household appliances
05.4 Glassware, tableware and household utensils
05.5 Tools and equipment for house and garden
05.6 Goods and services for routine household maintenance
06 Health
06.1 Medicines and health products
06.2 Outpatient care services
06.3 Inpatient care services
06.4 Other health services
07 Transport
07.1 Purchase of vehicles
07.2 Operation of personal transport equipment
07.3 Passenger transport services
07.4 Transport services of goods
08 Information and communication
08.1 Information and communication equipment
08.2 Software excluding games
08.3 Information and communication services
09 Recreation, sport and culture
09.1 Recreational durables
09.2 Other recreational goods
09.3 Garden products and pets
09.4 Recreational services
09.5 Cultural goods
09.6 Cultural services
09.7 Newspapers, books and stationery
09.8 Package holidays
10 Education services
10.1 Early childhood and primary education
10.2 Secondary education
10.3 Post-secondary non-tertiary education
10.4 Tertiary education
10.5 Education not defined by level
11 Restaurants and accommodation services
11.1 Food and beverage serving services
11.2 Accommodation services
12 Insurance and financial services
12.1 Insurance
12.2 Financial services
13 Personal care, social prot
|
https://en.wikipedia.org/wiki/Adaptive%20step%20size
|
In mathematics and numerical analysis, an adaptive step size is used in some methods for the numerical solution of ordinary differential equations (including the special case of numerical integration) in order to control the errors of the method and to ensure stability properties such as A-stability.
Using an adaptive step size is of particular importance when there is a large variation in the size of the derivative.
For example, when modeling the motion of a satellite around the Earth as a standard Kepler orbit, a fixed time-stepping method such as the Euler method may be sufficient.
However, things are more difficult if one wishes to model the motion of a spacecraft taking into account both the Earth and the Moon, as in the three-body problem.
There, scenarios emerge where one can take large time steps when the spacecraft is far from the Earth and Moon, but small time steps are needed when the spacecraft gets close to colliding with one of the bodies. Romberg's method and Runge–Kutta–Fehlberg are examples of numerical integration methods which use an adaptive step size.
Example
For simplicity, the following example uses the simplest integration method, the Euler method; in practice, higher-order methods such as Runge–Kutta methods are preferred due to their superior convergence and stability properties.
Consider the initial value problem
where y and f may denote vectors (in which case this equation represents a system of coupled ODEs in several variables).
We are given the function f(t,y) and the initial conditions (a, ya), and we are interested in finding the solution at t = b. Let y(b) denote the exact solution at b, and let yb denote the solution that we compute. We write , where is the error in the numerical solution.
For a sequence (tn) of values of t, with tn = a + nh, the Euler method gives approximations to the corresponding values of y(tn) as
The local truncation error of this approximation is defined by
and by Taylor's theorem, it can be shown that (provided f is sufficiently smooth) the local truncation error is proportional to the square of the step size:
where c is some constant of proportionality.
We have marked this solution and its error with a .
The value of c is not known to us. Let us now apply Euler's method again with a different step size to generate a second approximation to y(tn+1). We get a second solution, which we label with a .
Take the new step size to be one half of the original step size, and apply two steps of Euler's method. This second solution is presumably more accurate. Since we have to apply Euler's method twice, the local error is (in the worst case) twice the original error.
Here, we assume that the error factor c is constant over the interval. In reality its rate of change is proportional to . Subtracting the solutions gives the error estimate:
This local error estimate is third order accurate.
The local error estimate can be used to decide how stepsize should b
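The step-doubling scheme described above can be sketched as follows. This is a minimal illustration with the first-order Euler method and a simple accept/reject rule; the tolerance, doubling/halving factors, and test problem are arbitrary choices:

```python
import math

def euler_adaptive(f, t, y, t_end, h=0.1, tol=1e-6):
    """Euler integration with step doubling: compare one step of size h
    against two steps of size h/2, and adapt h from the error estimate."""
    while t < t_end:
        h = min(h, t_end - t)
        # One full Euler step of size h.
        y_big = y + h * f(t, y)
        # Two Euler half steps of size h/2.
        y_mid = y + (h / 2) * f(t, y)
        y_small = y_mid + (h / 2) * f(t + h / 2, y_mid)
        # Both solutions have local error proportional to h**2, so their
        # difference is a usable estimate of the local error.
        err = abs(y_small - y_big)
        if err < tol:
            t += h
            y = y_small      # accept the more accurate two-half-step solution
            h *= 2           # try a larger step next time
        else:
            h /= 2           # reject the step and retry with a smaller one
    return y

# Solve y' = y, y(0) = 1 on [0, 1]; the exact solution at t = 1 is e.
approx = euler_adaptive(lambda t, y: y, 0.0, 1.0, 1.0)
print(abs(approx - math.e))  # small global error
```

Production codes prefer embedded Runge–Kutta pairs (e.g. Runge–Kutta–Fehlberg), which obtain two solutions of different order from a single set of function evaluations instead of recomputing with a halved step.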
|
https://en.wikipedia.org/wiki/Anand%20Kumar
|
Anand Kumar (born 1 January 1973) is an Indian mathematics educator, best known for his Super 30 programme, which he started in Patna, Bihar, in 2002 to coach underprivileged students for JEE Main and JEE Advanced, the entrance examinations for the Indian Institutes of Technology (IITs). Kumar was named in Time magazine's list of Best of Asia 2010. In 2023, he was awarded the Padma Shri, India's fourth-highest civilian award, by the Government of India for his contributions to literature and education.
By 2018, 422 out of 510 students had made it to the IITs and Discovery Channel showcased his work in a documentary. Kumar has spoken at MIT and Harvard about his programs for students from the underprivileged sections of Indian society. Kumar and his school have been the subject of several smear campaigns, some of which have been carried in Indian media sources. His life and work had been portrayed in the 2019 film, Super 30, where Kumar is played by Hrithik Roshan.
Early life
Anand Kumar was born in Bihar, India. His father was a clerk in the postal department of India. His father could not afford private schooling for his children, and Anand attended a Hindi medium government school, where he developed his deep interest in mathematics. In his childhood, he studied at Patna High School, in Patna, Bihar. During his graduation, Kumar submitted papers on number theory, which were published in the Mathematical Spectrum. Kumar secured admission to the University of Cambridge, but could not attend because of his father's death and his financial condition.
Teaching career
In 1992, Kumar began teaching mathematics. He rented a classroom for Rs. 300 per month and began his own institute, the Ramanujan School of Mathematics (RSM). Within a year, his class grew from two students to thirty-six, and after three years almost 500 students had enrolled. Then, in early 2000, a poor student who could not afford the annual admission fee came to him seeking coaching for the IIT-JEE; this motivated Kumar to start the Super 30 programme in 2002, for which he is now well known.
Since 2002, every May, the Ramanujan School of Mathematics holds a competitive test to select 30 students for the Super 30 program. Many students appear at the test, and eventually, he takes thirty intelligent students from economically backward sections, tutors them, and provides study materials and lodging for a year. He prepares them for the Joint Entrance Examination for the Indian Institutes of Technology (IIT). His mother, Jayanti Devi, cooks for the students, and his brother Pranav Kumar takes care of the management.
From 2003 to 2017, 391 students out of 450 cleared the IIT entrance examination. In 2010, all the students of Super 30 cleared the IIT JEE entrance, making it three in a row for the institution. Kumar receives no financial support for Super 30 from any government or private agency, and manages on the tuition fee he earns from the Ramanuja
|
https://en.wikipedia.org/wiki/Bias%20of%20an%20estimator
|
In statistics, the bias of an estimator (or bias function) is the difference between the estimator's expected value and the true value of the parameter being estimated. An estimator or decision rule with zero bias is called unbiased. In statistics, "bias" is an objective property of an estimator. Bias is a distinct concept from consistency: consistent estimators converge in probability to the true value of the parameter, but may be biased or unbiased; see bias versus consistency for more.
All else being equal, an unbiased estimator is preferable to a biased estimator, although in practice, biased estimators (with generally small bias) are frequently used. When a biased estimator is used, bounds of the bias are calculated. A biased estimator may be used for various reasons: because an unbiased estimator does not exist without further assumptions about a population; because an estimator is difficult to compute (as in unbiased estimation of standard deviation); because a biased estimator may be unbiased with respect to different measures of central tendency; because a biased estimator gives a lower value of some loss function (particularly mean squared error) compared with unbiased estimators (notably in shrinkage estimators); or because in some cases being unbiased is too strong a condition, and the only unbiased estimators are not useful.
Bias can also be measured with respect to the median, rather than the mean (expected value), in which case one distinguishes median-unbiased from the usual mean-unbiasedness property.
Mean-unbiasedness is not preserved under non-linear transformations, though median-unbiasedness is; for example, the sample variance is a biased estimator for the population variance. These are all illustrated below.
Definition
Suppose we have a statistical model, parameterized by a real number θ, giving rise to a probability distribution for observed data, , and a statistic which serves as an estimator of θ based on any observed data . That is, we assume that our data follows some unknown distribution (where θ is a fixed, unknown constant that is part of this distribution), and then we construct some estimator that maps observed data to values that we hope are close to θ. The bias of relative to is defined as
where denotes expected value over the distribution (i.e., averaging over all possible observations ). The second equation follows since θ is measurable with respect to the conditional distribution .
An estimator is said to be unbiased if its bias is equal to zero for all values of the parameter θ, or equivalently, if the expected value of the estimator matches that of the parameter. Unbiasedness is not guaranteed to carry over under transformations: for example, if is an unbiased estimator for the parameter θ, it is not guaranteed that g() is an unbiased estimator for g(θ).
In a simulation experiment concerning the properties of an estimator, the bias of the estimator may be assessed using the mean signed difference.
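A simulation of this kind, for the sample-variance example mentioned above, can be sketched with standard-library Python. The sample size, trial count, and underlying normal distribution are arbitrary choices for illustration:

```python
import random
import statistics

random.seed(0)

def variance_biased(xs):
    """Maximum-likelihood variance estimator: divides by n, hence biased."""
    m = statistics.fmean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

n, trials, true_var = 5, 20000, 1.0
biased_vals, unbiased_vals = [], []
for _ in range(trials):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    biased_vals.append(variance_biased(xs))
    unbiased_vals.append(statistics.variance(xs))  # divides by n - 1

# Monte Carlo estimate of bias: mean of the estimates minus the true value.
bias_biased = statistics.fmean(biased_vals) - true_var
bias_unbiased = statistics.fmean(unbiased_vals) - true_var
print(round(bias_biased, 2))    # near -true_var / n = -0.2
print(round(bias_unbiased, 2))  # near 0
```

For normal samples the n-divisor estimator has expected value (n − 1)/n times the population variance, so its bias is −σ²/n; the simulated values should reproduce this.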
Examples
Sample varian
|
https://en.wikipedia.org/wiki/Marjorie%20Lee%20Browne
|
Marjorie Lee Browne (September 9, 1914 – October 19, 1979) was a mathematics educator. She was one of the first African-American women to receive a PhD in mathematics.
Early life and education
Marjorie Lee Browne was a prominent mathematician and educator who, in 1949, became only the third African-American woman to earn a doctorate in her field. Browne was born on September 9, 1914, in Memphis, Tennessee, to Mary Taylor Lee and Lawrence Johnson Lee. Her father, a railway postal clerk, remarried shortly after his wife's death, when Browne was almost two years old. He and his second wife, Lottie, a school teacher, encouraged their daughter to take her studies seriously, as she was a gifted student. Browne attended LeMoyne High School, a private Methodist school that was started after the Civil War. During her schooling, she won the Memphis City Women's Tennis Singles Championship in 1929, and two years later she graduated from LeMoyne High School.
She attended Howard University, majoring in mathematics and graduating cum laude in 1935. After receiving her bachelor's degree, she taught high school and college for a short term, including at Gilbert Academy in New Orleans.
She then applied to the University of Michigan graduate program in mathematics. Michigan accepted African Americans, while many other US educational institutions did not at the time. After working full-time at the historically black Wiley College in Marshall, Texas, and attending Michigan only during the summer, Browne's work paid off and she received a teaching fellowship at Michigan, attending full-time and completing her dissertation in 1949. Her dissertation, "Studies of One Parameter Subgroups of Certain Topological and Matrix Groups," was supervised by George Yuri Rainich. She was one of the first African-American women in the US to earn a doctorate in mathematics, along with Evelyn Boyd Granville, who also earned a Ph.D. in 1949. Euphemia Haynes was the very first African-American woman in the US to earn a doctorate in mathematics, having earned hers in 1943.
Later life and career
After receiving her doctorate, Browne was unable to keep a teaching position at a research institution. As a result of this, she worked with secondary school mathematics teachers, instructing them in "modern math." She focused especially on encouraging math education for minorities and women.
Browne then joined the faculty at North Carolina College (now North Carolina Central University (NCCU)), where she taught and researched for thirty years. She was also the head of the department for much of her time at NCCU, from 1951 to 1970. There she worked as principal investigator, coordinator of the mathematics section, and lecturer for the Summer Institute for Secondary School Science and Mathematics Teachers.
Marjorie Lee Browne died of a heart attack in Durham, North Carolina, on October 19, 1979. After her death, four of her students established the Marjorie Lee Browne Trust Fund at North Caroli
|
https://en.wikipedia.org/wiki/Uniform%20star%20polyhedron
|
In geometry, a uniform star polyhedron is a self-intersecting uniform polyhedron. They are also sometimes called nonconvex polyhedra to imply self-intersecting. Each polyhedron can contain either star polygon faces, star polygon vertex figures, or both.
The complete set of 57 nonprismatic uniform star polyhedra includes the 4 regular ones, called the Kepler–Poinsot polyhedra, 5 quasiregular ones, and 48 semiregular ones.
There are also two infinite sets of uniform star prisms and uniform star antiprisms.
Just as (nondegenerate) star polygons (which have polygon density greater than 1) correspond to circular polygons with overlapping tiles, star polyhedra that do not pass through the center have polytope density greater than 1, and correspond to spherical polyhedra with overlapping tiles; there are 47 nonprismatic such uniform star polyhedra. The remaining 10 nonprismatic uniform star polyhedra, those that pass through the center, are the hemipolyhedra as well as Miller's monster, and do not have well-defined densities.
The nonconvex forms are constructed from Schwarz triangles.
All the uniform polyhedra are listed below by their symmetry groups and subgrouped by their vertex arrangements.
Regular polyhedra are labeled by their Schläfli symbol. Other nonregular uniform polyhedra are listed with their vertex configuration.
An additional figure, the pseudo great rhombicuboctahedron, is usually not included as a truly uniform star polyhedron, despite consisting of regular faces and having the same vertices.
Note: For the nonconvex forms below, an additional descriptor, nonuniform, is used when the convex hull vertex arrangement has the same topology as one of these but has nonregular faces. For example, a nonuniform cantellated form may have rectangles created in place of the edges rather than squares.
Dihedral symmetry
See Prismatic uniform polyhedron.
Tetrahedral symmetry
There is one nonconvex form, the tetrahemihexahedron which has tetrahedral symmetry (with fundamental domain Möbius triangle (3 3 2)).
There are two Schwarz triangles that generate unique nonconvex uniform polyhedra: one right triangle ( 3 2), and one general triangle ( 3 3). The general triangle ( 3 3) generates the octahemioctahedron which is given further on with its full octahedral symmetry.
Octahedral symmetry
There are 8 convex forms, and 10 nonconvex forms with octahedral symmetry (with fundamental domain Möbius triangle (4 3 2)).
There are four Schwarz triangles that generate nonconvex forms, two right triangles ( 4 2), and ( 3 2), and two general triangles: ( 4 3), ( 4 4).
Icosahedral symmetry
There are 8 convex forms and 46 nonconvex forms with icosahedral symmetry (with fundamental domain Möbius triangle (5 3 2)), or 47 nonconvex forms if Skilling's figure is included. Some of the nonconvex snub forms have reflective vertex symmetry.
Degenerate cases
Coxeter identified a number of degenerate star polyhedra by the Wythoff construction method, which contain
|
https://en.wikipedia.org/wiki/Fourier%E2%80%93Mukai%20transform
|
In algebraic geometry, a Fourier–Mukai transform ΦK is a functor between derived categories of coherent sheaves D(X) → D(Y) for schemes X and Y, which is, in a sense, an integral transform along a kernel object K ∈ D(X×Y). Most natural functors, including basic ones like pushforwards and pullbacks, are of this type.
These kinds of functors were introduced by Mukai in order to prove an equivalence between the derived categories of coherent sheaves on an abelian variety and its dual. That equivalence is analogous to the classical Fourier transform that gives an isomorphism between tempered distributions on a finite-dimensional real vector space and its dual.
Definition
Let X and Y be smooth projective varieties and K ∈ Db(X×Y) an object in the derived category of coherent sheaves on their product. Denote by q the projection X×Y→X and by p the projection X×Y→Y. Then the Fourier–Mukai transform ΦK is the functor Db(X)→Db(Y) given by
ΦK(F) = Rp∗(q∗F ⊗L K),
where Rp∗ is the derived direct image functor and ⊗L is the derived tensor product.
Fourier-Mukai transforms always have left and right adjoints, both of which are also kernel transformations. Given two kernels K1 ∈ Db(X×Y) and K2 ∈ Db(Y×Z), the composed functor ΦK2 ∘ ΦK1 is also a Fourier-Mukai transform.
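The composite can be made explicit. Writing π12, π13, π23 for the projections from X×Y×Z onto the pairwise products (notation assumed here, following the standard convention), the kernel of the composed functor is the convolution of the two kernels:

```latex
K_2 \ast K_1 \;=\; \mathbf{R}\pi_{13*}\bigl(\pi_{12}^{*}K_1 \otimes^{\mathbf{L}} \pi_{23}^{*}K_2\bigr) \;\in\; D^b(X\times Z),
\qquad
\Phi_{K_2}\circ\Phi_{K_1}\;\cong\;\Phi_{K_2\ast K_1}.
```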
The structure sheaf of the diagonal , taken as a kernel, produces the identity functor on Db(X). For a morphism f:X→Y, the structure sheaf of the graph Γf produces a pushforward when viewed as an object in Db(X×Y), or a pullback when viewed as an object in Db(Y×X).
On abelian varieties
Let be an abelian variety and be its dual variety. The Poincaré bundle on , normalized to be trivial on the fiber at zero, can be used as a Fourier-Mukai kernel. Let and be the canonical projections.
The corresponding Fourier–Mukai functor with kernel is then
There is a similar functor
If the canonical class of a variety is ample or anti-ample, then the derived category of coherent sheaves determines the variety. In general, an abelian variety is not isomorphic to its dual, so this Fourier–Mukai transform gives examples of different varieties (with trivial canonical bundles) that have equivalent derived categories.
Let g denote the dimension of X. The Fourier–Mukai transformation is nearly involutive :
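In symbols (as in Mukai's original theorem, with [−g] the shift by the dimension and (−1) the inversion morphism of the abelian variety; sign conventions vary between sources):

```latex
\Phi_{\mathcal{P}} \circ \Phi_{\mathcal{P}} \;\cong\; (-1)^{*}\,[-g].
```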
It interchanges Pontrjagin product and tensor product.
The Fourier–Mukai transform has also been used to prove the Künneth decomposition for the Chow motives of abelian varieties.
Applications in string theory
In string theory, T-duality (short for target space duality), which relates two quantum field theories or string theories with different spacetime geometries, is closely related to the Fourier–Mukai transformation.
See also
Derived noncommutative algebraic geometry
References
Abelian varieties
|
https://en.wikipedia.org/wiki/Sanjit%20Narwekar
|
Sanjit Narwekar (born 8 May 1952) is an Indian documentary filmmaker, scriptwriter, and author. A 1967 alumnus of Bombay Scottish High School, Mumbai, he completed his Bachelor's in Statistics (1974) and his Master's in Economics (1976) at the University of Mumbai.
Early career
He began writing in 1969, while he was still in college, and has had more than 2,500 articles to his credit over the last 50 years. During his postgraduate studies he worked at the National Institute of Bank Management, but opted to work in journalism soon after completing his Master's. He worked in a wide variety of newspaper organizations and has been News Editor of Screen (1980–1991), Editor of TV & Video World (1994–95), and Executive Editor of Documentary Today (2007–2012).
Career
Narwekar has authored/edited 20 books on film history and published more than 100 books for various organizations. He has also hosted interviews/magazine programs for All India Radio and Doordarshan. He has participated in several national and international seminars and conducted workshops on various aspects of Indian cinema, both in India and abroad. He has also served on several selection committees and juries and has been a member of the Film Advisory Board of the Government of India (1992–93).
He worked briefly as scriptwriter before turning to documentary films. He is based in Mumbai, India and heads his cinema research company Cinemaink.
Awards
In 1996 he was awarded the Swarna Kamal (National Award) for the Best Book on Cinema for Marathi Cinema in Retrospect. The Delhi-based newspaper The Pioneer nominated it among the top five books on Indian cinema in post-independent India (1947–1997).
In 2017 he was made Sanmanniya Sabhasad (Distinguished Fellow) of the Mumbai Marathi Patrakar Sangh.
In 2020 he was conferred the Bimal Roy Memorial Award "for documenting Indian Cinema's History with knowledge and scholarship".
In 2022 he was Chairman of the National Jury at the Mumbai International Film Festival for Documentary, Short and Animation Films 2022.
In 2022 he was conferred the Dr V Shantaram Lifetime Achievement Award at the Mumbai International Film Festival for Documentary, Short and Animation Films 2022.
Films: Documentaries
Nations in Turmoil (Writer-Director) (2011)
(Writer-Producer-Director) (2010)
Dreaming Movies (Writer-Director) (2007)
Protecting Creativity (Writer-Director) (2005)
Remembering Devendra Goel (Creative Consultant) (2005)
Lavni Rupdarshan (Director) (2004)
Life On The Edge (Writer-Director) (2002)
Yes! I Am A Communist (Writer/Co-director: Vinay Newalkar) (2002)
India Unveiled (Writer-Director) (1998)
Lavni (Executive Producer-Writer) (1998)
Films: Short Fiction
Mahilaone Thama Tha Akash (miniseries)(Executive Producer) (2002)
Yeh Kal Kab Aayega (Writer-Director) (2001)
Film Scripts
Durghatna (Hindi) (1994/unproduced)
Ghar Sansar (Marathi) (1993)
Khulyancha Bazaar (Marathi) (1992)
Premankur (Marathi) (1992)
|
https://en.wikipedia.org/wiki/Freudenthal%20Institute
|
The Freudenthal Institute (FI) is a research institute, part of the Faculty of Science of Utrecht University in the Netherlands. The FI aims to improve education in science and mathematics by means of education research and valorisation.
The institute was founded in 1971 by the German/Dutch writer, pedagogue and mathematician, professor Hans Freudenthal (1905–1990), as the Institute for the Development of Mathematical Education. In 1991, the institute was renamed after its founder.
In 2003, an international institute for mathematics education, Freudenthal Institute - USA (Fi-US), was established in collaboration with the University of Wisconsin in Madison, Wisconsin, USA. In January 2006, Fi-US was relocated to the University of Colorado at Boulder.
References
External links
Website of the FI (in English)
Utrecht University
|
https://en.wikipedia.org/wiki/Carlos%20J.%20Moreno
|
Carlos Julio Moreno is a Colombian mathematician and faculty member at Baruch College and at the Graduate Center of the City University of New York (CUNY).
His B.A. and his Ph.D. in mathematics were earned at New York University. Moreno has over sixty publications, including two books, on topics dealing with algebra and number theory.
Selected publications
References
External links
Baruch College, Department of Mathematics, Masters Program in Financial Engineering
Year of birth missing (living people)
Living people
20th-century American mathematicians
21st-century American mathematicians
CUNY Graduate Center faculty
New York University alumni
|
https://en.wikipedia.org/wiki/Kurgan%20Airport
|
Kurgan Airport () is an airport in Russia located 6 km northeast of Kurgan. It handles medium-sized airliners.
Airlines and destinations
Statistics
External links
Kurgan Airport Official Site
Kurgan Airport flight schedule
Kurgan Aviation Museum
References
Airports built in the Soviet Union
Airports in Kurgan Oblast
|
https://en.wikipedia.org/wiki/%28417634%29%202006%20XG1
|
(417634) 2006 XG1, provisional designation 2006 XG1, is a sub-kilometer asteroid, classified as a near-Earth object and potentially hazardous asteroid of the Apollo group, that had a low but non-zero probability of impacting Earth on 31 October 2041. The asteroid was discovered on 20 September 2006, by astronomers of the Catalina Sky Survey, using a dedicated 0.68-meter telescope at Mount Lemmon Observatory in Arizona, United States.
Description
Originally listed with a Torino Scale hazard rating of 0, this was raised to a rating of 1 on 22 December 2006 as a result of additional observations and refinement of the orbital calculations. However, on 9 January 2007 it was returned to a rating of 0. It was removed from the Sentry Risk Table on 7 February 2007.
It is now known that the asteroid will not make a close approach to the Earth in 2041. On 31 October 2041, the asteroid will be from the Earth.
passed from asteroid 87 Sylvia on 20 June 1969. It is also a Mars-crosser asteroid.
Physical characteristics
According to the survey carried out by the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, measures 418 meters in diameter and its surface has an albedo of 0.154. Previously, JPL's Sentry System estimated a diameter of 670 meters with a mass of .
References
External links
Asteroid Lightcurve Database (LCDB), query form (info )
417634
417634
417634
417634
20061211
|
https://en.wikipedia.org/wiki/Disphenoid
|
In geometry, a disphenoid () is a tetrahedron whose four faces are congruent acute-angled triangles. It can also be described as a tetrahedron in which every two edges that are opposite each other have equal lengths. Other names for the same shape are isotetrahedron, sphenoid, bisphenoid, isosceles tetrahedron, equifacial tetrahedron, almost regular tetrahedron, and tetramonohedron.
All the solid angles and vertex figures of a disphenoid are the same, and the sum of the face angles at each vertex is equal to two right angles. However, a disphenoid is not a regular polyhedron, because, in general, its faces are not regular polygons, and its edges have three different lengths.
Special cases and generalizations
If the faces of a disphenoid are equilateral triangles, it is a regular tetrahedron with Td tetrahedral symmetry, although this is not normally called a disphenoid. When the faces of a disphenoid are isosceles triangles, it is called a tetragonal disphenoid. In this case it has D2d dihedral symmetry.
A disphenoid with scalene triangles as its faces is called a rhombic disphenoid and it has D2 dihedral symmetry. Unlike the tetragonal disphenoid, the rhombic disphenoid has no reflection symmetry, so it is chiral.
Both tetragonal disphenoids and rhombic disphenoids are isohedra: as well as being congruent to each other, all of their faces are symmetric to each other.
It is not possible to construct a disphenoid with right triangle or obtuse triangle faces. When right triangles are glued together in the pattern of a disphenoid, they form a flat figure (a doubly-covered rectangle) that does not enclose any volume. When obtuse triangles are glued in this way, the resulting surface can be folded to form a disphenoid (by Alexandrov's uniqueness theorem) but one with acute triangle faces and with edges that in general do not lie along the edges of the given obtuse triangles.
Two more types of tetrahedron generalize the disphenoid and have similar names. The digonal disphenoid has faces with two different shapes, both isosceles triangles, with two faces of each shape. The phyllic disphenoid similarly has faces with two shapes of scalene triangles.
Disphenoids can also be seen as digonal antiprisms or as alternated quadrilateral prisms.
Characterizations
A tetrahedron is a disphenoid if and only if its circumscribed parallelepiped is right-angled.
Equivalently, a tetrahedron is a disphenoid if and only if the centers of its circumscribed sphere and its inscribed sphere coincide.
Another characterization states that if d1, d2 and d3 are the common perpendiculars of AB and CD; AC and BD; and AD and BC respectively in a tetrahedron ABCD, then the tetrahedron is a disphenoid if and only if d1, d2 and d3 are pairwise perpendicular.
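The parallelepiped characterization gives a concrete construction. If the three pairs of opposite edges have lengths a, b, c, the circumscribed rectangular parallelepiped has edge lengths p, q, r with a² = q² + r², b² = p² + r², c² = p² + q²; these are solvable with positive p, q, r exactly when the triangle with sides a, b, c is acute, and the disphenoid's volume is then pqr/3. A minimal sketch (the function name is ours):

```python
from math import sqrt

def disphenoid_volume(a, b, c):
    """Volume of the disphenoid whose opposite-edge pairs have lengths
    a, b, c, via its circumscribed rectangular parallelepiped; returns
    None if a, b, c do not form an acute triangle (no such disphenoid)."""
    # Squared parallelepiped edges, solved from a^2 = q^2 + r^2, etc.
    p2 = (b*b + c*c - a*a) / 2
    q2 = (a*a + c*c - b*b) / 2
    r2 = (a*a + b*b - c*c) / 2
    if min(p2, q2, r2) <= 0:
        return None  # right or obtuse faces: the folded figure is flat
    return sqrt(p2 * q2 * r2) / 3  # tetrahedron = one third of p*q*r

print(disphenoid_volume(1, 1, 1))  # regular tetrahedron: 1/(6*sqrt(2)) ≈ 0.11785
print(disphenoid_volume(3, 4, 5))  # right-triangle faces: None (flat figure)
```

Note how the degenerate case reproduces the statement above: right-triangle faces make one of p², q², r² vanish, so the doubly covered rectangle encloses no volume.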
The disphenoids are the only polyhedra having infinitely many non-self-intersecting closed geodesics. On a disphenoid, all closed geodesics are non-self-intersecting.
The disphenoids are the tetrahedra in which all f
|
https://en.wikipedia.org/wiki/Dispersive%20partial%20differential%20equation
|
In mathematics, a dispersive partial differential equation or dispersive PDE is a partial differential equation that is dispersive. In this context, dispersion means that waves of different wavelength propagate at different phase velocities.
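For example, for the free Schrödinger equation i u_t = −u_xx, the plane wave u = exp(i(kx − ωt)) is a solution exactly when ω = k², so the phase velocity ω/k = k changes with the wavenumber. A quick numerical sketch (finite differences; the function names are ours):

```python
import cmath

def u(x, t, k):
    """Plane wave exp(i(k x - w t)) with the dispersion relation w = k**2."""
    return cmath.exp(1j * (k * x - k**2 * t))

def residual(x, t, k, h=1e-4):
    """|i u_t + u_xx| at (x, t), estimated by central differences;
    it should vanish when u solves i u_t = -u_xx."""
    ut = (u(x, t + h, k) - u(x, t - h, k)) / (2 * h)
    uxx = (u(x + h, t, k) - 2 * u(x, t, k) + u(x - h, t, k)) / (h * h)
    return abs(1j * ut + uxx)

print(residual(0.3, 0.7, 2.0))         # ~0: the plane wave solves the equation
print([k**2 / k for k in (1.0, 2.0)])  # phase velocities differ: [1.0, 2.0]
```

Because the phase velocity depends on k, an initial wave packet (a superposition of such plane waves) spreads out over time, which is the hallmark of dispersion.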
Examples
Linear equations
Euler–Bernoulli beam equation with time-dependent loading
Airy equation
Schrödinger equation
Klein–Gordon equation
Nonlinear equations
nonlinear Schrödinger equation
Korteweg–de Vries equation (or KdV equation)
Boussinesq equation (water waves)
sine–Gordon equation
See also
Dispersion (optics)
Dispersion (water waves)
Dispersionless equation
External links
The Dispersive PDE Wiki.
Partial differential equations
Nonlinear systems
|
https://en.wikipedia.org/wiki/Mersenne%20conjectures
|
In mathematics, the Mersenne conjectures concern the characterization of a kind of prime numbers called Mersenne primes, meaning prime numbers that are a power of two minus one.
Original Mersenne conjecture
The original, called Mersenne's conjecture, was a statement by Marin Mersenne in his Cogitata Physico-Mathematica (1644; see e.g. Dickson 1919) that the numbers 2^n − 1 were prime for n = 2, 3, 5, 7, 13, 17, 19, 31, 67, 127 and 257, and were composite for all other positive integers n ≤ 257. The first seven entries of his list (for n = 2, 3, 5, 7, 13, 17, 19) had already been proven to be primes by trial division before Mersenne's time; only the last four entries were new claims by Mersenne. Due to the size of those last numbers, Mersenne did not and could not test all of them, nor could his peers in the 17th century. It was eventually determined, after three centuries and the availability of new techniques such as the Lucas–Lehmer test, that Mersenne's conjecture contained five errors, namely two entries are composite (those corresponding to the primes n = 67, 257) and three primes are missing (those corresponding to the primes n = 61, 89, 107). The correct list for n ≤ 257 is: n = 2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107 and 127.
While Mersenne's original conjecture is false, it may have led to the New Mersenne conjecture.
New Mersenne conjecture
The New Mersenne conjecture or Bateman, Selfridge and Wagstaff conjecture (Bateman et al. 1989) states that for any odd natural number p, if any two of the following conditions hold, then so does the third:
p = 2^k ± 1 or p = 4^k ± 3 for some natural number k.
2^p − 1 is prime (a Mersenne prime).
(2^p + 1)/3 is prime (a Wagstaff prime).
If p is an odd composite number, then 2^p − 1 and (2^p + 1)/3 are both composite. Therefore it is only necessary to test primes to verify the truth of the conjecture.
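The three conditions can be checked directly for small exponents. A sketch (helper names are ours; trial division is only feasible here because 2^p − 1 stays small for p ≤ 31):

```python
def is_prime(n):
    """Primality by trial division; adequate for the small numbers used here."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def cond1(p):
    """p = 2^k ± 1 or p = 4^k ± 3 for some natural number k."""
    k = 1
    while 2**k - 1 <= p or 4**k - 3 <= p:
        if p in (2**k - 1, 2**k + 1, 4**k - 3, 4**k + 3):
            return True
        k += 1
    return False

def cond2(p):
    """2^p - 1 is prime (a Mersenne prime)."""
    return is_prime(2**p - 1)

def cond3(p):
    """(2^p + 1)/3 is prime (a Wagstaff prime)."""
    return is_prime((2**p + 1) // 3)

odd_primes = [p for p in range(3, 32, 2) if is_prime(p)]
all_three = [p for p in odd_primes if cond1(p) and cond2(p) and cond3(p)]
print(all_three)  # [3, 5, 7, 13, 17, 19, 31]

# The conjecture predicts no p satisfies exactly two of the three conditions:
assert all(sum((cond1(p), cond2(p), cond3(p))) != 2 for p in odd_primes)
```

For instance p = 11 satisfies only the Wagstaff condition: 2^11 − 1 = 2047 = 23 × 89 is composite, while (2^11 + 1)/3 = 683 is prime, so exactly one condition holds and the conjecture is not contradicted.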
Currently, there are nine known numbers for which all three conditions hold: 3, 5, 7, 13, 17, 19, 31, 61, 127. Bateman et al. expected that no number greater than 127 satisfies all three conditions, and showed that heuristically no greater number would even satisfy two conditions, which would make the New Mersenne Conjecture trivially true.
As of 2023 all the Mersenne primes up to 2^57885161 − 1 are known, and for none of these does the third condition hold except for the ones just mentioned.
Primes which satisfy at least one condition are
2, 3, 5, 7, 11, 13, 17, 19, 23, 31, 43, 61, 67, 79, 89, 101, 107, 127, 167, 191, 199, 257, 313, 347, 521, 607, 701, 1021, 1279, 1709, 2203, 2281, 2617, 3217, 3539, 4093, 4099, 4253, 4423, 5807, 8191, 9689, 9941, ...
Note that the two primes for which the original Mersenne conjecture is false (67 and 257) satisfy the first condition of the new conjecture (67 = 2^6 + 3, 257 = 2^8 + 1), but not the other two. 89 and 107, which were missed by Mersenne, satisfy the second condition but not the other two. Mersenne may have thought that 2^p − 1 is prime only if p = 2^k ±
|
https://en.wikipedia.org/wiki/Daniel%20W.%20Stroock
|
Daniel Wyler Stroock (born March 20, 1940) is an American mathematician, a probabilist. He is regarded as one of the fundamental contributors to Malliavin calculus (with Shigeo Kusuoka) and to the theory of diffusion processes (with S. R. Srinivasa Varadhan), with an orientation towards the refinement and further development of Itô's stochastic calculus.
Biography
He received his undergraduate degree from Harvard University in 1962 and his doctorate from Rockefeller University in 1966. He has taught at the Courant Institute of Mathematical Sciences and the University of Colorado, Boulder and is currently Simons Professor at the Massachusetts Institute of Technology. He is known for his work with S. R. S. Varadhan on diffusion processes, for which he received the Leroy P. Steele Prize for Seminal Contribution to Research in 1996.
Stroock is a member of the U.S. National Academy of Sciences. In 2012 he became a fellow of the American Mathematical Society.
Quotes
Mathematics is one, and possibly the only, human endeavor for which there is a widely, if not universally, recognized criterion with which to determine truth. For this reason, mathematicians can avoid some of the interminable disputes which plague other fields. On the other hand, I sometimes wonder whether the most interesting questions are not those for which such disputes are inevitable.
Selected publications
with S. R. S. Varadhan: ; reprintings 1997, 2006
with Andrzej Korzeniowski:
with Jean-Dominique Deuschel: ; reprinting 2001
; Birkhäuser, 2nd edition 1994;
References
External links
Home page for Daniel W. Stroock
Members of the United States National Academy of Sciences
Fellows of the American Mathematical Society
Probability theorists
20th-century American mathematicians
21st-century American mathematicians
Living people
University of Colorado faculty
Massachusetts Institute of Technology School of Science faculty
Harvard University alumni
1940 births
Courant Institute of Mathematical Sciences faculty
|
https://en.wikipedia.org/wiki/Infinitely%20near%20point
|
In algebraic geometry, an infinitely near point of an algebraic surface S is a point on a surface obtained from S by repeatedly blowing up points. Infinitely near points of algebraic surfaces were introduced by .
There are some other meanings of "infinitely near point". Infinitely near points can also be defined for higher-dimensional varieties: there are several inequivalent ways to do this, depending on what one is allowed to blow up. Weil gave a definition of infinitely near points of smooth varieties, though these are not the same as infinitely near points in algebraic geometry.
In the line of hyperreal numbers, an extension of the real number line, two points are called infinitely near if their difference is infinitesimal.
Definition
When blowing up is applied to a point P on a surface S, the new surface S* contains a whole curve C where P used to be. The points of C have the geometric interpretation as the tangent directions at P to S. They can be called infinitely near to P as a way of visualizing them on S, rather than S*. More generally this construction can be iterated by blowing up a point on the new curve C, and so on.
An infinitely near point (of order n) Pn on a surface S0 is given by a sequence of points P0, P1,...,Pn on surfaces S0, S1,...,Sn such that Si is given by blowing up Si–1 at the point Pi–1 and Pi is a point of the surface Si with image Pi–1.
In particular the points of the surface S are the infinitely near points on S of order 0.
Infinitely near points correspond to 1-dimensional valuations of the function field of S with 0-dimensional center, and in particular correspond to some of the points of the Zariski–Riemann surface. (The 1-dimensional valuations with 1-dimensional center correspond to irreducible curves of S.) It is also possible to iterate the construction infinitely often, producing an infinite sequence P0, P1,... of infinitely near points. These infinite sequences correspond to the 0-dimensional valuations of the function field of the surface, which correspond to the "0-dimensional" points of the Zariski–Riemann surface.
Applications
If C and D are distinct irreducible curves on a smooth surface S intersecting at a point p, then the multiplicity of their intersection at p is given by
(C · D)_p = Σ_x m_x(C) m_x(D),
where the sum runs over p and the infinitely near points x lying on both C and D, and m_x(C) is the multiplicity of C at x. In general this is larger than m_p(C)m_p(D) if C and D have a common tangent line at p so that they also intersect at infinitely near points of order greater than 0, for example if C is the line y = 0 and D is the parabola y = x^2 and p = (0,0).
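In the line/parabola example the count can be carried out by hand (a sketch, using the notation above; the sum over infinitely near points is a standard statement of this formula):

```latex
% C: y = 0 and D: y = x^2 are both smooth at p = (0,0) and share the
% tangent line y = 0. After one blow-up their strict transforms meet
% transversally at a single infinitely near point x_1 of order 1, so
(C \cdot D)_p
  = m_p(C)\, m_p(D) + m_{x_1}(C)\, m_{x_1}(D)
  = 1 \cdot 1 + 1 \cdot 1
  = 2,
% which matches the order of vanishing of (y - x^2)\big|_{y=0} = -x^2
% at the origin.
```

The extra contribution from the infinitely near point is exactly what makes the intersection multiplicity exceed the naive product m_p(C)m_p(D) = 1.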
The genus of C is given by
where N is the normalization of C and mx is the multiplicity of the infinitely near point x on C.
References
Geometry
Differential calculus
Nonstandard analysis
Birational geometry
|
https://en.wikipedia.org/wiki/John%20Morgan%20%28mathematician%29
|
John Willard Morgan (born March 21, 1946) is an American mathematician known for his contributions to topology and geometry. He is a Professor Emeritus at Columbia University and a member of the Simons Center for Geometry and Physics at Stony Brook University.
Life
Morgan received his B.A. in 1968 and Ph.D. in 1969, both from Rice University. His Ph.D. thesis, entitled Stable tangential homotopy equivalences, was written under the supervision of Morton L. Curtis. He was an instructor at Princeton University from 1969 to 1972, and an assistant professor at MIT from 1972 to 1974. He has been on the faculty at Columbia University since 1974, serving as the Chair of the Department of Mathematics from 1989 to 1991 and becoming Professor Emeritus in 2010. Morgan is a member of the Simons Center for Geometry and Physics at Stony Brook University and served as its founding director from 2009 to 2016.
From 1974 to 1976, Morgan was a Sloan Research Fellow. In 2008, he was awarded a Gauss Lectureship by the German Mathematical Society. In 2009 he was elected to the National Academy of Sciences. In 2012 he became a fellow of the American Mathematical Society. Morgan is a Member of the European Academy of Sciences.
Mathematical contributions
Morgan's best-known work deals with the topology of complex manifolds and algebraic varieties. In the 1970s, Dennis Sullivan developed the notion of a minimal model of a differential graded algebra. One of the simplest examples of a differential graded algebra is the space of smooth differential forms on a smooth manifold, so that Sullivan was able to apply his theory to understand the topology of smooth manifolds. In the setting of Kähler geometry, due to the corresponding version of the Poincaré lemma, this differential graded algebra has a decomposition into holomorphic and anti-holomorphic parts. In collaboration with Pierre Deligne, Phillip Griffiths, and Sullivan, Morgan used this decomposition to apply Sullivan's theory to study the topology of simply-connected compact Kähler manifolds. Their primary result is that the real homotopy type of such a space is determined by its cohomology ring. Morgan later extended this analysis to the setting of smooth complex algebraic varieties, using Deligne's formulation of mixed Hodge structures to extend the Kähler decomposition of smooth differential forms and of the exterior derivative.
In 2002 and 2003, Grigori Perelman posted three papers to the arXiv which purported to use Richard Hamilton's theory of Ricci flow to solve the geometrization conjecture in three-dimensional topology, of which the renowned Poincaré conjecture is a special case. Perelman's first two papers claimed to prove the geometrization conjecture; the third paper gives an argument which would obviate the technical work in the second half of the second paper in order to give a shortcut to prove the Poincaré conjecture. Many mathematicians found Perelman's work to be hard to follow due to a lack of detai
|
https://en.wikipedia.org/wiki/Bruce%20Kleiner
|
Bruce Alan Kleiner is an American mathematician, working in differential geometry and topology and geometric group theory.
He received his Ph.D. in 1990 from the University of California, Berkeley. His advisor was Wu-Yi Hsiang. Kleiner is a professor of mathematics at New York University.
Kleiner has written expository papers on the Ricci flow. Together with John Lott of the University of Michigan, he filled in details of Grigori Perelman's proof of the Geometrization conjecture (from which the Poincaré conjecture follows) in the years 2003–2006. Theirs was the first publication acknowledging Perelman's accomplishment (in May, 2006), which was shortly followed by similar papers by Huai-Dong Cao and Xi-Ping Zhu (in June) and John Morgan and Gang Tian (in July).
Kleiner found a relatively simple proof of Gromov's theorem on groups of polynomial growth. He also proved the Cartan–Hadamard conjecture in dimension 3.
References
Citations
Bibliography
External links
Home page at NYU
20th-century American mathematicians
21st-century American mathematicians
Geometers
Living people
Topologists
Yale University faculty
Courant Institute of Mathematical Sciences faculty
University of California, Berkeley alumni
University of Michigan faculty
Year of birth missing (living people)
|
https://en.wikipedia.org/wiki/Zhu%20Xiping
|
Zhu Xiping (born 1962 in Shixing, Guangdong) is a Chinese mathematician. He is a professor of Mathematics at Sun Yat-sen University, China.
Poincaré conjecture
In 2002 and 2003, Grigori Perelman posted three preprints to the arXiv claiming a resolution of the renowned Poincaré conjecture, along with the more general geometrization conjecture. His work contained a number of notable new results on the Ricci flow, although many proofs were only sketched and a number of details were unaddressed. Zhu collaborated with Huai-Dong Cao of Lehigh University in filling in the details of Perelman's work, along with reworking various elements. Their work, containing expositions of Perelman's work along with the foundational work of Richard Hamilton, was published in the June 2006 issue of the Asian Journal of Mathematics. Other notable expositions were released around the same time, one by John Morgan of Columbia University and Gang Tian of Princeton University, and the other by Bruce Kleiner of Yale University and John Lott of University of Michigan.
Cao and Zhu later posted a version with revised wording to the arXiv, following criticism alleging that their original version claimed too much credit for themselves. They also published an erratum, as it had been found that one of the pages of their work was essentially identical to a page from a publicly available draft of Kleiner and Lott from 2003. They explained that they had taken down some notes from Kleiner and Lott's paper. When writing their exposition, they had failed to realize these particular notes' original source.
Morningside Medal
In December 2004, Zhu won the Morningside Medal of Mathematics at the Third International Congress of Chinese Mathematicians (ICCM), a triennial congress hosted by institutions in Mainland China, Taiwan, and Hong Kong on a rotating basis. According to ICCM, "Awardees (of the Morningside Medal) are selected by a panel of international renowned mathematicians with the aim to encourage outstanding mathematicians of Chinese descent in their pursuit of mathematical truth."
Major publications
References
1962 births
Living people
21st-century Chinese mathematicians
Educators from Guangdong
Hakka scientists
Mathematicians from Guangdong
21st-century Chinese science writers
Academic staff of Sun Yat-sen University
Writers from Shaoguan
|
https://en.wikipedia.org/wiki/Huai-Dong%20Cao
|
Huai-Dong Cao (born 8 November 1959, in Jiangsu) is a Chinese–American mathematician. He is the A. Everett Pitcher Professor of Mathematics at Lehigh University. He is known for his research contributions to the Ricci flow, a topic in the field of geometric analysis.
Academic history
Cao received his B.A. from Tsinghua University in 1981 and his Ph.D. from Princeton University in 1986 under the supervision of Shing-Tung Yau.
Cao is a former Associate Director, Institute for Pure and Applied Mathematics (IPAM) at UCLA. He has held visiting Professorships at MIT, Harvard University, Isaac Newton Institute, Max-Planck Institute, IHES, ETH Zurich, and University of Pisa. He has been the managing editor of the Journal of Differential Geometry since 2003. His awards and honors include:
Sloan Research Fellowship (1991-1993)
Guggenheim Fellowship (2004)
Outstanding Overseas Young Researcher Award awarded by the National Natural Science Foundation of China (2005)
Mathematical contributions
Kähler-Ricci flow
In 1982, Richard S. Hamilton introduced the Ricci flow, proving a dramatic new theorem on the geometry of three-dimensional manifolds. Cao, who had just begun his Ph.D. studies under Shing-Tung Yau, began to study the Ricci flow in the setting of Kähler manifolds. In his Ph.D. thesis, published in 1985, he showed that Yau's estimates in the resolution of the Calabi conjecture could be modified to the Kähler-Ricci flow context, to prove a convergence theorem similar to Hamilton's original result. This also provided a parabolic alternative to Yau's method of continuity in the proof of the Calabi conjecture, although much of the technical work in the proofs is similar.
Perelman's work on the Ricci flow
Following a suggestion of Yau's that the Ricci flow could be used to prove William Thurston's Geometrization conjecture, Hamilton developed the theory over the following two decades. In 2002 and 2003, Grisha Perelman posted two articles to the arXiv in which he claimed to present a proof, via the Ricci flow, of the geometrization conjecture. Additionally, he posted a third article in which he gave a shortcut to the proof of the famous Poincaré conjecture, for which the results in the second half of the second paper were unnecessary. Perelman's papers were immediately recognized as giving notable new results in the theory of Ricci flow, although many mathematicians were unable to fully understand the technical details of some unusually complex or terse sections in his work.
Bruce Kleiner of Yale University and John Lott of the University of Michigan began posting annotations of Perelman's first two papers to the web in 2003, adding to and modifying them over the next several years. The results of this work were published in an academic journal in 2008. Cao collaborated with Xi-Ping Zhu of Zhongshan University, publishing an exposition in 2006 of Hamilton's work and of Perelman's first two papers, explaining them in the context of the mathematical
|
https://en.wikipedia.org/wiki/Alexander%20Givental
|
Alexander Givental () is a Russian-American mathematician working in symplectic topology and singularity theory, as well as their relation to topological string theories. He graduated from Moscow Phys-Math school number 2 (later renamed Lyceum ) and then the Gubkin Russian State University of Oil and Gas, and he received his Ph.D. under the supervision of V. I. Arnold in 1987. He emigrated to the United States in 1990. He provided the first proof of the mirror conjecture for Calabi–Yau manifolds that are complete intersections in toric ambient spaces, in particular for quintic hypersurfaces in P^4. He is now Professor of Mathematics at the University of California, Berkeley. As an extracurricular activity, he translates Russian poetry into English and publishes books, including his own translation of a textbook () in geometry by Andrey Kiselyov and poetry of Marina Tsvetaeva. Givental is a father of two.
References
Sumizdat, publisher of English translation of Geometry
MAA review of Geometry
External links
Personal website at Berkeley
20th-century American mathematicians
21st-century American mathematicians
Russian mathematicians
Algebraic geometers
American people of Russian descent
Living people
University of California, Berkeley College of Letters and Science faculty
Soviet mathematicians
1958 births
|
https://en.wikipedia.org/wiki/Frank%20Quinn%20%28mathematician%29
|
Frank Stringfellow Quinn, III (born 1946) is an American mathematician and professor of mathematics at Virginia Polytechnic Institute and State University, specializing in geometric topology.
Contributions
He contributed to the mathematical field of 4-manifolds, including a proof of the 4-dimensional annulus theorem. In surgery theory, he made several important contributions: the invention of the assembly map, that enables a functorial description of surgery in the topological category, with his thesis advisor, William Browder, the development of an early surgery theory for stratified spaces, and perhaps most importantly, he pioneered the use of controlled methods in geometric topology and in algebra. Among his important applications of "control" are his aforementioned proof of the 4-dimensional annulus theorem, his development of a flexible category of stratified spaces, and, in combination with work of Robert D. Edwards, a useful characterization of high-dimensional manifolds among homology manifolds.
In addition to his work in mathematical research, he has written articles on the nature and history of mathematics and on issues of mathematical education.
Awards and honors
In 2012 he became a fellow of the American Mathematical Society.
Selected publications
Frank Quinn, Ends of maps. I. Annals of Mathematics (2) 110 (1979), no. 2, 275–331.
Frank Quinn, Ends of maps. II. Inventiones Mathematicae 68 (1982), no. 3, 353–424.
Frank Quinn, Ends of maps. III. Dimensions 4 and 5. Journal of Differential Geometry 17 (1982), no. 3, 503–521.
Michael Freedman and Frank Quinn, Topology of 4-manifolds. Princeton Mathematical Series, 39. Princeton University Press, Princeton, NJ, 1990. viii+259 pp.
Vyacheslav S. Krushkal and Frank Quinn, Subexponential groups in 4-manifold topology. Geometry and Topology 4 (2000), 407–430.
References
External links
Home page
Theoretical Mathematics by Arthur Jaffe and Frank Quinn
AMS K-12 education Working Group
Frank Quinn papers at Microsoft Academic
Prospects in Topology (AM-138), Volume 138: Proceedings of a Conference in Honor of William Browder. (AM-138)
20th-century American mathematicians
21st-century American mathematicians
Topologists
Princeton University alumni
Virginia Tech faculty
Fellows of the American Mathematical Society
1946 births
Living people
|
https://en.wikipedia.org/wiki/Joseph%20J.%20Kohn
|
Joseph John Kohn (May 18, 1932 – September 13, 2023) was a Czechoslovakian-born American academic and mathematician. He was professor of mathematics at Princeton University, where he researched partial differential operators and complex analysis.
Life and work
Kohn's father was the Czech-Jewish architect Otto Kohn. After Nazi Germany invaded Czechoslovakia, he and his family emigrated to Ecuador in 1939, where Joseph attended Colegio Americano de Quito.
In 1945, Joseph moved to the United States, where he attended Brooklyn Technical High School. He studied at MIT (B.S. 1953) and at Princeton University, where he earned his Ph.D. in 1956 under Donald Spencer ("A Non-Self-Adjoint Boundary Value Problem on Pseudo-Kähler Manifolds").
From 1956 to 1957, Kohn was an instructor at Princeton. At Brandeis University he became an assistant professor in 1958, associate professor in 1962, and professor in 1964, also serving as Chairman of the Mathematics Department (1963–66). From 1968 he was a professor at Princeton University, where he served as chairman from 1993 to 1996. He was a visiting professor at Harvard (1996–97), Prague, Florence, Mexico City (National Polytechnic Institute), Stanford, Berkeley, Scuola Normale Superiore (Pisa, Italy), and IHES (France).
Kohn's work focused, among other things, on the use of partial differential operators in the theory of functions of several complex variables and microlocal analysis. He has at least 65 doctoral descendants.
Kohn was a Sloan Fellow in 1963 and a Guggenheim Fellow in 1976–77. From 1976 to 1988, he was a member of the editorial board of the Annals of Mathematics. In 1966, he was an invited speaker at the International Congress of Mathematicians in Moscow ("Differential complexes").
Film director Miloš Forman was his half-brother through their father Otto Kohn.
Kohn died in Plainsboro, New Jersey on September 13, 2023, at the age of 91.
Awards and honors
Kohn was a member of the American Academy of Arts and Sciences from 1966 and of the National Academy of Sciences from 1988. In 2012, he became a fellow of the American Mathematical Society (AMS).
Kohn won the AMS's Steele Prize in 1979 for his paper Harmonic integrals on strongly convex domains. In 1990, he received an Honorary Doctorate from the University of Bologna. In 2004, he was awarded the Bolzano Prize.
Literature
Bloom, Catlin, D'Angelo, Siu (eds.), Modern methods in complex analysis. Papers from the conference honoring Robert Gunning and Joseph Kohn on the occasion of their 60th birthdays, held at Princeton University in 1992, Princeton University Press, 1995
References
External links
Curriculum Vitae of Joseph J. Kohn
Leroy P. Steele prizes
1932 births
2023 deaths
Czech Jews
Princeton University alumni
Harvard University staff
Princeton University faculty
20th-century American mathematicians
21st-century American mathematicians
Complex analysts
PDE theorists
Fellows of the American Aca
|
https://en.wikipedia.org/wiki/Region%20connection%20calculus
|
The region connection calculus (RCC) is intended to serve for qualitative spatial representation and reasoning. RCC abstractly describes regions (in Euclidean space, or in a topological space) by their possible relations to each other. RCC8 consists of 8 basic relations that are possible between two regions:
disconnected (DC)
externally connected (EC)
equal (EQ)
partially overlapping (PO)
tangential proper part (TPP)
tangential proper part inverse (TPPi)
non-tangential proper part (NTPP)
non-tangential proper part inverse (NTPPi)
From these basic relations, combinations can be built. For example, proper part (PP) is the union of TPP and NTPP.
Axioms
RCC is governed by two axioms.
for any region x, x connects with itself
for any region x, y, if x connects with y, y will connect with x
Remark on the axioms
The two axioms describe two features of the connection relation, but not the characteristic feature of the connect relation. For example, we can say that an object is less than 10 meters away from itself and that if object A is less than 10 meters away from object B, object B will be less than 10 meters away from object A. So, the relation 'less-than-10-meters' also satisfies the above two axioms, but does not talk about the connection relation in the intended sense of RCC.
Composition table
The composition table of RCC8 is as follows:
"*" denotes the universal relation, no relation can be discarded.
Usage example: if a TPP b and b EC c, (row 4, column 2) of the table says that a DC c or a EC c.
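The table lookup can be sketched in code. Only a few of the 64 entries of the composition table are encoded here (the relation names are the standard RCC8 abbreviations; the fragment is for illustration, not a complete table):

```python
# A small fragment of the RCC8 composition table. compose(r1, r2) returns
# the set of relations possible between a and c, given "a r1 b" and "b r2 c".
ALL = {"DC", "EC", "PO", "TPP", "TPPi", "NTPP", "NTPPi", "EQ"}

COMPOSITION = {
    ("TPP", "EC"): {"DC", "EC"},     # the usage example from the text
    ("DC", "DC"): ALL,               # "*": no relation can be discarded
    ("NTPP", "NTPP"): {"NTPP"},      # non-tangential containment composes
    ("EQ", "EC"): {"EC"},            # equality leaves the relation unchanged
}

def compose(r1, r2):
    return COMPOSITION[(r1, r2)]

print(compose("TPP", "EC"))  # {'DC', 'EC'}
```

A constraint solver would intersect such composed sets with the stated constraints, as the path-consistency algorithm does in the example below.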
Examples
The RCC8 calculus is intended for reasoning about spatial configurations. Consider the following example: two houses are connected via a road. Each house is located on an own property. The first house possibly touches the boundary of the property; the second one surely does not. What can we infer about the relation of the second property to the road?
The spatial configuration can be formalized in RCC8 as the following constraint network:
house1 DC house2
house1 {TPP, NTPP} property1
house1 {DC, EC} property2
house1 EC road
house2 { DC, EC } property1
house2 NTPP property2
house2 EC road
property1 { DC, EC } property2
road { DC, EC, TPP, TPPi, PO, EQ, NTPP, NTPPi } property1
road { DC, EC, TPP, TPPi, PO, EQ, NTPP, NTPPi } property2
Using the RCC8 composition table and the path-consistency algorithm, we can refine the network in the following way:
road { PO, EC } property1
road { PO, TPP } property2
That is, the road either overlaps (PO) property2, or is a tangential proper part of it. But, if the road is a tangential proper part of property2, then the road can only be externally connected (EC) to property1. That is, road PO property1 is not possible when road TPP property2. This fact is not obvious, but can be deduced once we examine the consistent "singleton-labelings" of the constraint network. The following paragraph briefly describes singleton-labelings.
First, we note that the path-consistency algorith
|
https://en.wikipedia.org/wiki/Minkowski%27s%20bound
|
In algebraic number theory, Minkowski's bound gives an upper bound of the norm of ideals to be checked in order to determine the class number of a number field K. It is named for the mathematician Hermann Minkowski.
Definition
Let D be the discriminant of the field, n the degree of K over the rationals, and r2 the number of complex embeddings, so that n = r1 + 2r2, where r1 is the number of real embeddings. Then every class in the ideal class group of K contains an integral ideal of norm not exceeding Minkowski's bound
M_K = √|D| (4/π)^(r2) n!/n^n.
Minkowski's constant for the field K is this bound M_K.
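The bound is straightforward to evaluate. The following sketch computes it for two standard imaginary quadratic fields (discriminants −4 for Q(i) and −20 for Q(√−5)):

```python
from math import factorial, pi, sqrt

def minkowski_bound(disc, r2, n):
    """Minkowski's bound M_K = sqrt(|D|) * (4/pi)**r2 * n!/n**n."""
    return sqrt(abs(disc)) * (4 / pi) ** r2 * factorial(n) / n ** n

# Q(i): D = -4, n = 2, r2 = 1. The bound is 4/pi ~ 1.27 < 2, so every
# ideal class contains an ideal of norm 1, and the class number is 1.
print(minkowski_bound(-4, 1, 2))

# Q(sqrt(-5)): D = -20 gives a bound of ~2.85, so only the prime ideals
# of norm at most 2 need to be examined to generate the class group.
print(minkowski_bound(-20, 1, 2))
```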
Properties
Since the number of integral ideals of given norm is finite, the finiteness of the class number is an immediate consequence, and further, the ideal class group is generated by the prime ideals of norm at most MK.
Minkowski's bound may be used to derive a lower bound for the discriminant of a field K given n, r1 and r2. Since an integral ideal has norm at least one, we have 1 ≤ M_K, so that
√|D| ≥ (π/4)^(r2) n^n/n!.
For n at least 2, it is easy to show that the lower bound is greater than 1, so we obtain Minkowski's Theorem, that the discriminant of every number field, other than Q, is non-trivial. This implies that the field of rational numbers has no unramified extension.
Proof
The result is a consequence of Minkowski's theorem.
References
External links
Stevenhagen, Peter. Number Rings.
The Minkowski Bound at Secret Blogging Seminar
Theorems in algebraic number theory
Hermann Minkowski
|
https://en.wikipedia.org/wiki/Model%20output%20statistics
|
In weather forecasting, model output statistics (MOS) is a multiple linear regression technique in which predictands, often near-surface quantities (such as two-meter-above-ground-level air temperature, horizontal visibility, and wind direction, speed and gusts), are related statistically to one or more predictors. The predictors are typically forecasts from a numerical weather prediction (NWP) model, climatic data, and, if applicable, recent surface observations. Thus, output from NWP models can be transformed by the MOS technique into sensible weather parameters that are familiar to a layperson.
Background
Output directly from the NWP model's lowest layer(s) generally is not used by forecasters because the actual physical processes that occur within the Earth's boundary layer are crudely approximated in the model (i.e., physical parameterizations) along with its relatively coarse horizontal resolution. Because of this lack of fidelity and its imperfect initial state, forecasts of near-surface quantities obtained directly from the model are subject to systematic (bias) and random model errors, which tend to grow with time.
In the development of MOS equations, past observations and archived NWP model forecast fields are used with a screening regression to determine the 'best' predictors and their coefficients for a particular predictand and forecast time. By using archived model forecast output along with verifying surface observations, the resulting equations implicitly take into account physical effects and processes which the underlying numerical weather prediction model cannot explicitly resolve, resulting in much better forecasts of sensible weather quantities. In addition to correcting systematic errors, MOS can produce reliable probabilities of weather events from a single model run. In contrast, despite the enormous amount of computing resources devoted to generating them, ensemble model forecasts' relative frequency of events—often used as a proxy for probability—do not exhibit useful reliability. Thus, ensemble NWP model output also requires additional post-processing in order to obtain reliable probabilistic forecasts, using nonhomogeneous Gaussian regression or other methods.
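The screening-regression step can be sketched with synthetic data. Everything below (predictor names, coefficients, noise level) is invented for illustration; an operational MOS development uses archived model output and verifying observations:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "archive": NWP model forecasts (predictors) paired with the
# surface observations (predictand) that verify them.
n = 200
model_t2m = rng.normal(15, 5, n)    # raw model 2-m temperature
model_wind = rng.normal(5, 2, n)    # model 10-m wind speed
# Pretend the verifying observations are a biased, noisy function of the model:
obs_t2m = 0.9 * model_t2m - 0.3 * model_wind + 2.0 + rng.normal(0, 1, n)

# MOS equation: fit obs = b0 + b1 * T + b2 * wind by least squares.
X = np.column_stack([np.ones(n), model_t2m, model_wind])
coef, *_ = np.linalg.lstsq(X, obs_t2m, rcond=None)

# Applying the equation to a new model forecast removes the systematic bias.
new_forecast = np.array([1.0, 20.0, 6.0])
print(new_forecast @ coef)  # bias-corrected 2-m temperature
```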
History
United States
MOS was conceived, and planning for its use began, within the U.S. National Weather Service's (NWS's) Techniques Development Laboratory (TDL) in 1965; forecasts were first issued from it in 1968. Since then, TDL, now the Meteorological Development Laboratory (MDL), has continued to create, refine and update MOS equation sets as additional NWP models were developed and made operational at the National Meteorological Center (NMC) and then the Environmental Modeling Center (EMC).
Given its multi-decadal history within the U.S. NWS and its continuous improvement and superior skill over direct NWP model output, MOS guidance is still one of the most valuable forecast tools used by forecasters within the agency.
United States forecast guidance
Ther
|
https://en.wikipedia.org/wiki/Risk%20matrix
|
A risk matrix is a matrix that is used during risk assessment to define the level of risk by considering the category of probability or likelihood against the category of consequence severity. This is a simple mechanism to increase visibility of risks and assist management decision making.
Definitions
Risk is the lack of certainty about the outcome of making a particular choice. Statistically, the level of downside risk can be calculated as the product of the probability that harm occurs (e.g., that an accident happens) multiplied by the severity of that harm (i.e., the average amount of harm or more conservatively the maximum credible amount of harm). In practice, the risk matrix is a useful approach where either the probability or the harm severity cannot be estimated with accuracy and precision.
Although standard risk matrices exist in certain contexts (e.g. US DoD, NASA, ISO), individual projects and organizations may need to create their own or tailor an existing risk matrix. For example, the harm severity can be categorized as:
Catastrophic: death or permanent total disability, significant irreversible environmental impact, total loss of equipment
Critical: accident level injury resulting in hospitalization, permanent partial disability, significant reversible environmental impact, damage to equipment
Marginal: injury causing lost workdays, reversible moderate environmental impact, minor accident damage level
Minor: injury not causing lost workdays, minimal environmental impact, damage less than a minor accident level
The probability of harm occurring might be categorized as 'certain', 'likely', 'possible', 'unlikely' and 'rare'. However, it must be considered that very low probability estimates may not be very reliable.
The resulting risk matrix could be:
The company or organization then would calculate what levels of risk they can take with different events. This would be done by weighing the risk of an event occurring against the cost to implement safety and the benefit gained from it.
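A risk matrix built from the categories above can be sketched as follows. The mapping of (likelihood, severity) pairs to risk levels is a hypothetical example; real organizations tailor both the scores and the level thresholds:

```python
# Severity and likelihood categories as in the text; the scoring and the
# "Extreme/High/Medium/Low" thresholds are illustrative assumptions.
severities = ["Catastrophic", "Critical", "Marginal", "Minor"]
likelihoods = ["Certain", "Likely", "Possible", "Unlikely", "Rare"]

def risk_level(likelihood, severity):
    # Score each axis (5 = certain .. 1 = rare; 4 = catastrophic .. 1 = minor)
    # and bucket the product into a risk level.
    l = len(likelihoods) - likelihoods.index(likelihood)
    s = len(severities) - severities.index(severity)
    score = l * s
    if score >= 15:
        return "Extreme"
    if score >= 8:
        return "High"
    if score >= 4:
        return "Medium"
    return "Low"

for lk in likelihoods:
    print(f"{lk:>9}:", [risk_level(lk, s) for s in severities])
```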
The following is an example matrix of possible personal injuries, with particular accidents allocated to appropriate cells within the matrix:
The risk matrix is approximate and can often be challenged. For example, the odds of death in an aircraft crash are about 1 in 11 million, compared with about 1 in 5,000 for a motor vehicle; however, a plane crash is rarely survivable, so its severity is far more catastrophic.
Development
On January 30, 1978, a new version of US Department of Defense Instruction 6055.1 ("Department of Defense Occupational Safety and Health Program") was released. It is said to have been an important step towards the development of the risk matrix.
In August 1978, business textbook author David E Hussey defined an investment "risk matrix" with risk on one axis, and profitability on the other. The values on the risk axis were determined by first determining risk impact and risk probability values in a manner identical to completing a 7 x 7 version of the moder
|
https://en.wikipedia.org/wiki/De%20Morgan%20algebra
|
In mathematics, a De Morgan algebra (named after Augustus De Morgan, a British mathematician and logician) is a structure A = (A, ∨, ∧, 0, 1, ¬) such that:
(A, ∨, ∧, 0, 1) is a bounded distributive lattice, and
¬ is a De Morgan involution: ¬(x ∧ y) = ¬x ∨ ¬y and ¬¬x = x. (i.e. an involution that additionally satisfies De Morgan's laws)
In a De Morgan algebra, the laws
¬x ∨ x = 1 (law of the excluded middle), and
¬x ∧ x = 0 (law of noncontradiction)
do not always hold. In the presence of the De Morgan laws, either law implies the other, and an algebra which satisfies them becomes a Boolean algebra.
Remark: It follows that ¬(x ∨ y) = ¬x ∧ ¬y, ¬1 = 0 and ¬0 = 1 (e.g. ¬1 = ¬1 ∨ 0 = ¬1 ∨ ¬¬0 = ¬(1 ∧ ¬0) = ¬¬0 = 0). Thus ¬ is a dual automorphism of (A, ∨, ∧, 0, 1).
If the lattice is defined in terms of the order instead, i.e. (A, ≤) is a bounded partial order with a least upper bound and greatest lower bound for every pair of elements, and the meet and join operations so defined satisfy the distributive law, then the complementation can also be defined as an involutive anti-automorphism, that is, a structure A = (A, ≤, ¬) such that:
(A, ≤) is a bounded distributive lattice, and
¬¬x = x, and
x ≤ y → ¬y ≤ ¬x.
De Morgan algebras were introduced by Grigore Moisil around 1935, although without the restriction of having a 0 and a 1. They were then variously called quasi-boolean algebras in the Polish school, e.g. by Rasiowa and also distributive i-lattices by J. A. Kalman. (i-lattice being an abbreviation for lattice with involution.) They have been further studied in the Argentinian algebraic logic school of Antonio Monteiro.
De Morgan algebras are important for the study of the mathematical aspects of fuzzy logic. The standard fuzzy algebra F = ([0, 1], max(x, y), min(x, y), 0, 1, 1 − x) is an example of a De Morgan algebra where the laws of excluded middle and noncontradiction do not hold.
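The claim about the standard fuzzy algebra can be checked directly. Working over rationals avoids floating-point round-off when verifying the involution exactly:

```python
from fractions import Fraction

# Verify, on a grid of rationals in [0, 1], that the standard fuzzy algebra
# F = ([0, 1], max, min, 0, 1, 1 - x) satisfies the De Morgan laws, while
# the laws of excluded middle and noncontradiction fail.
def neg(x):
    return 1 - x

vals = [Fraction(i, 10) for i in range(11)]
for x in vals:
    assert neg(neg(x)) == x                           # involution
    for y in vals:
        assert neg(min(x, y)) == max(neg(x), neg(y))  # De Morgan law

half = Fraction(1, 2)
print(max(half, neg(half)))  # 1/2, not 1: excluded middle fails
print(min(half, neg(half)))  # 1/2, not 0: noncontradiction fails
```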
Another example is Dunn's four-valued semantics for De Morgan algebra, which has the values T(rue), F(alse), B(oth), and N(either), where
F < B < T,
F < N < T, and
B and N are not comparable.
Kleene algebra
If a De Morgan algebra additionally satisfies x ∧ ¬x ≤ y ∨ ¬y, it is called a Kleene algebra. (This notion should not be confused with the other Kleene algebra generalizing regular expressions.) This notion has also been called a normal i-lattice by Kalman.
Examples of Kleene algebras in the sense defined above include: lattice-ordered groups, Post algebras and Łukasiewicz algebras. Boolean algebras also meet this definition of Kleene algebra. The simplest Kleene algebra that is not Boolean is Kleene's three-valued logic K3. K3 made its first appearance in Kleene's On notation for ordinal numbers (1938). The algebra was named after Kleene by Brignole and Monteiro.
Related notions
De Morgan algebras are not the only plausible way to generalize Boolean algebras. Another way is to keep ¬x ∧ x = 0 (i.e. the law of noncontradiction) but to dro
|
https://en.wikipedia.org/wiki/Negative%20probability
|
The probability of the outcome of an experiment is never negative, although a quasiprobability distribution allows a negative probability, or quasiprobability for some events. These distributions may apply to unobservable events or conditional probabilities.
Physics and mathematics
In 1942, Paul Dirac wrote a paper "The Physical Interpretation of Quantum Mechanics" in which he introduced the concepts of negative energies and negative probabilities.
The idea of negative probabilities later received increased attention in physics and particularly in quantum mechanics. Richard Feynman argued that no one objects to using negative numbers in calculations: although "minus three apples" is not a valid concept in real life, negative money is valid. Similarly, he argued that negative probabilities, as well as probabilities above unity, could be useful in probability calculations.
Negative probabilities have later been suggested to solve several problems and paradoxes. Half-coins provide simple examples for negative probabilities. These strange coins were introduced in 2005 by Gábor J. Székely. Half-coins have infinitely many sides numbered 0, 1, 2, ..., and the positive even numbers are taken with negative probabilities. Two half-coins make a complete coin in the sense that if we flip two half-coins, then the sum of the outcomes is 0 or 1, each with probability 1/2, as if we had simply flipped a fair coin.
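Székely's construction can be checked numerically. Side k of the half-coin carries quasi-probability C(1/2, k)/√2, the k-th coefficient of the formal square root ((1+x)/2)^(1/2) of the fair coin's generating function, so convolving the sequence with itself must recover a fair coin. The truncation at 12 terms is an arbitrary choice:

```python
from fractions import Fraction

def binom_half(k):
    """Generalized binomial coefficient C(1/2, k), computed exactly."""
    c = Fraction(1)
    for j in range(k):
        c *= (Fraction(1, 2) - j) / (j + 1)
    return c

N = 12
# Rational parts of the quasi-probabilities; the common 1/sqrt(2) factors
# of the two coins are accounted for together (their product is 1/2).
q = [binom_half(k) for k in range(N)]
print([float(x) for x in q[:5]])  # the positive even sides are negative

# Flipping two independent half-coins convolves their distributions:
conv = [sum(q[i] * q[n - i] for i in range(n + 1)) for n in range(N)]
fair = [c / 2 for c in conv]
print(fair[:4])  # first two entries are 1/2, the rest 0: a fair coin
```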
In Convolution quotients of nonnegative definite functions and Algebraic Probability Theory Imre Z. Ruzsa and Gábor J. Székely proved that if a random variable X has a signed or quasi distribution where some of the probabilities are negative then one can always find two random variables, Y and Z, with ordinary (not signed / not quasi) distributions such that X, Y are independent and X + Y = Z in distribution. Thus X can always be interpreted as the "difference" of two ordinary random variables, Z and Y. If Y is interpreted as a measurement error of X and the observed value is Z then the negative regions of the distribution of X are masked / shielded by the error Y.
Another example known as the Wigner distribution in phase space, introduced by Eugene Wigner in 1932 to study quantum corrections, often leads to negative probabilities. For this reason, it has later been better known as the Wigner quasiprobability distribution. In 1945, M. S. Bartlett worked out the mathematical and logical consistency of such negative valuedness. The Wigner distribution function is routinely used in physics nowadays, and provides the cornerstone of phase-space quantization. Its negative features are an asset to the formalism, and often indicate quantum interference. The negative regions of the distribution are shielded from direct observation by the quantum uncertainty principle: typically, the moments of such a non-positive-semidefinite quasiprobability distribution are highly constrained, and prevent direct measurability of the negative regions of the distribution. Neverth
|
https://en.wikipedia.org/wiki/%28144898%29%202004%20VD17
|
(144898) 2004 VD17, provisional designation 2004 VD17, is a sub-kilometer asteroid, classified as a near-Earth object of the Apollo group, once thought to have a low probability of impacting Earth on 4 May 2102. It reached a Torino Scale rating of 2 and a Palermo Technical Impact Hazard Scale rating of −0.25. With an observation arc of 17 years, it is known that the closest Earth approach will occur two days earlier, on 2 May 2102, at a distance of about 5.5 million km.
History
2004 VD17 was discovered on 7 November 2004 by the NASA-funded LINEAR asteroid survey. The object is estimated by NASA's Near Earth Object Program Office to be 580 meters in diameter.
Being approximately 580 meters in diameter, if it were to impact land, it would create an impact crater about 10 kilometres wide and generate an earthquake of magnitude 7.4.
Elevated risk estimate in 2006
From February to May 2006, the asteroid was listed with a Torino Scale impact risk value of 2, only the second asteroid in risk-monitoring history to be rated above value 1. With an observation arc of 1511 days, it was estimated to have a 1 in 1320 chance of impacting on 4 May 2102. The Torino rating was lowered to 1 after additional observations on 20 May 2006, and finally dropped to 0 on 17 October 2006.
2008 observations
As of 4 January 2008, the Sentry Risk Table assigned a Torino value of 0 and an impact probability of 1 in 58.8 million for 4 May 2102. This value was far below the background impact rate of objects this size. Further observations allowed it to be removed from the Sentry Risk Table on 14 February 2008.
It will pass near the Earth on 1 May 2032, allowing a refinement of the orbit.
Properties
It has a spectral type of E. This suggests that the asteroid has a high albedo and is at the smaller end of the size range for an object with an absolute magnitude of 18.8.
See also
3103 Eger, possible parent of the Aubrite asteroids
99942 Apophis
Asteroid impact avoidance
Aubrite asteroid family
E-type asteroid
Hungaria family of asteroids
List of exceptional asteroids
References
External links
144898
144898
144898
Near-Earth objects removed from the Sentry Risk Table
20041107
|
https://en.wikipedia.org/wiki/Odd%20greedy%20expansion
|
In number theory, the odd greedy expansion problem asks whether a greedy algorithm for finding Egyptian fractions with odd denominators always succeeds. The problem remains unsolved.
Description
An Egyptian fraction represents a given rational number as a sum of distinct unit fractions. If a rational number p/q is a sum of unit fractions with odd denominators,
then q must be odd. Conversely, every fraction p/q with q odd can be represented as a sum of distinct odd unit fractions. One method of finding such a representation replaces p/q by an equivalent fraction with a suitably chosen larger odd denominator, and then expands the numerator as a sum of distinct divisors of that denominator.
However, a simpler greedy algorithm has successfully found Egyptian fractions in which all denominators are odd for all instances p/q (with q odd) on which it has been tested: let d be the least odd number that is greater than or equal to q/p, include the fraction 1/d in the expansion, and continue in the same way (avoiding repeated uses of the same unit fraction) with the remaining fraction p/q − 1/d. This method is called the odd greedy algorithm and the expansions it creates are called odd greedy expansions.
Stein, Selfridge, Graham, and others have posed the question of whether the odd greedy algorithm terminates with a finite expansion for every p/q with q odd. This question remains open.
Example
Let x = 4/23.
The least odd number greater than or equal to 23/4 = 5.75 is 7 (since ⌈23/4⌉ = 6 is even). So the first step expands 4/23 = 1/7 + 5/161.
The least odd number greater than or equal to 161/5 = 32.2 is 33. So the next step expands 5/161 = 1/33 + 4/5313.
The least odd number greater than or equal to 5313/4 = 1328.25 is 1329. So the third step expands 4/5313 = 1/1329 + 1/2353659.
Since the final term in this expansion is a unit fraction, the process terminates with this expansion as its result.
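The steps above can be sketched directly with exact rational arithmetic (the repeat-avoidance branch is a simple safeguard; it is not exercised by this example):

```python
from fractions import Fraction
from math import ceil

def odd_greedy(p, q, max_terms=1000):
    """Denominators of the odd greedy expansion of p/q (q must be odd)."""
    assert q % 2 == 1
    x = Fraction(p, q)
    denoms = []
    while x > 0 and len(denoms) < max_terms:
        d = ceil(1 / x)              # least integer d with 1/d <= x
        if d % 2 == 0:
            d += 1                   # least *odd* such integer
        if denoms and d <= denoms[-1]:
            d = denoms[-1] + 2       # avoid repeating a unit fraction
        denoms.append(d)
        x -= Fraction(1, d)
    return denoms

print(odd_greedy(4, 23))  # [7, 33, 1329, 2353659]
```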
Fractions with long expansions
It is possible for the odd greedy algorithm to produce expansions that are shorter than the usual greedy expansion, with smaller denominators. For instance,
where the left expansion is the greedy expansion and the right expansion is the odd greedy expansion. However, the odd greedy expansion is more typically long, with large denominators. For instance, as Wagon discovered, the odd greedy expansion for 3/179 has 19 terms, the largest of which is approximately 1.415×10^439491. Curiously, the numerators of the fractions to be expanded in each step of the algorithm form a sequence of consecutive integers.
A similar phenomenon occurs with other numbers, such as 5/5809 (an example found independently by K. S. Brown and David Bailey), which has a 27-term expansion. Although the denominators of this expansion are difficult to compute due to their enormous size, the numerator sequence may be found relatively efficiently using modular arithmetic. Several additional examples of this type were found by Broadhurst, and K. S. Brown has described methods for finding fractions with arbitrarily long expansions.
On even denominators
The odd greedy algorithm cannot terminate when given a fraction with an even denominator, because these fractions do not have finite representations wit
|
https://en.wikipedia.org/wiki/STUDENT%20%28computer%20program%29
|
STUDENT is an early artificial intelligence program that solves algebra word problems. It was written in Lisp by Daniel G. Bobrow for his 1964 PhD thesis (Bobrow 1964). It was designed to read and solve the kind of word problems found in high school algebra books. The program is often cited as an early accomplishment of AI in natural language processing.
Technical description
In the 1960s, mainframe computers were available only in research contexts at universities. Within Project MAC at MIT, the STUDENT system was an early example of question-answering software, and it uniquely combined natural language processing with symbolic programming. Other early attempts at solving algebra story problems were realized with 1960s hardware and software as well, for example the Philips, Baseball and Synthex systems.
STUDENT accepts an algebra story written in the English language as input, and generates a number as output. This is realized with a layered pipeline that consists of heuristics for pattern transformation. At first, sentences in English are converted into kernel sentences, which each contain a single piece of information. Next, the kernel sentences are converted into mathematical expressions. The knowledge base that supports the transformation contains 52 facts.
STUDENT uses a rule-based system with logic inference. The rules are pre-programmed by the software developer and are able to parse natural language.
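The flavor of such pattern-transformation rules can be conveyed by a toy sketch. This is not Bobrow's actual rule base, and the single regular-expression rule below is an invented stand-in for STUDENT's much richer pattern matcher:

```python
import re

def solve(sentence):
    """Toy STUDENT-style solver for one sentence pattern (illustrative only).

    Matches "If the sum of VAR and A is TOTAL, what is VAR?", treats it as
    the kernel equation VAR + A = TOTAL, and returns VAR = TOTAL - A.
    """
    m = re.match(r"If the sum of (\w+) and (\d+) is (\d+), what is \1\?",
                 sentence)
    a, total = int(m.group(2)), int(m.group(3))
    return total - a

print(solve("If the sum of x and 5 is 12, what is x?"))  # 7
```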
More powerful techniques for natural language processing, such as machine learning, came into use later as hardware grew more capable, and gained popularity over simpler rule-based systems.
Example
(extracted from Norvig)
References
Natural Language Input for a Computer Problem Solving System, Bobrow's PhD thesis.
, p. 19
, pp. 76–79
History of artificial intelligence
|
https://en.wikipedia.org/wiki/Matrix-free%20methods
|
In computational mathematics, a matrix-free method is an algorithm for solving a linear system of equations or an eigenvalue problem that does not store the coefficient matrix explicitly, but accesses the matrix by evaluating matrix-vector products. Such methods can be preferable when the matrix is so big that storing and manipulating it would cost a lot of memory and computing time, even with the use of methods for sparse matrices. Many iterative methods allow for a matrix-free implementation, including:
the power method,
the Lanczos algorithm,
Locally Optimal Block Preconditioned Conjugate Gradient Method (LOBPCG),
Wiedemann's coordinate recurrence algorithm, and
the conjugate gradient method.
Krylov subspace methods
Distributed solutions have also been explored using coarse-grain parallel software systems to achieve homogeneous solutions of linear systems.
Matrix-free methods are commonly used for solving non-linear equations, such as the Euler equations in computational fluid dynamics. A matrix-free conjugate gradient method has been applied in non-linear elasto-plastic finite element solvers. Solving these equations requires the calculation of the Jacobian, which is costly in terms of CPU time and storage. To avoid this expense, matrix-free methods are employed. In order to remove the need to calculate the Jacobian, the Jacobian-vector product is formed instead, which is in fact a vector itself. Manipulating and calculating this vector is easier than working with a large matrix or linear system.
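The idea is easy to see with the power method: the iteration only needs a function that applies the operator, never the matrix itself. Below, the 1-D Laplacian tridiag(−1, 2, −1) is applied as a stencil in O(n) memory (the function names and parameters are illustrative):

```python
import numpy as np

def power_method(matvec, n, iters=2000, seed=0):
    """Dominant eigenvalue via matrix-vector products only --
    the matrix is never formed or stored."""
    v = np.random.default_rng(seed).standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = matvec(v)
        v = w / np.linalg.norm(w)
    return v @ matvec(v)  # Rayleigh quotient of the converged vector

def laplacian_1d(v):
    # Apply tridiag(-1, 2, -1) as a three-point stencil.
    w = 2 * v
    w[1:] -= v[:-1]
    w[:-1] -= v[1:]
    return w

n = 20
lam = power_method(laplacian_1d, n)
print(lam)  # largest eigenvalue, analytically 2 + 2*cos(pi/(n+1))
```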
References
Numerical linear algebra
|
https://en.wikipedia.org/wiki/Incomplete%20LU%20factorization
|
In numerical linear algebra, an incomplete LU factorization (abbreviated as ILU) of a matrix is a sparse approximation of the LU factorization often used as a preconditioner.
Introduction
Consider a sparse linear system Ax = b. Such systems are often solved by computing the factorization A = LU, with L lower unitriangular and U upper triangular.
One then solves Lz = b, Ux = z, which can be done efficiently because the matrices are triangular.
For a typical sparse matrix, the LU factors can be much less sparse than the original matrix — a phenomenon called fill-in.
The memory requirements for using a direct solver can then become a bottleneck in solving linear systems. One can combat this problem by using fill-reducing reorderings of the matrix's unknowns, such as the minimum degree algorithm.
An incomplete factorization instead seeks triangular matrices L, U such that A ≈ LU rather than A = LU. Solving LUx = b for x can be done quickly but does not yield the exact solution to Ax = b. So, we instead use the matrix M = LU as a preconditioner in another iterative solution algorithm such as the conjugate gradient method or GMRES.
Definition
For a given matrix A one defines its sparsity graph
G(A) := {(i, j) : A_ij ≠ 0},
which is used to define the conditions a sparsity pattern S needs to fulfill: S must contain G(A) as well as every diagonal position (i, i).
A decomposition of the form A = LU − R where the following hold
L is a lower unitriangular matrix
U is an upper triangular matrix
L and U are zero outside of the sparsity pattern: L_ij = U_ij = 0 for (i, j) ∉ S
R is zero within the sparsity pattern: R_ij = 0 for (i, j) ∈ S
is called an incomplete LU decomposition (with respect to the sparsity pattern S).
The sparsity pattern of L and U is often chosen to be the same as the sparsity pattern of the original matrix A. If the underlying matrix structure can be referenced by pointers instead of copied, the only extra memory required is for the entries of L and U. This preconditioner is called ILU(0).
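A minimal ILU(0) sketch using dense storage (a production implementation would use sparse storage; this version only restricts the updates to A's pattern):

```python
import numpy as np

def ilu0(A):
    """ILU(0): L unit-lower and U upper triangular, both confined to the
    sparsity pattern of A. Dense-storage sketch for illustration."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    pattern = A != 0  # captured once; updates below never create new entries
    for i in range(1, n):
        for k in range(i):
            if pattern[i, k]:
                A[i, k] /= A[k, k]
                for j in range(k + 1, n):
                    if pattern[i, j]:
                        A[i, j] -= A[i, k] * A[k, j]
    L = np.tril(A, -1) + np.eye(n)
    U = np.triu(A)
    return L, U

# A tridiagonal matrix has no fill-in, so ILU(0) reproduces the exact LU:
A = np.array([[4., -1, 0], [-1, 4, -1], [0, -1, 4]])
L, U = ilu0(A)
print(np.allclose(L @ U, A))  # True
```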
Stability
Concerning the stability of the ILU, the following theorem was proven by Meijerink and van der Vorst.
Let A be a nonsingular M-matrix, with (complete) LU decomposition A = LU and incomplete decomposition A = L̃Ũ − R. Then the incomplete factorization exists, and the ILU is at least as stable as the (complete) LU decomposition.
Generalizations
One can obtain a more accurate preconditioner by allowing some level of extra fill in the factorization. A common choice is to use the sparsity pattern of A² instead of A; this matrix is appreciably more dense than A, but still sparse overall. This preconditioner is called ILU(1). One can then generalize this procedure; the ILU(k) preconditioner of a matrix A is the incomplete LU factorization with the sparsity pattern of the matrix A^(k+1).
More accurate ILU preconditioners require more memory, to such an extent that eventually the running time of the algorithm increases even though the total number of iterations decreases. Consequently, there is a cost/accuracy trade-off that users must evaluate, typically on a case-by-case basis depending on the family of linear systems to be solved.
The ILU factorization can be performed as a fixed-point iteration in a highly parallel way.
S
|
https://en.wikipedia.org/wiki/Symmetric%20successive%20over-relaxation
|
In applied mathematics, symmetric successive over-relaxation (SSOR) is a preconditioner.
If the original (symmetric) matrix can be split into its diagonal part D and strictly lower and upper triangular parts as A = D + L + L^T, then the SSOR preconditioner matrix is defined as
M = (D + L) D^(-1) (D + L)^T.
It can also be parametrised by ω as follows:
M(ω) = (ω / (2 − ω)) (D/ω + L) D^(-1) (D/ω + L)^T.
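A direct translation of the formula M(ω) = (ω/(2−ω)) (D/ω + L) D⁻¹ (D/ω + L)ᵀ, built densely here for illustration; note that M inherits symmetry and positive definiteness from A:

```python
import numpy as np

def ssor_preconditioner(A, omega=1.0):
    """SSOR preconditioner for a symmetric matrix A = D + L + L^T."""
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    Dinv = np.diag(1.0 / np.diag(A))
    F = D / omega + L
    return omega / (2 - omega) * F @ Dinv @ F.T

A = np.array([[4., -1, 0], [-1, 4, -1], [0, -1, 4]])
M = ssor_preconditioner(A)         # omega = 1: M = (D+L) D^{-1} (D+L)^T
print(np.allclose(M, M.T))         # symmetric
print(np.all(np.linalg.eigvalsh(M) > 0))  # positive definite
```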
See also
Successive over-relaxation
References
Numerical linear algebra
|
https://en.wikipedia.org/wiki/Ricardinho%20%28footballer%2C%20born%20June%201976%29
|
Ricardo Alexandre dos Santos (born June 24, 1976, in Passos), known simply as Ricardinho, is a former Brazilian footballer who played as a defensive midfielder.
Club statistics
National team statistics
Honours
Minas Gerais State League: 1994, 1996, 1997, 1998
Brazilian Cup: 1996, 2000
Libertadores Cup: 1997
Brazilian Center-West Cup: 1999
Recopa: 1999
South Minas Cup: 2001, 2002
Minas Gerais State Superleague: 2002
Personal Honours
Brazilian Bola de Prata (Placar): 2000
Contract
5 July 2007 to 5 January 2009
References
External links
placar
websoccerclub
soccerterminal
1976 births
Living people
Brazilian men's footballers
Brazilian expatriate men's footballers
Campeonato Brasileiro Série A players
Cruzeiro Esporte Clube players
Kashiwa Reysol players
Kashima Antlers players
J1 League players
J2 League players
Expatriate men's footballers in Japan
Sport Club Corinthians Paulista players
Brazil men's international footballers
Footballers from Minas Gerais
Men's association football midfielders
People from Passos, Minas Gerais
|
https://en.wikipedia.org/wiki/Budapest%20Semesters%20in%20Mathematics
|
The Budapest Semesters in Mathematics program is a study abroad opportunity for North American undergraduate students in Budapest, Hungary. The coursework is primarily mathematical and conducted in English by Hungarian professors whose primary positions are at Eötvös Loránd University or the Alfréd Rényi Institute of Mathematics of the Hungarian Academy of Sciences. Originally started by László Lovász, László Babai, Vera Sós, and Pál Erdős, the first semester was conducted in Spring 1985. The North American part of the program is currently run by Tina Garrett (North American Director) out of St. Olaf College in Northfield, MN. She is supported by Kendra Killpatrick (Associate Director) and Eileen Shimota (Program Administrator). The former North American Directors were Paul D. Humke (1988–2011) and Tom Trotter. The Hungarian director is Dezső Miklós. The first Hungarian director was Gábor J. Székely (1985–1995).
History of the Program
Courses offered
Courses commonly offered at BSM:
Introduction to Abstract Algebra
Advanced Abstract Algebra
Topics in Analysis
Complex Functions
Combinatorics 1
Combinatorics 2
Commutative Algebra
Conjecture and Proof
Functional Analysis
Elementary Problem Solving
Galois Theory
Topics in Geometry
Graph Theory
Number Theory
Topics in Number Theory
Probability Theory
Real Functions and Measures
Set Theory
Introduction to Topology
Mathematical Physics
Independent Research Groups
Theory of Computing
Differential Geometry
Dynamical Systems and Bifurcations
Stochastic Models in Bioinformatics
Mathematical Logic
In addition to mathematics-based courses, students have the opportunity to take culture classes, such as beginning and intermediate Hungarian Language classes, Hungarian Arts and Culture, and Holocaust and Memory.
Location
Classes are held in the College International, located at Bethlen Gábor Tér in the heart of Pest in Budapest's District VII. This is also the location for several other programs which attract both Hungarian and international students. Entry to the building is monitored; each student receives a card that electronically admits him or her to the building. There are also cameras to monitor movement exterior to the building. Several tram and bus lines have stops near the school, as does the Red Metro Line, which stops at Keleti railway station.
Optional intensive language course
Prior to classes starting, students can arrive early to attend an optional two-week, 80-hour intensive language course at the Babilon Nyelvstúdió. Babilon is located at Astoria, right in front of Budapest's Great Synagogue. For approximately eight hours each day, students are immersed in the language, learning numbers, greetings, and other necessary vocabulary.
See also
The North American home page of the program
The Hungarian home page of the program (in English)
The page for the intensive language course
Math in Moscow is a similar program held in Moscow, Russia.
Re
|
https://en.wikipedia.org/wiki/2006%20Claxton%20Shield
|
Results and statistics for the 2006 Claxton Shield
Results
Round 1: Saturday, 21 January 2006
Round 2: Sunday, 22 January 2006
Round 3: Monday, 23 January 2006
Round 4: Tuesday, 24 January 2006
Round 5: Wednesday, 25 January 2006
Final standings
Finals
Semi-finals
Grand final
External links
Information and Results on 2006 Claxton Shield
Queensland Win 2006 Claxton Shield
Claxton Shield
2006 in Australian sport
2006 in baseball
January 2006 sports events in Australia
|
https://en.wikipedia.org/wiki/Winifred%20Merrill
|
Winifred Merrill may refer to:
Winifred Edgerton Merrill (1862–1951), mathematician and astronomer, the first American woman to receive a PhD in mathematics
Winifred Merrill Warren (1898–1990), American violinist and music educator
|
https://en.wikipedia.org/wiki/Doob%27s%20martingale%20inequality
|
In mathematics, Doob's martingale inequality, also known as Kolmogorov's submartingale inequality, is a result in the study of stochastic processes. It gives a bound on the probability that a submartingale exceeds any given value over a given interval of time. As the name suggests, the result is usually given in the case that the process is a martingale, but the result is also valid for submartingales.
The inequality is due to the American mathematician Joseph L. Doob.
Statement of the inequality
The setting of Doob's inequality is a submartingale relative to a filtration of the underlying probability space. The probability measure on the sample space of the martingale will be denoted by . The corresponding expected value of a random variable , as defined by Lebesgue integration, will be denoted by .
Informally, Doob's inequality states that the expected value of the process at some final time controls the probability that a sample path will reach above any particular value beforehand. As the proof uses very direct reasoning, it does not require any restrictive assumptions on the underlying filtration or on the process itself, unlike for many other theorems about stochastic processes. In the continuous-time setting, right-continuity (or left-continuity) of the sample paths is required, but only for the sake of knowing that the supremal value of a sample path equals the supremum over an arbitrary countable dense subset of times.
Discrete time
Let be a discrete-time submartingale relative to a filtration of the underlying probability space, which is to say:
The submartingale inequality says that
for any positive number . The proof relies on the set-theoretic fact that the event defined by may be decomposed as the disjoint union of the events defined by and for all . Then
having made use of the submartingale property for the last inequality and the fact that for the last equality. Summing this result as ranges from 1 to results in the conclusion
which is sharper than the stated result. By using the elementary fact that , the given submartingale inequality follows.
In this proof, the submartingale property is used once, together with the definition of conditional expectation. The proof can also be phrased in the language of stochastic processes so as to become a corollary of the powerful theorem that a stopped submartingale is itself a submartingale. In this setup, the minimal index appearing in the above proof is interpreted as a stopping time.
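The discrete-time inequality, in its usual form C · P[max over n ≤ N of X_n ≥ C] ≤ E[max(X_N, 0)], can also be checked empirically. The following minimal Python sketch (the random walk, sample count and threshold C are illustrative choices, not from the article) estimates both sides for a simple symmetric random walk, which is a martingale and hence a submartingale:

```python
import random

def doob_check(num_paths=20000, N=50, C=5.0, seed=0):
    """Empirically compare the two sides of Doob's submartingale
    inequality,  C * P[max_{0<=n<=N} X_n >= C] <= E[max(X_N, 0)],
    for a simple symmetric random walk.  Returns (lhs, rhs)."""
    rng = random.Random(seed)
    hits = 0               # paths whose running maximum reaches C
    pos_part_sum = 0.0     # accumulates max(X_N, 0) over paths
    for _ in range(num_paths):
        x, running_max = 0, 0
        for _ in range(N):
            x += 1 if rng.random() < 0.5 else -1
            running_max = max(running_max, x)
        if running_max >= C:
            hits += 1
        pos_part_sum += max(x, 0)
    lhs = C * hits / num_paths
    rhs = pos_part_sum / num_paths
    return lhs, rhs
```

With these parameters the left side comes out noticeably below the right side, consistent with the inequality.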
Continuous time
Now let be a submartingale indexed by an interval of real numbers, relative to a filtration of the underlying probability space, which is to say:
for all The submartingale inequality says that if the sample paths of the martingale are almost-surely right-continuous, then
for any positive number . This is a corollary of the above discrete-time result, obtained by writing
in which is any sequence of finite sets whose union is the set of all rational numbers. The first
|
https://en.wikipedia.org/wiki/2005%20Claxton%20Shield
|
Results and Statistics of the 2005 Claxton Shield
Results
Round 1: Saturday, 22 January 2005
Round 2: Sunday, 23 January 2005
Round 3: Monday, 24 January 2005
Round 4: Tuesday, 25 January 2005
Round 5: Wednesday, 26 January 2005
Round 2 Make-up: Thursday, 27 January 2005
Games 1 and 2 on 23 January were rained out, so a make up round was called.
Ladder
Finals
Semi-finals
Grand final
External links
Official 2005 Claxton Shield Website
Claxton Shield
2005 in Australian sport
2005 in baseball
January 2005 sports events in Australia
|
https://en.wikipedia.org/wiki/Valentin%20Po%C3%A9naru
|
Valentin Alexandre Poénaru (born 1932 in Bucharest) is a Romanian–French mathematician. He was a Professor of Mathematics at University of Paris-Sud, specializing in low-dimensional topology.
Life and career
Born in Bucharest, Romania, he did his undergraduate studies at the University of Bucharest. In 1962, he was an invited speaker at the International Congress of Mathematicians in Stockholm, Sweden. While at the congress, Poénaru defected, subsequently leaving for France. He arrived in mid-September 1962 at the Institut des Hautes Études Scientifiques in Bures-sur-Yvette; the IHÉS decided to support him, and he has remained associated with the institute ever since then.
Poénaru defended his Thèse d'État at the University of Paris on March 23, 1963. His dissertation topic was Sur les variétés tridimensionnelles ayant le type d'homotopie de la sphère S3, and was written under the supervision of Charles Ehresmann.
After that, he went to the United States, spending four years at Harvard University and Princeton University. In 1967, he returned to France.
Poénaru has worked for several decades on a proof of the Poincaré conjecture, making a number of related breakthroughs. His first attempt at proving the conjecture dates from 1957. He has described his general approach over the years in different papers and conferences. On December 19, 2006, he posted a preprint to the arXiv, claiming to have finally completed the details of his approach and proven the conjecture.
His doctoral students include Jean Lannes.
Works
Valentin Poenaru, Memories from my former life: the making of a mathematician. In: Geometry in history (ed. S. G. Dani and A. Papadopoulos), 705–732, Springer, Cham, 2019.
Valentin Poenaru, On the 3-Dimensional Poincaré Conjecture and the 4-Dimensional Smooth Schoenflies Problem, .
Valentin Poenaru, Sur les variétés tridimensionnelles ayant le type d'homotopie de la sphère S3, Séminaire Ehresmann, Topologie et géométrie différentielle 6 (1964), Exposé No. 1, 1–67.
Valentin Poenaru, Produits cartésiens de variétés différentielles par un disque, 1963 Proceedings of the International Congress of Mathematicians (Stockholm, 1962), pp. 481–489, Mittag-Leffler Institute, Djursholm. MR0176481.
André Haefliger and Valentin Poenaru, La classification des immersions combinatoires, Publications Mathématiques de l'IHÉS 23 (1964), 75–91.
Iconography
His friend the Peruvian painter Herman Braun-Vega made of him a family portrait with his wife the painter Rigmor Poenaru, where figures and mathematical symbols in the form of graffiti evoke his research works.
See also
Mazur manifold
Poénaru conjecture
List of Eastern Bloc defectors
References
David Gabai, Valentin Poenaru's program for the Poincaré conjecture. Geometry, topology, & physics, 139–166, Conf. Proc. Lecture Notes Geom. Topology, IV, Int. Press, Cambridge, MA, 1995.
External links
Terza e quarta dimensione: un mistero da svelare, interview by Marinella Daidone from Univ
|
https://en.wikipedia.org/wiki/Multiplicative%20distance
|
In algebraic geometry, is said to be a multiplicative distance function over a field if it satisfies
AB is congruent to A'B' iff
AB < A'B' iff
See also
Algebraic geometry
Hyperbolic geometry
Poincaré disc model
Hilbert's arithmetic of ends
References
Algebraic geometry
|
https://en.wikipedia.org/wiki/Hilbert%20system
|
In mathematical physics, Hilbert system is an infrequently used term for a physical system described by a C*-algebra.
In logic, especially mathematical logic, a Hilbert system, sometimes called Hilbert calculus, Hilbert-style deductive system or Hilbert–Ackermann system, is a type of system of formal deduction attributed to Gottlob Frege and David Hilbert. These deductive systems are most often studied for first-order logic, but are of interest for other logics as well.
Most variants of Hilbert systems take a characteristic tack in the way they balance a trade-off between logical axioms and rules of inference. Hilbert systems can be characterised by the choice of a large number of schemes of logical axioms and a small set of rules of inference. Systems of natural deduction take the opposite tack, including many deduction rules but very few or no axiom schemes. The most commonly studied Hilbert systems have either just one rule of inference (modus ponens, for propositional logics) or two (with generalisation, to handle predicate logics as well), together with several infinite axiom schemes. Hilbert systems for propositional modal logics, sometimes called Hilbert–Lewis systems, are generally axiomatised with two additional rules, the necessitation rule and the uniform substitution rule.
A characteristic feature of the many variants of Hilbert systems is that the context is not changed in any of their rules of inference, while both natural deduction and sequent calculus contain some context-changing rules. Thus, if one is interested only in the derivability of tautologies, with no hypothetical judgments, then one can formalize the Hilbert system in such a way that its rules of inference contain only judgments of a rather simple form. The same cannot be done with the other two deduction systems: as context is changed in some of their rules of inference, they cannot be formalized so that hypothetical judgments are avoided, even if we want to use them just for proving derivability of tautologies.
Formal deductions
In a Hilbert-style deduction system, a formal deduction is a finite sequence of formulas in which each formula is either an axiom or is obtained from previous formulas by a rule of inference. These formal deductions are meant to mirror natural-language proofs, although they are far more detailed.
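A formal deduction in this sense is easy to verify mechanically. The following minimal Python sketch (the formula encoding and function name are illustrative choices, not part of any standard system) checks that each line of a deduction is an axiom, a hypothesis, or follows from two earlier lines by modus ponens:

```python
def check_deduction(deduction, hypotheses, axioms):
    """Verify a Hilbert-style formal deduction: every formula must be an
    axiom, a hypothesis, or obtained from earlier formulas by modus
    ponens.  Formulas are strings (atoms) or nested tuples
    ('->', antecedent, consequent)."""
    proved = []
    for phi in deduction:
        ok = phi in axioms or phi in hypotheses
        if not ok:
            # look for some earlier psi together with psi -> phi
            ok = any(('->', psi, phi) in proved for psi in proved)
        if not ok:
            return False
        proved.append(phi)
    return True

# p, p -> q  |-  q : a one-step application of modus ponens
hyps = {'p', ('->', 'p', 'q')}
assert check_deduction(['p', ('->', 'p', 'q'), 'q'], hyps, axioms=set())
```

The sketch omits axiom schemes (an axiom scheme would be a membership test on the shape of the formula rather than a finite set), but the shape of the check is the same.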
Suppose is a set of formulas, considered as hypotheses. For example, could be a set of axioms for group theory or set theory. The notation means that there is a deduction that ends with using as axioms only logical axioms and elements of . Thus, informally, means that is provable assuming all the formulas in .
Hilbert-style deduction systems are characterized by the use of numerous schemes of logical axioms. An axiom scheme is an infinite set of axioms obtained by substituting all formulas of some form into a specific pattern. The set of logical axioms includes not only those axioms generated from this pattern, but also any generalization of o
|
https://en.wikipedia.org/wiki/List%20of%20middle%20schools%20in%20Albuquerque%2C%20New%20Mexico
|
The following is a list of middle schools in Albuquerque, New Mexico.
Albuquerque Academy
Albuquerque Institute for Mathematics and Science
Cleveland Middle School
Cottonwood Classical Preparatory School
Desert Ridge Middle School
Eisenhower Middle School
Ernie Pyle Middle School
Garfield Middle School
Grant Middle School
Harrison Middle School
Hayes Middle School
Hoover Middle School
Jackson Middle School
James Monroe Middle School
Jefferson Middle School
Jimmy Carter Middle School
John Adams Middle School
Kennedy Middle School
L. B. Johnson Middle School
Madison Middle School
McKinley Middle School
Polk Middle School
Roosevelt Middle School
Sandia Preparatory School
Taft Middle School
Taylor Middle School
Tony Hillerman Middle School
Truman Middle School
Van Buren Middle School
Washington Middle School
2021 Washington Middle School Shooting
Wilson Middle School
Albuquerque
|
https://en.wikipedia.org/wiki/Tom%20Cassidy
|
Tom Cassidy (born March 15, 1952) is a Canadian former professional ice hockey centre who briefly played in the National Hockey League for the Pittsburgh Penguins.
Career statistics
External links
1952 births
Living people
Baltimore Clippers players
California Golden Seals draft picks
Canadian ice hockey centres
Ice hockey people from Ontario
Kitchener Rangers players
People from Algoma District
Pittsburgh Penguins players
Oklahoma City Stars players
Rochester Americans players
Springfield Kings players
|
https://en.wikipedia.org/wiki/Favard%20constant
|
In mathematics, the Favard constant, also called the Akhiezer–Krein–Favard constant, of order r is defined as
This constant is named after the French mathematician Jean Favard, and after the Soviet mathematicians Naum Akhiezer and Mark Krein.
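The defining series was omitted above; the formula commonly given in standard references is K_r = (4/π) Σ_{k≥0} ((−1)^k/(2k+1))^(r+1). The Python sketch below (quoted from the literature, not from this article) sums the series and recovers the well-known particular values K_0 = 1 and K_1 = π/2:

```python
import math

def favard(r, terms=200_000):
    """Partial sum of the series commonly given for the Favard constant
    of order r:
        K_r = (4/pi) * sum_{k>=0} ((-1)**k / (2k+1))**(r+1)
    (an assumption taken from standard references, since the defining
    formula is not reproduced in the text)."""
    s = sum(((-1) ** k / (2 * k + 1)) ** (r + 1) for k in range(terms))
    return 4 / math.pi * s

# known particular values: K_0 = 1 and K_1 = pi/2
```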
Particular values
Uses
This constant is used in solutions of several extremal problems, for example
Favard's constant is the sharp constant in Jackson's inequality for trigonometric polynomials
the sharp constants in the Landau–Kolmogorov inequality are expressed via Favard's constants
Norms of periodic perfect splines.
References
Mathematical constants
|
https://en.wikipedia.org/wiki/Automath
|
Automath ("automating mathematics") is a formal language, devised by Nicolaas Govert de Bruijn starting in 1967, for expressing complete mathematical theories in such a way that an included automated proof checker can verify their correctness.
Overview
The Automath system included many novel notions that were later adopted and/or reinvented in areas such as typed lambda calculus and explicit substitution. Dependent types are one outstanding example. Automath was also the first practical system that exploited the Curry–Howard correspondence. Propositions were represented as sets (called "categories") of their proofs, and the question of provability became a question of non-emptiness (type inhabitation); de Bruijn was unaware of Howard's work, and stated the correspondence independently.
L. S. van Benthem Jutting, as part of his Ph.D. thesis in 1976, translated Edmund Landau's Foundations of Analysis into Automath and checked its correctness.
Automath was never widely publicized at the time, however, and so never achieved widespread use; nonetheless, it proved very influential in the later development of logical frameworks and proof assistants. The Mizar system, a system of writing and checking formalized mathematics that is still in active use, was influenced by Automath.
See also
QED manifesto
References
External links
The Automath Archive (mirror)
Thirty Five years of Automath homepage of a workshop to celebrate the 35th year of Automath
Automath page by Freek Wiedijk
Proof assistants
Type theory
|
https://en.wikipedia.org/wiki/Mice%20problem
|
In mathematics, the mice problem is a continuous pursuit–evasion problem in which a number of mice (or insects, dogs, missiles, etc.) are considered to be placed at the corners of a regular polygon. In the classic setup, each then begins to move towards its immediate neighbour (clockwise or anticlockwise). The goal is often to find out at what time the mice meet.
The most common version has the mice starting at the corners of a unit square, moving at unit speed. In this case they meet after a time of one unit, because the distance between two neighboring mice always decreases at a speed of one unit. More generally, for a regular polygon of unit-length sides, the distance between neighboring mice decreases at a speed of , so they meet after a time of .
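The unit-square case can be checked by direct numerical integration. The minimal Python sketch below (step size and stopping tolerance are illustrative choices) simulates n mice on a regular n-gon with unit sides; for the square the simulated meeting time comes out close to the value of one time unit stated above, and in general it approaches 1/(1 − cos(2π/n)), the closed form usually quoted for unit speed:

```python
import math

def meeting_time(n, speed=1.0, dt=1e-4, tol=1e-3):
    """Integrate the mice problem numerically on a regular n-gon with
    unit side length: each mouse steps toward its neighbour at the
    given speed.  Returns the simulated meeting time, which should
    approach 1 / ((1 - cos(2*pi/n)) * speed)."""
    R = 0.5 / math.sin(math.pi / n)  # circumradius giving side length 1
    pts = [(R * math.cos(2 * math.pi * i / n),
            R * math.sin(2 * math.pi * i / n)) for i in range(n)]
    t = 0.0
    while True:
        nxt = []
        for i, (x, y) in enumerate(pts):
            tx, ty = pts[(i + 1) % n]          # chase the next vertex
            dx, dy = tx - x, ty - y
            d = math.hypot(dx, dy)
            if d < tol:                        # mice have (numerically) met
                return t
            nxt.append((x + speed * dt * dx / d,
                        y + speed * dt * dy / d))
        pts = nxt
        t += dt
```

For n = 4 the result is approximately 1, and for n = 3 approximately 2/3, matching 1/(1 − cos(2π/n)).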
Path of the mice
For all regular polygons, each mouse traces out a pursuit curve in the shape of a logarithmic spiral. These curves meet in the center of the polygon.
In media
In Dara Ó Briain: School of Hard Sums, the mice problem is discussed. Instead of 4 mice, 4 ballroom dancers are used.
References
External links
Zeno's Mice (Ants) Problem and the Logarithmic Spirals - YouTube lecture with equation derivation
Recreational mathematics
Pursuit–evasion
|
https://en.wikipedia.org/wiki/Composite%20field%20%28mathematics%29
|
A composite field or compositum of fields is an object of study in field theory. Let L be a field, and let F, K be subfields of L. Then the (internal) composite of F and K is defined to be the intersection of all subfields of L containing both F and K. The composite is commonly denoted FK. When F and K are not regarded as subfields of a common field then the (external) composite is defined using the tensor product of fields.
It can also be defined using the field of fractions:
is the set of all -rational expressions in finitely many elements of .
References
, especially chapter 2
Field (mathematics)
|
https://en.wikipedia.org/wiki/Aisenstadt%20Prize
|
The André Aisenstadt Prize recognizes a young Canadian mathematician's outstanding achievement in pure or applied mathematics.
It has been awarded annually since 1992 (except in 1994, when no prize was given) by the Centre de Recherches Mathématiques at the University of Montreal. The prize consists of a $3,000 award and a medal. It is named after André Aisenstadt.
Prize Winners
Source: CRM, University of Montreal
2021 Giulio Tiozzo (University of Toronto) and Tristan C. Collins (Massachusetts Institute of Technology)
2020 Robert Haslhofer (University of Toronto) and Egor Shelukhin (Université de Montréal)
2019 Yaniv Plan (University of British Columbia)
2018 Benjamin Rossman (University of Toronto)
2017 Jacob Tsimerman (University of Toronto)
2016 Anne Broadbent (University of Ottawa)
2015 Louis-Pierre Arguin (University of Montréal and the City University of New York - Baruch College and Graduate Center)
2014 Sabin Cautis of the University of British Columbia
2013 Spyros Alexakis of the University of Toronto
2012 Marco Gualtieri of the University of Toronto and Young-Heon Kim of the University of British Columbia
2011 Joel Kamnitzer of the University of Toronto
2010 Omer Angel of the University of British Columbia
2009 Valentin Blomer of the University of Toronto
2008 József Solymosi of the University of British Columbia and Jonathan Taylor of the University of Montreal.
2007 Greg Smith of Queen's University and Alexander Holroyd of the University of British Columbia.
2006 Iosif Polterovich of the University of Montreal and Tai-Peng Tsai of the University of British Columbia
2005 Ravi Vakil of Stanford University
2004 Vinayak Vatsal of the University of British Columbia
2003 Alexander Brudnyi of the University of Calgary
2002 Jinyi Chen of the University of British Columbia
2001 Eckhard Meinrenken of the University of Toronto
2000 Changfeng Gui of the University of Connecticut
1999 John Toth of McGill University
1998 Boris A. Khesin of the University of Toronto
1997 Lisa Jeffrey and Henri Darmon of McGill University
1996 Adrian Stephen Lewis of Cornell University
1995 Nigel Higson of Pennsylvania State University and Michael J. Ward of the University of British Columbia
1994 No award
1993 Ian F. Putnam of the University of Victoria
1992 Niky Kamran of McGill University
See also
List of mathematics awards
References
Mathematics awards
Awards established in 1992
|
https://en.wikipedia.org/wiki/Clutching%20construction
|
In topology, a branch of mathematics, the clutching construction is a way of constructing fiber bundles, particularly vector bundles on spheres.
Definition
Consider the sphere as the union of the upper and lower hemispheres and along their intersection, the equator, an .
Given trivialized fiber bundles with fiber and structure group over the two hemispheres, then given a map (called the clutching map), glue the two trivial bundles together via f.
Formally, it is the coequalizer of the inclusions via and : glue the two bundles together on the boundary, with a twist.
Thus we have a map : clutching information on the equator yields a fiber bundle on the total space.
In the case of vector bundles, this yields , and indeed this map is an isomorphism (under connect sum of spheres on the right).
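Written out in the complex case, where connectedness of the structure group makes the statement cleanest (the formulation is the one usually given in the literature, supplied here because the displayed formula is missing above), the correspondence reads:

```latex
\mathrm{Vect}^k_{\mathbb{C}}(S^n)\;\cong\;\pi_{n-1}\bigl(\mathrm{GL}_k(\mathbb{C})\bigr)\;\cong\;\pi_{n-1}\bigl(U(k)\bigr),
```

where the first isomorphism sends the homotopy class of a clutching map to the bundle it glues, and the second uses the deformation retraction of GL_k(ℂ) onto U(k).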
Generalization
The above can be generalized by replacing and with any closed triad , that is, a space X, together with two closed subsets A and B whose union is X. Then a clutching map on gives a vector bundle on X.
Classifying map construction
Let be a fibre bundle with fibre . Let be a collection of pairs such that is a local trivialization of over . Moreover, we demand that the union of all the sets is (i.e. the collection is an atlas of trivializations ).
Consider the space modulo the equivalence relation is equivalent to if and only if and . By design, the local trivializations give a fibrewise equivalence between this quotient space and the fibre bundle .
Consider the space modulo the equivalence relation is equivalent to if and only if and consider to be a map then we demand that . That is, in our re-construction of we are replacing the fibre by the topological group of homeomorphisms of the fibre, . If the structure group of the bundle is known to reduce, you could replace with the reduced structure group. This is a bundle over with fibre and is a principal bundle. Denote it by . The relation to the previous bundle is induced from the principal bundle: .
So we have a principal bundle . The theory of classifying spaces gives us an induced push-forward fibration where is the classifying space of . Here is an outline:
Given a -principal bundle , consider the space . This space is a fibration in two different ways:
1) Project onto the first factor: . The fibre in this case is , which is a contractible space by the definition of a classifying space.
2) Project onto the second factor: . The fibre in this case is .
Thus we have a fibration . This map is called the classifying map of the fibre bundle since 1) the principal bundle is the pull-back of the bundle along the classifying map and 2) The bundle is induced from the principal bundle as above.
Contrast with twisted spheres
Twisted spheres are sometimes referred to as a "clutching-type" construction, but this is misleading: the clutching construction is properly about fiber bundles.
In twisted spheres, you glue two halves along their boundary
|
https://en.wikipedia.org/wiki/Generalized%20Poincar%C3%A9%20conjecture
|
In the mathematical area of topology, the generalized Poincaré conjecture is the statement that a manifold which is a homotopy sphere is a sphere. More precisely, one fixes a category of manifolds: topological (Top), piecewise linear (PL), or differentiable (Diff). Then the statement is
Every homotopy sphere (a closed n-manifold which is homotopy equivalent to the n-sphere) in the chosen category (i.e. topological manifolds, PL manifolds, or smooth manifolds) is isomorphic in the chosen category (i.e. homeomorphic, PL-isomorphic, or diffeomorphic) to the standard n-sphere.
The name derives from the Poincaré conjecture, which was made for (topological or PL) manifolds of dimension 3, where being a homotopy sphere is equivalent to being simply connected and closed. The generalized Poincaré conjecture is known to be true or false in a number of instances, due to the work of many distinguished topologists, including the Fields medal awardees John Milnor, Steve Smale, Michael Freedman, and Grigori Perelman.
Status
Here is a summary of the status of the generalized Poincaré conjecture in various settings.
Top: true in all dimensions.
PL: true in dimensions other than 4; unknown in dimension 4, where it is equivalent to Diff.
Diff: false generally, the first known counterexample is in dimension 7. True in some dimensions including 1, 2, 3, 5, 6, 12, 56 and 61. The case of dimension 4 is equivalent to PL and is unsettled . The previous list includes all odd dimensions and all even dimensions between 6 and 62 for which the conjecture is true; it may be true for some additional even dimensions though it is conjectured that this is not the case.
Thus the veracity of the Poincaré conjectures changes according to which category it is formulated in. More generally the notion of isomorphism differs between the categories Top, PL, and Diff. It is the same in dimension 3 and below. In dimension 4, PL and Diff agree, but Top differs. In dimensions above 6 they all differ. In dimensions 5 and 6 every PL manifold admits an infinitely differentiable structure that is so-called Whitehead compatible.
History
The cases n = 1 and 2 have long been known by the classification of manifolds in those dimensions.
For a PL or smooth homotopy n-sphere, in 1960 Stephen Smale proved for that it was homeomorphic to the n-sphere and subsequently extended his proof to ; he received a Fields Medal for his work in 1966. Shortly after Smale's announcement of a proof, John Stallings gave a different proof for dimensions at least 7 that a PL homotopy n-sphere was homeomorphic to the n-sphere, using the notion of "engulfing". E. C. Zeeman modified Stallings's construction to work in dimensions 5 and 6. In 1962, Smale proved that a PL homotopy n-sphere is PL-isomorphic to the standard PL n-sphere for n at least 5. In 1966, M. H. A. Newman extended PL engulfing to the topological situation and proved that for a topological homotopy n-sphere is homeomorphic to the n-sphere.
Mich
|
https://en.wikipedia.org/wiki/Gelenbevi%20Ismail%20Efendi
|
Ismail (bin Mustafa bin Mahmûd) Gelenbevi (1730–1790 or 1791) was an Ottoman Turkish mathematician, Hanafi Maturidi theologian, logician, philosopher and Professor of Geometry at the Naval College in Istanbul, Turkey.
His life and work are well documented in several scholarly works in English and Turkish, such as the thesis by Alaettin Avci, "Turkiyede Askeri Okullar Tarihcesi" (History of the Military Schools in Turkey), 1963, published by the Research and Development Office of the Turkish General Staff, and Mehmet Karabela's "The development of dialectic and argumentation theory in post-classical Islamic intellectual history".
Born in 1730 in the town of Gelenbe, near Manisa, at that time in the Province of Aydin in Western Anatolia, he is known under the name "Gelenbevi" (), which means "de Gelenbe" in French, and "von Gelenbe" in German. He studied in İstanbul where he rose through the Ottoman examination system to the rank of "Müderris" or professor, at the age of 33.
At the request of the Sadrazam or Grand Vizier Halil Hamit Pasha ("Paşa" in modern Turkish) (1782–1785), and of the Fleet Admiral Cezayirli Hasan Pasha, he was appointed to a professorship in mathematics at the new Naval College in Kasımpaşa, on the Golden Horn, in Istanbul where he worked with other Ottoman reformers such as the Franco-Hungarian military engineer and aristocrat François Baron de Tott. Gelenbevi received an award from the Emperor Sultan Selim III for his very accurate ballistic computations.
Gelenbevi Ismail published some thirty-five scientific treatises, including a monograph on the game of chess written in Turkish and Arabic. He is credited with the introduction of logarithms in Turkey. The late Ottoman era Cabinet Minister Mehmet Cemaleddin Efendi (1848–1917) (Turkish: Mehmet Cemâlüddin Efendi), senior judge of the Ottoman Empire and Şeyhülislam or Cabinet Minister in charge of religious and legal matters, the Ottoman period Minister of Education and Director of the Imperial School of Commerce, Gelenbevizade Mehmet Said (1863–1937), the Turkish cinema pioneer and photographer Baha Gelenbevi (1907–1984), and Professor Erol Gelenbe are direct descendants of Gelenbevi Ismail. A selective public high school in the Fatih district of Istanbul bears the family name.
References
1730 births
People from Manisa
Mathematicians from the Ottoman Empire
Academics from the Ottoman Empire
Scientists from the Ottoman Empire
1790 deaths
18th-century mathematicians
18th-century people from the Ottoman Empire
Hanafis
Maturidis
|
https://en.wikipedia.org/wiki/Li%20Shanlan
|
Li Shanlan (李善蘭, courtesy name: Renshu 壬叔, art name: Qiuren 秋紉) (1810 – 1882) was a Chinese mathematician of the Qing Dynasty.
A native of Haining, Zhejiang, he was fascinated by mathematics from childhood, beginning with the Nine Chapters on Mathematical Art. He eked out a living as a private tutor for some years before fleeing to Shanghai in 1852 to evade the Taiping Rebellion. There he collaborated with Alexander Wylie, Joseph Edkins, and others to translate many Western mathematical works into Chinese, including Elements of Analytical Geometry and the Differential and Integral Calculus by Elias Loomis, Augustus De Morgan's Elements of Algebra, and the last nine volumes of Euclid's Elements (from Henry Billingsley's edition), the first six volumes of which had been rendered into Chinese by Matteo Ricci and Xu Guangqi in 1607. With Wylie, he also translated Outlines of Astronomy by John Herschel and coined the Chinese names for many of the low-numbered asteroids.
A great number of mathematical terms used in Chinese today were first coined by Li; many of these were later borrowed into the Japanese language as well. He discovered the Li Shanlan identity (Li Shanlan's summation formulae) in 1867. Later he worked in the think tank of Zeng Guofan. In 1868, he began to teach in Tongwen Guan, where he collaborated closely with linguist John Fryer.
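The identity mentioned above is usually stated as Σ_{j=0}^{k} C(k, j)² C(n + 2k − j, 2k) = C(n + k, k)² (the formulation is the one commonly cited, given here for illustration), and it is easy to spot-check in Python:

```python
from math import comb

def li_shanlan_lhs(n, k):
    """Left side of the Li Shanlan identity as usually stated:
        sum_{j=0}^{k} C(k, j)**2 * C(n + 2k - j, 2k) = C(n + k, k)**2"""
    return sum(comb(k, j) ** 2 * comb(n + 2 * k - j, 2 * k)
               for j in range(k + 1))

# spot-check the identity for small parameters
for n in range(6):
    for k in range(6):
        assert li_shanlan_lhs(n, k) == comb(n + k, k) ** 2
```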
See also
Chinese hypothesis
References
External links
Biography at the MacTutor History of Mathematics archive
1810 births
1882 deaths
19th-century Chinese mathematicians
19th-century Chinese translators
Educators from Jiaxing
Mathematicians from Zhejiang
Qing dynasty translators
People from Haining
Scientists from Jiaxing
Writers from Jiaxing
|
https://en.wikipedia.org/wiki/Coefficient%20of%20inbreeding
|
The coefficient of inbreeding of an individual is the probability that two alleles at any locus in an individual are identical by descent from the common ancestor(s) of the two parents.
Calculation
An individual is said to be inbred if there is a loop in its pedigree chart. A loop is defined as a path that runs from an individual up to the common ancestor through one parent and back down to the other parent, without going through any individual twice. The number of loops is always the number of common ancestors the parents have. If an individual is inbred, the coefficient of inbreeding is calculated by summing all the probabilities that an individual receives the same allele from its father's side and mother's side. As every individual has a 50% chance of passing on an allele to the next generation, the formula depends on 0.5 raised to the power of however many generations separate the individual from the common ancestor of its parents, on both the father's side and mother's side. This number of generations can be calculated by counting how many individuals lie in the loop defined earlier. Thus, the coefficient of inbreeding (f) of an individual X can be calculated with the following formula:
where is the number of individuals in the aforementioned loop, and is the coefficient of inbreeding of the common ancestor of X's parents.
To give an example, consider the following pedigree.
In this pedigree chart, G is the progeny of C and F, and C is the biological uncle of F. To find the coefficient of inbreeding of G, first locate a loop that leads from G to the common ancestor through one parent and back down to the other parent without going through the same individual twice. There are only two such loops in this chart, as there are only two common ancestors of C and F. The loops are G - C - A - D - F and G - C - B - D - F, both of which have 5 members.
Because the common ancestors of the parents (A and B) are not inbred themselves, . Therefore the coefficient of inbreeding of individual G is .
If the parents of an individual are not inbred themselves, the coefficient of inbreeding of the individual is one-half the coefficient of relationship between the parents. This can be verified in the previous example, as 12.5% is one-half of 25%, the coefficient of relationship between an uncle and a niece.
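The computation can be sketched in Python. The exponent used below is an interpretation chosen to reproduce the worked example (two 5-member loops through non-inbred common ancestors yielding 12.5%), not an authoritative implementation of the formula:

```python
def inbreeding_coefficient(loop_sizes, ancestor_f=None):
    """Coefficient of inbreeding from pedigree loops: each loop through
    a common ancestor A contributes (1/2)**(n - 1) * (1 + f_A), where n
    is the number of individuals in the loop (including the inbred
    individual X) and f_A is A's own coefficient of inbreeding.  The
    exponent n - 1 is an interpretation matching the worked example."""
    if ancestor_f is None:
        ancestor_f = [0.0] * len(loop_sizes)
    return sum(0.5 ** (n - 1) * (1 + fa)
               for n, fa in zip(loop_sizes, ancestor_f))

# the uncle-niece example: loops G-C-A-D-F and G-C-B-D-F, 5 members each
assert inbreeding_coefficient([5, 5]) == 0.125
```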
Table of coefficients of inbreeding
References
Genealogy
Kinship and descent
Breeding
Incest
Population genetics
|
https://en.wikipedia.org/wiki/Probability%20of%20default
|
Probability of default (PD) is a financial term describing the likelihood of a default over a particular time horizon. It provides an estimate of the likelihood that a borrower will be unable to meet its debt obligations.
PD is used in a variety of credit analyses and risk management frameworks. Under Basel II, it is a key parameter used in the calculation of economic capital or regulatory capital for a banking institution.
PD is closely linked to the expected loss, which is defined as the product of the PD, the loss given default (LGD) and the exposure at default (EAD).
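Since expected loss is defined as the product of these three quantities, the calculation is a one-liner; the figures below are made up for illustration:

```python
# Expected loss = PD x LGD x EAD, per the definition above.

def expected_loss(pd, lgd, ead):
    return pd * lgd * ead

# Hypothetical exposure: 2% default probability, 45% loss given default,
# 1,000,000 exposure at default.
loss = expected_loss(pd=0.02, lgd=0.45, ead=1_000_000)
print(round(loss, 2))  # 9000.0
```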
Overview
The probability of default is an estimate of the likelihood that the default event will occur. It applies to a particular assessment horizon, usually one year.
Credit scores, such as FICO for consumers or bond ratings from S&P, Fitch or Moody's for corporations or governments, typically imply a certain probability of default.
For a group of obligors sharing similar credit risk characteristics, such as an RMBS or a pool of loans, a PD may be derived for the group as a whole, representative of the typical (average) obligor of the group. In comparison, a PD for a bond or commercial loan is typically determined for a single entity.
Under Basel II, a default event on a debt obligation is said to have occurred if
it is unlikely that the obligor will be able to repay its debt to the bank without giving up any pledged collateral
the obligor is more than 90 days past due on a material credit obligation
Stressed and unstressed PD
The PD of an obligor not only depends on the risk characteristics of that particular obligor but also on the economic environment and the degree to which it affects the obligor. Thus, the information available to estimate PD can be divided into two broad categories:
Macroeconomic information like house price indices, unemployment, GDP growth rates, etc. - this information remains the same for multiple obligors.
Obligor specific information like revenue growth (wholesale), number of times delinquent in the past six months (retail), etc. - this information is specific to a single obligor and can be either static or dynamic in nature. Examples of static characteristics are industry for wholesale loans and origination "loan to value ratio" for retail loans.
An unstressed PD is an estimate that the obligor will default over a particular time horizon considering the current macroeconomic as well as obligor specific information. This implies that if the macroeconomic conditions deteriorate, the PD of an obligor will tend to increase while it will tend to decrease if economic conditions improve.
A stressed PD is an estimate that the obligor will default over a particular time horizon considering the current obligor specific information, but considering "stressed" macroeconomic factors irrespective of the current state of the economy. The stressed PD of an obligor changes over time depending on the risk characteristics of the obligor, but is not heav
|
https://en.wikipedia.org/wiki/Ge%C3%ADlson
|
Geílson de Carvalho Soares (born April 10, 1984 in Cuiabá), sometimes referred to as simply Geílson, is a Brazilian former professional footballer who played as a striker.
Club statistics
Honours
Rio Grande do Sul State League: 2004
São Paulo State League: 2006
References
External links
furacao
sambafoot
CBF
1984 births
Living people
Footballers from Cuiabá
Brazilian men's footballers
Brazilian expatriate men's footballers
Santos FC players
Club Athletico Paranaense players
Mirassol Futebol Clube players
Albirex Niigata players
Campeonato Brasileiro Série A players
J2 League players
Expatriate men's footballers in Japan
Sport Club Internacional players
Guarani FC players
Clube Atlético Votuporanguense players
Men's association football forwards
CE Operário Várzea-Grandense players
|
https://en.wikipedia.org/wiki/Nonlinear%20functional%20analysis
|
Nonlinear functional analysis is a branch of mathematical analysis that deals with nonlinear mappings.
Topics
Its subject matter includes:
generalizations of calculus to Banach spaces
implicit function theorems
fixed-point theorems (Brouwer fixed point theorem, Fixed point theorems in infinite-dimensional spaces, topological degree theory, Jordan separation theorem, Lefschetz fixed-point theorem)
Morse theory and Lusternik–Schnirelmann category theory
methods of complex function theory
See also
Functional analysis
Notes
|
https://en.wikipedia.org/wiki/Unreasonable%20ineffectiveness%20of%20mathematics
|
The unreasonable ineffectiveness of mathematics is a phrase that alludes to the article by physicist Eugene Wigner, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences". This phrase is meant to suggest that mathematical analysis has not proved as valuable in other fields as it has in physics.
Life sciences
I. M. Gelfand, a mathematician who worked in biomathematics and molecular biology, as well as many other fields in applied mathematics, is quoted as stating,
Eugene Wigner wrote a famous essay on the unreasonable effectiveness of mathematics in natural sciences. He meant physics, of course. There is only one thing which is more unreasonable than the unreasonable effectiveness of mathematics in physics, and this is the unreasonable ineffectiveness of mathematics in biology.
An opposing view is given by Leonard Adleman, a theoretical computer scientist who pioneered the field of DNA computing. In Adleman's view, "Sciences reach a point where they become mathematized," starting at the fringes but eventually "the central issues in the field become sufficiently understood that they can be thought about mathematically. It occurred in physics about the time of the Renaissance; it began in chemistry after John Dalton developed atomic theory" and by the 1990s was taking place in biology. By the early 1990s, "Biology was no longer the science of things that smelled funny in refrigerators (my view from undergraduate days in the 1960s). The field was undergoing a revolution and was rapidly acquiring the depth and power previously associated exclusively with the physical sciences. Biology was now the study of information stored in DNA - strings of four letters: A, T, G, and C and the transformations that information undergoes in the cell. There was mathematics here!"
Economics and finance
K. Vela Velupillai wrote of The unreasonable ineffectiveness of mathematics in economics. To him "the headlong rush with which economists have equipped themselves with a half-baked knowledge of mathematical traditions has led to an un-natural mathematical economics and a non-numerical economic theory." His argument is built on the claim that
mathematical economics is unreasonably ineffective. Unreasonable, because the mathematical assumptions are economically unwarranted; ineffective because the mathematical formalisations imply non-constructive and uncomputable structures. A reasonable and effective mathematisation of economics entails Diophantine formalisms. These come with natural undecidabilities and uncomputabilities. In the face of this, [the] conjecture [is] that an economics for the future will be freer to explore experimental methodologies underpinned by alternative mathematical structures.
Sergio M. Focardi and Frank J. Fabozzi, on the other hand, have acknowledged that "economic science is generally considered less viable than the physical sciences" and that "sophisticated mathematical models of the economy have been developed but their
|
https://en.wikipedia.org/wiki/Golomb%20sequence
|
In mathematics, the Golomb sequence, named after Solomon W. Golomb (but also called Silverman's sequence), is a monotonically increasing integer sequence where an is the number of times that n occurs in the sequence, starting with a1 = 1, and with the property that for n > 1 each an is the smallest unique integer which makes it possible to satisfy the condition. For example, a1 = 1 says that 1 only occurs once in the sequence, so a2 cannot be 1 too, but it can be 2, and therefore must be 2. The first few values are
1, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, 9, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12 .
Examples
a1 = 1
Therefore, 1 occurs exactly one time in this sequence.
a2 > 1
a2 = 2
2 occurs exactly 2 times in this sequence.
a3 = 2
3 occurs exactly 2 times in this sequence.
a4 = a5 = 3
4 occurs exactly 3 times in this sequence.
5 occurs exactly 3 times in this sequence.
a6 = a7 = a8 = 4
a9 = a10 = a11 = 5
etc.
Recurrence
Colin Mallows has given an explicit recurrence relation: a(1) = 1 and a(n + 1) = 1 + a(n + 1 − a(a(n))). An asymptotic expression for a_n is
a_n → φ^(2−φ) n^(φ−1),
where φ is the golden ratio (approximately equal to 1.618034).
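The recurrence credited to Mallows, a(1) = 1 and a(n + 1) = 1 + a(n + 1 − a(a(n))), generates the sequence directly; a minimal sketch:

```python
# Golomb's self-describing sequence via Mallows's recurrence:
# a(1) = 1 and a(n+1) = 1 + a(n + 1 - a(a(n))), with 1-based indexing.

def golomb(n_terms):
    a = [0, 1]  # a[0] is a dummy so indices match the usual 1-based definition
    for n in range(1, n_terms):
        a.append(1 + a[n + 1 - a[a[n]]])
    return a[1:]

print(golomb(12))  # [1, 2, 2, 3, 3, 4, 4, 4, 5, 5, 5, 6]
```

The output matches the opening terms listed above, and each value n does occur exactly a(n) times.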
References
External links
Python code for Golomb Sequence
Integer sequences
Golden ratio
|
https://en.wikipedia.org/wiki/Mathsci
|
Mathsci may refer to
Mathematical sciences
Mathematics and Science High School at Clover Hill
MathSciNet, a database of the American Mathematical Society containing data for Mathematical Reviews and Current Mathematical Publications
|
https://en.wikipedia.org/wiki/National%20Council%20of%20Teachers
|
National Council of Teachers may refer to:
National Council of Teachers of English, an education organization
National Council of Teachers of Mathematics, the world's largest organization concerned with mathematics education
|
https://en.wikipedia.org/wiki/Contraposition
|
In logic and mathematics, contraposition refers to the inference from a conditional statement to its logically equivalent contrapositive, and to an associated proof method known as proof by contraposition. The contrapositive of a statement has its antecedent and consequent negated and swapped.
Conditional statement: P → Q ("if P, then Q"). In formulas: the contrapositive of P → Q is ¬Q → ¬P.
If P, Then Q. — If not Q, Then not P. "If it is raining, then I wear my coat" — "If I don't wear my coat, then it isn't raining."
The law of contraposition says that a conditional statement is true if, and only if, its contrapositive is true.
The contrapositive (¬Q → ¬P) can be compared with three other statements:
Inversion (the inverse), "If it is not raining, then I don't wear my coat." Unlike the contrapositive, the inverse's truth value is not at all dependent on whether or not the original proposition was true, as evidenced here.
Conversion (the converse), "If I wear my coat, then it is raining." The converse is actually the contrapositive of the inverse, and so always has the same truth value as the inverse (which as stated earlier does not always share the same truth value as that of the original proposition).
Negation (the logical complement), "It is not the case that if it is raining then I wear my coat.", or equivalently, "Sometimes, when it is raining, I don't wear my coat. " If the negation is true, then the original proposition (and by extension the contrapositive) is false.
Note that if P → Q is true and one is given that Q is false (i.e., ¬Q), then it can logically be concluded that P must also be false (i.e., ¬P). This is often called the law of contrapositive, or the modus tollens rule of inference.
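The law of contraposition and the non-equivalence of the converse can be checked by brute force over the four possible truth assignments:

```python
# Truth-table verification that P -> Q is equivalent to (not Q) -> (not P),
# while the converse Q -> P is not equivalent in general.
from itertools import product

def implies(p, q):
    return (not p) or q

for p, q in product([False, True], repeat=2):
    assert implies(p, q) == implies(not q, not p)  # contrapositive: always equal

# The converse differs when P is false and Q is true:
assert implies(False, True) != implies(True, False)
```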
Intuitive explanation
In the Euler diagram shown, if something is in A, it must be in B as well. So we can interpret "all of A is in B" as:
x ∈ A → x ∈ B.
It is also clear that anything that is not within B (the blue region) cannot be within A, either. This statement, which can be expressed as:
x ∉ B → x ∉ A,
is the contrapositive of the above statement. Therefore, one can say that
(x ∈ A → x ∈ B) ↔ (x ∉ B → x ∉ A).
In practice, this equivalence can be used to make proving a statement easier. For example, if one wishes to prove that every girl in the United States (A) has brown hair (B), one can either try to directly prove A → B by checking that all girls in the United States do indeed have brown hair, or try to prove ¬B → ¬A by checking that all girls without brown hair are indeed all outside the US. In particular, if one were to find at least one girl without brown hair within the US, then one would have disproved ¬B → ¬A, and equivalently A → B.
In general, for any statement where A implies B, not B always implies not A. As a result, proving or disproving either one of these statements automatically proves or disproves the other, as they are logically equivalent to each other.
Formal definition
A proposition Q is implicated by a proposition P when the following relationship holds:
P → Q
This states that, "if P, then Q", or, "if Socrates is a man, then Socrate
|
https://en.wikipedia.org/wiki/Algebraic%20fraction
|
In algebra, an algebraic fraction is a fraction whose numerator and denominator are algebraic expressions. Two examples of algebraic fractions are 3x/(x^2 + 2x − 3) and √(x + 2)/(x − 3). Algebraic fractions are subject to the same laws as arithmetic fractions.
A rational fraction is an algebraic fraction whose numerator and denominator are both polynomials. Thus 3x/(x^2 + 2x − 3) is a rational fraction, but not √(x + 2)/(x − 3), because the numerator contains a square root function.
Terminology
In the algebraic fraction , the dividend a is called the numerator and the divisor b is called the denominator. The numerator and denominator are called the terms of the algebraic fraction.
A complex fraction is a fraction whose numerator or denominator, or both, contains a fraction. A simple fraction contains no fraction either in its numerator or its denominator. A fraction is in lowest terms if the only factor common to the numerator and the denominator is 1.
An expression which is not in fractional form is an integral expression. An integral expression can always be written in fractional form by giving it the denominator 1. A mixed expression is the algebraic sum of one or more integral expressions and one or more fractional terms.
Rational fractions
If the expressions a and b are polynomials, the algebraic fraction is called a rational algebraic fraction or simply rational fraction. Rational fractions are also known as rational expressions. A rational fraction is called proper if the degree of its numerator is less than the degree of its denominator, and improper otherwise. For example, the rational fraction 2x/(x^2 − 1) is proper, and the rational fractions (x^2 + 2)/x and (x + 1)/(x − 1) are improper. Any improper rational fraction can be expressed as the sum of a polynomial (possibly constant) and a proper rational fraction. In the first example of an improper fraction one has
(x^2 + 2)/x = x + 2/x,
where the second term is a proper rational fraction. The sum of two proper rational fractions is a proper rational fraction as well. The reverse process of expressing a proper rational fraction as the sum of two or more fractions is called resolving it into partial fractions. For example,
2/(x^2 − 1) = 1/(x − 1) − 1/(x + 1).
Here, the two terms on the right are called partial fractions.
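A decomposition of this kind can be spot-checked numerically; here the identity 2/(x^2 − 1) = 1/(x − 1) − 1/(x + 1) is evaluated at a few points away from the poles:

```python
# Numerical spot-check of a partial-fraction decomposition.

def original(x):
    return 2 / (x**2 - 1)

def partial_fractions(x):
    return 1 / (x - 1) - 1 / (x + 1)

for x in (2.0, 3.5, -4.0, 10.0):  # avoid the poles at x = +/-1
    assert abs(original(x) - partial_fractions(x)) < 1e-12
print("identity holds at all sample points")
```

Agreement at more sample points than the degree of the denominator would in fact force the two rational functions to be identical.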
Irrational fractions
An irrational fraction is one that contains the variable under a fractional exponent. An example of an irrational fraction is
x^(1/2) / (x^(1/3) + x).
The process of transforming an irrational fraction to a rational fraction is known as rationalization. Every irrational fraction in which the radicals are monomials may be rationalized by finding the least common multiple of the indices of the roots, and substituting the variable for another variable with the least common multiple as exponent. In the example given, the least common multiple of the indices 2 and 3 is 6, hence we can substitute x = z^6 to obtain
z^3 / (z^2 + z^6).
See also
Partial fraction decomposition
References
Elementary algebra
Fractions (mathematics)
de:Bruchrechnung#Rechnen_mit_Bruchtermen
|
https://en.wikipedia.org/wiki/Geometry%20of%20interaction
|
The Geometry of Interaction (GoI) was introduced by Jean-Yves Girard shortly after his work on linear logic. In linear logic, proofs can be seen as various kinds of networks as opposed to the flat tree structures of sequent calculus. To distinguish the real proof nets from all the possible networks, Girard devised a criterion involving trips in the network. Trips can in fact be seen as some kind of operator acting on the proof. Drawing from this observation, Girard described directly this operator from the proof and has given a formula, the so-called execution formula, encoding the process of cut elimination at the level of operators.
One of the first significant applications of GoI was a better analysis of Lamping's algorithm for optimal reduction for the lambda calculus. GoI had a strong influence on game semantics for linear logic and PCF.
GoI has been applied to deep compiler optimisation for lambda calculi. A bounded version of GoI dubbed the Geometry of Synthesis has been used to compile higher-order programming languages directly into static circuits.
References
Further reading
GoI tutorial given at Siena 07 by Laurent Regnier, in the Linear Logic workshop,
Proof theory
Philosophical logic
Logic in computer science
Semantics
Linear logic
|
https://en.wikipedia.org/wiki/Bai%20He
|
Bai He (, born 19 November 1983) is a Chinese-born former Hong Kong professional footballer who played as a defensive midfielder.
Career statistics in Hong Kong
As of 11 May 2013
International career
He was selected by the Hong Kong national football team for the 2010 East Asian Football Championship semi-final, while South China represented Hong Kong in the competition.
As of 19 November 2013
Honours
Club
South China
Hong Kong First Division: 2006–07, 2007–08, 2008–09, 2009–10
Hong Kong FA Cup: 2010–11
Hong Kong League Cup: 2010–11
Eastern
Hong Kong Premier League: 2015–16
Hong Kong Senior Shield: 2015–16
External links
Bai He at HKFA
1983 births
Living people
Sportspeople from Baoding
Chinese men's footballers
Hong Kong men's footballers
Footballers from Hebei
Chengdu Tiancheng F.C. players
South China AA players
Hong Kong Pegasus FC players
Cangzhou Mighty Lions F.C. players
Eastern Sports Club footballers
R&F (Hong Kong) players
Chinese Super League players
China League One players
Hong Kong Premier League players
Hong Kong men's international footballers
Men's association football midfielders
Hong Kong expatriate men's footballers
|
https://en.wikipedia.org/wiki/Bogdanov%E2%80%93Takens%20bifurcation
|
In bifurcation theory, a field within mathematics, a Bogdanov–Takens bifurcation is a well-studied example of a bifurcation with co-dimension two, meaning that two parameters must be varied for the bifurcation to occur. It is named after Rifkat Bogdanov and Floris Takens, who independently and simultaneously described this bifurcation.
A system y′ = f(y) undergoes a Bogdanov–Takens bifurcation if it has a fixed point and the linearization of f around that point has a double eigenvalue at zero (assuming that some technical nondegeneracy conditions are satisfied).
Three codimension-one bifurcations occur nearby: a saddle-node bifurcation, an Andronov–Hopf bifurcation and a homoclinic bifurcation. All associated bifurcation curves meet at the Bogdanov–Takens bifurcation.
The normal form of the Bogdanov–Takens bifurcation is
y1′ = y2,
y2′ = β1 + β2 y1 + y1^2 ± y1 y2.
There exist two codimension-three degenerate Takens–Bogdanov bifurcations, also known as Dumortier–Roussarie–Sotomayor bifurcations.
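As an illustration, take the planar system y1′ = y2, y2′ = b1 + b2·y1 + y1^2 + y1·y2, one commonly used form of the normal form (sign conventions vary by author, so this particular form is an assumption). Its equilibria lie on y2 = 0 and can be located with the quadratic formula:

```python
import math

# Equilibria of one form of the Bogdanov-Takens normal form (assumed here):
#   y1' = y2,  y2' = b1 + b2*y1 + y1**2 + y1*y2.
# Equilibria require y2 = 0 and b1 + b2*y1 + y1**2 = 0.

def field(y1, y2, b1, b2):
    return (y2, b1 + b2 * y1 + y1**2 + y1 * y2)

def equilibria(b1, b2):
    disc = b2**2 - 4 * b1
    if disc < 0:
        return []  # no equilibria on this side of the saddle-node curve
    r = math.sqrt(disc)
    return [((-b2 - r) / 2, 0.0), ((-b2 + r) / 2, 0.0)]

b1, b2 = -1.0, 0.5
for y1, y2 in equilibria(b1, b2):
    f1, f2 = field(y1, y2, b1, b2)
    assert abs(f1) < 1e-9 and abs(f2) < 1e-9  # the vector field vanishes here

# The saddle-node bifurcation occurs where the two equilibria collide,
# i.e. on the curve 4*b1 = b2**2; beyond it there are no equilibria.
assert equilibria(1.0, 0.0) == []
```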
References
Bogdanov, R. "Bifurcations of a Limit Cycle for a Family of Vector Fields on the Plane." Selecta Math. Soviet 1, 373–388, 1981.
Kuznetsov, Y. A. Elements of Applied Bifurcation Theory. New York: Springer-Verlag, 1995.
Takens, F. "Forced Oscillations and Bifurcations." Comm. Math. Inst. Rijksuniv. Utrecht 2, 1–111, 1974.
Dumortier F., Roussarie R., Sotomayor J. and Zoladek H., Bifurcations of Planar Vector Fields'', Lecture Notes in Math. vol. 1480, 1–164, Springer-Verlag (1991).
External links
Bifurcation theory
|
https://en.wikipedia.org/wiki/Landau%E2%80%93Kolmogorov%20inequality
|
In mathematics, the Landau–Kolmogorov inequality, named after Edmund Landau and Andrey Kolmogorov, is the following family of interpolation inequalities between different derivatives of a function f defined on a subset T of the real numbers:
‖f^(k)‖_{L∞(T)} ≤ C(n, k, T) ‖f‖_{L∞(T)}^{1−k/n} ‖f^(n)‖_{L∞(T)}^{k/n}, for 1 ≤ k < n.
On the real line
For k = 1, n = 2 and T = [c,∞) or T = R, the inequality was first proved by Edmund Landau with the sharp constants C(2, 1, [c,∞)) = 2 and C(2, 1, R) = √2. Following contributions by Jacques Hadamard and Georgiy Shilov, Andrey Kolmogorov found the sharp constants for T = R and arbitrary n, k:
C(n, k, R) = a_{n−k} a_n^{−1+k/n},
where a_n are the Favard constants.
On the half-line
Following work by Matorin and others, the extremising functions were found by Isaac Jacob Schoenberg, explicit forms for the sharp constants are however still unknown.
Generalisations
There are many generalisations, which are of the form
Here all three norms can be different from each other (from L1 to L∞, with p=q=r=∞ in the classical case) and T may be the real axis, semiaxis or a closed segment.
The Kallman–Rota inequality generalizes the Landau–Kolmogorov inequalities from the derivative operator to more general contractions on Banach spaces.
Notes
Inequalities
|
https://en.wikipedia.org/wiki/Bruno%20Th%C3%BCring
|
Bruno Jakob Thüring (7 September 1905, in Warmensteinach – 6 May 1989, in Karlsruhe) was a German physicist and astronomer.
Thüring studied mathematics, physics, and astronomy at the University of Munich and received his doctorate in 1928, under Alexander Wilkens and Arnold Sommerfeld. Wilkens was a professor of astronomy and director of the Munich Observatory, which was part of the University. From 1928 to 1933, he was an assistant at the Munich Observatory. From 1934 to 1935, he was an assistant to Heinrich Vogt at the University of Heidelberg. Thüring completed his Habilitation there in 1935, whereupon he became an Observator at the Munich Observatory. In 1937, Thüring became a lecturer (Dozent) at the University of Munich. From 1940 to 1945, he held the chair for astronomy at the University of Vienna and was director of the Vienna Observatory. After 1945, Thüring lived as a private scholar in Karlsruhe.
During the reign of Adolf Hitler, Thüring was a proponent of Deutsche Physik, as were the two Nobel Prize–winning physicists Johannes Stark and Philipp Lenard; Deutsche Physik was anti-Semitic and had a bias against theoretical physics, especially quantum mechanics. He was also a student of the philosophy of Hugo Dingler.
Thüring was an opponent of Albert Einstein's theory of relativity.
Books
Bruno Thüring (Georg Lüttke Verlag, 1941)
Bruno Thüring (Göller, 1957)
Bruno Thüring (Göller, 1958)
Bruno Thüring (Duncker u. Humblot GmbH, 1967)
Bruno Thüring (Duncker & Humblot GmbH, 1978)
Bruno Thüring (Haag u. Herchen, 1985)
Notes
References
Clark, Ronald W. Einstein: The Life and Times (World, 1971)
1905 births
1989 deaths
20th-century German physicists
20th-century German astronomers
Academic staff of Heidelberg University
Ludwig Maximilian University of Munich alumni
Academic staff of the Ludwig Maximilian University of Munich
Scientists from the Kingdom of Bavaria
Relativity critics
Science teachers
Academic staff of the University of Vienna
|
https://en.wikipedia.org/wiki/Friedrich%20Burmeister
|
Friedrich Burmeister (1890–1969) was a German geophysicist. He was director of the Munich University’s Geomagnetic Observatory.
Burmeister studied mathematics and physics at the University of Munich under Hugo von Seeliger and Arnold Sommerfeld, and he received his doctorate in 1919. Upon graduation, he became Director of the Munich Geomagnetic Observatory, of the Geomagnetism Branch of the Munich Earth Observatory, under the Geophysics Department of Earth and Environmental Sciences, at the University of Munich. Due to the industrialization of Munich, operation of the observatory became more and more difficult, so, in 1927 the Munich Geomagnetic Observatory was closed and moved to a village 25 kilometers west of Munich, and it became the Maisach Geomagnetic Observatory. Due to the construction of a large military air base near Maisach, this facility was closed on October 31, 1937. It was moved to a small town west of Munich, and it became the Fürstenfeldbruck Geomagnetic Observatory where measurements began on 1 January 1939 under Burmeister as its inaugural director. Burmeister retired as director of the observatory in 1958, whereupon Karl Wienert was appointed to the position.
Works
Richard Bock, F. Burmeister, Friedrich Errulat: Magnetische Reichsvermesung 1935. O.T. 1. (Tabellen) (Akademie-Verlag, 1948)
Notes
1890 births
1969 deaths
German geophysicists
Ludwig Maximilian University of Munich alumni
|
https://en.wikipedia.org/wiki/Hiram%20Perkins
|
Hiram Mills Perkins (1833–1924) was Professor of Mathematics and Astronomy at Ohio Wesleyan University and benefactor of the Perkins Telescope in the Perkins Observatory. He helped build the observatory buildings and left an endowment for the school; his house was later used as a dormitory before it was sold off.
Perkins taught at the university from 1873 to 1907.
The Perkins telescope was the 3rd largest telescope in the world when it achieved first light in 1931.
The telescope was eventually moved to Lowell Observatory, and the 69-inch mirror was sent to a museum when it was replaced by a 72 inch one at that observatory.
In 1880 Perkins built a house at 235 W. William St, which was later used as a dorm by OWU.
Perkins's house survived into the 21st century. In 2017 the school sold it off for 170,000 USD to a developer who planned to convert it into a hotel.
See also
List of largest optical telescopes in the 20th century
References
External links
About the Perkins observatory
Perkins, Hiram
American philanthropists
Ohio Wesleyan University faculty
1833 births
1924 deaths
|
https://en.wikipedia.org/wiki/Mishnat%20ha-Middot
|
The Mishnat ha-Middot (, 'Treatise of Measures') is the earliest known Hebrew treatise on geometry, composed of 49 mishnayot in six chapters. Scholars have dated the work to either the Mishnaic period or the early Islamic era.
History
Date of composition
Moritz Steinschneider dated the Mishnat ha-Middot to between 800 and 1200 CE. Sarfatti and Langermann have advanced Steinschneider's claim of Arabic influence on the work's terminology, and date the text to the early ninth century.
On the other hand, Hermann Schapira argued that the treatise dates from an earlier era, most likely the Mishnaic period, as its mathematical terminology differs from that of the Hebrew mathematicians of the Arab period. Solomon Gandz conjectured that the text was compiled no later than (possibly by Rabbi Nehemiah) and intended to be a part of the Mishnah, but was excluded from its final canonical edition because the work was regarded as too secular. The content resembles both the work of Hero of Alexandria (c. ) and that of al-Khwārizmī (c. ) and the proponents of the earlier dating therefore see the Mishnat ha-Middot linking Greek and Islamic mathematics.
Modern history
The Mishnat ha-Middot was discovered in MS 36 of the Munich Library by Moritz Steinschneider in 1862. The manuscript, copied in Constantinople in 1480, goes as far as the end of Chapter V. According to the colophon, the copyist believed the text to be complete. Steinschneider published the work in 1864, in honour of the seventieth birthday of Leopold Zunz. The text was edited and published again by mathematician Hermann Schapira in 1880.
After the discovery by Otto Neugebauer of a genizah-fragment in the Bodleian Library containing Chapter VI, Solomon Gandz published a complete version of the Mishnat ha-Middot in 1932, accompanied by a thorough philological analysis. A third manuscript of the work was found among uncatalogued material in the Archives of the Jewish Museum of Prague in 1965.
Contents
Although primarily a practical work, the Mishnat ha-Middot attempts to define terms and explain both geometric application and theory. The book begins with a discussion that defines "aspects" for the different kinds of plane figures (quadrilateral, triangle, circle, and segment of a circle) in Chapter I (§1–5), and with the basic principles of measurement of areas (§6–9). In Chapter II, the work introduces concise rules for the measurement of plane figures (§1–4), as well as a few problems in the calculation of volume (§5–12). In Chapters III–V, the Mishnat ha-Middot explains again in detail the measurement of the four types of plane figures, with reference to numerical examples. The text concludes with a discussion of the proportions of the Tabernacle in Chapter VI.
The treatise argues against the common belief that the Tanakh defines the geometric ratio π as being exactly equal to 3 and defines it as 3 1/7 instead. The book arrives at this approximation by calculating the area of a circle according t
|
https://en.wikipedia.org/wiki/Complex%20polytope
|
In geometry, a complex polytope is a generalization of a polytope in real space to an analogous structure in a complex Hilbert space, where each real dimension is accompanied by an imaginary one.
A complex polytope may be understood as a collection of complex points, lines, planes, and so on, where every point is the junction of multiple lines, every line of multiple planes, and so on.
Precise definitions exist only for the regular complex polytopes, which are configurations. The regular complex polytopes have been completely characterized, and can be described using a symbolic notation developed by Coxeter.
Some complex polytopes which are not fully regular have also been described.
Definitions and introduction
The complex line has one dimension with real coordinates and another with imaginary coordinates. Applying real coordinates to both dimensions is said to give it two dimensions over the real numbers. A real plane, with the imaginary axis labelled as such, is called an Argand diagram. Because of this it is sometimes called the complex plane. Complex 2-space (also sometimes called the complex plane) is thus a four-dimensional space over the reals, and so on in higher dimensions.
A complex n-polytope in complex n-space is the analogue of a real n-polytope in real n-space.
There is no natural complex analogue of the ordering of points on a real line (or of the associated combinatorial properties). Because of this a complex polytope cannot be seen as a contiguous surface and it does not bound an interior in the way that a real polytope does.
In the case of regular polytopes, a precise definition can be made by using the notion of symmetry. For any regular polytope the symmetry group (here a complex reflection group, called a Shephard group) acts transitively on the flags, that is, on the nested sequences of a point contained in a line contained in a plane and so on.
More fully, say that a collection P of affine subspaces (or flats) of a complex unitary space V of dimension n is a regular complex polytope if it meets the following conditions:
for every , if is a flat in P of dimension i and is a flat in P of dimension k such that then there are at least two flats G in P of dimension j such that ;
for every such that , if are flats of P of dimensions i, j, then the set of flats between F and G is connected, in the sense that one can get from any member of this set to any other by a sequence of containments; and
the subset of unitary transformations of V that fix P are transitive on the flags of flats of P (with of dimension i for all i).
(Here, a flat of dimension −1 is taken to mean the empty set.) Thus, by definition, regular complex polytopes are configurations in complex unitary space.
The regular complex polytopes were discovered by Shephard (1952), and the theory was further developed by Coxeter (1974).
A complex polytope exists in the complex space of equivalent dimension. For example, the vertices of a complex polyg
|
https://en.wikipedia.org/wiki/Arens%E2%80%93Fort%20space
|
In mathematics, the Arens–Fort space is a special example in the theory of topological spaces, named for Richard Friederich Arens and M. K. Fort, Jr.
Definition
The Arens–Fort space is the topological space (X, τ), where X is the set of ordered pairs of non-negative integers. A subset U ⊆ X is open, that is, belongs to τ, if and only if:
U does not contain (0, 0), or
U contains (0, 0) and also all but a finite number of points of all but a finite number of columns, where a column is a set {(m, n) : n = 0, 1, 2, ...} with m fixed.
In other words, an open set is only "allowed" to contain (0, 0) if only a finite number of its columns contain significant gaps, where a gap in a column is significant if it omits an infinite number of points.
Properties
It is:
Hausdorff
regular
normal
sequential
It is not:
second-countable
first-countable
metrizable
compact
There is no sequence in X ∖ {(0, 0)} that converges to (0, 0). However, there is a sequence ⟨x_k⟩ in X ∖ {(0, 0)} such that (0, 0) is a cluster point of ⟨x_k⟩.
See also
References
Topological spaces
|
https://en.wikipedia.org/wiki/AMTI
|
AMTI or Amti may refer to one of the following:
Association of Mathematics Teachers of India
Airborne moving target indication
Apostolic Missionary Training Institute
Amti, a village in Boliney, Abra, the Philippines
|
https://en.wikipedia.org/wiki/Generalized%20Pareto%20distribution
|
In statistics, the generalized Pareto distribution (GPD) is a family of continuous probability distributions. It is often used to model the tails of another distribution. It is specified by three parameters: location μ, scale σ, and shape ξ. Sometimes it is specified by only scale and shape, and sometimes only by its shape parameter. Some references give the shape parameter as κ = −ξ.
Definition
The standard cumulative distribution function (cdf) of the GPD is defined by

F_ξ(z) = 1 − (1 + ξz)^{−1/ξ}  for ξ ≠ 0,  and  F_0(z) = 1 − e^{−z},

where the support is z ≥ 0 for ξ ≥ 0 and 0 ≤ z ≤ −1/ξ for ξ < 0. The corresponding probability density function (pdf) is

f_ξ(z) = (1 + ξz)^{−(1/ξ + 1)}  for ξ ≠ 0,  and  f_0(z) = e^{−z}.
Characterization
The related location-scale family of distributions is obtained by replacing the argument z by (x − μ)/σ and adjusting the support accordingly.
The cumulative distribution function of X ~ GPD(μ, σ, ξ) is

F_{(μ, σ, ξ)}(x) = 1 − (1 + ξ(x − μ)/σ)^{−1/ξ}  for ξ ≠ 0,  and  1 − exp(−(x − μ)/σ)  for ξ = 0,

where the support of X is x ≥ μ when ξ ≥ 0, and μ ≤ x ≤ μ − σ/ξ when ξ < 0.
The probability density function (pdf) of X is

f_{(μ, σ, ξ)}(x) = (1/σ) (1 + ξ(x − μ)/σ)^{−(1/ξ + 1)},

again, for x ≥ μ when ξ ≥ 0, and μ ≤ x ≤ μ − σ/ξ when ξ < 0.
The pdf is a solution of the following differential equation:

f′(x) (σ + ξ(x − μ)) + (1 + ξ) f(x) = 0.
Special cases
If the shape and location are both zero, the GPD is equivalent to the exponential distribution.
With shape ξ = −1, the GPD is equivalent to the continuous uniform distribution U(0, σ).
With shape ξ > 0 and location μ = σ/ξ, the GPD is equivalent to the Pareto distribution with scale x_m = σ/ξ and shape α = 1/ξ.
If X ~ GPD(μ = 0, σ, ξ), then Y = log X ~ exGPD(σ, ξ). (exGPD stands for the exponentiated generalized Pareto distribution.)
GPD is similar to the Burr distribution.
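The special cases above can be checked numerically from the standard cdf. In the sketch below (the helper name gpd_cdf is illustrative, not from any library), F_ξ(z) = 1 − (1 + ξz)^(−1/ξ) reduces to the uniform cdf at ξ = −1 and approaches the exponential cdf as ξ → 0:

```python
import math

def gpd_cdf(z, xi):
    """Standard GPD cdf (location 0, scale 1)."""
    if xi == 0.0:
        return 1.0 - math.exp(-z)            # exponential limit
    return 1.0 - (1.0 + xi * z) ** (-1.0 / xi)

# Shape xi = -1 reduces to the uniform cdf F(z) = z on [0, 1]:
print(round(gpd_cdf(0.3, -1.0), 10))  # 0.3
# Small xi approaches the exponential cdf 1 - e^(-z):
print(abs(gpd_cdf(1.0, 1e-9) - (1.0 - math.exp(-1.0))) < 1e-6)  # True
```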
Generating generalized Pareto random variables
Generating GPD random variables
If U is uniformly distributed on (0, 1], then

X = μ + σ (U^{−ξ} − 1)/ξ ~ GPD(μ, σ, ξ ≠ 0)

and

X = μ − σ log(U) ~ GPD(μ, σ, ξ = 0).

Both formulas are obtained by inversion of the cdf.
In the MATLAB Statistics Toolbox, the "gprnd" command can be used to generate generalized Pareto random numbers.
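The inversion formulas above translate directly into a sampler. The following sketch (the function name gpd_rvs is mine, not a library routine) draws GPD(μ, σ, ξ) variates from uniform draws on (0, 1]:

```python
import math
import random

def gpd_rvs(mu, sigma, xi, n, seed=None):
    """Draw n samples from GPD(mu, sigma, xi) by inverting the cdf:
    X = mu + sigma * (U**(-xi) - 1) / xi   for xi != 0, and the
    exponential limit X = mu - sigma * log(U) for xi == 0,
    with U uniform on (0, 1]."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = 1.0 - rng.random()  # uniform on (0, 1]
        if xi == 0.0:
            out.append(mu - sigma * math.log(u))
        else:
            out.append(mu + sigma * (u ** (-xi) - 1.0) / xi)
    return out

samples = gpd_rvs(mu=0.0, sigma=1.0, xi=0.5, n=50_000, seed=42)
print(min(samples) >= 0.0)  # True: support is [mu, inf) for xi >= 0
```

For ξ < 0 the same formula yields the bounded support; e.g. ξ = −1 with μ = 0, σ = 1 produces uniform samples on [0, 1].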
GPD as an Exponential-Gamma Mixture
A GPD random variable can also be expressed as an exponential random variable with a Gamma-distributed rate parameter. If

X | λ ~ Exponential(λ)

and

λ ~ Gamma(α = 1/ξ, β = σ/ξ)   (with β a rate parameter),

then

X ~ GPD(μ = 0, σ, ξ).

Notice, however, that since the parameters of the Gamma distribution must be greater than zero, this requires the additional restriction that ξ be positive.
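The mixture representation can be checked by simulation. In the sketch below (function name mine), the rate λ is drawn from a Gamma distribution with shape 1/ξ and rate σ/ξ (Python's gammavariate takes a scale, the reciprocal of the rate), and the median of the resulting draws is compared with the GPD median σ(2^ξ − 1)/ξ:

```python
import random

def gpd_mixture_rvs(sigma, xi, n, seed=None):
    """GPD(0, sigma, xi) samples via the exponential-gamma mixture.
    Draw lam ~ Gamma(shape = 1/xi, rate = sigma/xi), then
    X | lam ~ Exponential(lam).  Requires xi > 0 and sigma > 0."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        # gammavariate takes (shape, scale); scale = 1 / rate = xi / sigma
        lam = rng.gammavariate(1.0 / xi, xi / sigma)
        out.append(rng.expovariate(lam))
    return out

samples = gpd_mixture_rvs(sigma=1.0, xi=0.5, n=50_000, seed=7)
med = sorted(samples)[len(samples) // 2]
# Theoretical GPD(0, 1, 0.5) median: (2**0.5 - 1) / 0.5, about 0.828
print(0.7 < med < 0.95)  # True
```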
Exponentiated generalized Pareto distribution
The exponentiated generalized Pareto distribution (exGPD)
If X ~ GPD(μ = 0, σ, ξ), then Y = log X is distributed according to the exponentiated generalized Pareto distribution, denoted by Y ~ exGPD(σ, ξ).
The probability density function (pdf) of Y ~ exGPD(σ, ξ) is

g_{(σ, ξ)}(y) = (e^y / σ) (1 + ξ e^y / σ)^{−1/ξ − 1}  for ξ ≠ 0,  and  (e^y / σ) exp(−e^y / σ)  for ξ = 0,

where the support is −∞ < y < ∞ for ξ ≥ 0, and −∞ < y ≤ log(−σ/ξ) for ξ < 0.
For all ξ, log σ plays the role of the location parameter. See the right panel for the pdf when the shape ξ is positive.
The exGPD has finite moments of all orders for all σ > 0 and −∞ < ξ < ∞.
The moment-generating function of Y ~ exGPD(σ, ξ) is
where B(a, b) and Γ(a) denote the beta function and gamma function, respectively.
The expected value of Y ~ exGPD(σ, ξ) depends on the scale and shape parameters, with ξ entering through the digamma function:
Note that for a fixed value of ξ, log σ plays the role of a location parameter under the exponentiated generalized Pareto distribution.
The variance of Y ~ exGPD(σ, ξ) depends on the shape parameter ξ only through the polygamma function of order 1 (also called the trig
|
https://en.wikipedia.org/wiki/Great%20grand%20stellated%20120-cell
|
In geometry, the great grand stellated 120-cell or great grand stellated polydodecahedron is a regular star 4-polytope with Schläfli symbol {5/2,3,3}, one of 10 regular Schläfli-Hess 4-polytopes. It is unique among the 10 for having 600 vertices, and has the same vertex arrangement as the regular convex 120-cell.
It is one of four regular star polychora discovered by Ludwig Schläfli. It was named by John Horton Conway, extending the naming system by Arthur Cayley for the Kepler-Poinsot solids, and it is the only one containing all three modifiers in the name.
With its dual, it forms the compound of great grand stellated 120-cell and grand 600-cell.
Images
As a stellation
The great grand stellated 120-cell is the final stellation of the 120-cell, and is the only Schläfli-Hess polychoron to have the 120-cell for its convex hull. In this sense it is analogous to the three-dimensional great stellated dodecahedron, which is the final stellation of the dodecahedron and the only Kepler-Poinsot polyhedron to have the dodecahedron for its convex hull. Indeed, the great grand stellated 120-cell is dual to the grand 600-cell, which could be taken as a 4D analogue of the great icosahedron, dual of the great stellated dodecahedron.
The edges of the great grand stellated 120-cell are τ^6 times as long as those of the 120-cell core deep inside the polychoron, and they are τ^3 times as long as those of the small stellated 120-cell deep within the polychoron.
See also
List of regular polytopes
Convex regular 4-polytope – Set of convex regular polychora
Kepler-Poinsot solids – regular star polyhedron
Star polygon – regular star polygons
References
Edmund Hess, (1883) Einleitung in die Lehre von der Kugelteilung mit besonderer Berücksichtigung ihrer Anwendung auf die Theorie der Gleichflächigen und der gleicheckigen Polyeder .
H. S. M. Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. .
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26, Regular Star-polytopes, pp. 404–408)
External links
Regular polychora
Discussion on names
Reguläre Polytope
The Regular Star Polychora
Zome Model of the Final Stellation of the 120-cell
4-polytopes
|
https://en.wikipedia.org/wiki/Grand%20600-cell
|
In geometry, the grand 600-cell or grand polytetrahedron is a regular star 4-polytope with Schläfli symbol {3, 3, 5/2}. It is one of 10 regular Schläfli-Hess polytopes. It is the only one with 600 cells.
It is one of four regular star 4-polytopes discovered by Ludwig Schläfli. It was named by John Horton Conway, extending the naming system by Arthur Cayley for the Kepler-Poinsot solids.
The grand 600-cell can be seen as the four-dimensional analogue of the great icosahedron, which in turn is analogous to the pentagram; these are the only regular n-dimensional star polytopes derived by performing stellation operations on the pentagonal polytope with simplex facets. Each can be constructed, like the pentagram, by extending the (n−1)-dimensional simplex facets of the core n-dimensional polytope (tetrahedra for the grand 600-cell, equilateral triangles for the great icosahedron, and line segments for the pentagram) until the figure regains regular faces.
The grand 600-cell is also dual to the great grand stellated 120-cell, mirroring the great icosahedron's duality with the great stellated dodecahedron (which in turn is also analogous to the pentagram); all of these are the final stellations of the n-dimensional "dodecahedral-type" pentagonal polytope.
Related polytopes
It has the same edge arrangement as the great stellated 120-cell, and grand stellated 120-cell, and same face arrangement as the great icosahedral 120-cell.
See also
List of regular polytopes
Convex regular 4-polytope
Kepler-Poinsot solids - regular star polyhedron
Star polygon - regular star polygons
References
Edmund Hess, (1883) Einleitung in die Lehre von der Kugelteilung mit besonderer Berücksichtigung ihrer Anwendung auf die Theorie der Gleichflächigen und der gleicheckigen Polyeder .
H. S. M. Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. .
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26, Regular Star-polytopes, pp. 404–408)
External links
Regular polychora
Discussion on names
Reguläre Polytope
The Regular Star Polychora
The Great 600-cell, a Zome Model
4-polytopes
|
https://en.wikipedia.org/wiki/Small%20stellated%20120-cell
|
In geometry, the small stellated 120-cell or stellated polydodecahedron is a regular star 4-polytope with Schläfli symbol {5/2,5,3}. It is one of 10 regular Schläfli-Hess polytopes.
Related polytopes
It has the same edge arrangement as the great grand 120-cell, and also shares its 120 vertices with the 600-cell and eight other regular star 4-polytopes. It may also be seen as the first stellation of the 120-cell. In this sense it could be seen as analogous to the three-dimensional small stellated dodecahedron, which is the first stellation of the dodecahedron. Indeed, the small stellated 120-cell is dual to the icosahedral 120-cell, which could be taken as a 4D analogue of the great dodecahedron, dual of the small stellated dodecahedron.
The edges of the small stellated 120-cell are τ^2 times as long as those of the 120-cell core inside the 4-polytope.
See also
List of regular polytopes
Convex regular 4-polytope - Set of convex regular 4-polytope
Kepler-Poinsot solids - regular star polyhedron
Star polygon - regular star polygons
References
Edmund Hess, (1883) Einleitung in die Lehre von der Kugelteilung mit besonderer Berücksichtigung ihrer Anwendung auf die Theorie der Gleichflächigen und der gleicheckigen Polyeder .
H. S. M. Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. .
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26, Regular Star-polytopes, pp. 404–408)
External links
Regular polychora
Discussion on names
Reguläre Polytope
The Regular Star Polychora
Zome Model of the Final Stellation of the 120-cell
The First Stellation of the 120-cell, A Zome Model
4-polytopes
|
https://en.wikipedia.org/wiki/Icosahedral%20120-cell
|
In geometry, the icosahedral 120-cell, polyicosahedron, faceted 600-cell or icosaplex is a regular star 4-polytope with Schläfli symbol {3,5,5/2}. It is one of 10 regular Schläfli-Hess polytopes.
It is constructed by 5 icosahedra around each edge in a pentagrammic figure. The vertex figure is a great dodecahedron.
Related polytopes
It has the same edge arrangement as the 600-cell, grand 120-cell and great 120-cell, and shares its vertices with all other Schläfli–Hess 4-polytopes except the great grand stellated 120-cell (another stellation of the 120-cell).
As a faceted 600-cell, replacing the simplicial cells of the 600-cell with icosahedral pentagonal polytope cells, it could be seen as a four-dimensional analogue of the great dodecahedron, which replaces the triangular faces of the icosahedron with pentagonal faces. Indeed, the icosahedral 120-cell is dual to the small stellated 120-cell, which could be taken as a 4D analogue of the small stellated dodecahedron, dual of the great dodecahedron.
See also
List of regular polytopes
Convex regular 4-polytope
Kepler-Poinsot solids - regular star polyhedron
Star polygon - regular star polygons
References
Edmund Hess, (1883) Einleitung in die Lehre von der Kugelteilung mit besonderer Berücksichtigung ihrer Anwendung auf die Theorie der Gleichflächigen und der gleicheckigen Polyeder .
H. S. M. Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. .
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26, Regular Star-polytopes, pp. 404–408)
External links
Regular polychora
Discussion on names
Reguläre Polytope
The Regular Star Polychora
4-polytopes
|
https://en.wikipedia.org/wiki/Grand%20120-cell
|
In geometry, the grand 120-cell or grand polydodecahedron is a regular star 4-polytope with Schläfli symbol {5,3,5/2}. It is one of 10 regular Schläfli-Hess polytopes.
It is one of four regular star 4-polytopes discovered by Ludwig Schläfli. It was named by John Horton Conway, extending the naming system by Arthur Cayley for the Kepler-Poinsot solids.
Related polytopes
It has the same edge arrangement as the 600-cell, icosahedral 120-cell and the same face arrangement as the great 120-cell.
See also
List of regular polytopes
Convex regular 4-polytope
Kepler-Poinsot solids - regular star polyhedron
Star polygon - regular star polygons
References
Edmund Hess, (1883) Einleitung in die Lehre von der Kugelteilung mit besonderer Berücksichtigung ihrer Anwendung auf die Theorie der Gleichflächigen und der gleicheckigen Polyeder .
H. S. M. Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. .
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26, Regular Star-polytopes, pp. 404–408)
External links
Regular polychora
Discussion on names
Reguläre Polytope
The Regular Star Polychora
4-polytopes
|
https://en.wikipedia.org/wiki/Great%20stellated%20120-cell
|
In geometry, the great stellated 120-cell or great stellated polydodecahedron is a regular star 4-polytope with Schläfli symbol {5/2,3,5}. It is one of 10 regular Schläfli-Hess polytopes.
It is one of four regular star 4-polytopes discovered by Ludwig Schläfli. It was named by John Horton Conway, extending the naming system by Arthur Cayley for the Kepler-Poinsot solids.
Related polytopes
It has the same edge arrangement as the grand 600-cell, icosahedral 120-cell, and the same face arrangement as the grand stellated 120-cell.
See also
List of regular polytopes
Convex regular 4-polytope
Kepler-Poinsot solids - regular star polyhedron
Star polygon - regular star polygons
References
Edmund Hess, (1883) Einleitung in die Lehre von der Kugelteilung mit besonderer Berücksichtigung ihrer Anwendung auf die Theorie der Gleichflächigen und der gleicheckigen Polyeder .
H. S. M. Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. .
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26, Regular Star-polytopes, pp. 404–408)
External links
Regular polychora
Discussion on names
Reguläre Polytope
The Regular Star Polychora
Paper model of 3D cross-section of Great Stellated 120-cell created using nets generated by Stella4D software
4-polytopes
|
https://en.wikipedia.org/wiki/Great%20grand%20120-cell
|
In geometry, the great grand 120-cell or great grand polydodecahedron is a regular star 4-polytope with Schläfli symbol {5,5/2,3}. It is one of 10 regular Schläfli-Hess polytopes.
Related polytopes
It has the same edge arrangement as the small stellated 120-cell.
See also
List of regular polytopes
Convex regular 4-polytope
Kepler-Poinsot polyhedron – regular star polyhedron
Star polygon – regular star polygons
External links
Regular polychora
Discussion on names
Reguläre Polytope
The Regular Star Polychora
References
Edmund Hess, (1883) Einleitung in die Lehre von der Kugelteilung mit besonderer Berücksichtigung ihrer Anwendung auf die Theorie der Gleichflächigen und der gleicheckigen Polyeder .
H. S. M. Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. .
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26, Regular Star-polytopes, pp. 404–408)
4-polytopes
|
https://en.wikipedia.org/wiki/Great%20icosahedral%20120-cell
|
In geometry, the great icosahedral 120-cell, great polyicosahedron or great faceted 600-cell is a regular star 4-polytope with Schläfli symbol {3,5/2,5}. It is one of 10 regular Schläfli-Hess polytopes.
Related polytopes
It has the same edge arrangement as the great stellated 120-cell, and grand stellated 120-cell, and face arrangement of the grand 600-cell.
See also
List of regular polytopes
Convex regular 4-polytope
Kepler-Poinsot solids - regular star polyhedron
Star polygon - regular star polygons
References
Edmund Hess, (1883) Einleitung in die Lehre von der Kugelteilung mit besonderer Berücksichtigung ihrer Anwendung auf die Theorie der Gleichflächigen und der gleicheckigen Polyeder .
H. S. M. Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. .
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26, Regular Star-polytopes, pp. 404–408)
External links
Regular polychora
Discussion on names
Reguläre Polytope
The Regular Star Polychora
4-polytopes
|
https://en.wikipedia.org/wiki/Great%20120-cell
|
In geometry, the great 120-cell or great polydodecahedron is a regular star 4-polytope with Schläfli symbol {5,5/2,5}. It is one of 10 regular Schläfli-Hess polytopes. It is one of the two such polytopes that is self-dual.
Related polytopes
It has the same edge arrangement as the 600-cell, icosahedral 120-cell as well as the same face arrangement as the grand 120-cell.
Due to its self-duality, it does not have a good three-dimensional analogue, but (like all other star polyhedra and polychora) is analogous to the two-dimensional pentagram.
See also
List of regular polytopes
Convex regular 4-polytope
Kepler-Poinsot solids - regular star polyhedron
Star polygon - regular star polygons
References
Edmund Hess, (1883) Einleitung in die Lehre von der Kugelteilung mit besonderer Berücksichtigung ihrer Anwendung auf die Theorie der Gleichflächigen und der gleicheckigen Polyeder .
H. S. M. Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. .
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26, Regular Star-polytopes, pp. 404–408)
External links
Regular polychora
Discussion on names
Reguläre Polytope
The Regular Star Polychora
4-polytopes
|
https://en.wikipedia.org/wiki/Grand%20stellated%20120-cell
|
In geometry, the grand stellated 120-cell or grand stellated polydodecahedron is a regular star 4-polytope with Schläfli symbol {5/2,5,5/2}. It is one of 10 regular Schläfli-Hess polytopes.
It is also one of two such polytopes that is self-dual.
Related polytopes
It has the same edge arrangement as the grand 600-cell, icosahedral 120-cell, and the same face arrangement as the great stellated 120-cell.
Due to its self-duality, it does not have a good three-dimensional analogue, but (like all other star polyhedra and polychora) is analogous to the two-dimensional pentagram.
See also
List of regular polytopes
Convex regular 4-polytope
Kepler-Poinsot solids - regular star polyhedron
Star polygon - regular star polygons
References
Edmund Hess, (1883) Einleitung in die Lehre von der Kugelteilung mit besonderer Berücksichtigung ihrer Anwendung auf die Theorie der Gleichflächigen und der gleicheckigen Polyeder .
H. S. M. Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. .
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26, Regular Star-polytopes, pp. 404–408)
External links
Regular polychora
Discussion on names
Reguläre Polytope
The Regular Star Polychora
4-polytopes
|
https://en.wikipedia.org/wiki/Uniform%20polyhedron%20compound
|
In geometry, a uniform polyhedron compound is a polyhedral compound whose constituents are identical (although possibly enantiomorphous) uniform polyhedra, in an arrangement that is also uniform, i.e. the symmetry group of the compound acts transitively on the compound's vertices.
The uniform polyhedron compounds were first enumerated by John Skilling in 1976, with a proof that the enumeration is complete. The following table lists them according to his numbering.
The prismatic compounds of {p/q}-gonal prisms (UC20 and UC21) exist only when p/q > 2, and when p and q are coprime. The prismatic compounds of {p/q}-gonal antiprisms (UC22, UC23, UC24 and UC25) exist only when p/q > 3/2, and when p and q are coprime. Furthermore, when p/q = 2, the antiprisms degenerate into tetrahedra with digonal bases.
References
.
External links
http://www.interocitors.com/polyhedra/UCs/ShortNames.html - Bowers style acronyms for uniform polyhedron compounds
Polyhedral compounds
|
https://en.wikipedia.org/wiki/Giovanni%20Frattini
|
Giovanni Frattini (8 January 1852 – 21 July 1925) was an Italian mathematician, noted for his contributions to group theory.
Biography
Frattini entered the University of Rome in 1869, where he studied mathematics with Giuseppe Battaglini, Eugenio Beltrami, and Luigi Cremona, obtaining his Laurea in 1875.
In 1885 he published a paper in which he defined a certain subgroup of a finite group G. This subgroup, now known as the Frattini subgroup Φ(G), is the subgroup generated by all the non-generators of G. He showed that Φ(G) is nilpotent and, in so doing, developed a method of proof known today as Frattini's argument.
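The non-generator definition can be illustrated by brute force on a small example. The sketch below (my own illustration, not Frattini's method) finds the non-generators of the cyclic group Z_12 under addition; they form its Frattini subgroup {0, 6}, the intersection of the maximal subgroups 2Z_12 and 3Z_12:

```python
# Brute-force check: x is a non-generator iff removing x from any
# generating set that contains it still leaves a generating set.
from itertools import combinations
from math import gcd

N = 12
ELEMS = tuple(range(N))

def generates(xs):
    """A subset of Z_N generates the whole group iff gcd(xs ∪ {N}) == 1."""
    g = N
    for x in xs:
        g = gcd(g, x)
    return g == 1

def is_non_generator(x):
    """Check every subset not containing x: if adding x makes it
    generate Z_N, the subset must already generate Z_N on its own."""
    for r in range(N + 1):
        for s in combinations(ELEMS, r):
            if x not in s and generates(s + (x,)) and not generates(s):
                return False
    return True

frattini_subgroup = [x for x in ELEMS if is_non_generator(x)]
print(frattini_subgroup)  # [0, 6]
```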
Besides group theory, he also studied differential geometry and the analysis of second-degree indeterminates.
Notes
References
Emaldi, M.; Zacher, G., Giovanni Frattini (1852–1925), matematico (in Italian), Advances in Group Theory 2002, pp. 191–207, Aracne, Rome, 2003.
External links
1852 births
1925 deaths
Scientists from Rome
19th-century Italian mathematicians
20th-century Italian mathematicians
Group theorists
Sapienza University of Rome alumni
|
https://en.wikipedia.org/wiki/List%20of%20Philippine%20provinces%20by%20population
|
This is a list of the Philippines' provinces sorted by population, based on the population census of August 1, 2015 conducted by the Philippine Statistics Authority.
Population of provinces in this list includes population of highly urbanized cities, which are administratively independent of the province.
Population counts for the regions do not add up to the national total.
2020 Census
2015 Census
2000 Census
Showing provinces existing at the time of census. Figures do not add up to total as population in disputed areas are added up to the next higher subdivision.
1995 Census
Showing provinces existing at the time of census.
1975 Census
Showing provinces existing at the time of census.
1903 Census
Showing provinces existing at the time of census.
See also
Demographics of the Philippines
Provinces of the Philippines
List of Philippine provinces by Human Development Index
References
Sources
Census 2000 Final Count
Population
Philippines, population
|
https://en.wikipedia.org/wiki/Institute%20of%20Mathematics%20of%20the%20Romanian%20Academy
|
The "Simion Stoilow" Institute of Mathematics of the Romanian Academy is a research institute in Bucharest, Romania. It is affiliated with the Romanian Academy, and it is named after Simion Stoilow, one of its founders.
History
On December 29, 1945, a group of twenty Romanian mathematicians from various institutions in Bucharest led by Dimitrie Pompeiu held a meeting at the University of Bucharest to establish the Institute of Mathematical Sciences with the aim of "promoting scientific research in mathematical sciences, through communications, talks, publications, congresses, and other means proper to this aim". This group also included Dan Barbilian, Alexandru Froda, Alexandru Ghica, Gheorghe Mihoc, Grigore Moisil, Miron Nicolescu, Octav Onicescu, Stoilow, Gabriel Sudan, Victor Vâlcovici, and Gheorghe Vrănceanu. In January 1946 they registered the Institute as a legal person, specifically an NGO, with the Ilfov County Court.
On June 9, 1948 the new Communist regime revamped the Romanian Academy to an institution modeled on the Academy of Sciences of the USSR, increasing by 1966 the number of its member research centers and institutes from 7 to 56. Among the newly created institutes was the Institute of Mathematics of the Romanian Academy, established in 1949 on the basis of the previous NGO with the contribution of Simion Stoilow, one of the twenty founding members from 1945.
In 1974, Zoia Ceaușescu, a graduate of the Faculty of Mathematics of the University of Bucharest and the daughter of Nicolae Ceaușescu, the communist head of state, was hired by the institute. Her parents were not happy with her choice of studying mathematics. Provoked by a verbal disagreement with Miron Nicolescu, Nicolae Ceaușescu issued a decree in April 1975 closing down the Institute. The ensuing disruption of scientific life led to the eventual departure from Romania of a number of leading mathematicians, including Ciprian Foias and Dan-Virgil Voiculescu. In 1978, with help from Zoia Ceaușescu, some of the former members of the Institute were hired into a newly established mathematical section of the National Institute for Scientific and Technical Creation (Institutul Național pentru Creație Științifică și Tehnică, INCREST), previously known as the Institute of Fluid Mechanics and Aerospace Research (Institutul de Mecanică a Fluidelor și Cercetări Aerospațiale, IMFCA), currently a private company (INAV S.A.), owned by Grupul S.C.R.
After the Romanian Revolution of 1989, the Institute of Mathematics of the Romanian Academy (abbreviated IMAR) was re-established on March 8, 1990 by a decree of the post-Communist Romanian government. It was placed under the aegis of the Romanian Academy, itself partially reorganized by a decree of the same government on January 5, 1990.
Current situation
Currently, IMAR is the leading Romanian institution in Mathematics research, with about 100 full- and part-time researchers. In 2000–2004, the Institute was a Centre of Excellence in Rese
|
https://en.wikipedia.org/wiki/Charlotte%20Scott
|
Charlotte Angas Scott (8 June 1858 – 10 November 1931) was a British mathematician who made her career in the United States and was influential in the development of American mathematics, including the mathematical education of women. Scott played an important role in Cambridge changing the rules for its famous Mathematical Tripos exam.
Early life
She was the second of seven children to Caleb Scott, a minister of the Congregational Church, and Eliza Exley Scott. Educated at Girton College, Cambridge from 1876 to 1880 on a scholarship, she was then a Resident Lecturer in Mathematics there until 1884. In 1885 she became one of the first British women to receive a doctorate, and the first British woman to receive a doctorate in mathematics, which she received from the University of London. She did her graduate research under Arthur Cayley at Cambridge University, but since Cambridge did not begin issuing degrees to women until 1948, Scott received her BSc (1882) and D.Sc. (1885) from the University of London through external examinations.
Passing the Tripos
In 1880, Scott obtained special permission to take the Cambridge Mathematical Tripos exam, as women were not normally allowed to sit for it. She placed eighth among all students taking the exam, but because of her sex the title of "eighth wrangler," a high honour, went officially to a male student.
At the ceremony, however, after the seventh wrangler had been announced, all the students in the audience shouted her name.
Because she could not attend the award ceremony, Scott celebrated her accomplishment at Girton College, where there were cheers and clapping at dinner; at a special evening ceremony the students sang "See the Conquering Hero Comes", and she received an ode written by a staff member and was crowned with laurels.
After this incident women were allowed to formally take the exam and their exam scores listed, although separately from the men's and thus not included in the rankings. Women obtaining the necessary score also received a special certificate instead of the BA degree with honours. In 1922, James Harkness remarked that Scott's achievement marked "the turning point in England from the theoretical feminism of Mill and others to the practical education and political advances of the present time".
Work
Moving to the United States in 1885, she became one of eight founding faculty and Associate Professor of Mathematics at Bryn Mawr College, and Professor from 1888 to 1917. She was the first mathematician at Bryn Mawr College and the first department head. During this period she directed the PhD theses of some pioneering women mathematicians. Of the nine other women to earn doctorates in mathematics in the nineteenth century, three studied with Scott.
Her mathematical speciality was the study of specific algebraic curves of degree higher than two. Her book An Introductory Account of Certain Modern Ideas and Methods in Plane Analytical Geometry was published in 1894 and
|
https://en.wikipedia.org/wiki/Relative%20interior
|
In mathematics, the relative interior of a set is a refinement of the concept of the interior, which is often more useful when dealing with low-dimensional sets placed in higher-dimensional spaces.
Formally, the relative interior of a set S (denoted relint(S)) is defined as its interior within the affine hull of S. In other words,

relint(S) := { x ∈ S : there exists ε > 0 such that B_ε(x) ∩ aff(S) ⊆ S },

where aff(S) is the affine hull of S, and B_ε(x) is a ball of radius ε centered on x. Any metric can be used for the construction of the ball; all metrics define the same set as the relative interior.
A set is relatively open iff it is equal to its relative interior. Note that when aff(S) is a closed subspace of the full vector space (always the case when the full vector space is finite-dimensional), then being relatively closed is equivalent to being closed.
For any convex set C the relative interior is equivalently defined as

relint(C) := { x ∈ C : for all y ∈ C, there exists some λ > 1 such that λx + (1 − λ)y ∈ C }.
Comparison to interior
The interior of a point in an at least one-dimensional ambient space is empty, but its relative interior is the point itself.
The interior of a line segment in an at least two-dimensional ambient space is empty, but its relative interior is the line segment without its endpoints.
The interior of a disc in an at least three-dimensional ambient space is empty, but its relative interior is the same disc without its circular edge.
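The convex-set characterization can be probed numerically. The sketch below (helper names and the fixed λ are my own simplifications; the definition only requires some λ > 1 per point) tests points of a line segment in R², whose relative interior excludes exactly the two endpoints:

```python
def in_segment(p, a, b, eps=1e-9):
    """Membership test for the closed segment from a to b in R^2."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    px, py = p[0] - a[0], p[1] - a[1]
    if abs(dx * py - dy * px) > eps:       # not on the supporting line
        return False
    t = (px * dx + py * dy) / (dx * dx + dy * dy)
    return -eps <= t <= 1.0 + eps

def in_relint(x, a, b, lam=1.001, samples=100):
    """Spot-check the criterion: lam*x + (1 - lam)*y must stay in the
    segment for sampled y; one small fixed lam > 1 suffices here."""
    for i in range(samples + 1):
        t = i / samples
        y = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
        z = (lam * x[0] + (1 - lam) * y[0], lam * x[1] + (1 - lam) * y[1])
        if not in_segment(z, a, b):
            return False
    return True

a, b = (0.0, 0.0), (2.0, 2.0)
print(in_relint((1.0, 1.0), a, b))  # True  (midpoint)
print(in_relint((0.0, 0.0), a, b))  # False (endpoint)
```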
Properties
See also
References
Further reading
Topology
|
https://en.wikipedia.org/wiki/Faceting
|
Stella octangula as a faceting of the cube
In geometry, faceting (also spelled facetting) is the process of removing parts of a polygon, polyhedron or polytope, without creating any new vertices.
New edges of a faceted polyhedron may be created along face diagonals or internal space diagonals. A faceted polyhedron has two faces on each edge; faceting thus produces new polyhedra or compounds of polyhedra.
Faceting is the reciprocal or dual process to stellation. For every stellation of some convex polytope, there exists a dual faceting of the dual polytope.
Faceted polygons
For example, a regular pentagon has one symmetry faceting, the pentagram, and the regular hexagon has two symmetric facetings, one as a polygon, and one as a compound of two triangles.
Faceted polyhedra
The regular icosahedron can be faceted into three regular Kepler–Poinsot polyhedra: small stellated dodecahedron, great dodecahedron, and great icosahedron. They all have 30 edges.
The regular dodecahedron can be faceted into one regular Kepler–Poinsot polyhedron, three uniform star polyhedra, and three regular polyhedral compounds. The uniform stars and the compound of five cubes are constructed from face diagonals. The excavated dodecahedron is a faceting with star-hexagon faces.
History
Faceting has not been studied as extensively as stellation.
In 1568 Wenzel Jamnitzer published his book Perspectiva Corporum Regularium, showing many stellations and facetings of polyhedra.
In 1619, Kepler described a regular compound of two tetrahedra which fits inside a cube, and which he called the Stella octangula.
In 1858, Bertrand derived the regular star polyhedra (Kepler–Poinsot polyhedra) by faceting the regular convex icosahedron and dodecahedron.
In 1974, Bridge enumerated the more straightforward facetings of the regular polyhedra, including those of the dodecahedron.
In 2006, Inchbald described the basic theory of faceting diagrams for polyhedra. For a given vertex, the diagram shows all the possible edges and facets (new faces) which may be used to form facetings of the original hull. It is dual to the dual polyhedron's stellation diagram, which shows all the possible edges and vertices for some face plane of the original core.
References
Notes
Bibliography
Bertrand, J. Note sur la théorie des polyèdres réguliers, Comptes rendus des séances de l'Académie des Sciences, 46 (1858), pp. 79–82.
Bridge, N.J. Facetting the dodecahedron, Acta crystallographica A30 (1974), pp. 548–552.
Inchbald, G. Facetting diagrams, The mathematical gazette, 90 (2006), pp. 253–261.
Alan Holden, Shapes, Space, and Symmetry. New York: Dover, 1991. p.94
External links
Polyhedra
Polygons
Polytopes
https://en.wikipedia.org/wiki/Roy%27s_safety-first_criterion
Roy's safety-first criterion is a risk management technique, devised by A. D. Roy, that allows an investor to select one portfolio rather than another based on the criterion that the probability of the portfolio's return falling below a minimum desired threshold is minimized.
For example, suppose there are two available investment strategies—portfolio A and portfolio B, and suppose the investor's threshold return level (the minimum return that the investor is willing to tolerate) is −1%. Then, the investor would choose the portfolio that would provide the maximum probability of the portfolio return being at least as high as −1%.
Thus, the problem of an investor using Roy's safety-first criterion can be summarized symbolically as:

$$\min_i \; P(R_i < \underline{R})$$

where $P(R_i < \underline{R})$ is the probability of $R_i$ (the actual return of asset i) being less than $\underline{R}$ (the minimum acceptable return).
Normally distributed return and SFRatio
If the portfolios under consideration have normally distributed returns, Roy's safety-first criterion can be reduced to the maximization of the safety-first ratio, defined by:

$$\text{SFRatio} = \frac{\overline{r_p} - \underline{r}}{\sigma_p}$$

where $\overline{r_p}$ is the expected return (the mean return) of the portfolio, $\sigma_p$ is the standard deviation of the portfolio's return, and $\underline{r}$ is the minimum acceptable return.
Example
If Portfolio A has an expected return of 10% and standard deviation of 15%, while portfolio B has a mean return of 8% and a standard deviation of 5%, and the investor is willing to invest in a portfolio that maximizes the probability of a return no lower than 0%:
SFRatio(A) = (10 − 0) / 15 ≈ 0.67,
SFRatio(B) = (8 − 0) / 5 = 1.6.
By Roy's safety-first criterion, the investor would choose portfolio B as the correct investment opportunity.
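Under the normality assumption above, the two safety-first ratios and the corresponding shortfall probabilities can be checked with a short sketch. The function names are illustrative; the normal CDF comes from Python's standard library, and $P(r < \underline{r}) = \Phi(-\text{SFRatio})$ for normal returns:

```python
from statistics import NormalDist

def sf_ratio(expected_return, stdev, min_return):
    """Roy's safety-first ratio: excess of the expected return over the
    minimum acceptable return, per unit of standard deviation."""
    return (expected_return - min_return) / stdev

def shortfall_probability(expected_return, stdev, min_return):
    """For normally distributed returns, P(r < min_return) = Phi(-SFRatio)."""
    return NormalDist().cdf(-sf_ratio(expected_return, stdev, min_return))

# Portfolios from the example above (figures in percent), threshold 0%.
print(sf_ratio(10, 15, 0))               # ≈ 0.67 for portfolio A
print(sf_ratio(8, 5, 0))                 # 1.6 for portfolio B
print(shortfall_probability(10, 15, 0))  # ≈ 0.25
print(shortfall_probability(8, 5, 0))    # ≈ 0.055
```

Portfolio B has the higher ratio and therefore the lower probability of falling below the 0% threshold, matching the conclusion above.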
Similarity to Sharpe ratio
Under normality,

$$\text{SFRatio} = \frac{\overline{r_p} - \underline{r}}{\sigma_p}.$$

The Sharpe ratio is defined as excess return per unit of risk, or in other words:

$$S = \frac{\overline{r_p} - r_f}{\sigma_p},$$

where $r_f$ is the risk-free rate of return.
The SFRatio has a striking similarity to the Sharpe ratio. Thus for normally distributed returns, Roy's Safety-first criterion—with the minimum acceptable return equal to the risk-free rate—provides the same conclusions about which portfolio to invest in as if we were picking the one with the maximum Sharpe ratio.
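The equivalence is immediate from the two formulas: setting the minimum acceptable return equal to the risk-free rate makes them identical term by term. A minimal check, using a hypothetical risk-free rate of 2% with the portfolios from the example above:

```python
def sf_ratio(expected_return, stdev, min_return):
    # Roy's safety-first ratio.
    return (expected_return - min_return) / stdev

def sharpe_ratio(expected_return, stdev, risk_free):
    # Sharpe ratio: excess return over the risk-free rate per unit of risk.
    return (expected_return - risk_free) / stdev

rf = 2.0  # hypothetical risk-free rate, in percent
for mean, sd in [(10, 15), (8, 5)]:
    # With min_return = rf the two ratios coincide, so both criteria
    # rank the portfolios identically.
    print(sf_ratio(mean, sd, rf), sharpe_ratio(mean, sd, rf))
```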
Asset Pricing
Roy's work is the foundation of asset pricing under loss aversion. His work was followed by Lester G. Telser's proposal of maximizing expected return subject to the constraint that the probability of the return falling below a given threshold be less than a certain safety level.
See also Chance-constrained portfolio selection.
See also
Omega ratio
Value at risk
References
Financial risk management
Portfolio theories