https://en.wikipedia.org/wiki/Otto%20Buek | Otto Buek (19 November 1873 – 1966) was a German philosopher and translator born in St. Petersburg.
He studied philosophy, chemistry and mathematics at the University of Heidelberg, and obtained his doctorate from the University of Marburg. Later he worked as a journalist in Berlin, where he translated works of Tolstoy, Unamuno and Alexander Herzen. Additionally, with Kurt Wildhagen (1871–1949), he edited works by Turgenev, Gogol and two volumes of Ernst Cassirer's edition of Kant's collected writings. During the 1920s, he worked as a correspondent for the Argentine newspaper La Nación.
From a philosophical standpoint, Buek was an advocate of neo-Kantianism, and as a young man he was a disciple of the Marburg philosopher Hermann Cohen (1848–1918). He was a friend of the physiologist and pacifist Georg Friedrich Nicolai (1874–1964), and was one of only three intellectuals in Germany who signed Nicolai's 1914 anti-war counter-manifesto, Manifesto to the Europeans (Aufruf an die Europäer); the other two were the physicist Albert Einstein and the astronomer Wilhelm Julius Foerster.
Selected publications
Die Atomistik und Faradays Begriff der Materie ("Atomism and Faraday's concept of matter") in: Archiv für die Geschichte der Philosophie 18: 1904, 65-139; als Separatdruck auch: Reimer, Berlin 1905.
Kritik des Marxismus ("Critique of Marxism") in: Die Aktion 1911, Spalte 1029–1033.
Faradays System der Natur und seine begrifflichen Grundlagen ("Faraday's system of nature and its conceptual foundations") in: Philosophische Abhandlungen. Hermann Cohen zum 70sten Geburtstag dargebracht (4. Juli 1912); Cassirer, Berlin 1912.
References
Citations
Bibliography
19th-century German philosophers
German translators
1873 births
1966 deaths
German male non-fiction writers
Date of death missing
Place of death missing
20th-century German philosophers
Emigrants from the Russian Empire to the German Empire |
https://en.wikipedia.org/wiki/Harold%20Schoen | Harold Schoen (born in Fort Recovery, Ohio) is a retired mathematics educator and former college basketball player.
College basketball career
Dr. Schoen played basketball at the University of Dayton from 1960 to 1963. He appeared in 70 games, scored 448 career points (6.4 avg.), and had 352 rebounds. He was a starting forward on the 1962 NIT Champion Flyer team that finished with a record of 24-6, and he scored 12 points in the final game against St. John's at Madison Square Garden. He was also captain of the Flyers in his senior year, 1962–63, when he won the team free-throw award (79%) and the scholarship award for being the senior with the highest grade point average.
Professional life
Dr. Schoen is a Professor Emeritus of mathematics and education at The University of Iowa where he served on the faculty from 1974 to 2005. He received his doctorate from Ohio State University in 1971. He was very active in the National Council of Teachers of Mathematics as a leader and writer of scholarly publications. His professional work has focused on teaching mathematics and mathematics curriculum. Dr. Schoen has authored over 100 scholarly publications as well as 15 textbooks.
In 2015, Dr. Schoen published a memoir of his life from growing up on a small Ohio farm in a family of 13 children through his years at the University of Dayton and winning the 1962 NIT. He writes of family, farm work and play, and of playing basketball at the University of Dayton for legendary coach Tom Blackburn. In May 2016, Growing Up was named a finalist for a Next Generation Indie Book Award. Growing Up is available as a paperback or Kindle e-book.
References
Living people
20th-century American mathematicians
21st-century American mathematicians
American educators
American men's basketball players
Dayton Flyers men's basketball players
People from Fort Recovery, Ohio
Year of birth missing (living people) |
https://en.wikipedia.org/wiki/Nir%20Nachum | Nir Nachum (born September 2, 1983) is an Israeli football player who plays as an attacking midfielder for Maccabi Ironi Sderot.
Club career statistics
(correct as of October 2011)
References
1983 births
Living people
Israeli Jews
Israeli men's footballers
F.C. Ashdod players
Hapoel Marmorek F.C. players
Maccabi Sha'arayim F.C. players
Sektzia Ness Ziona F.C. players
Beitar Jerusalem F.C. players
Hapoel Ashkelon F.C. players
Maccabi Netanya F.C. players
Hapoel Tel Aviv F.C. players
Hapoel Rishon LeZion F.C. players
Hapoel Bnei Lod F.C. players
Israeli Premier League players
Liga Leumit players
Footballers from Ashdod
Men's association football forwards |
https://en.wikipedia.org/wiki/Bad%20Science%20%28Goldacre%20book%29 | Bad Science is a book written by Ben Goldacre which criticises certain physicians and the media for a lack of critical thinking and a misunderstanding of evidence and statistics that is detrimental to the public understanding of science. In Bad Science, Goldacre explains basic scientific principles to demonstrate the importance of robust research methods, experimental design, and analysis in making informed judgements about evidence-based medicine. Bad Science has been described as an engaging and inspirational book, written in simple language and with occasional humour, that effectively explains academic concepts to the reader.
Bad Science was originally published in the UK by Fourth Estate in September 2008 and later editions have since been published through HarperCollins Publishers. The book has generally been well-received with positive reviews by the British Medical Journal and the Daily Telegraph. Bad Science reached the Top 10 bestseller list for Amazon Books and was shortlisted for the BBC Samuel Johnson Prize for Non-Fiction 2009.
Synopsis
Each chapter deals with a specific aspect of bad science, often to illustrate a wider point. For example, the chapter on homeopathy becomes the point where he explains the placebo effect, regression to the mean (that is, the natural cycle of the disease), placebo-controlled trials (including the need for randomisation and double blinding), meta-analyses like the Cochrane Collaboration and publication bias.
Introduction
Goldacre begins by highlighting that the substandard understanding of statistics and evidence-based medicine within our society leads to the misrepresentation of science within the media. Ultimately, constant bombardment from the media and advertising is detrimental to, and misleads, the public understanding of science. Throughout this book, Goldacre aims to provide knowledge of bad science practices to help readers differentiate the truth from the lies and form their own judgement of science.
Chapter 1: "Matter"
Goldacre notes that the scientific knowledge of marketers and journalists is often rudimentary, relying on basic notions and ideas from GCSE-level science. Consequently, through basic reasoning errors, they have the power to misinterpret evidence and mislead public understanding. Throughout this chapter, Goldacre debunks pseudoscientific claims within the alternative-medicine phenomenon of detoxification, using simple science experiments to demonstrate the basic principles of experimental design and analysis. With three examples, Goldacre shows that theatre is a common feature of detox products and that detox itself has no scientific meaning but is instead a marketing invention based on cultural rituals.
Chapter 2: "Brain Gym"
Goldacre slams Brain Gym as transparent pseudoscientific nonsense. At the time the book was written, Brain Gym was promoted by local education authorities and practised in hundreds of schools across the country. Goldacre explains how the use of quackery |
https://en.wikipedia.org/wiki/Validity | Validity or Valid may refer to:
Mathematics and statistics
Validity (logic), a property of a logical argument
Science
Internal validity, the validity of causal inferences within scientific studies, usually based on experiments
External validity, the validity of generalized causal inferences in scientific studies, usually based on experiments
Valid name (zoology), in animal taxonomy
Validly published name, in plant taxonomy
Validity (statistics), the application of the principles of statistics to arrive at valid conclusions
Statistical conclusion validity, establishes the existence and strength of the co-variation between the cause and effect variables
Test validity, validity in educational and psychological testing
Face validity, the extent to which a test appears, on its face, to measure what it is intended to measure
Construct validity, refers to whether a scale measures or correlates with the theorized psychological construct it measures
Content validity, the extent to which a measure represents all facets of a given construct
Concurrent validity, the extent to which a test correlates with another measure
Predictive validity, the extent to which a score on a scale or test predicts scores on some other measure
Discriminant validity, the degree to which results of a test of one concept can be expected to differ from tests of other concepts that should not be correlated with this one
Criterion validity, the extent to which a measure is related to an outcome
Convergent validity, the degree to which multiple measures of the same construct lead to the same conclusion
Other uses
Valid (number format), a universal number format (unum type III)
Valid (engraving company), a Brazilian company
VALID (Video Audio Line-up & IDentification), part of the GLITS broadcast television protocol
Validity and liceity (Catholic Church), concepts in the Catholic Church.
See also
Validation (disambiguation) |
https://en.wikipedia.org/wiki/P-adically%20closed%20field | In mathematics, a p-adically closed field is a field that enjoys a closure property that is a close analogue for p-adic fields to what real closure is to the real field. They were introduced by James Ax and Simon B. Kochen in 1965.
Definition
Let Q be the field of rational numbers and let v be its usual p-adic valuation (normalized so that v(p) = 1). If F is a (not necessarily algebraic) extension field of Q, itself equipped with a valuation w, we say that (F, w) is formally p-adic when the following conditions are satisfied:
w extends v (that is, w(x) = v(x) for all x in Q),
the residue field of w coincides with the residue field of v (the residue field being the quotient of the valuation ring by its maximal ideal),
the smallest positive value of w coincides with the smallest positive value of v (namely 1, since v was assumed to be normalized): in other words, a uniformizer for Q remains a uniformizer for F.
(Note that the value group of F may be larger than that of Q since it may contain infinitely large elements over the latter.)
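For concreteness, the normalized valuation v above can be computed for any nonzero rational. The sketch below is an illustrative helper of our own (the article itself works abstractly); the function name `vp` is ours:

```python
from fractions import Fraction

def vp(x, p):
    """Normalized p-adic valuation on the nonzero rationals, so that vp(p, p) == 1."""
    x = Fraction(x)
    if x == 0:
        raise ValueError("the valuation of 0 is undefined (conventionally +infinity)")
    n = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:   # each factor of p in the numerator adds 1
        num //= p
        n += 1
    while den % p == 0:   # each factor of p in the denominator subtracts 1
        den //= p
        n -= 1
    return n

# vp(75, 5) == 2, vp(Fraction(1, 10), 5) == -1, vp(6, 5) == 0
```

Any valuation w making an extension field formally p-adic must agree with this function on Q.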
The formally p-adic fields can be viewed as an analogue of the formally real fields.
For example, the field Q(i) of Gaussian rationals, if equipped with the valuation w given by w(2 + i) = 1 (and w(2 − i) = 0), is formally 5-adic (the place v = 5 of the rationals splits in two places of the Gaussian rationals, since x² + 1 factors over the residue field with 5 elements, and w is one of these places). The field of 5-adic numbers (which contains both the rationals and the Gaussian rationals embedded as per the place w) is also formally 5-adic. On the other hand, the field of Gaussian rationals is not formally 3-adic for any valuation, because the only valuation w on it which extends the 3-adic valuation is given by w(3) = 1, and its residue field has 9 elements.
When F is formally p-adic and there does not exist any proper algebraic formally p-adic extension of F, then F is said to be p-adically closed. For example, the field of p-adic numbers is p-adically closed, and so is the algebraic closure of the rationals inside it (the field of p-adic algebraic numbers).
If F is p-adically closed, then:
there is a unique valuation w on F which makes F p-adically closed (so it is legitimate to say that F, rather than the pair (F, w), is p-adically closed),
F is Henselian with respect to this place (that is, its valuation ring is so),
the valuation ring of F is exactly the image of the Kochen operator (see below),
the value group of F is an extension by Z (the value group of the base field) of a divisible group, with the lexicographical order.
The first statement is an analogue of the fact that the order of a real-closed field is uniquely determined by the algebraic structure.
The definitions given above can be copied to a more general context: if K is a field equipped with a valuation v such that
the residue field of K is finite (call q its cardinal and p its characteristic),
the value group of v admits a smallest positive element (call it 1, and say π is a uniformizer, i.e. v(π) = 1),
K has finite absolute ramification, i.e., v(p) is finite. |
https://en.wikipedia.org/wiki/List%20of%20Sport%20Club%20Corinthians%20Paulista%20records%20and%20statistics |
All-time top 10 goalscorers
As of February 2, 2009
All-time top 10 appearances
As of January 29, 2022
Records
Brazilian football club statistics
Corinthians |
https://en.wikipedia.org/wiki/1908%E2%80%9309%20Manchester%20United%20F.C.%20season | The 1908–09 season was Manchester United's 17th season in the Football League and fourth in the First Division.
First Division
FA Cup
Squad statistics
References
Manchester United F.C. seasons
Manchester United |
https://en.wikipedia.org/wiki/Pricing%20science | Pricing science is the application of social and business science methods to the problem of setting prices. Methods include economic modeling, statistics, econometrics, and mathematical programming. This discipline had its origins in the development of yield management in the airline industry in the 1980s, and has since spread to many other sectors and pricing contexts, including yield management in other travel industry sectors, media, retail, manufacturing and distribution.
Pricing science work is effectuated in a variety of ways, from strategic advice on pricing and on defining segments for which pricing strategies may vary, to enterprise-class software applications integrated into price-quoting and selling processes.
History
Pricing science has its roots in the development of yield management programs developed by the airline industry shortly after deregulation of the industry in the early 1980s. These programs provided model-based support to answer the central question faced by deregulated airlines: "How many bookings should I accept, for each fare product that I offer on each flight departure that I operate, so that I maximize my revenue?" Finding the best answers required developing statistical algorithms to predict the number of booked passengers who would show up and to predict the number of additional bookings to expect for each fare product. It also required developing optimization algorithms and formulations to find the best solution, given the characteristics of the forecasts. And for airlines operating hundreds to thousands of flights every day, and selling tickets for daily departures 300 days into the future, the computational challenges are extreme.
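The flavor of these booking-limit computations can be illustrated with Littlewood's classic two-fare rule, a standard building block of yield management (our own sketch, not any airline's production algorithm): keep protecting one more seat for the high fare as long as the expected high-fare revenue from that seat beats the low fare.

```python
def protection_level(high_fare, low_fare, demand_pmf):
    """Littlewood's rule: the largest y with high_fare * P(D_high >= y) >= low_fare.

    demand_pmf maps each possible high-fare demand level to its probability.
    Low-fare bookings are then accepted only while more than y seats remain.
    """
    def tail(k):  # P(high-fare demand >= k)
        return sum(p for d, p in demand_pmf.items() if d >= k)

    y = 0
    while high_fare * tail(y + 1) >= low_fare:
        y += 1
    return y

# Example: high-fare demand uniform on 0..10 seats, fares 300 vs 100.
uniform = {d: 1 / 11 for d in range(11)}
# protection_level(300, 100, uniform) -> 7 seats protected for the high fare
```

Real revenue-management systems layer demand forecasting, no-show prediction, and network optimization on top of this kind of marginal-revenue comparison.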
The yield management programs provided dramatic financial benefits to their early adopters in the early- to mid-1980s, and the approach spread rapidly to firms in the related sectors of hotel, rental car, and cruise line industries. While there are important differences between these industries, the dominant drivers of the solutions were the perishable nature of the resource being sold, demand patterns that were time-variable, and the limited capacity available for sale. For a good overview of pricing science methods and applications related to yield or revenue management, see Phillips and the references cited therein. Williams shows the connection between many of these problems and standard micro-economics.
Beginning in the early to mid-1990s, these successes spawned efforts to apply the methods, or develop new methods, to support pricing and related decisions in a variety of other settings. Yield management has been applied successfully to broadcast and cable television, online media, oil and gas producers, sporting and theatrical providers, apartment and timeshare rental properties, credit card, and retail settings.
Since about 2000, the application of pricing science to the problems of quoting prices in business-to-business transactions has taken off |
https://en.wikipedia.org/wiki/Topos | In mathematics, a topos (, ; plural topoi or , or toposes) is a category that behaves like the category of sheaves of sets on a topological space (or more generally: on a site). Topoi behave much like the category of sets and possess a notion of localization; they are a direct generalization of point-set topology. The Grothendieck topoi find applications in algebraic geometry; the more general elementary topoi are used in logic.
The mathematical field that studies topoi is called topos theory.
Grothendieck topos (topos in geometry)
Since the introduction of sheaves into mathematics in the 1940s, a major theme has been to study a space by studying sheaves on a space. This idea was expounded by Alexander Grothendieck by introducing the notion of a "topos". The main utility of this notion is in the abundance of situations in mathematics where topological heuristics are very effective, but an honest topological space is lacking; it is sometimes possible to find a topos formalizing the heuristic. An important example of this programmatic idea is the étale topos of a scheme. Another illustration of the capability of Grothendieck toposes to incarnate the “essence” of different mathematical situations is given by their use as bridges for connecting theories which, albeit written in possibly very different languages, share a common mathematical content.
Equivalent definitions
A Grothendieck topos is a category C which satisfies any one of the following three properties. (A theorem of Jean Giraud states that the properties below are all equivalent.)
There is a small category D and an inclusion C ↪ Presh(D) that admits a finite-limit-preserving left adjoint.
C is the category of sheaves on a Grothendieck site.
C satisfies Giraud's axioms, below.
Here Presh(D) denotes the category of contravariant functors from D to the category of sets; such a contravariant functor is frequently called a presheaf.
Giraud's axioms
Giraud's axioms for a category C are:
C has a small set of generators, and admits all small colimits. Furthermore, fiber products distribute over coproducts. That is, given a set I, an I-indexed coproduct mapping to A, and a morphism A′ → A, the pullback is an I-indexed coproduct of the pullbacks:
Sums in C are disjoint. In other words, the fiber product of X and Y over their sum is the initial object in C.
All equivalence relations in C are effective.
The last axiom needs the most explanation. If X is an object of C, an "equivalence relation" R on X is a map R → X × X in C
such that for any object Y in C, the induced map Hom(Y, R) → Hom(Y, X) × Hom(Y, X) gives an ordinary equivalence relation on the set Hom(Y, X). Since C has colimits we may form the coequalizer of the two maps R → X; call this X/R. The equivalence relation is "effective" if the canonical map
R → X ×_{X/R} X
is an isomorphism.
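In the category of sets every equivalence relation is effective, and that can be checked concretely: the fiber product X ×_{X/R} X is just the set of pairs identified by the quotient map. A small illustrative script (names are ours):

```python
def is_effective(X, R):
    """Check that R equals the fiber product X ×_{X/R} X in Set.

    The fiber product over the quotient X/R is the set of pairs (a, b)
    sent to the same equivalence class by the quotient map q: X -> X/R.
    """
    # Quotient map: send each element to its equivalence class under R.
    cls = {x: frozenset(y for y in X if (x, y) in R) for x in X}
    fiber_product = {(a, b) for a in X for b in X if cls[a] == cls[b]}
    return fiber_product == R

X = {0, 1, 2, 3}
R = {(a, b) for a in X for b in X if a % 2 == b % 2}  # congruence mod 2
# is_effective(X, R) -> True: in Set, every equivalence relation is effective
```

In a general topos this equality of sets is replaced by the requirement that the canonical map R → X ×_{X/R} X be an isomorphism.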
Examples
Giraud's theorem already gives "sheaves on sites" as a complete list of examples. Note, however, that nonequivalent sites often give
rise |
https://en.wikipedia.org/wiki/SimFiT | Simfit is a free open-source Windows package for simulation, curve fitting, statistics, and plotting, using a library of models or user-defined mathematical equations. Simfit has been developed by Bill Bardsley of the University of Manchester. Although it is written for Windows, it can easily be installed and used on Linux machines via WINE.
Simfit is developed using Silverfrost Limited's FTN95 Fortran Compiler and is currently featured on their website as a showcased application. The graphical functionality in Simfit has been released as a Fortran library called Simdem which allows the programmer to produce charts and graphs with just a few lines of Fortran. A version of Simdem is shipped with the Windows version of the NAG Fortran Builder.
A Spanish-language version of Simfit is maintained by a team in Salamanca.
References
External links
Main Website
Website of the Silverfrost version
Website of the Spanish version
Free statistical software
Regression and curve fitting software |
https://en.wikipedia.org/wiki/Racetrack%20principle | In calculus,
the racetrack principle describes the movement and growth of two functions in terms of their derivatives.
This principle is derived from the fact that if a horse named Frank Fleetfeet always runs faster than a horse named Greg Gooseleg, then if Frank and Greg start a race from the same place at the same time, Frank will win. More briefly, the horse that starts fast and stays fast wins.
In symbols:
if f′(x) > g′(x) for all x > 0, and if f(0) = g(0), then f(x) > g(x) for all x > 0;
or, substituting ≥ for > produces the theorem
if f′(x) ≥ g′(x) for all x > 0, and if f(0) = g(0), then f(x) ≥ g(x) for all x > 0,
which can be proved in a similar way.
Proof
This principle can be proven by considering the function h(x) = f(x) − g(x). If we take the derivative, we notice that for x > 0,
h′(x) = f′(x) − g′(x) > 0.
Also notice that h(0) = f(0) − g(0) = 0. Combining these observations, we can use the mean value theorem on the interval [0, x] and get, for some c with 0 < c < x,
h′(c) = (h(x) − h(0)) / (x − 0) = h(x) / x.
By assumption, h′(c) > 0, so multiplying both sides by x gives h(x) > 0. This implies f(x) > g(x).
Generalizations
The statement of the racetrack principle can be slightly generalized as follows:
if f′(x) > g′(x) for all x > a, and if f(a) = g(a), then f(x) > g(x) for all x > a;
as above, substituting ≥ for > produces the theorem
if f′(x) ≥ g′(x) for all x > a, and if f(a) = g(a), then f(x) ≥ g(x) for all x > a.
Proof
This generalization can be proved from the racetrack principle as follows:
Consider the functions f2(x) = f(x + a) and g2(x) = g(x + a).
Given that f′(x) > g′(x) for all x > a, and that f(a) = g(a),
we have f2′(x) > g2′(x) for all x > 0, and f2(0) = g2(0), which by the proof of the racetrack principle above means f2(x) > g2(x) for all x > 0, and hence f(x) > g(x) for all x > a.
Application
The racetrack principle can be used to prove a lemma necessary to show that the exponential function grows faster than any power function. The lemma required is that
e^x > x
for all real x. This is obvious for x < 0, but the racetrack principle is required for x ≥ 0. To see how it is used we consider the functions
f(x) = e^x
and
g(x) = x + 1.
Notice that f(0) = g(0) = 1 and that
f′(x) = e^x ≥ 1 = g′(x) for x ≥ 0,
because the exponential function is always increasing (monotonic), so f′(x) ≥ g′(x). Thus by the racetrack principle f(x) ≥ g(x). Thus,
e^x ≥ x + 1 > x
for all x ≥ 0.
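A quick numerical sanity check of the lemma on a grid (illustrative only; the racetrack principle is what actually proves it):

```python
import math

# f(x) = e^x and g(x) = x + 1 satisfy f(0) = g(0) = 1 and f'(x) = e^x >= 1 = g'(x)
# for x >= 0, so the racetrack principle gives e^x >= x + 1 (hence e^x > x) there.
xs = [k / 100 for k in range(0, 1001)]        # grid on [0, 10]
assert all(math.exp(x) >= 1 for x in xs)      # derivative hypothesis f'(x) >= g'(x)
assert all(math.exp(x) >= x + 1 for x in xs)  # conclusion f(x) >= g(x)
```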
References
Deborah Hughes-Hallet, et al., Calculus.
Differential calculus
Mathematical principles |
https://en.wikipedia.org/wiki/Kreso%20Kovacec | Kreso Kovacec (born 20 July 1969) is a German retired professional footballer who played as a forward. He spent three seasons in the Bundesliga with Hansa Rostock.
Career statistics
References
External links
1969 births
Living people
People from Krapina
German people of Croatian descent
German men's footballers
Men's association football forwards
Bundesliga players
2. Bundesliga players
VfL Bochum players
VfL Bochum II players
SC Concordia von 1907 players
Hannover 96 players
Tennis Borussia Berlin players
FC Hansa Rostock players
FC Augsburg players
SV Elversberg players |
https://en.wikipedia.org/wiki/Tarski%27s%20high%20school%20algebra%20problem | In mathematical logic, Tarski's high school algebra problem was a question posed by Alfred Tarski. It asks whether there are identities involving addition, multiplication, and exponentiation over the positive integers that cannot be proved using eleven axioms about these operations that are taught in high-school-level mathematics. The question was solved in 1980 by Alex Wilkie, who showed that such unprovable identities do exist.
Statement of the problem
Tarski considered the following eleven axioms about addition ('+'), multiplication ('·'), and exponentiation to be standard axioms taught in high school:
x + y = y + x
(x + y) + z = x + (y + z)
x · 1 = x
x · y = y · x
(x · y) · z = x · (y · z)
x · (y + z) = x · y + x · z
1^x = 1
x^1 = x
x^(y + z) = x^y · x^z
(x · y)^z = x^z · y^z
(x^y)^z = x^(y · z)
These eleven axioms, sometimes called the high school identities, are related to the axioms of a bicartesian closed category or an exponential ring. Tarski's problem then becomes: are there identities involving only addition, multiplication, and exponentiation, that are true for all positive integers, but that cannot be proved using only the axioms 1–11?
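The eleven axioms can be sanity-checked numerically over small positive integers (a spot check of their truth, of course, not a proof of anything about provability):

```python
import random

def check_high_school_identities(trials=200):
    """Verify the eleven high school identities on random positive integers."""
    axioms = [
        lambda x, y, z: x + y == y + x,
        lambda x, y, z: (x + y) + z == x + (y + z),
        lambda x, y, z: x * 1 == x,
        lambda x, y, z: x * y == y * x,
        lambda x, y, z: (x * y) * z == x * (y * z),
        lambda x, y, z: x * (y + z) == x * y + x * z,
        lambda x, y, z: 1 ** x == 1,
        lambda x, y, z: x ** 1 == x,
        lambda x, y, z: x ** (y + z) == x ** y * x ** z,
        lambda x, y, z: (x * y) ** z == x ** z * y ** z,
        lambda x, y, z: (x ** y) ** z == x ** (y * z),
    ]
    for _ in range(trials):
        x, y, z = (random.randint(1, 5) for _ in range(3))
        assert all(a(x, y, z) for a in axioms)
    return True
```

Wilkie's theorem says that despite all eleven holding identically over the positive integers, some true identities in +, ·, and exponentiation cannot be derived from them.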
Example of a provable identity
Since the axioms seem to list all the basic facts about the operations in question, it is not immediately obvious that there should be anything provably true one can state using only the three operations, but cannot prove with the axioms. However, proving seemingly innocuous statements can require long proofs using only the above eleven axioms. Consider the following proof that (x + 1)^2 = x^2 + 2 · x + 1:
Strictly we should not write sums of more than two terms without brackets, and therefore a completely formal proof would prove a fully bracketed form of the identity and would have an extra set of brackets in each line.
The length of proofs is not an issue; proofs of similar identities to that above for things like (x + 1)^3 would take a lot of lines, but would really involve little more than the above proof.
History of the problem
The list of eleven axioms can be found explicitly written down in the works of Richard Dedekind, although they were obviously known and used by mathematicians long before then. Dedekind was the first, though, who seemed to be asking if these axioms were somehow sufficient to tell us everything we could want to know about the integers. The question was put on a firm footing as a problem in logic and model theory sometime in the 1960s by Alfred Tarski, and by the 1980s it had become known as Tarski's high school algebra problem.
Solution
In 1980 Alex Wilkie proved that not every identity in question can be proved using the axioms above. He did this by explicitly finding such an identity. By introducing new function symbols corresponding to polynomials that map positive numbers to positive numbers he proved this identity, and showed that these functions together with the eleven axioms above were both sufficient and necessary to prove it. The identity in question is
This |
https://en.wikipedia.org/wiki/Unisolvent%20functions | In mathematics, a set of n functions f1, f2, ..., fn is unisolvent (meaning "uniquely solvable") on a domain Ω if the vectors
are linearly independent for any choice of n distinct points x1, x2 ... xn in Ω. Equivalently, the collection is unisolvent if the matrix F with entries fi(xj) has nonzero determinant: det(F) ≠ 0 for any choice of distinct xj's in Ω. Unisolvency is a property of vector spaces, not just particular sets of functions. That is, a vector space of functions of dimension n is unisolvent if given any basis (equivalently, a linearly independent set of n functions), the basis is unisolvent (as a set of functions). This is because any two bases are related by an invertible matrix (the change of basis matrix), so one basis is unisolvent if and only if any other basis is unisolvent.
Unisolvent systems of functions are widely used in interpolation since they guarantee a unique solution to the interpolation problem. The set of polynomials of degree at most n − 1 (which form a vector space of dimension n) is unisolvent by the unisolvence theorem.
Examples
1, x, x2 is unisolvent on any interval by the unisolvence theorem
1, x2 is unisolvent on [0, 1], but not unisolvent on [−1, 1]
1, cos(x), cos(2x), ..., cos(nx), sin(x), sin(2x), ..., sin(nx) is unisolvent on [−π, π]
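The second example can be checked directly from the determinant criterion: at symmetric points ±a the collocation matrix of {1, x²} has two equal rows. A small sketch (the helper `det` is a naive Laplace expansion we define ourselves, fine for tiny matrices):

```python
from fractions import Fraction

def det(m):
    # Determinant by Laplace expansion along the first row.
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def collocation_det(funcs, points):
    # Matrix F with entries f_j(x_i); unisolvence <=> det(F) != 0 at distinct points.
    return det([[f(x) for f in funcs] for x in points])

basis3 = [lambda x: 1, lambda x: x, lambda x: x * x]  # {1, x, x^2}
basis2 = [lambda x: 1, lambda x: x * x]               # {1, x^2}

d3 = collocation_det(basis3, [Fraction(-1), Fraction(0), Fraction(2)])  # Vandermonde: 6
d2 = collocation_det(basis2, [Fraction(-1, 2), Fraction(1, 2)])         # 0: ±a collide
```

The zero determinant at ±1/2 shows {1, x²} is not unisolvent on [−1, 1], while any distinct points on [0, 1] are fine because x² is injective there.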
Unisolvent functions are used in linear inverse problems.
Unisolvence in the finite element method
When using "simple" functions to approximate an unknown function, such as in the finite element method, it is useful to consider a set of functionals that act on a finite dimensional vector space of functions, usually polynomials. Often, the functionals are given by evaluation at points in Euclidean space or some subset of it.
For example, let P_n be the space of univariate polynomials of degree n or less, and let L_i, for i = 0, ..., n, be defined by evaluation at n + 1 equidistant points on the unit interval [0, 1]. In this context, the unisolvence of {L_0, ..., L_n} with respect to P_n means that {L_0, ..., L_n} is a basis for P_n*, the dual space of P_n. Equivalently, and perhaps more intuitively, unisolvence here means that given any set of values {c_0, ..., c_n}, there exists a unique polynomial q in P_n such that L_i(q) = c_i. Results of this type are widely applied in polynomial interpolation; given any function f on [0, 1], by letting c_i = L_i(f), we can find a polynomial q that interpolates f at each of the points: q(x_i) = f(x_i).
Dimensions
Systems of unisolvent functions are much more common in 1 dimension than in higher dimensions. In dimension d = 2 and higher (Ω ⊂ Rd), the functions f1, f2, ..., fn cannot be unisolvent on Ω if there exists a single open set on which they are all continuous. To see this, consider moving points x1 and x2 along continuous paths in the open set until they have switched positions, such that x1 and x2 never intersect each other or any of the other xi. The determinant of the resulting system (with x1 and x2 swapped) is the negative of the determinant of the initial system. Since the functions fi are continuous, the intermediate value theorem implies that some inter |
https://en.wikipedia.org/wiki/Scene%20statistics | Scene statistics is a discipline within the field of perception. It is concerned with the statistical regularities related to scenes. It is based on the premise that a perceptual system is designed to interpret scenes.
Biological perceptual systems have evolved in response to the physical properties of natural environments; therefore, natural scenes receive a great deal of attention.
Natural scene statistics are useful for defining the behavior of an ideal observer in a natural task, typically by incorporating signal detection theory, information theory, or estimation theory.
Within-domain versus across-domain
Geisler (2008) distinguishes between four kinds of domains: (1) Physical environments, (2) Images/Scenes, (3) Neural responses, and (4) Behavior.
Within the domain of images/scenes, one can study the characteristics of information related to redundancy and efficient coding.
Across-domain statistics determine how an autonomous system should make inferences about its environment, process information, and control its behavior. To study these statistics, it is necessary to sample or register information in multiple domains simultaneously.
Applications
Prediction of picture and video quality
One of the most successful applications of natural scene statistics models has been perceptual picture and video quality prediction. For example, the Visual Information Fidelity (VIF) algorithm, which is used to measure the degree of distortion of pictures and videos, is used extensively by the image and video processing communities to assess perceptual quality, often after processing, such as compression, which can degrade the appearance of a visual signal. The premise is that the scene statistics are changed by distortion, and that the visual system is sensitive to the changes in the scene statistics. VIF is heavily used in the streaming television industry. Other popular picture quality models that use natural scene statistics include BRISQUE and NIQE, both of which are no-reference models, since they do not require a reference picture against which to measure quality.
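The kind of scene statistic such models compute can be sketched as a mean-subtracted, contrast-normalized (MSCN) transform. BRISQUE and NIQE use Gaussian-weighted local windows; the illustrative version below substitutes a plain 3×3 box window:

```python
def mscn(img, c=1.0):
    """Simplified MSCN coefficients of a 2D image given as a list of lists.

    Each pixel is divisively normalized by its local mean and standard
    deviation; c is a small stabilizing constant avoiding division by zero.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            patch = [img[a][b]
                     for a in range(max(0, i - 1), min(h, i + 2))
                     for b in range(max(0, j - 1), min(w, j + 2))]
            mu = sum(patch) / len(patch)
            var = sum((v - mu) ** 2 for v in patch) / len(patch)
            out[i][j] = (img[i][j] - mu) / (var ** 0.5 + c)
    return out
```

On natural images the histogram of MSCN coefficients is close to a generalized Gaussian; distortions change its shape, which is the regularity BRISQUE's features exploit.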
References
Bibliography
Field, D. J. (1987). Relations between the statistics of natural images and the response properties of cortical cells. Journal of the Optical Society of America A 4, 2379–2394.
Ruderman, D. L., & Bialek, W. (1994). Statistics of Natural Images – Scaling in the Woods. Physical Review Letters, 73(6), 814–817.
Brady, N., & Field, D. J. (2000). Local contrast in natural images: normalisation and coding efficiency. Perception, 29, 1041–1055.
Frazor, R.A., Geisler, W.S. (2006) Local luminance and contrast in natural images. Vision Research, 46, 1585–1598.
Mante et al. (2005) Independence of luminance and contrast in natural scenes and in the early visual system. Nature Neuroscience, 8 (12) 1690–1697.
Bell, A. J., & Sejnowski, T. J. (1997). The "independent components" of natural scenes are edge filters. Vision Research, 37, 3327–3338.
Olshausen, B. A., & Field, D. J. (1 |
https://en.wikipedia.org/wiki/2008%E2%80%9309%20NK%20Dinamo%20Zagreb%20season | This article shows statistics of individual players for the football club Dinamo Zagreb It also lists all matches that Dinamo Zagreb will play in the 2008–09 season.
Events
26 April: Midfielder Luka Modrić agrees to a five-year contract with Premier League club Tottenham Hotspur in a transfer worth €21m, meaning he will leave Dinamo after the last game of the 2007–08 season, the Croatian Cup second leg final on 14 May.
9 May: The Dinamo board of directors holds a press conference concerning Zagreb Mayor Milan Bandić's announcement that Maksimir stadium could be torn down and replaced by a new stadium built at a different location. The club's chairman Mirko Barišić, club legends Slaven Zambata and Igor Cvitanović, as well as current vice-captain Igor Bišćan and a Bad Blue Boys spokesman, all express the view that the new stadium should be built in Maksimir.
14 May: Immediately after the cup final game against Hajduk Split at Poljud stadium, manager Zvonimir Soldo resigns, citing "bad atmosphere at the club".
20 May: Branko Ivanković appointed as manager again, only five months after being sacked and replaced by Zvonimir Soldo.
21 May: Defender Hrvoje Čale signs a four-year contract with the Turkish club Trabzonspor in a transfer worth €2.2m.
23 May: Goalkeeper Tomislav Butina signs a two-year contract with Dinamo on a free transfer after being released by Greek club Olympiacos. Butina returns to the club after spending five years playing abroad, 15 years after making his top-flight debut for the Blues in 1993.
25 May: Defensive midfielder Ognjen Vukojević signs a five-year contract with Ukrainian club Dynamo Kiev on a transfer worth €6m.
4 June: Attacking midfielder Guillermo Suárez signs for Dinamo from Argentina's Tigre for an undisclosed fee.
5 June: Defender Etto agrees to a new four-year contract with Dinamo.
5 June: The 6th edition of the annual Mladen Ramljak Memorial Tournament hosted by Dinamo begins, featuring youth squads from Croatia and abroad. Dinamo youngsters beat Bulgarian side Litex Lovech 3–1.
6 June: A UEFA commission inspects all venues expected to host the following Prva HNL season's games and assigns Maksimir stadium a category 3 rating, declaring it fit to host European games.
6 June: Dinamo youth squad beats Osijek 4–1 and secures a place in the tournament final.
6 June: Forward Davor Vugrinec agrees to leave the club for local rivals NK Zagreb on a free transfer.
8 June: Dinamo youngsters win the Mladen Ramljak Memorial by beating Hajduk Split 1–0.
12 June: Dinamo sign midfielder Pedro Morales from Universidad de Chile for €1.6m.
13 June: Dinamo sign defender Luis Ibáñez from Boca Juniors on a five-year contract in a transfer worth €650,000.
15 June: Goalkeeper Georg Koch is released and joins Austrian club Rapid Vienna on a free transfer.
3 July: Fearing hooliganism, Austrian organisers cancel the already scheduled pre-season friendly with Polish side Lech Poznań.
4 July: Two months after Newcastle United offered € |
https://en.wikipedia.org/wiki/Colette%20Rolland | Colette Rolland (born 1943, in Dieupentale, Tarn-et-Garonne, France) is a French computer scientist and Professor of Computer Science in the department of Mathematics and Informatics at the University of Paris 1 Pantheon-Sorbonne, and a leading researcher in the area of information and knowledge systems, known for her work on meta-modeling, particularly goal modelling and situational method engineering.
Biography
In 1966 she studied applied mathematics at the University of Nancy, where she received her PhD in 1971. In 1973 she was appointed Professor at the University of Nancy, Department of Computer Science. In 1979 she became professor at University of Paris 1 Pantheon-Sorbonne Department of Mathematics and Informatics.
She has been involved in a large number of European research projects and used to lead cooperative research projects with companies. She is currently Professor Emeritus of Computer Science in the department of Mathematics and Informatics.
Rolland is on the editorial board of a number of journals including Journal of Information Systems, Journal on Information and Software Technology, Requirements Engineering Journal, Journal of Networking and Information Systems, Data and Knowledge Engineering Journal, Journal of Data Base Management and Journal of Intelligent Information Systems. She is the French representative in IFIP TC8 on Information Systems and was co-chair and chairperson of IFIP WG8.1 for nine years.
Rolland has been awarded a number of prizes including the IFIP Silver Core, the IFIP Service Award, the Belgian prize of the Fondation Francqui, and the European 'Information Systems' prize.
Work
Rolland's research interests are in the areas of information modeling, databases, temporal data modeling, object-oriented analysis and design, requirements engineering and especially change engineering, method engineering, CASE and CAME tools, change management and enterprise knowledge development.
Publications
Rolland is the co-author of 7 textbooks, editor of 25 proceedings, and author or co-author of over 280 invited and refereed papers.
Books, a selection:
1988. Temporal Aspects in Information Systems. With F. Bodart. Elsevier Science Ltd.
1991. Automatic Tools for Designing Office Information Systems: The Todos Approach. Research Reports ESPRIT, Project 813, Todos, Vol. 1. With B. Pernici.
1992. Information System Concepts: Improving the Understanding, Proceedings. With Eckhard D. Falkenberg. IFIP Transactions A, Computer Science and Technology.
1993. Advanced Information Systems Engineering. With F. Bodart. Springer.
1994. A Natural Language Approach For Requirements Engineering. With C. Proix.
1996. Facilitating "Fuzzy to Formal" Requirements Modelling. With Janis Bubenko, P. Loucopoulos and V. Deantonellis.
1998. A framework of information system concepts. The FRISCO report. With Eckhard D. Falkenberg, Paul Lindgreen, Björn E. Nilsson, J.L. Han Oei, Ronald Stamper, Frans J M Van Assche, Alexa |
https://en.wikipedia.org/wiki/RSNF | RSNF may refer to:
Royal Saudi Naval Forces
Ring sum normal form, a special normal form in Boolean mathematics |
https://en.wikipedia.org/wiki/Double%20wedge | In geometry, a double wedge is the (closure of) the symmetric difference of two half-spaces whose boundaries are not parallel to each other. For instance, in the Cartesian plane, the union of the positive and negative quadrants forms a double wedge, and more generally in two dimensions a double wedge consists of the set of points within two vertical angles defined by a pair of lines. In projective geometry double wedges are the projective duals of line segments.
References
Euclidean geometry |
https://en.wikipedia.org/wiki/StatView | StatView is a statistics application originally released for Apple Macintosh computers in 1985.
StatView was one of the first statistics applications to have a graphical user interface, capitalizing on the Macintosh's graphical environment. A user saw a spreadsheet of his or her data, comprising columns that could be integers, long integers, real numbers, strings, or categories, and rows that were usually cases (such as individual people for psychology data). Columns had informative headings; rows were numbered. Category data looked like strings (e.g., a column headed "sex" would have entries of "male" and "female", but these were coded by the application as integers). Category data were used to perform inferential statistical tests such as t tests, ANOVAs, and chi square tests. To calculate statistics, a user clicked on particular column headings, designating them as an x value and one or more y values. Then the user used the application's menus to choose descriptive statistics or inferential statistics.
For example, a user's spreadsheet might contain columns for names of a participant in a survey (a string), sex (a category variable), IQ (integer), and years using a PC (real). By designating number of years using a PC as an x variable and IQ as a y variable, the user could then choose from a menu to perform a regression. The user then had to choose from another menu how to view the regression in a separate window, either as a table, in which case the regression equation and ANOVA were displayed, or as a scattergram, in which case a graph of the data and the regression line were shown. Contents of the analysis window could be copied either as text or as a PICT.
StatView was initially distributed by BrainPower Inc of California. It grew up with the Macintosh, changing owners along the way. StatView 3 to 5 were distributed by Abacus Corporation. It was then bought by SAS, which discontinued it in favor of JMP. The application continued to run under Classic emulation with Apple's Mac OS X, but could not run on Intel Macintoshes. It still runs under OS X 10.7.5 using the Basilisk II emulator.
StatView 2 was called StatView SE + Graphics. It included ANOVA with one repeated-measure and, remarkably, a factor analysis. In StatView 4, the user approach changed from touching the to-be-analyzed data in the spreadsheet to clicking on column names in a separate window. This lack of immediacy was compensated for by an increase in the number of statistical tests that could be performed and in the power of existing tests. For example, multiway repeated-measures factors could be included in ANOVAs, with the only limit being the memory allocated to the application. There were ANCOVA and MANOVA too. StatView 4 also became available for PCs.
StatView 5.01 for Windows runs without issue on Windows XP; on Windows 7 Home and Pro (both 32- and 64-bit systems), it appears to run only under XP Mode. It appears to run without
https://en.wikipedia.org/wiki/Kullback%27s%20inequality | In information theory and statistics, Kullback's inequality is a lower bound on the Kullback–Leibler divergence expressed in terms of the large deviations rate function. If P and Q are probability distributions on the real line, such that P is absolutely continuous with respect to Q, i.e. P << Q, and whose first moments exist, then
where is the rate function, i.e. the convex conjugate of the cumulant-generating function, of , and is the first moment of
The Cramér–Rao bound is a corollary of this result.
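As a quick numerical illustration (a sketch: the closed-form normal-distribution KL divergence and cumulant-generating function below are standard, but the specific parameter values are arbitrary choices), take Q = N(0, 1), whose cumulant-generating function is Ψ_Q(θ) = θ²/2, so the rate function is Ψ*_Q(μ) = μ²/2:

```python
import math

# Numerical check of Kullback's inequality with Q = N(0, 1) and
# P = N(mu_p, sigma_p^2); the parameter values are arbitrary.

def kl_normal(mu_p, sigma_p):
    """D_KL(P || Q) for P = N(mu_p, sigma_p^2), Q = N(0, 1), in closed form."""
    return 0.5 * (sigma_p**2 + mu_p**2 - 1.0 - math.log(sigma_p**2))

def rate_function(mu):
    """Convex conjugate of Psi_Q(theta) = theta^2 / 2."""
    return 0.5 * mu**2

mu_p, sigma_p = 1.5, 0.7
lhs = kl_normal(mu_p, sigma_p)
rhs = rate_function(mu_p)   # depends on P only through its first moment
assert lhs >= rhs           # Kullback's inequality
print(f"KL = {lhs:.4f} >= Psi*(mu) = {rhs:.4f}")
```

Equality holds here when σ_p = 1, since then P itself belongs to the natural exponential family generated by Q.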
Proof
Let P and Q be probability distributions (measures) on the real line, whose first moments exist, and such that P << Q. Consider the natural exponential family of Q given by
Q_θ(A) = (1 / M_Q(θ)) ∫_A e^{θx} dQ(x)
for every measurable set A, where M_Q(θ) = ∫ e^{θx} dQ(x) is the moment-generating function of Q. (Note that Q₀ = Q.) Then
D_KL(P ∥ Q) = D_KL(P ∥ Q_θ) + ∫ log (dQ_θ / dQ) dP.
By Gibbs' inequality we have D_KL(P ∥ Q_θ) ≥ 0, so that
D_KL(P ∥ Q) ≥ ∫ log (dQ_θ / dQ) dP.
Simplifying the right side, we have, for every real θ where M_Q(θ) < ∞:
D_KL(P ∥ Q) ≥ μ′₁(P) θ − Ψ_Q(θ),
where μ′₁(P) is the first moment, or mean, of P, and Ψ_Q = log M_Q is called the cumulant-generating function. Taking the supremum completes the process of convex conjugation and yields the rate function:
D_KL(P ∥ Q) ≥ sup_θ { μ′₁(P) θ − Ψ_Q(θ) } = Ψ*_Q(μ′₁(P)).
Corollary: the Cramér–Rao bound
Start with Kullback's inequality. Let Xθ be a family of probability distributions on the real line indexed by the real parameter θ, and satisfying certain regularity conditions. Then
lim_{h→0} (1/h²) D_KL(X_{θ+h} ∥ X_θ) ≥ lim_{h→0} (1/h²) Ψ*_θ(μ(θ+h)),
where Ψ*_θ is the convex conjugate of the cumulant-generating function of X_θ and μ(θ+h) is the first moment of X_{θ+h}.
Left side
The left side of this inequality can be simplified as follows:
lim_{h→0} (1/h²) D_KL(X_{θ+h} ∥ X_θ) = (1/2) I(θ),
which is half the Fisher information of the parameter θ.
Right side
The right side of the inequality can be developed as follows:
lim_{h→0} (1/h²) Ψ*_θ(μ(θ+h)) = lim_{h→0} (1/h²) sup_t { μ(θ+h) t − Ψ_θ(t) }.
This supremum is attained at a value of t = τ where the first derivative of the cumulant-generating function satisfies Ψ′_θ(τ) = μ(θ+h); but we have Ψ′_θ(0) = μ(θ) and Ψ″_θ(0) = Var(X_θ). Moreover,
lim_{h→0} (1/h²) Ψ*_θ(μ(θ+h)) = (μ′(θ))² / (2 Var(X_θ)).
Putting both sides back together
We have:
(1/2) I(θ) ≥ (μ′(θ))² / (2 Var(X_θ)),
which can be rearranged as:
Var(X_θ) ≥ (μ′(θ))² / I(θ).
See also
Kullback–Leibler divergence
Cramér–Rao bound
Fisher information
Large deviations theory
Convex conjugate
Rate function
Moment-generating function
Notes and references
Information theory
Statistical inequalities
Estimation theory |
https://en.wikipedia.org/wiki/1932%E2%80%9333%20Toronto%20Maple%20Leafs%20season | The 1932–33 Toronto Maple Leafs season was the team’s 16th season in the National Hockey League (NHL).
Regular season
Final standings
Record vs. opponents
Schedule and results
Player statistics
Regular season
Scoring
Goaltending
Playoffs
Scoring
Goaltending
Playoffs
The Maple Leafs met the Boston Bruins in the second round in a best of five series and won 3–2. In the finals, they lost to the Rangers in a best of five series 3–1.
New York wins best-of-five series 3–1.
Awards and records
King Clancy, Defense, Second Team NHL All-Star
Charlie Conacher, Right Wing, Second Team NHL All-Star
Busher Jackson, Left Wing, Second Team NHL All-Star
Dick Irvin, Coach, Second Team NHL All-Star
Transactions
January 3, 1933: Acquired Bill Thoms from the Boston Bruins for Harold Darragh
See also
1932–33 NHL season
References
Maple Leafs on Hockey Database
Toronto Maple Leafs seasons
https://en.wikipedia.org/wiki/Van%20H.%20Vu | Van H. Vu () is a Vietnamese mathematician, Percey F. Smith Professor of Mathematics at Yale University.
Education and career
Vu was born in Hanoi (Vietnam) in 1970. He went to special math classes for gifted children at Chu Van An and Hanoi-Amsterdam high schools.
In 1987, he went to Hungary for his undergraduate studies, and in 1994, obtained his M.Sc in mathematics at the Faculty of Sciences of the Eötvös University, Budapest. His thesis supervisor was Tamás Szőnyi. He received his Ph.D. at Yale University in 1998 under the direction of László Lovász. He worked as a postdoc at IAS and Microsoft Research (1998-2001). He joined the University of California, San Diego as an assistant professor in 2001 and was promoted to full professor in 2005. In Fall 2005, he moved to Rutgers University and stayed there until he joined Yale in Fall 2011. Vu was a member at IAS on three occasions (1998, 2005, 2007), the last time, in 2007, as the leader of the special program Arithmetic Combinatorics.
Contributions
In his PhD thesis, Vu, together with Kim, developed a theory for concentration of measure of polynomials (and non-Lipschitz functions in general).
Later, as an application, he established a refinement of Waring's problem.
In 2003, Vu and Szemerédi solved the Erdős–Folkman problem, answering the following question: how dense must a set of positive integers be so that every sufficiently large integer can be represented as a subsum?
In 2006, Tao and Vu published their book Additive Combinatorics. Together, they developed the inverse Littlewood–Offord theory for anti-concentration.
In 2007, with Johansson and Kahn, Vu solved the Shamir conjecture in random graph theory. Among other results, they established the sharp threshold for the existence of a perfect matching in a random hypergraph.
In 2010, Terence Tao and Vu solved the circular law conjecture in random matrix theory, which established the non-Hermitian version of the Wigner semicircle law.
In 2011, they proved the "four moment" theorem, establishing universality of the local law of eigenvalues of random matrices. Similar results were obtained around the same time by László Erdős, Horng-Tzer Yau, and Jun Yin.
Awards and honors
As a junior researcher, Vu was a recipient of an NSF Career Award and a Sloan fellowship.
In 2008 he was awarded the Pólya Prize of the Society for Industrial and Applied Mathematics for his work on concentration of measure.
In 2012, Vu was awarded the Fulkerson Prize (jointly with Anders Johansson and Jeff Kahn) for the solution of Shamir problem. Also in 2012, he became a fellow of the American Mathematical Society. In the same year, he was a Medallion lecturer at the 8th World congress in Probability and Statistics, Istanbul.
In 2014, he was an invited speaker at the ICM (Seoul). In 2020, he became a fellow of the Institute of Mathematical Statistics.
By Mathscinet statistics (as in 2022), he ranks third among the most cited mathematicians with PhD in 1 |
https://en.wikipedia.org/wiki/Integrally%20closed%20domain | In commutative algebra, an integrally closed domain A is an integral domain whose integral closure in its field of fractions is A itself. Spelled out, this means that if x is an element of the field of fractions of A which is a root of a monic polynomial with coefficients in A, then x is itself an element of A. Many well-studied domains are integrally closed, as shown by the following chain of class inclusions:
An explicit example is the ring of integers Z, a Euclidean domain. All regular local rings are integrally closed as well.
Basic properties
Let A be an integrally closed domain with field of fractions K and let L be a field extension of K. Then x∈L is integral over A if and only if it is algebraic over K and its minimal polynomial over K has coefficients in A. In particular, this means that any element of L integral over A is a root of a monic polynomial in A[X] that is irreducible in K[X].
If A is a domain contained in a field K, we can consider the integral closure of A in K (i.e. the set of all elements of K that are integral over A). This integral closure is an integrally closed domain.
Integrally closed domains also play a role in the hypothesis of the Going-down theorem. The theorem states that if A⊆B is an integral extension of domains and A is an integrally closed domain, then the going-down property holds for the extension A⊆B.
Examples
The following are integrally closed domains.
A principal ideal domain (in particular: the integers and any field).
A unique factorization domain (in particular, any polynomial ring over a field, over the integers, or over any unique factorization domain).
A GCD domain (in particular, any Bézout domain or valuation domain).
A Dedekind domain.
A symmetric algebra over a field (since every symmetric algebra is isomorphic to a polynomial ring in several variables over a field).
Let k be a field of characteristic not 2 and S = k[x1, ..., xn] a polynomial ring over it. If f is a square-free nonconstant polynomial in S, then S[y]/(y² − f) is an integrally closed domain. In particular, k[x0, ..., xr]/(x0² + x1² + ... + xr²) is an integrally closed domain if r ≥ 2.
To give a non-example, let k be a field and A = k[t², t³] ⊂ k[t] (A is the subalgebra generated by t² and t³.) A is not integrally closed: it has the field of fractions k(t), and the monic polynomial X² − t² in the variable X has root t, which is in the field of fractions but not in A. This is related to the fact that the plane curve y² = x³ has a singularity at the origin.
Another domain which is not integrally closed is A = Z[√5]; it does not contain the element (1 + √5)/2 of its field of fractions, which satisfies the monic polynomial X² − X − 1.
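The failure of integral closure in the last example can be checked mechanically. The sketch below uses exact rational arithmetic, taking as witness the standard element φ = (1 + √5)/2 (the golden ratio); the pair representation of elements a + b√5 is an ad hoc choice for this check:

```python
from fractions import Fraction

# Elements of Q(sqrt(5)) represented as pairs (a, b) meaning a + b*sqrt(5);
# the representation is ad hoc, chosen just for this check.
def mul(x, y):
    a, b = x
    c, d = y
    return (a * c + 5 * b * d, a * d + b * c)

phi = (Fraction(1, 2), Fraction(1, 2))   # (1 + sqrt(5)) / 2
phi_sq = mul(phi, phi)

# phi^2 - phi - 1 = 0, so phi is integral over Z[sqrt(5)] ...
assert (phi_sq[0] - phi[0] - 1, phi_sq[1] - phi[1]) == (0, 0)
# ... yet phi is not in Z[sqrt(5)]: its coordinates are not integers.
assert phi[0].denominator != 1
```

The same pattern (a monic polynomial whose root lies in the fraction field but not in the ring) is exactly what the definition of integral closure rules out.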
Noetherian integrally closed domain
For a noetherian local domain A of dimension one, the following are equivalent.
A is integrally closed.
The maximal ideal of A is principal.
A is a discrete valuation ring (equivalently A is Dedekind.)
A is a regular local ring.
Let A be a noetherian integral domain. Then A is integrally closed if and only if (i) A is the intersection of all localizations over prime ideals of height 1 and ( |
https://en.wikipedia.org/wiki/Gaussian%20logarithm | In mathematics, addition and subtraction logarithms or Gaussian logarithms can be utilized to find the logarithms of the sum and difference of a pair of values whose logarithms are known, without knowing the values themselves.
Their mathematical foundations trace back to Zecchini Leonelli and Carl Friedrich Gauss in the early 1800s.
The operations of addition and subtraction can be calculated by the formulas
log_b(|X| + |Y|) = x + s_b(y − x),
log_b(||X| − |Y||) = x + d_b(y − x),
where x = log_b |X|, y = log_b |Y|, the "sum" function is defined by s_b(z) = log_b(1 + b^z), and the "difference" function by d_b(z) = log_b(|1 − b^z|). The functions s_b and d_b are also known as Gaussian logarithms.
For natural logarithms (b = e) the following identities with hyperbolic functions exist:
s_e(z) = z/2 + ln(2 cosh(z/2)),
d_e(z) = z/2 + ln(2 |sinh(z/2)|).
This shows that s_e has a Taylor expansion in which all but the first term are rational and all odd terms except the linear one are zero.
The simplification of multiplication, division, roots, and powers is counterbalanced by the cost of evaluating these functions for addition and subtraction.
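In the natural-logarithm case these functions are simple to state in code. The sketch below (base e; the short names s and d are local shorthand, and the numeric values are arbitrary) recovers ln(X + Y) and ln(X − Y) from x = ln X and y = ln Y alone, without ever forming X or Y:

```python
import math

# Gaussian-logarithm helpers for base e; only the logarithms x, y are used.
def s(z):
    """'Sum' function: ln(1 + e^z)."""
    return math.log1p(math.exp(z))

def d(z):
    """'Difference' function: ln|1 - e^z| (undefined at z = 0)."""
    return math.log(abs(1.0 - math.exp(z)))

X, Y = 40.0, 2.5                     # arbitrary positive values with X > Y
x, y = math.log(X), math.log(Y)
assert math.isclose(x + s(y - x), math.log(X + Y))
assert math.isclose(x + d(y - x), math.log(X - Y))
```

math.log1p keeps the "sum" function accurate when e^z is tiny; NumPy's logaddexp implements the same log-domain addition directly.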
See also
Softplus operation in neural networks
Zech's logarithm
Logarithm table
Logarithmic number system (LNS)
References
Further reading
(NB. Contains a table of Gaussian logarithms lg(1 + 10^−x).)
Logarithms |
https://en.wikipedia.org/wiki/Anton%20Vriesde | Anton Vriesde (born 18 October 1968) is a retired Dutch footballer. A defender, he played for several clubs in the Netherlands, and for KFC Uerdingen and VfL Bochum in Germany.
Career statistics
References
External links
Profile at vi.nl
1968 births
Living people
Dutch men's footballers
ADO Den Haag players
MVV Maastricht players
KFC Uerdingen 05 players
VfL Bochum players
Helmond Sport players
Dutch sportspeople of Surinamese descent
Footballers from The Hague
Men's association football defenders |
https://en.wikipedia.org/wiki/1977%E2%80%9378%20Detroit%20Red%20Wings%20season | The 1977–78 Detroit Red Wings season was the Red Wings' 46th season, 52nd overall for the franchise.
Regular season
Final standings
Record vs. opponents
Schedule and results
Player statistics
Regular season
Scoring
Goaltending
Playoffs
Scoring
Goaltending
Note: GP = Games played; G = Goals; A = Assists; Pts = Points; +/− = Plus-minus; PIM = Penalty minutes; PPG = Power-play goals; SHG = Short-handed goals; GWG = Game-winning goals;
MIN = Minutes played; W = Wins; L = Losses; T = Ties; GA = Goals against; GAA = Goals-against average; SO = Shutouts;
Transactions
The Red Wings were involved in the following transactions during the 1977–78 season.
Trades
Free Agents
Draft picks
Detroit's picks at the 1977 NHL amateur draft, held at the Mount Royal Hotel in Montreal, Quebec.
See also
1977–78 NHL season
References
Detroit Red Wings seasons
https://en.wikipedia.org/wiki/Concentration%20dimension | In mathematics — specifically, in probability theory — the concentration dimension of a Banach space-valued random variable is a numerical measure of how "spread out" the random variable is compared to the norm on the space.
Definition
Let (B, ‖·‖) be a Banach space and let X be a Gaussian random variable taking values in B. That is, for every linear functional ℓ in the dual space B∗, the real-valued random variable 〈ℓ, X〉 has a normal distribution. Define
σ = σ(X) = sup { √(E[〈ℓ, X〉²]) : ℓ ∈ B∗, ‖ℓ‖ ≤ 1 }.
Then the concentration dimension d(X) of X is defined by
d(X) = E[‖X‖²] / σ².
Examples
If B is n-dimensional Euclidean space Rn with its usual Euclidean norm, and X is a standard Gaussian random variable, then σ(X) = 1 and E[||X||2] = n, so d(X) = n.
If B is Rn with the supremum norm, then σ(X) = 1 but E[||X||2] (and hence d(X)) is of the order of log(n).
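The second example can be illustrated numerically. In the sketch below (Monte Carlo with arbitrary sample sizes), σ(X) = 1 for the supremum norm, so d(X) reduces to E[‖X‖∞²] = E[max_i X_i²], which grows roughly like 2 log(n) rather than n:

```python
import math
import random

# Monte Carlo sketch of d(X) for X a standard Gaussian vector in
# (R^n, sup norm): sigma(X) = 1, so d(X) = E[ max_i X_i^2 ] ~ 2*log(n).
random.seed(0)

def d_sup_norm(n, trials=500):
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(0.0, 1.0) ** 2 for _ in range(n))
    return total / trials

for n in (10, 100, 1000):
    print(n, round(d_sup_norm(n), 2), round(2 * math.log(n), 2))
```

Contrast this with the Euclidean norm, where the same vector has d(X) = n exactly.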
References
Dimension
Statistical randomness |
https://en.wikipedia.org/wiki/CR%20Rao%20Advanced%20Institute%20of%20Mathematics%2C%20Statistics%20and%20Computer%20Science | CR Rao Advanced Institute of Mathematics, Statistics and Computer Science (also called AIMSCS) was founded in 2007 as an institute for basic research in statistics, computer science and mathematics. It is located on the campus of the University of Hyderabad.
It is named after the statistician C. R. Rao, as it was built on his suggestion. The government-funded institute intends to improve teaching methods and to encourage basic research in the mathematical and social sciences. S. B. Rao was the first director.
It has received 10 million rupees in grants from the state government and individual donors, and in 2008 it was seeking a further 50 million rupees from the DST.
References
External links
AIMSCS
Research institutes in Hyderabad, India
2007 establishments in Andhra Pradesh
Computer science institutes
Research institutes established in 2007 |
https://en.wikipedia.org/wiki/Zoran%20Kuli%C4%87 | Zoran Kulić (Serbian Cyrillic: Зоран Кулић; born 19 August 1975) is a former Serbian professional footballer.
Statistics
External links
FFU profile
1975 births
Living people
Serbian men's footballers
Men's association football midfielders
FK Hajduk Kula players
OFK Bečej 1918 players
FK Sileks players
FK Mladost Lučani players
FK Smederevo 1924 players
FK Budućnost Banatski Dvor players
Serbia and Montenegro expatriate men's footballers
Serbia and Montenegro men's footballers
Expatriate men's footballers in North Macedonia
Expatriate men's footballers in Ukraine
Serbia and Montenegro expatriate sportspeople in Ukraine
People from Zemun
Footballers from Belgrade
FC Dynamo-2 Kyiv players
FC Dynamo-3 Kyiv players |
https://en.wikipedia.org/wiki/Emil%20Grosswald | Emil Grosswald (December 15, 1912 – April 11, 1989) was a mathematician who worked primarily in number theory.
Life and education
Grosswald was born on December 15, 1912, in Bucharest, Romania. He received a master's degree in both mathematics and electrical engineering from the University of Bucharest in 1933, spent six months in Italy and then received a Diplôme from École supérieure d'électricité in Paris.
Grosswald was Jewish. When war broke out, he fled from Paris in June, 1940 to the University of Montpellier, where he began doctoral studies in mathematics. He fled at the end of 1941, through Spain and Lisbon to Cuba. He moved to Puerto Rico in 1946 and then to the United States in 1948. He received his Ph.D. under Hans Rademacher from the University of Pennsylvania in 1950. He was visiting professor at the University of Paris in 1964–1965 and one of his books, The Theory of Numbers, was written that year.
He met his wife Elisabeth (Lissy) Rosenthal in Cuba, probably in 1941 or 1942. They were married in 1950 in Saskatoon, Canada, where he had his first teaching position after receiving his Ph.D. They spent two years at the Institute for Advanced Study in Princeton, New Jersey, in 1951 and 1959. During their first stay, they met Albert Einstein, with whom Emil had a correspondence, later bequeathed to the University of Texas, and formed many friendships, among others with the physicist Freeman Dyson.
Emil and Lissy had two daughters, Blanche, who became a professor of Social Work at Rutgers University but died in 2003 at the age of 50, and Vivian, a professor of law at the University of Pittsburgh. Vivian was decorated in 2007 by the Republic of Austria for her work as the United States appointee to the Austrian General Settlement Fund Committee for Nazi-era property compensation, and in 2013 by the government of France for her services in promotion of the French language and culture in the United States. Emil is the uncle of Pamela Ronald, a member of the National Academy of Sciences, whose father Robert Ronald (né Rosenthal) describes the family's escape from the Nazis in his memoir, "Last Train to Freedom". The son of Lissy's second cousin (Ernest Beutler) is 2011 Nobel Laureate Bruce Beutler. Emil was also the nephew of the French composer Marcel Mihalovici, who arrived in Paris in the 1920s with Georges Enesco.
After Grosswald's death, the American Mathematical Society held a national meeting in his honor, and in 1991 a Festschrift was published in his honor: "A Tribute to Emil Grosswald: Number Theory and Related Analysis." Of his attitude towards mathematics, one of the volume's editors noted the following: "In Grosswald's world, mathematics is challenge demanding dedication and long hours of work; it is science combined with art, truth with beauty. It is passionate and eternal pursuit of excellence. It is humility in the face of a powerful and proud history. Above all, it is meaning, a reason to go on..." Another colleague wr |
https://en.wikipedia.org/wiki/Graham%20Allan | Graham Robert Allan (1936–2007) was an English mathematician, specializing in Banach algebras. He was Reader in functional analysis and Vice-Master of Churchill College at Cambridge University.
Life
Allan was born on 13 August 1936 in Southgate, Middlesex, England. After serving in the Royal Air Force from 1955 to 1957, he entered Sidney Sussex College, Cambridge, and continued at Cambridge for his graduate studies, receiving a PhD in 1964 under the supervision of Frank Smithies.
Allan spent most of his career at Cambridge, with interludes as a Lecturer in Pure Mathematics at Newcastle University from 1967 to 1969 and as Professor of Pure Mathematics at the University of Leeds from 1970 to 1978.
Back at Cambridge, he was promoted to Reader in 1980 and was Vice-Master of Churchill College from 1990 to 1993. Allan supervised the theses of over 20 Cambridge PhD students. He retired in 2003, but continued teaching after his retirement. He died on 9 August 2007 in Cambridge.
In 1969, Allan won the Junior Berwick Prize of the London Mathematical Society.
He contributed section III.86 to the book The Princeton Companion to Mathematics, edited by Timothy Gowers, but did not live to see his article "The Spectrum" appear in print when the book was published in 2008.
References
1936 births
2007 deaths
English mathematicians
Fellows of Churchill College, Cambridge
Mathematicians from London
Alumni of Sidney Sussex College, Cambridge
Academics of the University of Leeds
Academics of Newcastle University |
https://en.wikipedia.org/wiki/Dedekind%20number | In mathematics, the Dedekind numbers are a rapidly growing sequence of integers named after Richard Dedekind, who defined them in 1897. The Dedekind number M(n) is the number of monotone boolean functions of n variables. Equivalently, it is the number of antichains of subsets of an n-element set, the number of elements in a free distributive lattice with n generators, and one more than the number of abstract simplicial complexes on a set with n elements.
Accurate asymptotic estimates of M(n) and an exact expression as a summation are known. However, Dedekind's problem of computing the values of M(n) remains difficult: no closed-form expression for M(n) is known, and exact values of M(n) have been found only for n ≤ 9.
Definitions
A Boolean function is a function that takes as input n Boolean variables (that is, values that can be either false or true, or equivalently binary values that can be either 0 or 1), and produces as output another Boolean variable. It is monotonic if, for every combination of inputs, switching one of the inputs from false to true can only cause the output to switch from false to true and not from true to false. The Dedekind number M(n) is the number of different monotonic Boolean functions on n variables.
An antichain of sets (also known as a Sperner family) is a family of sets, none of which is contained in any other set. If V is a set of n Boolean variables, an antichain A of subsets of V defines a monotone Boolean function f, where the value of f is true for a given set of inputs if some subset of the true inputs to f belongs to A and false otherwise. Conversely every monotone Boolean function defines in this way an antichain, of the minimal subsets of Boolean variables that can force the function value to be true. Therefore, the Dedekind number M(n) equals the number of different antichains of subsets of an n-element set.
A third, equivalent way of describing the same class of objects uses lattice theory. From any two monotone Boolean functions f and g we can find two other monotone Boolean functions f ∧ g and f ∨ g, their logical conjunction and logical disjunction respectively. The family of all monotone Boolean functions on n inputs, together with these two operations, forms a distributive lattice, the lattice given by Birkhoff's representation theorem from the partially ordered set of subsets of the n variables with set inclusion as the partial order. This construction produces the free distributive lattice with n generators. Thus, the Dedekind numbers count the elements in free distributive lattices.
The Dedekind numbers also count one more than the number of abstract simplicial complexes on a set with n elements, families of sets with the property that any non-empty subset of a set in the family also belongs to the family. Any antichain (except {Ø}) determines a simplicial complex, the family of subsets of antichain members, and conversely the maximal simplices in a complex form an antichain.
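For very small n, the first definition can be checked directly by brute force. The sketch below enumerates all 2^(2^n) Boolean functions and keeps the monotone ones, an approach that is hopeless beyond n ≈ 4 but enough to reproduce the first values of the sequence:

```python
from itertools import product

def dedekind(n):
    """Count monotone Boolean functions of n variables by exhaustion."""
    inputs = list(product((0, 1), repeat=n))
    # index pairs (i, j) with inputs[i] <= inputs[j] pointwise
    le = [(i, j)
          for i, a in enumerate(inputs)
          for j, b in enumerate(inputs)
          if all(x <= y for x, y in zip(a, b))]
    # a function is a tuple of outputs, one per input assignment;
    # monotonicity means f respects every pointwise comparison
    return sum(all(f[i] <= f[j] for i, j in le)
               for f in product((0, 1), repeat=len(inputs)))

print([dedekind(n) for n in range(4)])  # → [2, 3, 6, 20]
```

Practical computations of larger Dedekind numbers instead exploit the antichain and lattice structure described above.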
Example
For |
https://en.wikipedia.org/wiki/Richard%20Lesh | Richard Arthur Lesh, Jr. is a professor of learning sciences, cognitive science, and mathematics education at Indiana University in Bloomington, Indiana. He retired from the IU system in 2012. He graduated from Indiana University in 1971 with a Ph.D. in mathematics, cognitive psychology, and statistics for research in the social sciences. He is also a graduate of Hanover College, where he received a B.A. in mathematics and physics.
Lesh is the originator of the Models and Modeling Perspectives research area of mathematics education and the creator of the model-eliciting activity, which is designed to help reveal thinking processes to students, teachers, and researchers.
Over the course of his career, Lesh has held a variety of positions, including National Science Foundation official, dean and professor at Northwestern University, principal research scientist at Educational Testing Service, and endowed professor at both Purdue University and Indiana University, where he worked to develop alternative assessment techniques that could detect learning that traditional assessment strategies did not.
See also
Systemics
Design-Based Research
References
External links
Indiana University Learning Sciences Program
Rational Number Project
Indiana University alumni
Indiana University faculty
Year of birth missing (living people)
Living people
Hanover College alumni |
https://en.wikipedia.org/wiki/Normal%20element | In mathematics, an element x of a *-algebra is normal if it satisfies x*x = xx*.
This definition stems from the definition of a normal linear operator in functional analysis, where a linear operator A from a Hilbert space into itself is called normal if A*A = AA*, where the adjoint of A is A* and the domain of A* is the same as that of A. See normal operator for a detailed discussion. If the Hilbert space is finite-dimensional and an orthonormal basis has been chosen, then the operator A is normal if and only if the matrix describing A with respect to this basis is a normal matrix.
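In the finite-dimensional case the defining condition A*A = AA* is a direct matrix computation; a minimal sketch (assuming numpy is available; the function name is ad hoc):

```python
import numpy as np

def is_normal(A, tol=1e-10):
    # A is normal iff A*A equals AA*, where A* is the conjugate transpose;
    # compared up to a numerical tolerance.
    A = np.asarray(A, dtype=complex)
    return np.allclose(A.conj().T @ A, A @ A.conj().T, atol=tol)
```

For example, Hermitian and unitary matrices pass the check, while a nonzero Jordan block such as [[0, 1], [0, 0]] fails it.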
See also
References
Abstract algebra
Linear algebra |
https://en.wikipedia.org/wiki/Paranormal%20operator | In mathematics, especially operator theory, a paranormal operator is a generalization of a normal operator. More precisely, a bounded linear operator T on a complex Hilbert space H is said to be paranormal if:
‖T²x‖ ≥ ‖Tx‖²
for every unit vector x in H.
The class of paranormal operators was introduced by V. Istratescu in the 1960s, though the term "paranormal" is probably due to Furuta.
Every hyponormal operator (in particular, a subnormal operator, a quasinormal operator and a normal operator) is paranormal. If T is paranormal, then Tⁿ is paranormal for every positive integer n. On the other hand, Halmos gave an example of a hyponormal operator T such that T² is not hyponormal. Consequently, not every paranormal operator is hyponormal.
A compact paranormal operator is normal.
References
Operator theory
Linear operators |
https://en.wikipedia.org/wiki/Analytic%20confidence | Analytic confidence is a rating employed by intelligence analysts to convey doubt to decision makers about a statement of estimative probability. The need for analytic confidence ratings arises from analysts' imperfect knowledge of a conceptual model. An analytic confidence rating pairs with a statement using a word of estimative probability to form a complete analytic statement. Scientific methods for determining analytic confidence remain in their infancy.
Levels of analytic confidence in national security reports
In an effort to apply more rigorous standards to National Intelligence Estimates, the National Intelligence Council includes explanations of the three levels of analytic confidence made in estimative statements.
High confidence generally indicates judgments based on high-quality information, and/or the nature of the issue makes it possible to render a solid judgment. A “high confidence” judgment is not a fact or a certainty, however, and still carries a risk of being wrong.
Moderate confidence generally means credibly sourced and plausible information, but not of sufficient quality or corroboration to warrant a higher level of confidence.
Low confidence generally means questionable or implausible information was used, the information is too fragmented or poorly corroborated to make solid analytic inferences, or significant concerns or problems with sources existed.
Origins and early history
Analytic confidence beginnings coincide with the cognitive psychology movement, especially in psychological decision theory. This branch of psychology did not set out to study analytic confidence as it pertains to intelligence reporting. Rather, the advances in cognitive psychology established a groundwork for understanding well calibrated confidence levels in decision making.
Early accounts of explaining analytic confidence focused on certainty forecasts, as opposed to the overall confidence the analyst had in the analysis itself. This highlights the degree of confusion among scholars about the difference between psychological and analytic confidence. Analysts often lessened certainty statements when confronted with challenging analysis, instead of ascribing a level of analytic confidence to explain those concerns. By lessening certainty levels due to a lack of confidence, a dangerous possibility of misrepresenting the target existed.
Intelligence Reform and Terrorism Prevention Act of 2004
The Intelligence Reform and Terrorism Prevention Act of 2004 establishes some guidelines for conveying the analytic confidence in an intelligence product. The summary document states each review should include, among other things, whether the product or products concerned were based on all sources of available intelligence, properly describe the quality and reliability of underlying sources, properly caveat and express uncertainties or confidence in analytic judgments, and properly distinguish between underlying intelligence and the assumptions and j |
https://en.wikipedia.org/wiki/Statistics%20Iceland | Statistics Iceland () is the main official institute providing statistics on the nation of Iceland. It was created by the Althing in 1913, began operations in 1914 and became an independent government agency under the Prime Minister's Office on 1 January 2008.
See also
Minister of Statistics Iceland
References
External links
1914 establishments in Iceland
Organizations established in 1914
Iceland
Government agencies of Iceland |
https://en.wikipedia.org/wiki/Inversion%20%28discrete%20mathematics%29 | In computer science and discrete mathematics, an inversion in a sequence is a pair of elements that are out of their natural order.
Definitions
Inversion
Let π be a permutation.
There is an inversion of π between i and j if i < j and π(i) > π(j). The inversion is indicated by an ordered pair containing either the places (i, j) or the elements (π(i), π(j)).
The inversion set is the set of all inversions. A permutation's inversion set using place-based notation is the same as the inverse permutation's inversion set using element-based notation with the two components of each ordered pair exchanged. Likewise, a permutation's inversion set using element-based notation is the same as the inverse permutation's inversion set using place-based notation with the two components of each ordered pair exchanged.
Inversions are usually defined for permutations, but may also be defined for sequences: Let (a₁, …, aₙ) be a sequence (or multiset permutation). If i < j and aᵢ > aⱼ, either the pair of places (i, j) or the pair of elements (aᵢ, aⱼ) is called an inversion of the sequence.
For sequences, inversions according to the element-based definition are not unique, because different pairs of places may have the same pair of values.
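The place-based definition translates directly into code; a small illustrative Python helper (0-based indices):

```python
def inversions(seq):
    # Place-based inversions: pairs (i, j) with i < j but seq[i] > seq[j].
    n = len(seq)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if seq[i] > seq[j]]
```

For instance, the sequence (3, 1, 2) has the two inversions (0, 1) and (0, 2), while a sorted sequence has none.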
Inversion number
The inversion number inv(X) of a sequence X = ⟨x₁, …, xₙ⟩ is the cardinality of its inversion set. It is a common measure of sortedness (sometimes called presortedness) of a permutation or sequence. The inversion number is between 0 and n(n − 1)/2 inclusive. A permutation and its inverse have the same inversion number.
For example, inv(⟨1, 2, …, n⟩) = 0, since the sequence is ordered. Also, when n = 2m is even, inv(⟨m + 1, m + 2, …, 2m, 1, 2, …, m⟩) = m², because each pair of one element from the first half and one from the second half is an inversion. This last example shows that a set that is intuitively "nearly sorted" can still have a quadratic number of inversions.
The inversion number is the number of crossings in the arrow diagram of the permutation, the permutation's Kendall tau distance from the identity permutation, and the sum of each of the inversion related vectors defined below.
Other measures of sortedness include the minimum number of elements that can be deleted from the sequence to yield a fully sorted sequence, the number and lengths of sorted "runs" within the sequence, the Spearman footrule (sum of distances of each element from its sorted position), and the smallest number of exchanges needed to sort the sequence. Standard comparison sorting algorithms can be adapted to compute the inversion number in time O(n log n).
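One standard O(n log n) adaptation is a merge-sort variant that counts, during each merge, how many left-half elements exceed each right-half element; a sketch in Python (function name is ad hoc):

```python
def count_inversions(a):
    # Returns (sorted copy of a, inversion number of a) in O(n log n).
    if len(a) <= 1:
        return a[:], 0
    mid = len(a) // 2
    left, nl = count_inversions(a[:mid])
    right, nr = count_inversions(a[mid:])
    merged, inv = [], nl + nr
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
            inv += len(left) - i   # every remaining left element exceeds right[j]
    merged += left[i:] + right[j:]
    return merged, inv
```

The fully reversed sequence of length n attains the maximum n(n − 1)/2; for example, `count_inversions(list(range(10, 0, -1)))` reports 45 inversions.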
Inversion related vectors
Three similar vectors are in use that condense the inversions of a permutation into a vector that uniquely determines it. They are often called the inversion vector or Lehmer code.
This article uses the term inversion vector () like Wolfram. The remaining two vectors are sometimes called left and right inversion vector, but to avoid confusion with the inversion vector this article calls them left inversion count () and right inversion count (). Interpreted as a factorial number the left inversion count gives the permutations reverse colexicographic, and the rig |
https://en.wikipedia.org/wiki/Riemann%E2%80%93Siegel%20formula | In mathematics, the Riemann–Siegel formula is an asymptotic formula for the error of the approximate functional equation of the Riemann zeta function, an approximation of the zeta function by a sum of two finite Dirichlet series. It was found by Carl Ludwig Siegel in unpublished manuscripts of Bernhard Riemann dating from the 1850s. Siegel derived it from the Riemann–Siegel integral formula, an expression for the zeta function involving contour integrals. It is often used to compute values of the Riemann zeta function, sometimes in combination with the Odlyzko–Schönhage algorithm which speeds it up considerably. When used along the critical line, it is often useful to use it in a form where it becomes a formula for the Z function.
If M and N are non-negative integers, then the zeta function is equal to
ζ(s) = ∑_{n=1}^{N} n^{−s} + γ(1 − s) ∑_{n=1}^{M} n^{s−1} + R(s)
where
γ(s) = π^{1/2 − s} Γ(s/2) / Γ((1 − s)/2)
is the factor appearing in the functional equation ζ(s) = γ(1 − s) ζ(1 − s), and
R(s) is a contour integral whose contour starts and ends at +∞ and circles the singularities of absolute value at most 2πM. The approximate functional equation gives an estimate for the size of the error term R(s). Siegel derived the Riemann–Siegel formula from this by applying the method of steepest descent to this integral to give an asymptotic expansion for the error term R(s) as a series of negative powers of Im(s). In applications, s is usually on the critical line, and the positive integers M and N are chosen to be about √(Im(s)/2π). Gabcke found good bounds for the error of the Riemann–Siegel formula.
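As an illustration of the critical-line form, the main sum for the Z function together with its first correction term can be evaluated directly. The sketch below is not Siegel's full expansion: it uses the standard asymptotic series for the Riemann–Siegel theta function, keeps only the leading correction factor, and does not handle that factor's removable singularities; all function names are ad hoc.

```python
import math

def theta(t):
    # Standard asymptotic expansion of the Riemann-Siegel theta function.
    return (t / 2) * math.log(t / (2 * math.pi)) - t / 2 - math.pi / 8 \
        + 1 / (48 * t) + 7 / (5760 * t ** 3)

def Z(t):
    # Main sum of the Riemann-Siegel formula plus the first correction
    # term; the remaining error is O(t^(-3/4)).
    a = math.sqrt(t / (2 * math.pi))
    N, p = int(a), a - int(a)
    s = 2 * sum(math.cos(theta(t) - t * math.log(n)) / math.sqrt(n)
                for n in range(1, N + 1))
    # Leading correction factor (singular at p = 1/4, 3/4; not handled).
    c0 = math.cos(2 * math.pi * (p * p - p - 1 / 16)) / math.cos(2 * math.pi * p)
    return s + (-1) ** (N - 1) * (2 * math.pi / t) ** 0.25 * c0
```

Even this truncated form locates low zeros of ζ on the critical line: Z changes sign near t ≈ 14.1347, the first zero.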
Riemann's integral formula
Riemann showed that
where the contour of integration is a line of slope −1 passing between 0 and 1.
He used this to give the following integral formula for the zeta function:
References
Reprinted in Gesammelte Abhandlungen, Vol. 1. Berlin: Springer-Verlag, 1966.
External links
Zeta and L-functions
Theorems in analytic number theory
Bernhard Riemann |
https://en.wikipedia.org/wiki/Cycle%20double%20cover | In graph-theoretic mathematics, a cycle double cover is a collection of cycles in an undirected graph that together include each edge of the graph exactly twice. For instance, for any polyhedral graph, the faces of a convex polyhedron that represents the graph provide a double cover of the graph: each edge belongs to exactly two faces.
It is an unsolved problem, posed by George Szekeres and Paul Seymour and known as the cycle double cover conjecture, whether every bridgeless graph has a cycle double cover. The conjecture can equivalently be formulated in terms of graph embeddings, and in that context is also known as the circular embedding conjecture.
Formulation
The usual formulation of the cycle double cover conjecture asks whether every bridgeless undirected graph has a collection of cycles such that each edge of the graph is contained in exactly two of the cycles. The requirement that the graph be bridgeless is an obvious necessary condition for such a set of cycles to exist, because a bridge cannot belong to any cycle. A collection of cycles satisfying the condition of the cycle double cover conjecture is called a cycle double cover. Some graphs such as cycle graphs and bridgeless cactus graphs can only be covered by using the same cycle more than once, so this sort of duplication is allowed in a cycle double cover.
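The polyhedral example from the introduction can be checked concretely. The sketch below lists the six face cycles of the cube graph Q3 by hand (the vertex labels 0–7 read as 3-bit strings and the face lists are this sketch's own convention) and verifies that the faces cover each of the 12 edges exactly twice:

```python
from collections import Counter

# Vertices of Q3 as integers 0..7 (3-bit strings); two vertices are
# adjacent iff they differ in one bit. The six 4-cycles below are the
# faces of the cube, one pair per coordinate direction.
faces = [
    [0, 1, 3, 2], [4, 5, 7, 6],   # bit 2 = 0, bit 2 = 1
    [0, 1, 5, 4], [2, 3, 7, 6],   # bit 1 = 0, bit 1 = 1
    [0, 2, 6, 4], [1, 3, 7, 5],   # bit 0 = 0, bit 0 = 1
]

def edges_of(cycle):
    # Undirected edges traversed by a cycle.
    return [frozenset((cycle[i], cycle[(i + 1) % len(cycle)]))
            for i in range(len(cycle))]

# Multiset of covered edges: 12 distinct edges, each with multiplicity 2.
cover = Counter(e for f in faces for e in edges_of(f))
```

The `cover` counter ends up with exactly the 12 edges of Q3, each counted twice, so the face cycles form a cycle double cover.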
Reduction to snarks
A snark is a special case of a bridgeless graph, having the additional properties that every vertex has exactly three incident edges (that is, the graph is cubic) and that it is not possible to partition the edges of the graph into three perfect matchings (that is, the graph has no 3-edge coloring, and by Vizing's theorem has chromatic index 4). It turns out that snarks form the only difficult case of the cycle double cover conjecture: if the conjecture is true for snarks, it is true for any graph.
It has been observed that, in any potential minimal counterexample to the cycle double cover conjecture, all vertices must have three or more incident edges. A vertex with only one incident edge forms a bridge, while if two edges are incident on a vertex, one can contract them to form a smaller graph such that any double cover of the smaller graph extends to one of the original graph. On the other hand, if a vertex v has four or more incident edges, one may “split off” two of those edges by removing them from the graph and replacing them by a single edge connecting their two other endpoints, while preserving the bridgelessness of the resulting graph. Again, a double cover of the resulting graph may be extended in a straightforward way to a double cover of the original graph: every cycle of the split off graph corresponds either to a cycle of the original graph, or to a pair of cycles meeting at v. Thus, every minimal counterexample must be cubic. But if a cubic graph can have its edges 3-colored (say with the colors red, blue, and green), then the subgraph of red and blue edges, the subgraph of blue and green edges,
https://en.wikipedia.org/wiki/Tetsuya%20Miyamoto | is a Japanese mathematics teacher who invented the numerical logic puzzle KenKen. (It is called Kashikoku-Naru-Puzzle in Japanese, which literally means "a puzzle that makes you smarter." It is also known as Keisan Block.)
Miyamoto developed KenKen in 2003 to help his students improve their calculation skills, logical thinking and patience. His puzzle series has sold over 1.5 million copies in Japan. It was introduced to the rest of the world at the 2007 Bologna Book Fair as KenKen and has been translated into Korean, Thai, German, French, Czech, Mandarin Chinese, Simplified Chinese, Slovene, Spanish, Portuguese, and Icelandic. KenKen made its debut in The Times (London) in March 2008, and the New York Times in February 2009. The first U.S. KenKen tournament was held in March 2009 in Brooklyn, with Miyamoto in attendance.
Miyamoto graduated from Waseda University in Tokyo. He worked as an instructor at a juku (university preparatory cramming school) in Yokohama. In 1993 he founded his own school named Miyamoto Sansuu Kyoushitsu (Miyamoto Math Classroom) in Yokohama, and established his unique "non-teaching classroom" methodology called "The Art of Teaching Without Teaching". He moved his classroom to Tokyo (near Tokyo station) in 2009, and moved his classroom to Manhattan in 2015. His Manhattan class is named Miyamoto Mathematics Inc. He currently spends 8 months in New York and 4 months in Japan. He teaches KenKen to children on weekends.
He has written over 180 books in Japanese, including his teaching methodology book "Kyouikuron", which has sold over 100,000 copies, and the "Kashikoku-Naru-Sansuu" series, which consists of 96 math problem books that scaffold gradually.
References
External links
KenKen Web
Kenken.com
Puzzle Guru Will Shortz, Time Magazine, March 2, 2009
Miyamoto Mathematics.
20th-century Japanese mathematicians
21st-century Japanese mathematicians
Living people
Puzzle designers
1959 births
Waseda University alumni
ja:賢くなるパズル |
https://en.wikipedia.org/wiki/Hyponormal%20operator | In mathematics, especially operator theory, a hyponormal operator is a generalization of a normal operator. In general, a bounded linear operator T on a complex Hilbert space H is said to be p-hyponormal (0 < p ≤ 1) if:
(T*T)ᵖ ≥ (TT*)ᵖ
(That is to say, (T*T)ᵖ − (TT*)ᵖ is a positive operator.) If p = 1, then T is called a hyponormal operator. If p = 1/2, then T is called a semi-hyponormal operator. Moreover, T is said to be log-hyponormal if it is invertible and
log(T*T) ≥ log(TT*).
An invertible p-hyponormal operator is log-hyponormal. On the other hand, not every log-hyponormal operator is p-hyponormal.
The class of semi-hyponormal operators was introduced by Xia, and the class of p-hyponormal operators was studied by Aluthge, who used what is today called the Aluthge transformation.
Every subnormal operator (in particular, a normal operator) is hyponormal, and every hyponormal operator is a paranormal convexoid operator. Not every paranormal operator is, however, hyponormal.
See also
Putnam’s inequality
References
Operator theory
Linear operators |
https://en.wikipedia.org/wiki/Convexoid%20operator | In mathematics, especially operator theory, a convexoid operator is a bounded linear operator T on a complex Hilbert space H such that the closure of the numerical range coincides with the convex hull of its spectrum.
An example of such an operator is a normal operator (or one of its generalizations).
A closely related operator is a spectraloid operator: an operator whose spectral radius coincides with its numerical radius. In fact, an operator T is convexoid if and only if T − λ is spectraloid for every complex number λ.
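In finite dimensions the defining condition can be tested numerically by comparing support functions of the two convex sets: the support of the numerical range in direction e^{iθ} is the largest eigenvalue of the Hermitian part of e^{−iθ}T, while the support of the convex hull of the spectrum is the largest Re(e^{−iθ}λ) over eigenvalues λ. A numpy sketch (function names are ad hoc):

```python
import numpy as np

def numrange_support(A, theta):
    # max over unit x of Re(e^{-i theta} <Ax, x>) = largest eigenvalue
    # of the Hermitian part of e^{-i theta} A.
    B = np.exp(-1j * theta) * np.asarray(A, dtype=complex)
    return float(np.linalg.eigvalsh((B + B.conj().T) / 2)[-1])

def spectrum_support(A, theta):
    # Support function of the convex hull of the spectrum.
    return max((np.exp(-1j * theta) * lam).real
               for lam in np.linalg.eigvals(np.asarray(A, dtype=complex)))
```

For a normal matrix such as diag(1, i, −1) the two support functions agree in every direction, while for the Jordan block [[0, 1], [0, 0]] the numerical range (a disk of radius 1/2) strictly contains the convex hull of the spectrum ({0}), so that matrix is not convexoid.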
See also
Aluthge transform
References
T. Furuta. Certain convexoid operators
Operator theory |
https://en.wikipedia.org/wiki/St%20Mary%27s%20Catholic%20High%20School%2C%20Chesterfield | St Mary's Catholic High School is a Catholic, co-educational, secondary school with academy status in Upper Newbold, Chesterfield, Derbyshire, which specialises in the teaching of Maths and History.
The school was rated Outstanding in all areas by Ofsted in October 2012.
History
Beginnings
The school opened on 8 January 1856. It was part of the Church of the Annunciation, a Roman Catholic church in Chesterfield built by the Society of Jesus (Jesuits) and completed in 1854. The church at the time was also known as St Mary's. The school later moved to a site on Cross Street, also in Chesterfield, around 100 metres from the church. The buildings on Cross Street are now home to St Mary's Roman Catholic Primary School, a feeder school to St Mary's Roman Catholic High School. Both schools are linked with the Church of the Annunciation.
Present campus
The first brick was laid on 30 May 1978. Phase 1, consisting of the Gym, Drama Hall and Music room (the current IT2) Block and the Admin, Design Technology and Canteen Blocks were handed over by the developer on 27 November 1980. Work was underway on Phase 2 (Art, Chemistry, and Humanities) and this was completed by the summer of 1981. The school grounds opened to pupils in September 1981, although construction of the Biology and Language Blocks (Phase 3) was still underway. Phase 4, the Sixth Form block, was completed in 1984 along with its architectural highlight 'the Bridge'. Phase 5 was never constructed. The large playing fields were finally handed over by the developer in the mid-1980s, much later than planned, due to issues with landscaping and coal mining shafts being discovered. A primary school was marked on plans for the East 'mound' area of the field, but this was cancelled in the late 1980s.
The present campus was officially opened in 1982, one year after its completion. In 1996, a new Music Block and Maths/Physics Block were built, and in 2000 a new building was built to house the school's I.C.T. facilities and sixth form study.
In the middle of 2002, work in building an all-weather astro-turf sports pitch was completed. The pitch has facilities for football and hockey, and is floodlit for use during the evening, particularly for after-school sports fixtures and the Chesterfield Hockey Club.
In September 2003, work on a £1.6 million Sports Hall was started, the building was completed in June 2004. The building was dedicated, and blessed in the same month.
In 2004, work started on a new building to house the school's English Department; this was completed in the middle of 2005 and features five classrooms, three offices, and toilet facilities.
On the morning of 15 March 2011, a bus carrying pupils to school collided with a bridge near Barrow Hill. The single decker bus, which usually operated the route, was not operating that morning and so a replacement bus - a double decker prohibited from the route due to low bridges - was run instead. One student was seriously injured, with seven |
https://en.wikipedia.org/wiki/Johann%20Matth%C3%A4us%20Hassencamp | Johann Matthäus Hassencamp (28 July 1743 – 6 October 1797) was a German Orientalist and Protestant theologian born in Marburg.
He studied philology, mathematics, theology and philosophy at the Universities of Marburg and Göttingen. Afterwards, he continued his studies in France, Holland and England, followed by a return to Marburg, where in 1768 he received his habilitation. Later, he became a professor of Oriental languages and mathematics at the university in Rinteln, where in 1777 he was given additional responsibilities as head of the university library.
Among his published works was a treatise on the Pentateuch, titled "Commentatio philologico-critica de Pentateucho LXX interpretum graeco non ex hebraeo sed samaritano textu converso", and the autobiography of theologian Johann David Michaelis, "Lebensbeschreibung von ihm selbst abgefasst, mit Anmerkungen von Hassencamp" (Description of his life, written by himself, with notes by Hassencamp; 1793). From 1789 until 1797, he was an editor of the influential weekly magazine "Annalen der neuesten theologischen Litteratur und Kirchengeschichte" (Annals of the Latest Theological Literature and Church History; afterwards continued by Ludwig Wachler). In the fields of mathematics and physics, he published a work on the history of efforts to determine longitude, titled "Kurze Geschichte der Bemühungen die Meereslänge zu erfinden" (A Short History of the Efforts to Find the Longitude at Sea; 1769).
References
Wikisource, ADB Hassencamp, Johann Matthäus (translated from German).
1743 births
1797 deaths
People from Marburg
18th-century German Protestant theologians
German orientalists
University of Marburg alumni
German male non-fiction writers
18th-century German male writers |
https://en.wikipedia.org/wiki/Glasgow%20Mathematical%20Journal | The Glasgow Mathematical Journal is a mathematics journal that publishes original research papers in any branch of pure and applied mathematics. It covers a wide variety of research areas, which in recent issues have included ring theory, group theory, functional analysis, combinatorics, differential equations, differential geometry, number theory, algebraic topology, and the application of such methods in applied mathematics.
The editor-in-chief is currently I. A. B. Strachan (University of Glasgow).
References
Mathematics education in the United Kingdom
Mathematics journals
Cambridge University Press academic journals |
https://en.wikipedia.org/wiki/Olympiacos%20B.C.%20in%20international%20competitions | Olympiacos B.C. in international competitions is the history and statistics of Olympiacos B.C. in FIBA Europe and Euroleague Basketball Company competitions.
1960s
1960–61 FIBA European Champions Cup, 1st–tier
The 1960–61 FIBA European Champions Cup was the 4th installment of the European top-tier level professional basketball club competition FIBA European Champions Cup (now called EuroLeague), running from November 29, 1960 to July 26, 1961. The trophy was won by CSKA Moscow, who defeated the title holder Rīgas ASK by a result of 141–128 in a two-legged final on a home and away basis. Overall, Olympiacos achieved a record of 0 wins against 2 defeats in this competition, in a single round. In more detail:
First round
Tie played on November 23*, 1960 and on December 11, 1960.
|}
*The game was played six days before the official opening of the competition.
1970s
1972–73 FIBA European Cup Winners' Cup, 2nd–tier
The 1972–73 FIBA European Cup Winners' Cup was the 7th installment of FIBA's 2nd-tier level European-wide professional club basketball competition FIBA European Cup Winners' Cup (later called the FIBA Saporta Cup), running from October 18, 1972 to March 20, 1973. The trophy was won by Spartak Leningrad, who defeated Jugoplastika by a result of 77–62 at Alexandreio Melathron in Thessaloniki, Greece. Overall, Olympiacos achieved a record of 1 win against 3 defeats in this competition, across three successive rounds. In more detail:
First round
Bye
Second round
Tie played on November 8, 1972 and on November 15, 1972.
|}
*The score in the second leg at the end of regulation was 89–69 for Olympiacos, so extra time had to be played to decide the winner of this match.
Top 12
Tie played on December 6, 1972 and on December 13, 1972.
|}
1973–74 FIBA European Cup Winners' Cup, 2nd–tier
The 1973–74 FIBA European Cup Winners' Cup was the 8th installment of FIBA's 2nd-tier level European-wide professional club basketball competition FIBA European Cup Winners' Cup (later called the FIBA Saporta Cup), running from October 17, 1973 to April 2, 1974. The trophy was won by Crvena zvezda, who defeated Spartak ZJŠ Brno by a result of 86–75 at Palasport Primo Carnera in Udine, Italy. Overall, Olympiacos achieved a record of 2 wins against 1 defeat, plus 1 draw, in this competition, across three successive rounds. In more detail:
First round
Bye
Second round
Tie played on November 7, 1973 and on November 14, 1973.
|}
Top 12
Tie played on November 28, 1973 and on December 5, 1973.
|}
1975–76 FIBA European Cup Winners' Cup, 2nd–tier
The 1975–76 FIBA European Cup Winners' Cup was the 10th installment of FIBA's 2nd-tier level European-wide professional club basketball competition FIBA European Cup Winners' Cup (later called the FIBA Saporta Cup), running from October 29, 1975 to March 17, 1976. The trophy was won by Cinzano Milano, who defeated ASPO Tours by a result of 88–83 at Palasport Parco Ruffini in Turin, Italy. Overall, Olymp
https://en.wikipedia.org/wiki/Jerry%20L.%20Bona | Jerry Lloyd Bona (born February 5, 1945) is an American mathematician, known for his work in fluid mechanics, partial differential equations, and computational mathematics, and active in some other branches of pure and applied mathematics.
Bona received his PhD in 1971 from Harvard University under the supervision of Garrett Birkhoff and worked from 1970 to 1972 at the Fluid Mechanics Research Institute of the University of Essex, where, along with Brooke Benjamin and J. J. Mahony, he published "Model Equations for Long Waves in Non-linear Dispersive Systems", the origin of the Benjamin–Bona–Mahony equation. He is probably best known for his statement about equivalent statements of the Axiom of Choice: “The Axiom of Choice is obviously true, the Well–ordering theorem is obviously false; and who can tell about Zorn’s Lemma?"
Jerry Bona has worked at University of Chicago, Pennsylvania State University, University of Texas at Austin and is a Professor of Mathematics at the University of Illinois at Chicago. In 2012 he became a fellow of the American Mathematical Society. In 2013 he became a fellow of the Society for Industrial and Applied Mathematics.
Quotes
This is a joke: although the three are all mathematically equivalent, many mathematicians find the axiom of choice to be intuitive, the well-ordering principle to be counterintuitive, and Zorn's lemma to be too complex for any intuition.
Selected publications
with S. M. Sun and Bing-Yu Zhang:
See also
Benjamin–Bona–Mahony equation
References
External links
Jerry Bona Web-site at University of Illinois at Chicago.
1945 births
Living people
20th-century American mathematicians
21st-century American mathematicians
Harvard University alumni
University of Chicago faculty
University of Texas at Austin faculty
University of Illinois Chicago faculty
Fellows of the American Mathematical Society
Fellows of the Society for Industrial and Applied Mathematics
Mathematicians from Arkansas |
https://en.wikipedia.org/wiki/Pasting%20lemma | In topology, the pasting or gluing lemma, and sometimes the gluing rule, is an important result which says that two continuous functions can be "glued together" to create another continuous function. The lemma is implicit in the use of piecewise functions. It appears, for example, in the book Topology and Groupoids, where a different condition on the two sets is given for the statement below.
The pasting lemma is crucial to the construction of the fundamental group or fundamental groupoid of a topological space; it allows one to concatenate continuous paths to create a new continuous path.
Formal statement
Let A, B be both closed (or both open) subsets of a topological space X such that A ∪ B = X, and let Y also be a topological space. If f : X → Y is continuous when restricted to both A and B, then f is continuous.
This result allows one to take two continuous functions defined on closed (or open) subsets of a topological space and create a new one.
Proof: if U is a closed subset of Y, then f⁻¹(U) ∩ A and f⁻¹(U) ∩ B are both closed, since each is the preimage of U under the restriction of f to A or to B respectively, and these restrictions are continuous by assumption. Then their union, f⁻¹(U), is also closed, being a finite union of closed sets.
A similar argument applies when and are both open.
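A concrete, if elementary, instance of the lemma: the absolute value function glued from two continuous pieces on the closed sets A = (−∞, 0] and B = [0, ∞), which agree on the overlap A ∩ B = {0} (illustrative Python):

```python
def glued(x):
    # f|A(x) = -x on A = (-inf, 0], f|B(x) = x on B = [0, inf);
    # the pieces agree at the single overlap point 0, so the glued
    # function (|x|) is continuous.
    return -x if x <= 0 else x
```

Both formulas give 0 at the seam, so no jump is introduced by the case split.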
The infinite analog of this result (where X = A₁ ∪ A₂ ∪ A₃ ∪ ⋯) is not true for closed sets Aᵢ. For instance, the inclusion map from the integers to the real line (with the integers equipped with the cofinite topology) is continuous when restricted to each singleton, but the inverse image of a bounded open set in the reals with this map is at most a finite number of points, so not open in ℤ.
It is, however, true if the Aᵢ form a locally finite collection, since a union of locally finite closed sets is closed. Similarly, it is true if the Aᵢ are instead assumed to be open, since a union of open sets is open.
References
Brown, Ronald; Topology and Groupoids (Booksurge) 2006 .
Theory of continuous functions
Theorems in topology |
https://en.wikipedia.org/wiki/Regular%20map%20%28graph%20theory%29 | In mathematics, a regular map is a symmetric tessellation of a closed surface. More precisely, a regular map is a decomposition of a two-dimensional manifold (such as a sphere, torus, or real projective plane) into topological disks such that every flag (an incident vertex-edge-face triple) can be transformed into any other flag by a symmetry of the decomposition. Regular maps are, in a sense, topological generalizations of Platonic solids. The theory of maps and their classification is related to the theory of Riemann surfaces, hyperbolic geometry, and Galois theory. Regular maps are classified according to either: the genus and orientability of the supporting surface, the underlying graph, or the automorphism group.
Overview
Regular maps are typically defined and studied in three ways: topologically, group-theoretically, and graph-theoretically.
Topological approach
Topologically, a map is a 2-cell decomposition of a compact connected 2-manifold.
The genus g of a map M is given by Euler's relation χ = |V| − |E| + |F|, which equals 2 − 2g if the map is orientable, and 2 − g if the map is non-orientable. It is a crucial fact that there is a finite (non-zero) number of regular maps for every orientable genus except the torus.
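A quick sanity check of Euler's relation for orientable maps in code (an illustrative helper, not standard terminology):

```python
def genus_orientable(V, E, F):
    # chi = V - E + F = 2 - 2g for a map on an orientable surface.
    chi = V - E + F
    assert chi % 2 == 0 and chi <= 2
    return (2 - chi) // 2
```

The cube as a map on the sphere has (V, E, F) = (8, 12, 6), giving χ = 2 and g = 0; the complete graph K7 triangulating the torus has (7, 21, 14), giving χ = 0 and g = 1.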
Group-theoretical approach
Group-theoretically, the permutation representation of a regular map M is a transitive permutation group C, on a set of flags, generated by three fixed-point free involutions r0, r1, r2 satisfying (r0r2)² = I. In this definition the faces are the orbits of F = <r0, r1>, edges are the orbits of E = <r0, r2>, and vertices are the orbits of V = <r1, r2>. More abstractly, the automorphism group of any regular map is the non-degenerate, homomorphic image of a <2,m,n>-triangle group.
Graph-theoretical approach
Graph-theoretically, a map is a cubic graph Γ with edges coloured blue, yellow, red such that: Γ is connected, every vertex is incident to one edge of each colour, and cycles of edges not coloured yellow have length 4. Note that Γ is the flag graph or graph-encoded map (GEM) of the map, defined on the vertex set of flags, and is not the skeleton G = (V,E) of the map. In general, |Γ| = 4|E|.
A map M is regular if Aut(M) acts regularly on the flags. Aut(M) of a regular map is transitive on the vertices, edges, and faces of M. A map M is said to be reflexible if and only if Aut(M) is regular and contains an automorphism that fixes both a vertex v and a face f but reverses the order of the edges. A map which is regular but not reflexible is said to be chiral.
Examples
The great dodecahedron is a regular map with pentagonal faces in the orientable surface of genus 4.
The hemicube is a regular map of type {4,3} in the projective plane.
The hemi-dodecahedron is a regular map produced by pentagonal embedding of the Petersen graph in the projective plane.
The p-hosohedron is a regular map of type {2,p}.
The Dyck map is a regular map of 12 octagons on a genus-3 surface. Its underlying graph, the Dyck graph, can also form a regula |
https://en.wikipedia.org/wiki/Computable%20analysis | In mathematics and computer science, computable analysis is the study of mathematical analysis from the perspective of computability theory. It is concerned with the parts of real analysis and functional analysis that can be carried out in a computable manner. The field is closely related to constructive analysis and numerical analysis.
A notable result is that integration (in the sense of the Riemann integral) is computable. This might be considered surprising as an integral is (loosely speaking) an infinite sum. While this result could be explained by the fact that every computable function from [0,1] to the reals is uniformly continuous, the notable thing is that the modulus of continuity can always be computed without being explicitly given. A similarly surprising fact is that differentiation of complex functions is also computable, while the same result is false for real functions.
The above motivating results have no counterpart in Bishop's constructive analysis. Instead, it is the stronger form of constructive analysis developed by Brouwer that provides a counterpart in constructive logic.
Basic constructions
A popular model for doing computable analysis is Turing machines. The tape configuration and interpretation of mathematical structures are described as follows.
Type 2 Turing Machines
A Type 2 Turing machine is a Turing machine with three tapes: An input tape, which is read-only; a working tape, which can be written to and read from; and, notably, an output tape, which is "append-only".
Real numbers
In this context, real numbers are represented as arbitrary infinite sequences of symbols. These sequences could for instance represent the digits of a real number. Such sequences need not be computable — this allowance is both important and philosophically unproblematic. Note that the programs that act on these sequences do need to be computable in a reasonable sense.
In the case of real numbers, the usual decimal or binary representations are not appropriate. Instead a signed digit representation first suggested by Brouwer often gets used: the number system is base 2, but the digits are −1, 0 and 1. In particular, this means 1/2 can be represented both as 0.1 and as 1.(−1).
To understand why decimal notation is inappropriate, consider the problem of computing x + y where x = 0.333… and y = 0.666…, and giving the result in decimal notation. The value of x + y is either 0.999… or 1.000…. If the latter result were given, then a finite number of digits of x would be read before choosing the digit before the decimal point in x + y; but then if the nth digit of x were decreased to 2, the result for x + y would be wrong. Similarly, the former choice for x + y would be wrong sometimes. This is essentially the tablemaker's dilemma.
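Assuming the signed-digit reading above (base 2 with digits −1, 0, 1), a finite signed-binary expansion can be evaluated exactly with rational arithmetic; this illustration (function name ours) shows that 1/2 admits the two expansions 0.1 and 1.(−1):

```python
from fractions import Fraction

def signed_binary_value(int_digits, frac_digits):
    """Exact value of a finite signed-binary expansion; digits drawn from {-1, 0, 1}.
    int_digits are the digits before the point (most significant first)."""
    value = Fraction(0)
    for i, d in enumerate(reversed(int_digits)):
        value += d * Fraction(2) ** i
    for i, d in enumerate(frac_digits, start=1):
        value += d * Fraction(1, 2 ** i)
    return value

# 1/2 has (at least) two signed-binary representations: 0.1 and 1.(-1).
print(signed_binary_value([0], [1]))      # 1/2
print(signed_binary_value([1], [-1]))     # 1/2
```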
As well as signed digits, there are analogues of Cauchy sequences and Dedekind cuts that could in principle be used instead.
Computable functions
Computable functions are represented as programs on a Type 2 Turing machine. A program is considered total (in the |
https://en.wikipedia.org/wiki/Modulus%20of%20convergence | In real analysis, a branch of mathematics, a modulus of convergence is a function that tells how quickly a convergent sequence converges. These moduli are often employed in the study of computable analysis and constructive mathematics.
If a sequence of real numbers (xn) converges to a real number L, then by definition, for every real ε > 0 there is a natural number N such that if n > N then |xn − L| < ε. A modulus of convergence is essentially a function that, given ε, returns a corresponding value of N.
Definition
Suppose that (xi) is a convergent sequence of real numbers with limit L. There are two ways of defining a modulus of convergence as a function from natural numbers to natural numbers:
As a function f such that for all n, if m ≥ f(n) then |xm − L| < 1/n.
As a function f such that for all n, if i, j ≥ f(n) then |xi − xj| < 1/n.
The latter definition is often employed in constructive settings, where the limit may actually be identified with the convergent sequence. Some authors use an alternate definition that replaces 1/n with 2−n.
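In the first sense, a modulus f guarantees |xm − L| < 1/n whenever m ≥ f(n); this can be spot-checked on an initial segment of the sequence. A sketch (function names ours), using xk = 1/k with limit 0 and the modulus f(n) = n + 1:

```python
from fractions import Fraction

def check_modulus(seq, limit, modulus, max_n=50):
    """Check, for finitely many n, that `modulus` is a modulus of convergence:
    for all n >= 1, m >= modulus(n) implies |seq(m) - limit| < 1/n."""
    for n in range(1, max_n):
        for m in range(modulus(n), modulus(n) + 100):
            if abs(seq(m) - limit) >= Fraction(1, n):
                return False
    return True

# x_k = 1/k converges to 0; f(n) = n + 1 works, since m >= n + 1 gives 1/m < 1/n.
print(check_modulus(lambda k: Fraction(1, k), 0, lambda n: n + 1))   # True
# The constant function f(n) = 1 is not a modulus for this sequence.
print(check_modulus(lambda k: Fraction(1, k), 0, lambda n: 1))       # False
```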
See also
Modulus of continuity
References
Klaus Weihrauch (2000), Computable Analysis.
Constructivism (mathematics)
Computable analysis
Real analysis |
https://en.wikipedia.org/wiki/Cellular%20approximation%20theorem | In algebraic topology, the cellular approximation theorem states that a map between CW-complexes can always be taken to be of a specific type. Concretely, if X and Y are CW-complexes, and f : X → Y is a continuous map, then f is said to be cellular if f takes the n-skeleton of X to the n-skeleton of Y for all n, i.e. if f(Xn) ⊆ Yn for all n. The content of the cellular approximation theorem is then that any continuous map f : X → Y between CW-complexes X and Y is homotopic to a cellular map, and if f is already cellular on a subcomplex A of X, then we can furthermore choose the homotopy to be stationary on A. From an algebraic topological viewpoint, any map between CW-complexes can thus be taken to be cellular.
Idea of proof
The proof can be given by induction on n, with the statement that f is cellular on the skeleton Xn. For the base case n = 0, notice that every path-component of Y must contain a 0-cell. The image under f of a 0-cell of X can thus be connected to a 0-cell of Y by a path, and this gives a homotopy from f to a map which is cellular on the 0-skeleton of X.
Assume inductively that f is cellular on the (n − 1)-skeleton of X, and let en be an n-cell of X. The closure of en is compact in X, being the image of the characteristic map of the cell, and hence the image of the closure of en under f is also compact in Y. It is a general result on CW-complexes that any compact subspace of a CW-complex meets (that is, intersects non-trivially) only finitely many cells of the complex. Thus f(en) meets at most finitely many cells of Y, so we can take ek to be a cell of highest dimension meeting f(en). If k ≤ n, the map f is already cellular on en, since in this case only cells of the n-skeleton of Y meet f(en), so we may assume that k > n. It is then a technical, non-trivial result (see Hatcher) that the restriction of f to Xn−1 ∪ en can be homotoped relative to Xn−1 to a map missing a point p ∈ ek. Since Yk − {p} deformation retracts onto the subspace Yk − ek, we can further homotope the restriction of f to Xn−1 ∪ en to a map, say, g, with the property that g(en) misses the cell ek of Y, still relative to Xn−1. Since f(en) met only finitely many cells of Y to begin with, we can repeat this process finitely many times to make f(en) miss all cells of Y of dimension larger than n.
We repeat this process for every n-cell of X, fixing cells of the subcomplex A on which f is already cellular, and we thus obtain a homotopy (relative to the (n − 1)-skeleton of X and the n-cells of A) of the restriction of f to Xn to a map cellular on all cells of X of dimension at most n. Using then the homotopy extension property to extend this to a homotopy on all of X, and patching these homotopies together, will finish the proof. For details, consult Hatcher.
Applications
Some homotopy groups
The cellular approximation theorem can be used to immediately calculate some homotopy groups. In particular, if n < k then πn(Sk) = 0. Give Sn and Sk their canonical CW-structure, with one 0-cell each, and with one n-ce
https://en.wikipedia.org/wiki/BMC%20Medical%20Informatics%20and%20Decision%20Making | BMC Medical Informatics and Decision Making is an open-access scientific journal covering all areas of medical informatics, biostatistics, and computer science.
According to the Journal Citation Reports, the journal had a 2020 impact factor of 2.796. The editor-in-chief is Piero Lo Monaco.
References
External links
BMC Medical Informatics and Decision Making official website
Scopus.com - Source details: BMC Medical Informatics and Decision Making
Scimagojr.com - BMC Medical Informatics and Decision Making
Biomedical informatics journals
English-language journals
BioMed Central academic journals
Academic journals established in 2000
Computer science journals |
https://en.wikipedia.org/wiki/Transversal%20plane | In geometry, a transversal plane is a plane that intersects (not contains) two or more lines or planes. A transversal plane may also form dihedral angles.
Theorems
Transversal plane theorem for lines: Lines that intersect a transversal plane are parallel if and only if their alternate interior angles formed by the points of intersection are congruent.
Transversal plane theorem for planes: Planes intersected by a transversal plane are parallel if and only if their alternate interior dihedral angles are congruent.
Transversal line containment theorem: If a transversal line is contained in any plane other than the plane containing all the lines, then the plane is a transversal plane.
Multi-dimensional geometry |
https://en.wikipedia.org/wiki/Reduction%20formula | A reduction formula is used to represent some expression in a simpler form.
It may refer to:
Mathematics
Formulas of reduction, the decomposition of multiple integrals
Integration by reduction formulae, expressing an integral in terms of the same integral but in lower powers
Physics
LSZ reduction formula, a method to calculate S-matrix elements from the time-ordered correlation functions of a quantum field theory |
https://en.wikipedia.org/wiki/Ministry%20of%20Planning%20%28Cambodia%29 | The Ministry of Planning (MoP; ) is a government ministry responsible for socioeconomic planning and statistics management in Cambodia. The Ministry consists of two main parts; the General Directorate of Planning, and the National Institute of Statistics. The Ministry is located in Phnom Penh.
See also
Census
Demographics
Government of Cambodia
References
External links
Ministry of Planning homepage
National Institute of Statistics
Planning
Phnom Penh
Cambodia |
https://en.wikipedia.org/wiki/W.%20V.%20V.%20B.%20Ramalingam | Wunnava Venkata Varaha Buchi Ramalingam was an Indian independence activist from Berhampur in the Ganjam district of the erstwhile Madras Presidency of British India. He was a mathematics teacher and vice-principal at Khallikote College, president of the Bengal and Nagpur Railway Associations and also a road contractor.
References
1884 births
Indian schoolteachers
Indian independence activists from Tamil Nadu
People from Odisha
1962 deaths
Presidency College, Chennai alumni |
https://en.wikipedia.org/wiki/Helical%20boundary%20conditions | In mathematics, helical boundary conditions are a variation on periodic boundary conditions. Helical boundary conditions provide a method for determining the index of a lattice site's neighbours when each lattice site is indexed by just a single coordinate. On a lattice of dimension d where the lattice sites are numbered from 1 to N and L is the width (i.e. number of elements per row) of the lattice in all but the last dimension, the neighbors of site i are:
(i ± 1) mod N, (i ± L) mod N, (i ± L2) mod N, …, (i ± Ld−1) mod N,
where the modulo operator is used. It is not necessary that N = Ld. Helical boundary conditions make it possible to use only one coordinate to describe arbitrary-dimensional lattices.
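Under this convention the neighbours of site i are i ± L^k (mod N) for k = 0, …, d − 1, which is a few lines of code; a sketch with 0-indexed sites (function name ours):

```python
def helical_neighbors(i, N, L, d):
    """Neighbours of site i (0-indexed) on a d-dimensional lattice of N sites
    with row width L, under helical boundary conditions: i +/- L**k (mod N)."""
    return sorted({(i + L ** k) % N for k in range(d)} |
                  {(i - L ** k) % N for k in range(d)})

# 2D lattice of 4x4 = 16 sites: the neighbours of site 0 are 1 and 15
# (left/right) and 4 and 12 (up/down); wrap-around comes from the single modulo.
print(helical_neighbors(0, 16, 4, 2))   # [1, 4, 12, 15]
```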
References
Boundary conditions |
https://en.wikipedia.org/wiki/Ran%20Raz | Ran Raz () is a computer scientist who works in the area of computational complexity theory. He was a professor in the faculty of mathematics and computer science at the Weizmann Institute. He is now a professor of computer science at Princeton University.
Ran Raz received his Ph.D. at the Hebrew University of Jerusalem in 1992 under Avi Wigderson and Michael Ben-Or.
Ran Raz is well known for his work on interactive proof systems. His two most-cited papers are on multi-prover interactive proofs and on probabilistically checkable proofs.
Ran Raz received the Erdős Prize in 2002. His papers have received best-paper awards at the top conferences in theoretical computer science: in 2004, at the ACM Symposium on Theory of Computing (STOC) and at the IEEE Conference on Computational Complexity (CCC), and in 2008 at the IEEE Symposium on Foundations of Computer Science (FOCS).
Notes
Year of birth missing (living people)
Living people
Theoretical computer scientists
Academic staff of Weizmann Institute of Science
Israeli computer scientists
Erdős Prize recipients |
https://en.wikipedia.org/wiki/Equidimensionality | In mathematics, especially in topology, equidimensionality is the property of a space in which the local dimension is the same everywhere.
Definition (topology)
A topological space X is said to be equidimensional if for all points p in X, the dimension at p, that is dim p(X), is constant. The Euclidean space is an example of an equidimensional space. The disjoint union of two spaces X and Y (as topological spaces) of different dimension is an example of a non-equidimensional space.
Definition (algebraic geometry)
A scheme S is said to be equidimensional if every irreducible component has the same Krull dimension. For example, the affine scheme Spec k[x,y,z]/(xy,xz), which intuitively looks like a line intersecting a plane, is not equidimensional.
Cohen–Macaulay ring
An affine algebraic variety whose coordinate ring is a Cohen–Macaulay ring is equidimensional.
References
Mathematical terminology
Topology |
https://en.wikipedia.org/wiki/Pak%20Chol-ryong | Pak Chol-ryong (born 3 November 1988) is a North Korean football player.
Between 2008 and 2010 he played for FC Concordia Basel.
Club career statistics
References
External links
football.ch profile
Living people
North Korean men's footballers
North Korea men's international footballers
North Korean expatriate sportspeople in Switzerland
FC Concordia Basel players
1988 births
Men's association football midfielders |
https://en.wikipedia.org/wiki/S.%20L.%20Loney | Sydney Luxton Loney, M.A. (16 March 1860 – 16 May 1939) was a Professor of Mathematics at the Royal Holloway College, Egham, Surrey, and a fellow of Sidney Sussex College, Cambridge, England. He authored a number of mathematics texts, some of which have been reprinted numerous times. He is known as an early influence on Srinivasa Ramanujan.
Loney was educated at Maidstone Grammar School, in Tonbridge and at Sidney Sussex College, Cambridge, where he graduated with a B.A. as 3rd Wrangler in 1882. His books on Plane Trigonometry and Coordinate Geometry are very popular among senior high school students in India who are preparing for engineering entrance exams like JEE Advanced.
Bibliography
References
Sources
External links
Plane Trigonometry, 1st Edition (1893) at the Internet Archive
Plane Trigonometry, 2nd Edition (1895) at the Internet Archive
1860 births
1939 deaths
19th-century English mathematicians
20th-century British mathematicians
Academics of Royal Holloway, University of London
Fellows of Sidney Sussex College, Cambridge |
https://en.wikipedia.org/wiki/Dudley%27s%20theorem | In probability theory, Dudley's theorem is a result relating the expected upper bound and regularity properties of a Gaussian process to its entropy and covariance structure.
History
The result was first stated and proved by V. N. Sudakov, as pointed out in a paper by Richard M. Dudley. Dudley had earlier credited Volker Strassen with making the connection between entropy and regularity.
Statement
Let (Xt)t∈T be a Gaussian process and let dX be the pseudometric on T defined by
dX(s, t) = √(E[(Xs − Xt)²]).
For ε > 0, denote by N(T, dX; ε) the entropy number, i.e. the minimal number of (open) dX-balls of radius ε required to cover T. Then
E[supt∈T Xt] ≤ 24 ∫₀^∞ √(log N(T, dX; ε)) dε.
Furthermore, if the entropy integral on the right-hand side converges, then X has a version with almost all sample path bounded and (uniformly) continuous on (T, dX).
References
(See chapter 11)
Entropy
Theorems regarding stochastic processes |
https://en.wikipedia.org/wiki/Sample%20maximum%20and%20minimum | In statistics, the sample maximum and sample minimum, also called the largest observation and smallest observation, are the values of the greatest and least elements of a sample. They are basic summary statistics, used in descriptive statistics such as the five-number summary and Bowley's seven-figure summary and the associated box plot.
The minimum and the maximum value are the first and last order statistics (often denoted X(1) and X(n) respectively, for a sample size of n).
If the sample has outliers, they necessarily include the sample maximum or sample minimum, or both, depending on whether they are extremely high or low. However, the sample maximum and minimum need not be outliers, if they are not unusually far from other observations.
Robustness
The sample maximum and minimum are the least robust statistics: they are maximally sensitive to outliers.
This can either be an advantage or a drawback: if extreme values are real (not measurement errors), and of real consequence, as in applications of extreme value theory such as building dikes or financial loss, then outliers (as reflected in sample extrema) are important. On the other hand, if outliers have little or no impact on actual outcomes, then using non-robust statistics such as the sample extrema simply cloud the statistics, and robust alternatives should be used, such as other quantiles: the 10th and 90th percentiles (first and last decile) are more robust alternatives.
Derived statistics
In addition to being a component of every statistic that uses all elements of the sample, the sample extrema are important parts of the range, a measure of dispersion, and mid-range, a measure of location. They also realize the maximum absolute deviation: one of them is the furthest point from any given point, particularly a measure of center such as the median or mean.
Applications
Smooth maximum
For a sample set, the maximum function is non-smooth and thus non-differentiable. For optimization problems that occur in statistics it often needs to be approximated by a smooth function that is close to the maximum of the set.
A smooth maximum, for example,
g(x1, x2, …, xn) = log( exp(x1) + exp(x2) + … + exp(xn) )
is a good approximation of the sample maximum.
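The log-sum-exp expression above can be sharpened with an inverse-temperature parameter β (our addition, not part of the formula as stated): the approximation satisfies max ≤ g ≤ max + log(n)/β, so it tightens as β grows. A small sketch:

```python
import math

def smooth_max(xs, beta=1.0):
    """Log-sum-exp approximation to max(xs); larger beta tightens the bound:
    max(xs) <= smooth_max(xs, beta) <= max(xs) + log(len(xs)) / beta."""
    m = max(xs)  # subtract the max before exponentiating, for numerical stability
    return m + math.log(sum(math.exp(beta * (x - m)) for x in xs)) / beta

xs = [1.0, 3.5, 2.0, 3.4]
for beta in (1.0, 10.0, 100.0):
    print(round(smooth_max(xs, beta), 4))   # approaches max(xs) = 3.5
```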
Summary statistics
The sample maximum and minimum are basic summary statistics, showing the most extreme observations, and are used in the five-number summary and a version of the seven-number summary and the associated box plot.
Prediction interval
The sample maximum and minimum provide a non-parametric prediction interval:
in a sample of size n from a population, or more generally an exchangeable sequence of random variables, each observation is equally likely to be the maximum or minimum.
Thus if one has a sample of size n and one picks another observation, then this has probability 1/(n + 1) of being the largest value seen so far, probability 1/(n + 1) of being the smallest value seen so far, and thus the other (n − 1)/(n + 1) of the time, falls between the sample maximum and sa
https://en.wikipedia.org/wiki/Quantity%20calculus | Quantity calculus is the formal method for describing the mathematical relations between abstract physical quantities.
Its roots can be traced to Fourier's concept of dimensional analysis (1822). The basic axiom of quantity calculus is Maxwell's description of a physical quantity as the product of a "numerical value" and a "reference quantity" (i.e. a "unit quantity" or a "unit of measurement"). De Boer summarized the multiplication, division, addition, association and commutation rules of quantity calculus and proposed that a full axiomatization has yet to be completed.
Measurements are expressed as products of a numeric value with a unit symbol, e.g. "12.7 m". Unlike algebra, the unit symbol represents a measurable quantity such as a meter, not an algebraic variable.
A careful distinction needs to be made between abstract quantities and measurable quantities. The multiplication and division rules of quantity calculus are applied to SI base units (which are measurable quantities) to define SI derived units, including dimensionless derived units, such as the radian (rad) and steradian (sr) which are useful for clarity, although they are both algebraically equal to 1. Thus there is some disagreement about whether it is meaningful to multiply or divide units. Emerson suggests that if the units of a quantity are algebraically simplified, they then are no longer units of that quantity. Johansson proposes that there are logical flaws in the application of quantity calculus, and that the so-called dimensionless quantities should be understood as "unitless quantities".
How to use quantity calculus for unit conversion and keeping track of units in algebraic manipulations is explained in the handbook Quantities, Units and Symbols in Physical Chemistry.
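The multiplication and division rules can be sketched by carrying a dict of base-unit exponents alongside the numerical value; a minimal illustration of the idea (the class design is ours, not taken from the handbook):

```python
class Quantity:
    """Minimal sketch of quantity calculus: a numerical value times a unit,
    with the unit kept as a dict of base-unit exponents, e.g. {'m': 1, 's': -2}."""
    def __init__(self, value, units):
        self.value = value
        self.units = {u: e for u, e in units.items() if e}  # drop zero exponents

    def __mul__(self, other):
        units = dict(self.units)
        for u, e in other.units.items():
            units[u] = units.get(u, 0) + e      # exponents add under multiplication
        return Quantity(self.value * other.value, units)

    def __truediv__(self, other):
        inv = Quantity(1 / other.value, {u: -e for u, e in other.units.items()})
        return self * inv                        # division = multiply by inverse

    def __repr__(self):
        unit_str = ' '.join(f"{u}^{e}" if e != 1 else u
                            for u, e in sorted(self.units.items()))
        return f"{self.value} {unit_str}".strip()

distance = Quantity(12.7, {'m': 1})
time = Quantity(2.0, {'s': 1})
print(distance / time)          # 6.35 m s^-1
```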
Notes
References
Further reading
International Organization for Standardization. ISO 80000-1:2009 Quantities and Units. Part 1 - General.. ISO. Geneva
Physical quantities |
https://en.wikipedia.org/wiki/Discretization%20of%20continuous%20features | In statistics and machine learning, discretization refers to the process of converting or partitioning continuous attributes, features or variables to discretized or nominal attributes/features/variables/intervals. This can be useful when creating probability mass functions – formally, in density estimation. It is a form of discretization in general and also of binning, as in making a histogram. Whenever continuous data is discretized, there is always some amount of discretization error. The goal is to reduce the amount to a level considered negligible for the modeling purposes at hand.
Typically data is discretized into partitions of K equal lengths/width (equal intervals) or K% of the total data (equal frequencies).
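Both schemes can be sketched in a few lines: equal-width bins split the observed range into K intervals of equal length, while equal-frequency bins split the sorted data into K roughly equal groups (function names are ours):

```python
def equal_width_bins(data, k):
    """Assign each value to one of k equal-width intervals over [min, max].
    Assumes max(data) > min(data)."""
    lo, hi = min(data), max(data)
    width = (hi - lo) / k
    return [min(int((x - lo) / width), k - 1) for x in data]

def equal_frequency_bins(data, k):
    """Assign bins so that each bin holds roughly len(data)/k values."""
    order = sorted(range(len(data)), key=lambda i: data[i])
    bins = [0] * len(data)
    for rank, i in enumerate(order):
        bins[i] = rank * k // len(data)
    return bins

data = [1.0, 2.0, 2.5, 9.0, 9.5, 10.0]
print(equal_width_bins(data, 3))       # [0, 0, 0, 2, 2, 2]
print(equal_frequency_bins(data, 3))   # [0, 0, 1, 1, 2, 2]
```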
Mechanisms for discretizing continuous data include Fayyad & Irani's MDL method, which uses mutual information to recursively define the best bins, as well as CAIM, CACC, Ameva, and many others.
Many machine learning algorithms are known to produce better models by discretizing continuous attributes.
Software
This is a partial list of software that implement the MDL algorithm.
discretize4crf tool designed to work with popular CRF implementations (C++)
mdlp in the R package discretization
Discretize in the R package RWeka
See also
Density estimation
Continuity correction
References
Estimation of densities
Statistical data coding |
https://en.wikipedia.org/wiki/Douglas%27%20lemma | In operator theory, an area of mathematics, Douglas' lemma relates factorization, range inclusion, and majorization of Hilbert space operators. It is generally attributed to Ronald G. Douglas, although Douglas acknowledges that aspects of the result may already have been known. The statement of the result is as follows:
Theorem: If A and B are bounded operators on a Hilbert space H, the following are equivalent:
ran(A) ⊆ ran(B)
AA* ≤ λ²BB* for some λ ≥ 0
There exists a bounded operator C on H such that A = BC.
Moreover, if these equivalent conditions hold, then there is a unique operator C such that
‖C‖² = inf{μ : AA* ≤ μBB*}, ker(A) = ker(C), and ran(C) is contained in the closure of ran(B*).
A generalization of Douglas' lemma for unbounded operators on a Banach space was proved by Forough (2014).
See also
Positive operator
References
Operator theory |
https://en.wikipedia.org/wiki/Fibonacci%20cube | In the mathematical field of graph theory, the Fibonacci cubes or Fibonacci networks are a family of undirected graphs with rich recursive properties derived from their origin in number theory. Mathematically they are similar to the hypercube graphs, but with a Fibonacci number of vertices. Fibonacci cubes were first explicitly defined in the context of interconnection topologies for connecting parallel or distributed systems. They have also been applied in chemical graph theory.
The Fibonacci cube may be defined in terms of Fibonacci codes and Hamming distance, independent sets of vertices in path graphs, or via distributive lattices.
Definition
Like the hypercube graph, the vertices of the Fibonacci cube of order n may be labeled with bitstrings of length n, in such a way that two vertices are adjacent whenever their labels differ in a single bit. However, in a Fibonacci cube, the only labels that are allowed are bitstrings with no two consecutive 1 bits. If the labels of the hypercube are interpreted as binary numbers, the labels in the Fibonacci cube are a subset, the fibbinary numbers. There are Fn + 2 labels possible, where Fn denotes the nth Fibonacci number, and therefore there are Fn + 2 vertices in the Fibonacci cube of order n.
The nodes of such a network may be assigned consecutive integers from 0 to Fn + 2 − 1; the bitstrings corresponding to these numbers are given by their Zeckendorf representations.
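The vertex labels can be generated directly from the no-two-consecutive-1s condition; a small sketch (function name ours) confirming the Fn + 2 vertex count 2, 3, 5, 8, 13, …:

```python
def fibonacci_cube_labels(n):
    """All length-n bitstrings with no two consecutive 1s (the fibbinary labels)."""
    labels = ['']
    for _ in range(n):
        # extend each label by 0, and by 1 only where no 1 would be doubled
        labels = [s + '0' for s in labels] + \
                 [s + '1' for s in labels if not s.endswith('1')]
    return labels

# The Fibonacci cube of order n has F_{n+2} vertices.
for n in range(1, 6):
    print(n, len(fibonacci_cube_labels(n)))   # 2, 3, 5, 8, 13
```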
Algebraic structure
The Fibonacci cube of order n is the simplex graph of the complement graph of an n-vertex path graph. That is, each vertex in the Fibonacci cube represents a clique in the path complement graph, or equivalently an independent set in the path itself; two Fibonacci cube vertices are adjacent if the cliques or independent sets that they represent differ by the addition or removal of a single element. Therefore, like other simplex graphs, Fibonacci cubes are median graphs and more generally partial cubes. The median of any three vertices in a Fibonacci cube may be found by computing the bitwise majority function of the three labels; if each of the three labels has no two consecutive 1 bits, the same is true of their majority.
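The median-by-majority rule above is a one-liner on integer labels; a sketch (names ours) also checking the closure property stated in the text, that the majority of three labels with no two consecutive 1 bits again has no two consecutive 1 bits:

```python
def majority(a, b, c):
    """Bitwise majority function of three labels, given as integers."""
    return (a & b) | (a & c) | (b & c)

# Three Fibonacci-cube labels (no two consecutive 1 bits); their median in the
# cube is the bitwise majority, which again has no two consecutive 1 bits.
a, b, c = 0b0101, 0b0100, 0b0001
m = majority(a, b, c)
print(bin(m))                     # 0b101
assert (m & (m << 1)) == 0        # no two consecutive 1 bits
```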
The Fibonacci cube is also the graph of a distributive lattice that may be obtained via Birkhoff's representation theorem from a zigzag poset, a partially ordered set defined by an alternating sequence of order relations a < b > c < d > e < f > ... There is also an alternative graph-theoretic description of the same lattice: the independent sets of any bipartite graph may be given a partial order in which one independent set is less than another if they differ by removing elements from one side of the bipartition and adding elements to the other side of the bipartition; with this order, the independent sets form a distributive lattice, and applying this construction to a path graph results in the lattice associated with the Fibonacci cube.
Properties and algorithms
The Fibonacci cube of o |
https://en.wikipedia.org/wiki/Supersolvable%20arrangement | In mathematics, a supersolvable arrangement is a hyperplane arrangement which has a maximal flag with only modular elements. Equivalently, the intersection semilattice of the arrangement is a supersolvable lattice, in the sense of Richard P. Stanley. As shown by Hiroaki Terao, a complex hyperplane arrangement is supersolvable if and only if its complement is fiber-type.
Examples include arrangements associated with Coxeter groups of type A and B.
It is known that the Orlik–Solomon algebra of a supersolvable arrangement is a Koszul algebra; whether the converse is true is an open problem.
References
Discrete geometry
Matroid theory |
https://en.wikipedia.org/wiki/Comonotonicity | In probability theory, comonotonicity mainly refers to the perfect positive dependence between the components of a random vector, essentially saying that they can be represented as increasing functions of a single random variable. In two dimensions it is also possible to consider perfect negative dependence, which is called countermonotonicity.
Comonotonicity is also related to the comonotonic additivity of the Choquet integral.
The concept of comonotonicity has applications in financial risk management and actuarial science. In particular, the sum of the components is the riskiest if the joint probability distribution of the random vector is comonotonic. Furthermore, the α-quantile of the sum equals the sum of the α-quantiles of its components, hence comonotonic random variables are quantile-additive. In practical risk management terms it means that there is minimal (or eventually no) variance reduction from diversification.
Definitions
Comonotonicity of subsets of Rn
A subset S of Rn is called comonotonic (sometimes also nondecreasing) if, for all x = (x1, …, xn) and y = (y1, …, yn) in S with xi < yi for some i ∈ {1, …, n}, it follows that xj ≤ yj for all j ∈ {1, …, n}.
This means that S is a totally ordered set.
Comonotonicity of probability measures on Rn
Let μ be a probability measure on the n-dimensional Euclidean space Rn and let Fμ denote its multivariate cumulative distribution function, that is
Fμ(x1, …, xn) = μ({(y1, …, yn) ∈ Rn : y1 ≤ x1, …, yn ≤ xn}).
Furthermore, let Fμ,1, …, Fμ,n denote the cumulative distribution functions of the n one-dimensional marginal distributions of μ, that means
Fμ,i(x) = μ({(y1, …, yn) ∈ Rn : yi ≤ x})
for every x ∈ R and i ∈ {1, …, n}. Then μ is called comonotonic, if
Fμ(x1, …, xn) = min{Fμ,1(x1), …, Fμ,n(xn)} for all (x1, …, xn) ∈ Rn.
Note that the probability measure is comonotonic if and only if its support is comonotonic according to the above definition.
Comonotonicity of Rn-valued random vectors
An Rn-valued random vector X = (X1, …, Xn) is called comonotonic, if its multivariate distribution (the pushforward measure) is comonotonic, this means
P(X1 ≤ x1, …, Xn ≤ xn) = min{P(X1 ≤ x1), …, P(Xn ≤ xn)} for all (x1, …, xn) ∈ Rn.
Properties
An Rn-valued random vector X = (X1, …, Xn) is comonotonic if and only if it can be represented as
(X1, …, Xn) =d (FX1−1(U), …, FXn−1(U)),
where =d stands for equality in distribution, the FXi−1 on the right-hand side are the left-continuous generalized inverses of the cumulative distribution functions FX1, …, FXn, and U is a uniformly distributed random variable on the unit interval. More generally, a random vector is comonotonic if and only if it agrees in distribution with a random vector where all components are non-decreasing functions (or all are non-increasing functions) of the same random variable.
Upper bounds
Upper Fréchet–Hoeffding bound for cumulative distribution functions
Let X = (X1, …, Xn) be an Rn-valued random vector. Then, for every (x1, …, xn) ∈ Rn,
P(X1 ≤ x1, …, Xn ≤ xn) ≤ min{P(X1 ≤ x1), …, P(Xn ≤ xn)},
hence
FX(x1, …, xn) ≤ min{FX1(x1), …, FXn(xn)},
with equality everywhere if and only if X is comonotonic.
Upper bound for the covariance
Let (X, Y) be a bivariate random vector such that the expected values of X, Y and the product XY exist. Let (X*, Y*) be a comonotonic bivariate random vector with the same one-dimensional marginal distributions as (X, Y). Then it follows from Höffding's formula for the covariance and the upper Fréchet–Hoeffding bound that
Cov(X, Y) ≤ Cov(X*, Y*)
and, correspondingly,
E[XY] ≤ E[X*Y*]
w |
https://en.wikipedia.org/wiki/Denny%20Gulick | Denny Gulick, born Sidney Lewis Gulick III, is a professor of mathematics at University of Maryland, College Park.
Life
Gulick obtained his PhD from Yale University; his main research interest was operator theory. He is the leader of College Mathematics in Maryland, and is active in statewide college education and policies.
He has written several textbooks, including Encounters with Chaos (1992) and six editions of Calculus with Analytic Geometry, with Robert Ellis.
Works
References
External links
American mathematicians
Living people
Year of birth missing (living people)
University of Maryland, College Park faculty
Yale University alumni |
https://en.wikipedia.org/wiki/Jack%20Silver | Jack Howard Silver (23 April 1942 – 22 December 2016) was a set theorist and logician at the University of California, Berkeley.
Born in Montana, he earned his Ph.D. in Mathematics at Berkeley in 1966 under Robert Vaught before taking a position at the same institution the following year. He held an Alfred P. Sloan Research Fellowship from 1970 to 1972. Silver made several contributions to set theory in the areas of large cardinals and the constructible universe L.
Contributions
In his 1975 paper "On the Singular Cardinals Problem", Silver proved that if a cardinal κ is singular with uncountable cofinality and 2^λ = λ^+ for all infinite cardinals λ < κ, then 2^κ = κ^+. Prior to Silver's proof, many mathematicians believed that a forcing argument would yield that the negation of the theorem is consistent with ZFC. He introduced the notion of a master condition, which became an important tool in forcing proofs involving large cardinals.
Silver proved the consistency of Chang's conjecture using the Silver collapse (which is a variation of the Levy collapse). He proved that, assuming the consistency of a supercompact cardinal, it is possible to construct a model where 2^κ = κ^{++} holds for some measurable cardinal κ. With the introduction of the so-called Silver machines he was able to give a fine structure free proof of Jensen's covering lemma. He is also credited with discovering Silver indiscernibles and generalizing the notion of a Kurepa tree (called Silver's Principle). He discovered 0# ("zero sharp") in his 1966 Ph.D. thesis, discussed in the graduate textbook Set Theory: An Introduction to Large Cardinals by Frank R. Drake.
Silver's original work involving large cardinals was perhaps motivated by the goal of showing the inconsistency of an uncountable measurable cardinal; instead he was led to discover indiscernibles in L assuming a measurable cardinal exists.
Selected publications
Silver, Jack H. (1971). "Some applications of model theory in set theory". Annals of Mathematical Logic 3(1), pp. 45–110.
Silver, Jack H. (1973). "The bearing of large cardinals on constructibility". In Studies in Model Theory, MAA Studies in Mathematics 8, pp. 158–182.
Silver, Jack H. (1974). "Indecomposable ultrafilters and 0#". In Proceedings of the Tarski Symposium, Proceedings of Symposia in Pure Mathematics XXV, pp. 357–363.
Silver, Jack (1975). "On the singular cardinals problem". In Proceedings of the International Congress of Mathematicians 1, pp. 265–268.
Silver, Jack H. (1980). "Counting the number of equivalence classes of Borel and coanalytic equivalence relations". Annals of Mathematical Logic 18(1), pp. 1–28.
References
External links
Jack Silver at Berkeley
1942 births
2016 deaths
20th-century American mathematicians
21st-century American mathematicians
American logicians
Set theorists
University of California, Berkeley alumni
University of California, Berkeley faculty |
https://en.wikipedia.org/wiki/Montgomery%27s%20pair%20correlation%20conjecture | In mathematics, Montgomery's pair correlation conjecture is a conjecture made by Hugh Montgomery (1973) that the pair correlation between pairs of zeros of the Riemann zeta function (normalized to have unit average spacing) is
$1 - \left(\frac{\sin(\pi u)}{\pi u}\right)^{2} + \delta(u),$
which, as Freeman Dyson pointed out to him, is the same as the pair correlation function of random Hermitian matrices.
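The conjectured density (away from the diagonal term) can be evaluated directly; below is a minimal helper, with the removable singularity at u = 0 handled explicitly:

```python
import math

def pair_correlation(u: float) -> float:
    """GUE/Montgomery pair correlation density 1 - (sin(pi*u)/(pi*u))^2."""
    if u == 0.0:
        return 0.0  # limit as u -> 0: nearby zeros "repel" each other
    s = math.sin(math.pi * u) / (math.pi * u)
    return 1.0 - s * s

# Near u = 0 the density vanishes quadratically (level repulsion); at integer
# spacings it equals 1, the value for uncorrelated points.
for u in (0.0, 0.25, 0.5, 1.0):
    print(round(pair_correlation(u), 4))  # 0.0, 0.1894, 0.5947, 1.0
```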
Conjecture
Under the assumption that the Riemann hypothesis is true, let $0<\alpha<\beta$ be fixed. Then, as $T\to\infty$,
$$\sum_{\substack{0<\gamma,\gamma'\le T \\ 2\pi\alpha/\log T\,\le\,\gamma-\gamma'\,\le\,2\pi\beta/\log T}} 1 \;\sim\; \left(\int_\alpha^\beta \left(1-\left(\frac{\sin\pi u}{\pi u}\right)^{2}\right)du + \delta_0([\alpha,\beta])\right)\frac{T}{2\pi}\log T,$$
where we count over pairs $\gamma,\gamma'$, each the imaginary part of a non-trivial zero of the zeta function, that is $\zeta(1/2+i\gamma)=0$. Also $\delta_0$ denotes the delta measure supported at 0.
Explanation
Informally, this means that the chance of finding a zero in a very short interval of length 2πL/log(T) at a distance 2πu/log(T) from a zero 1/2+iT is about L times the expression above. (The factor 2π/log(T) is a normalization factor that can be thought of informally as the average spacing between zeros with imaginary part about T.) Andrew Odlyzko showed that the conjecture was supported by large-scale computer calculations of the zeros. The conjecture has been extended to correlations of more than two zeros, and also to zeta functions of automorphic representations. In 1982 a student of Montgomery's, Ali Erhan Özlük, proved the pair correlation conjecture for some of Dirichlet's L-functions.
The connection with random unitary matrices could lead to a proof of the Riemann hypothesis (RH). The Hilbert–Pólya conjecture asserts that the zeros of the Riemann zeta function correspond to the eigenvalues of a linear operator, and implies RH. Some people think this is a promising approach.
Montgomery was studying the Fourier transform F(x) of the pair correlation function, and showed (assuming the Riemann hypothesis) that it was
equal to |x| for |x| < 1. His methods were unable to determine it for |x| ≥ 1, but he conjectured that it was equal to 1 for these x, which implies that the pair correlation function is as above. He was also motivated by the notion that the Riemann hypothesis is not a brick wall, and one should feel free to make stronger conjectures.
F(α) conjecture or strong pair correlation conjecture
Let $\gamma$ and $\gamma'$ again stand for the imaginary parts of non-trivial zeros of the Riemann zeta function. Montgomery introduced the function
$$F(\alpha) = \left(\frac{T}{2\pi}\log T\right)^{-1} \sum_{0<\gamma,\gamma'\le T} T^{i\alpha(\gamma-\gamma')}\, w(\gamma-\gamma')$$
for $\alpha\in\mathbb{R}$, $T\ge 2$ and the weight function $w(u)=\frac{4}{4+u^2}$.
Montgomery and Goldston proved under the Riemann hypothesis that for $0\le\alpha\le 1$ this function converges uniformly:
$$F(\alpha) = \left(1+o(1)\right)T^{-2\alpha}\log T + \alpha + o(1).$$
Montgomery conjectured, which is now known as the F(α) conjecture or strong pair correlation conjecture, that for $\alpha\ge 1$ we have uniform convergence
$$F(\alpha) = 1 + o(1)$$
for $\alpha$ in a bounded interval.
Numerical calculation by Odlyzko
In the 1980s, motivated by Montgomery's conjecture, Odlyzko began an intensive numerical study of the statistics of the zeros of ζ(s). He computed the distribution of the spacings between non-trivial zeros in detailed numerical calculations and demonstrated that Montgomery's conjecture would be true and that the distribution would agree with the distribution of spacings of GUE random matrix eigenvalues, using Cray X |
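Odlyzko's comparison can be mimicked in miniature (a sketch, not his computation: 2×2 matrices instead of high zeros of ζ, and plain Python instead of a Cray). For a 2×2 GUE matrix the eigenvalue gap has a closed form, and the normalized gaps show the characteristic level repulsion that Poisson-spaced points lack:

```python
import math
import random

random.seed(0)

def gue2_gap():
    """Eigenvalue gap of a random 2x2 GUE matrix [[a, b+ic], [b-ic, d]]."""
    a, d = random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)
    b, c = random.gauss(0.0, math.sqrt(0.5)), random.gauss(0.0, math.sqrt(0.5))
    # gap between the two eigenvalues of a 2x2 Hermitian matrix
    return math.sqrt((a - d) ** 2 + 4.0 * (b * b + c * c))

gaps = [gue2_gap() for _ in range(20_000)]
mean = sum(gaps) / len(gaps)

# Level repulsion: almost no normalized gaps fall below 0.1, whereas a
# Poisson process would put about 9.5% of its spacings there.
frac_small = sum(1 for s in gaps if s / mean < 0.1) / len(gaps)
print(frac_small < 0.02)  # True
```

For this toy ensemble the mean gap is exactly 4/√π ≈ 2.257, which the simulation reproduces.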
https://en.wikipedia.org/wiki/Omar%20Nazar | Omar Nazar () is an Afghan footballer who last played for VfL Lohbrügge.
National team statistics
References
Afghan men's footballers
Afghan expatriate men's footballers
1978 births
Living people
Men's association football midfielders
Footballers from Kabul
Afghanistan men's international footballers |
https://en.wikipedia.org/wiki/Najib%20Naderi | Najib Naderi (; born 22 February 1984) is an Afghan former footballer who played as a defender and made four appearances for the Afghanistan national team in 2003.
Career statistics
References
Living people
1984 births
Afghan men's footballers
Men's association football defenders
Afghanistan men's international footballers
Hamburger SV players
Altonaer FC von 1893 players
Afghan expatriate men's footballers
Afghan expatriate sportspeople in Germany
Expatriate men's footballers in Germany |
https://en.wikipedia.org/wiki/Davoud%20Yaqoubi | Davoud Yaqoubi (; December 17, 1982) is an Afghan footballer who last played for SC Concordia Hamburg.
National team statistics
External links
Afghan men's footballers
Afghanistan men's international footballers
Afghan expatriate men's footballers
1982 births
Living people
Men's association football forwards |
https://en.wikipedia.org/wiki/1933%E2%80%9334%20French%20Division%202 | Statistics of Division 2 in the 1933–34 season.
Overview
It was contested by 23 teams, and Red Star Paris and Olympique Alès won the championship.
League tables
Group North
Group South
External links
French Division 2 – List of Final Tables at Rec.Sport.Soccer Statistics Foundation
Ligue 2 seasons
France
2 |
https://en.wikipedia.org/wiki/1934%E2%80%9335%20French%20Division%202 | Statistics of Division 2 in the 1934–35 season.
Overview
It was contested by 16 teams, and Metz won the championship.
League standings
References
France - List of final tables (RSSSF)
Ligue 2 seasons
France
2 |
https://en.wikipedia.org/wiki/1935%E2%80%9336%20French%20Division%202 | Statistics of Division 2 in the 1935–36 football season.
Overview
It was contested by 19 teams, and Rouen won the championship.
League standings
References
France - List of final tables (RSSSF)
Ligue 2 seasons
France
2 |
https://en.wikipedia.org/wiki/1936%E2%80%9337%20French%20Division%202 | Statistics of Division 2 in the 1936–37 season.
Overview
It was contested by 17 teams, and Lens won the championship.
League standings
References
France - List of final tables (RSSSF)
Ligue 2 seasons
France
2 |
https://en.wikipedia.org/wiki/1937%E2%80%9338%20French%20Division%202 | Statistics of Division 2 in the 1937–38 season.
Overview
It was contested by 25 teams, and Le Havre won the championship.
Group stage
Nord
Ouest
Est
Sud
Playoff
Promotion group
Relegation group
References
France - List of final tables (RSSSF)
Ligue 2 seasons
France
2 |
https://en.wikipedia.org/wiki/1938%E2%80%9339%20French%20Division%202 | Statistics of Division 2 in the 1938–39 season.
Overview
It was contested by 23 teams, and Red Star Paris won the championship.
League standings
References
France - List of final tables (RSSSF)
Ligue 2 seasons
France
2 |
https://en.wikipedia.org/wiki/1945%E2%80%9346%20French%20Division%202 | Statistics of Division 2 in the 1945–46 season.
Overview
It was contested by 28 teams, and Nancy and Montpellier won the championship.
League tables
Group North
Group South
References
France - List of final tables (RSSSF)
French
2
Ligue 2 seasons |
https://en.wikipedia.org/wiki/1946%E2%80%9347%20French%20Division%202 | Statistics of Division 2 in the 1946–47 season.
Overview
It was contested by 22 teams, and Sochaux-Montbéliard won the championship.
League standings
References
France - List of final tables (RSSSF)
French
2
Ligue 2 seasons |
https://en.wikipedia.org/wiki/1947%E2%80%9348%20French%20Division%202 | Statistics of Division 2 in the 1947–48 season.
Overview
It was contested by 20 teams, and Nice won the championship.
Teams
A total of twenty teams contested the league, including sixteen sides from the 1946–47 season and four sides relegated from the 1946–47 French Division 1. The league was contested in a double round robin format, with each club playing every other club twice, for a total of 38 rounds. Two points were awarded for wins and one point for draws.
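The format described above can be sanity-checked with trivial arithmetic: a double round robin among 20 clubs gives each club 38 fixtures and 380 matches in total.

```python
teams = 20
rounds_per_club = 2 * (teams - 1)    # each club meets every other home and away
total_matches = teams * (teams - 1)  # ordered pairs: every pairing played twice

print(rounds_per_club, total_matches)  # 38 380
```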
League table
References
France - List of final tables (RSSSF)
French
2
Ligue 2 seasons |
https://en.wikipedia.org/wiki/1948%E2%80%9349%20French%20Division%202 | Statistics of Division 2 in the 1948–49 season.
Overview
It was contested by 19 teams, and Lens won the championship.
League standings
1. FC Saarbrücken took part in the competition as a guest team and finished in first place with a reported record of 26 wins, 7 draws and 5 defeats from 38 games, but were refused promotion or further participation.
References
External links
France - List of final tables (RSSSF)
French
2
Ligue 2 seasons |
https://en.wikipedia.org/wiki/1949%E2%80%9350%20French%20Division%202 | Statistics of Division 2 in the 1949–50 season.
Overview
It was contested by 18 teams, and Nîmes Olympique won the championship.
League standings
References
France - List of final tables (RSSSF)
French
2
Ligue 2 seasons |
https://en.wikipedia.org/wiki/1950%E2%80%9351%20French%20Division%202 | Statistics of Division 2 in the 1950–51 season.
Overview
It was contested by 18 teams, and Olympique Lyonnais won the championship.
League standings
References
France - List of final tables (RSSSF)
French
2
Ligue 2 seasons |
https://en.wikipedia.org/wiki/1951%E2%80%9352%20French%20Division%202 | Statistics of Division 2 in the 1951–52 season.
Overview
It was contested by 18 teams, and Stade Français won the championship.
League standings
References
France - List of final tables (RSSSF)
French
2
Ligue 2 seasons |
https://en.wikipedia.org/wiki/1952%E2%80%9353%20French%20Division%202 | Statistics of Division 2 in the 1952–53 season.
Overview
It was contested by 18 teams, and Toulouse won the championship.
League standings
References
France - List of final tables (RSSSF)
French
2
Ligue 2 seasons |
https://en.wikipedia.org/wiki/1953%E2%80%9354%20French%20Division%202 | Statistics of Division 2 in the 1953–54 season.
Overview
It was contested by 20 teams, and Olympique Lyonnais won the championship.
League standings
References
France - List of final tables (RSSSF)
French
2
Ligue 2 seasons |
https://en.wikipedia.org/wiki/1954%E2%80%9355%20French%20Division%202 | Statistics of Division 2 in the 1954–55 season.
Overview
It was contested by 20 teams, and Sedan Torcy won the championship.
League standings
References
France - List of final tables (RSSSF)
French
2
Ligue 2 seasons |
https://en.wikipedia.org/wiki/1955%E2%80%9356%20French%20Division%202 | Statistics of Division 2 in the 1955–56 season.
Overview
It was contested by 20 teams, and Stade Rennais won the championship.
League standings
References
France - List of final tables (RSSSF)
French
2
Ligue 2 seasons |
https://en.wikipedia.org/wiki/1956%E2%80%9357%20French%20Division%202 | Statistics of Division 2 in the 1956–57 season.
Overview
It was contested by 20 teams, and Olympique Alès won the championship.
League standings
References
France - List of final tables (RSSSF)
French
2
Ligue 2 seasons |
https://en.wikipedia.org/wiki/1957%E2%80%9358%20French%20Division%202 | Statistics of Division 2 in the 1957–58 season.
Overview
It was contested by 22 teams, and Nancy won the championship.
League standings
References
France - List of final tables (RSSSF)
French
2
Ligue 2 seasons |
https://en.wikipedia.org/wiki/1958%E2%80%9359%20French%20Division%202 | Statistics of Division 2 in the 1958–59 season.
Overview
It was contested by 20 teams, and Le Havre won the championship.
League standings
References
France - List of final tables (RSSSF)
French
2
Ligue 2 seasons |
https://en.wikipedia.org/wiki/1959%E2%80%9360%20French%20Division%202 | Statistics of Division 2 in the 1959–60 season.
Overview
It was contested by 20 teams, and Grenoble won the championship.
League standings
References
France - List of final tables (RSSSF)
French
2
Ligue 2 seasons |