https://en.wikipedia.org/wiki/Common-method%20variance | In applied statistics (e.g., in the social sciences and psychometrics), common-method variance (CMV) is the spurious "variance that is attributable to the measurement method rather than to the constructs the measures are assumed to represent" or, equivalently, "systematic error variance shared among variables measured with and introduced as a function of the same method and/or source". For example, an electronic survey method might influence results differently for respondents unfamiliar with an electronic survey interface than for those familiar with it. If measures are affected by CMV or common-method bias, the intercorrelations among them can be inflated or deflated depending upon several factors. Although it is sometimes assumed that CMV affects all variables, evidence suggests that whether or not the correlation between two variables is affected by CMV is a function of both the method and the particular constructs being measured.
Remedies
Ex ante remedies
Several ex ante remedies exist that help to avoid or minimize possible common method variance. Important remedies have been compiled and discussed by Chang et al. (2010), Lindell & Whitney (2001) and Podsakoff et al. (2003).
Ex post remedies
Using simulated data sets, Richardson et al. (2009) investigate three ex post techniques to test for common method variance: the correlational marker technique, the confirmatory factor analysis (CFA) marker technique, and the unmeasured latent method construct (ULMC) technique. Only the CFA marker technique turns out to provide some value, whereas the commonly used Harman test does not. A comprehensive example of this technique has been demonstrated by Williams et al. (2010). Kock (2015) discusses a full collinearity test that successfully identifies common method bias in a model that nevertheless passes standard convergent and discriminant validity assessment criteria based on a CFA.
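The idea behind a full collinearity test can be sketched with variance inflation factors (VIFs): each indicator is regressed on all the others, and unusually high VIFs across the board signal variance shared by all measures, as a common method would produce (Kock's paper suggests 3.3 as a threshold). The sketch below is an illustration of the VIF computation only, not Kock's implementation; the simulated data and function name are assumptions.

```python
import numpy as np

def vifs(X):
    """Variance inflation factor of each column of X (n_samples x n_vars):
    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j
    on all remaining columns plus an intercept."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

# four indicators that all load on one shared "method" factor z
rng = np.random.default_rng(0)
z = rng.normal(size=200)
X = np.column_stack([z + 0.3 * rng.normal(size=200) for _ in range(4)])
print(vifs(X))  # all well above 3.3, flagging pervasive shared variance
```

With independent indicators the same computation yields VIFs near 1, so the contrast between the two cases is what the test exploits.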
References
Latent variable models
Statistical deviation and dispersion
Psychometrics |
https://en.wikipedia.org/wiki/2012%20Dhivehi%20League | Statistics of the Dhivehi League in the 2012 season. According to the FAM Calendar 2012, the Dhiraagu Dhivehi League was scheduled to start on April 18 and end on September 27. The winner qualifies for the AFC Cup, and the runner-up qualifies for the AFC Cup play-offs.
Teams
Club All Youth Linkage
Club Eagles
Club Valencia
Club Vyansa
Maziya S&RC
New Radiant SC
VB Addu FC (name rebranded from VB Sports Club)
Victory Sports Club
Personnel
Note: Flags indicate national team as has been defined under FIFA eligibility rules. Players may hold more than one non-FIFA nationality.
1 MediaNet on the front right side of the jersey, just opposite to the club logo.
2 Happy Market sponsor at the back, just below the jersey number. Happy Market sponsored their imported product "Oldenburger Milk".
League table
Format: In Round 1 and Round 2, all eight teams play against each other. The top six teams after Round 2 play against each other in Round 3. The team with the most total points after Round 3 is crowned Dhivehi League champion and qualifies for the AFC Cup. The top four teams qualify for the President's Cup. The bottom two teams after Round 2 play against the top two teams of the Second Division in the Dhivehi League Qualification for places in next year's Dhivehi League.
Standings of round 1
Standings of round 2
Standings of round 3
Final standings
Results
Round 1 results
A total of 28 matches were played in this round.
Round 2 results
A total of 28 matches were played in this round.
Round 3 results
A total of 15 matches were played in this round.
Season statistics
Scorers
Assists
Hat-tricks
4 Player scored 4 goals
Scoring
First goal of the season: Ashfaq for New Radiant SC against VB Addu FC (18 April 2012)
Fastest goal of the season: 3 minutes 8 seconds – Shamweel for VB Addu FC against Victory Sports Club (1 June 2012)
Largest winning margin: 5 goals
Club All Youth Linkage 0–5 Vyansa (30 May 2012)
Highest scoring game: 9 goals
VB Addu FC 6–3 Club Eagles (18 May 2012)
Most goals scored in a match by a single team: 6 goals
VB Addu FC 6–3 Club Eagles (18 May 2012)
Most goals scored in a match by a losing team: 3 goals
VB Addu FC 6–3 Club Eagles (18 May 2012)
Clean sheets by club
Most clean sheets: 5
New Radiant SC
Fewest clean sheets: 0
Club All Youth Linkage
Clean sheets by goalkeepers
Most clean sheets: 4
Imran Mohamed (New Radiant SC)
Fewest clean sheets: 0
Hussain Habeeb (VB Addu FC)
Ibrahim Siyad (Club All Youth Linkage)
Abdulla Ziyazan (VB Addu FC)
Athif Ahmed (Maziya S&RC)
Discipline
Worst overall disciplinary record (1 pt per yellow card, 2 pts per red card):
VB Addu FC – 23 points (17 yellow & 3 red cards)
Best overall disciplinary record:
Maziya S&RC – 9 points (7 yellow & 1 red card)
Vyansa – 9 points (9 yellow cards)
Most yellow cards (club): 17 – VB Addu FC
Most yellow cards (player):
4 – Ali Imran (Club Valencia)
Most red cards (club): 3 – VB Addu FC
Most red cards/suspensions (player):
2 – Sobah Mohamed (VB Addu FC)
Promotion/relegatio |
https://en.wikipedia.org/wiki/Unevenly%20spaced%20time%20series | In statistics, signal processing, and econometrics, an unevenly (or unequally or irregularly) spaced time series is a sequence of observation time and value pairs (t_n, X_n) in which the spacing of observation times is not constant.
Unevenly spaced time series naturally occur in many industrial and scientific domains: natural disasters such as earthquakes, floods, or volcanic eruptions typically occur at irregular time intervals. In observational astronomy, measurements such as spectra of celestial objects are taken at times determined by weather conditions, availability of observation time slots, and suitable planetary configurations. In clinical trials (or more generally, longitudinal studies), a patient's state of health may be observed only at irregular time intervals, and different patients are usually observed at different points in time. Wireless sensors in the Internet of things often transmit information only when a state changes to conserve battery life. There are many more examples in climatology, ecology, high-frequency finance, geology, and signal processing.
Analysis
A common approach to analyzing unevenly spaced time series is to transform the data into equally spaced observations using some form of interpolation (most often linear) and then to apply existing methods for equally spaced data. However, transforming data in this way can introduce a number of significant and hard-to-quantify biases, especially if the spacing of observations is highly irregular.
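The interpolation approach just described can be sketched in a few lines; the function name and sample data below are illustrative. Each grid point is filled by linearly interpolating between the two neighboring observations.

```python
from bisect import bisect_right

def to_regular_grid(t, x, dt):
    """Linearly interpolate an unevenly spaced series (t, x) onto a grid
    with spacing dt spanning [t[0], t[-1]]; t must be strictly increasing."""
    grid, values = [], []
    s = t[0]
    while s <= t[-1] + 1e-12:
        i = bisect_right(t, s)
        if i == len(t):                # s is at (or past) the last observation
            v = x[-1]
        else:                          # blend the two bracketing observations
            w = (s - t[i - 1]) / (t[i] - t[i - 1])
            v = (1 - w) * x[i - 1] + w * x[i]
        grid.append(s)
        values.append(v)
        s += dt
    return grid, values

t = [0.0, 0.4, 1.1, 2.0, 3.7]          # irregular observation times
x = [1.0, 3.0, 2.0, 5.0, 4.0]
grid, values = to_regular_grid(t, x, 1.0)
print(grid)  # [0.0, 1.0, 2.0, 3.0]
```

Note how the resampled series smooths over whatever happened between observations, which is exactly the source of the biases mentioned above.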
Ideally, unevenly spaced time series are analyzed in their unaltered form. However, most of the basic theory for time series analysis was developed at a time when limitations in computing resources favored an analysis of equally spaced data, since in this case efficient linear algebra routines can be used and many problems have an explicit solution. As a result, fewer methods currently exist specifically for analyzing unevenly spaced time series data.
The least-squares spectral analysis methods are commonly used for computing a frequency spectrum from such time series without any data alterations.
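A least-squares periodogram can be computed directly on the irregular sample times, with no resampling, by fitting a sinusoid at each candidate frequency. This is a bare-bones sketch of the idea (full least-squares spectral analysis methods such as Lomb–Scargle refine it); the data and names are illustrative.

```python
import math

def ls_power(t, x, omegas):
    """Least-squares periodogram: for each angular frequency w, fit
    a*cos(w t) + b*sin(w t) to the mean-removed samples by ordinary
    least squares and report the explained sum of squares."""
    m = sum(x) / len(x)
    y = [v - m for v in x]
    power = []
    for w in omegas:
        c = [math.cos(w * ti) for ti in t]
        s = [math.sin(w * ti) for ti in t]
        # 2x2 normal equations for the coefficients (a, b)
        cc = sum(ci * ci for ci in c)
        ss = sum(si * si for si in s)
        cs = sum(ci * si for ci, si in zip(c, s))
        cy = sum(ci * yi for ci, yi in zip(c, y))
        sy = sum(si * yi for si, yi in zip(s, y))
        det = cc * ss - cs * cs
        a = (ss * cy - cs * sy) / det
        b = (cc * sy - cs * cy) / det
        power.append(a * cy + b * sy)  # explained sum of squares at w
    return power

# a sinusoid at angular frequency 2.0, observed at irregular times
t = [0.37 * i + 0.15 * math.sin(i) for i in range(40)]
x = [math.sin(2.0 * ti) for ti in t]
omegas = [0.5, 1.0, 2.0, 3.0]
p = ls_power(t, x, omegas)
print(omegas[p.index(max(p))])  # 2.0 -- the true frequency dominates
```

Because the fit uses the actual observation times, no interpolation bias is introduced.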
Software
Traces is a Python library for analysis of unevenly spaced time series in their unaltered form.
CRAN Task View: Time Series Analysis is a list describing many R packages dealing with both unevenly (or irregularly) and evenly spaced time series and many related aspects, including uncertainty.
MessyTimeSeries and MessyTimeSeriesOptim are Julia packages dedicated to incomplete time series.
See also
Least-squares spectral analysis
Non-uniform discrete Fourier transform
References
Statistical signal processing
Time series |
https://en.wikipedia.org/wiki/Low-rank%20approximation | In mathematics, low-rank approximation is a minimization problem, in which the cost function measures the fit between a given matrix (the data) and an approximating matrix (the optimization variable), subject to a constraint that the approximating matrix has reduced rank. The problem is used for mathematical modeling and data compression. The rank constraint is related to a constraint on the complexity of a model that fits the data. In applications, often there are other constraints on the approximating matrix apart from the rank constraint, e.g., non-negativity and Hankel structure.
Low-rank approximation is closely related to numerous other techniques, including principal component analysis, factor analysis, total least squares, latent semantic analysis, orthogonal regression, and dynamic mode decomposition.
Definition
Given
structure specification $\mathcal{S} \colon \mathbb{R}^{n_p} \to \mathbb{R}^{m \times n}$,
vector of structure parameters $p \in \mathbb{R}^{n_p}$,
norm $\|\cdot\|$, and
desired rank $r$,
the structured low-rank approximation problem is to minimize over $\hat p$ the error $\|p - \hat p\|$ subject to $\operatorname{rank} \mathcal{S}(\hat p) \le r$.
Applications
Linear system identification, in which case the approximating matrix is Hankel structured.
Machine learning, in which case the approximating matrix is nonlinearly structured.
Recommender systems, in which case the data matrix has missing values and the approximation is categorical.
Distance matrix completion, in which case there is a positive definiteness constraint.
Natural language processing, in which case the approximation is nonnegative.
Computer algebra, in which case the approximation is Sylvester structured.
Basic low-rank approximation problem
The unstructured problem with fit measured by the Frobenius norm, i.e.,

$$\text{minimize over } \hat D \quad \|D - \hat D\|_F \quad \text{subject to} \quad \operatorname{rank}(\hat D) \le r,$$

has an analytic solution in terms of the singular value decomposition of the data matrix. The result is referred to as the matrix approximation lemma or Eckart–Young–Mirsky theorem. This problem was originally solved by Erhard Schmidt in the infinite dimensional context of integral operators (although his methods easily generalize to arbitrary compact operators on Hilbert spaces) and later rediscovered by C. Eckart and G. Young. L. Mirsky generalized the result to arbitrary unitarily invariant norms. Let

$$D = U \Sigma V^{\top}$$

be the singular value decomposition of $D$, where $\Sigma$ is the rectangular diagonal matrix with the singular values $\sigma_1 \ge \sigma_2 \ge \cdots$. For a given $r$, partition $U$, $\Sigma$, and $V$ as follows:

$$U = \begin{bmatrix} U_1 & U_2 \end{bmatrix}, \quad \Sigma = \begin{bmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{bmatrix}, \quad V = \begin{bmatrix} V_1 & V_2 \end{bmatrix},$$

where $U_1$ is $m \times r$, $\Sigma_1$ is $r \times r$, and $V_1$ is $n \times r$. Then the rank-$r$ matrix, obtained from the truncated singular value decomposition

$$\hat D^{*} = U_1 \Sigma_1 V_1^{\top},$$

is such that

$$\|D - \hat D^{*}\|_F = \min_{\operatorname{rank}(\hat D) \le r} \|D - \hat D\|_F = \sqrt{\sigma_{r+1}^{2} + \cdots + \sigma_{\min(m,n)}^{2}}.$$

The minimizer $\hat D^{*}$ is unique if and only if $\sigma_{r+1} \ne \sigma_{r}$.
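The Eckart–Young–Mirsky theorem is easy to check numerically with a truncated SVD; the data in this small sketch is illustrative.

```python
import numpy as np

def best_rank_r(D, r):
    """Frobenius-optimal rank-r approximation of D via the truncated SVD."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(1)
D = rng.normal(size=(8, 6))
r = 2
Dhat = best_rank_r(D, r)

_, s, _ = np.linalg.svd(D, full_matrices=False)
err = np.linalg.norm(D - Dhat, "fro")
print(err - np.sqrt(np.sum(s[r:] ** 2)))  # ~0: matches the theorem's error formula
```

Any other rank-2 matrix gives a Frobenius error at least as large, which is what the theorem asserts.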
Proof of Eckart–Young–Mirsky theorem (for spectral norm)
Let $A$ be a real (possibly rectangular) $m \times n$ matrix with $m \le n$. Suppose that

$$A = U \Sigma V^{\top}$$

is the singular value decomposition of $A$. Recall that $U$ and $V$ are orthogonal matrices, and $\Sigma$ is an $m \times n$ diagonal matrix with entries $(\sigma_1, \sigma_2, \ldots, \sigma_m)$ such that $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_m \ge 0$.

We claim that the best rank-$k$ approximation to $A$ in the spectral norm, denoted by $\|\cdot\|_2$, is given by

$$A_k := \sum_{i=1}^{k} \sigma_i u_i v_i^{\top},$$

where $u_i$ and $v_i$ denote the $i$th column of $U$ and $V$, respectively.

First, note that we have

$$\|A - A_k\|_2 = \left\|\sum_{i=k+1}^{m} \sigma_i u_i v_i^{\top}\right\|_2 = \sigma_{k+1}.$$

Therefore, we need to show that if $B_k = X Y^{\top}$ where $X$ and $Y$ have $k$ columns then $\|A - A_k\|_2 = \sigma_{k+1} \le \|A - B_k\|_2$.

Since $Y$ has $k$ columns, then there must be a nontrivial linea |
https://en.wikipedia.org/wiki/International%20Documentary%20Association%20top%2025%20documentaries | The International Documentary Association produced in 2007 a list of the top 25 documentary films as voted by members.
Statistics
Night and Fog is the oldest and shortest entry on the list while An Inconvenient Truth is the most recent as of 2022.
Six Academy Award for Best Documentary Feature winners are featured on the list: Bowling for Columbine, Harlan County, U.S.A., An Inconvenient Truth, The Fog of War, Born into Brothels and Woodstock.
Michael Moore, Errol Morris and The Maysles Brothers have multiple entries on the list.
See also
50 Documentaries to See Before You Die
List of films considered the best
Documentary Now!, an Emmy-nominated mockumentary series parodying some of the documentary films featured on the list
References
External links
The official list on IndieWire
Top film lists
2007 in film |
https://en.wikipedia.org/wiki/List%20of%20Esteghlal%20F.C.%20players | This is a list of players who have played for Esteghlal Football Club.
Key
The list is ordered first by date of debut, and then if necessary in alphabetical order.
Statistics are correct up to and including the match played on 14 February 2021. Where a player left the club permanently after this date, his statistics are updated to his date of leaving.
Players highlighted in bold are still actively playing at Esteghlal.
World Cup Players
1978 FIFA World Cup
Iraj Danaeifard
Andranik Eskandarian
Hassan Nazari
Hassan Rowshan
1998 FIFA World Cup
Parviz Boroumand
Alireza Mansourian
Mehdi Pashazadeh
Javad Zarincheh
2006 FIFA World Cup
Reza Enayati
Amir Hossein Sadeghi
Vahid Talebloo
2014 FIFA World Cup
Khosro Heydari
Amir Hossein Sadeghi
Andranik Teymourian
Hashem Beikzadeh
2018 FIFA World Cup
Rouzbeh Cheshmi
Omid Ebrahimi
Pejman Montazeri
Majid Hosseini
2022 FIFA World Cup
Hossein Hosseini
Rouzbeh Cheshmi
Abolfazl Jalali
Olympic Players
1972 Summer Olympics
Ali Jabbari
Mansour Rashidi
Javad Ghorab
Javad Allahverdi
1976 Summer Olympics
Mansour Rashidi
Hassan Nazari
Andranik Eskandarian
Hassan Rowshan
Nasser Hejazi
List of players
List of goalscorers
Footnotes
References
Esteghlal F.C.
Association football player non-biographical articles
Esteghlal |
https://en.wikipedia.org/wiki/Robert%20W.%20Brooks | Robert Wolfe Brooks (Washington, D.C., September 16, 1952 – Montreal, September 5, 2002) was a mathematician known for his work in spectral geometry, Riemann surfaces, circle packings, and differential geometry.
He received his Ph.D. from Harvard University in 1977; his thesis, The smooth cohomology of groups of diffeomorphisms, was written under the supervision of Raoul Bott. He worked at the University of Maryland (1979–1984), then at the University of Southern California, and then, from 1995, at the Technion in Haifa.
Work
In an influential paper, Brooks proved that the bounded cohomology of a topological space is isomorphic to the bounded cohomology of its fundamental group.
Honors
Alfred P. Sloan fellowship
Guastella fellowship
Selected publications
Reviewer Maung Min-Oo for MathSciNet wrote: "This is a well written survey article on the construction of isospectral manifolds which are not isometric with emphasis on hyperbolic Riemann surfaces of constant negative curvature."
Brooks, Robert, "Form in Topology", The Magicians of Form, ed. by Robert M. Weiss. Laurelhurst Publications, 2003.
References
External links
Memorial page (Technion)
20th-century American mathematicians
21st-century American mathematicians
20th-century American Jews
1952 births
2002 deaths
American emigrants to Israel
Harvard Graduate School of Arts and Sciences alumni
Israeli Jews
Israeli mathematicians
Academic staff of Technion – Israel Institute of Technology
University of Southern California faculty
Mathematicians from Washington, D.C.
Differential geometers
Topologists
Sloan Research Fellows
21st-century American Jews |
https://en.wikipedia.org/wiki/TwitterCounter | Twitter Counter was an analytics service for Twitter that ceased operations on November 5, 2018. It provided statistics of Twitter usage and also offered a variety of widgets and buttons that people could add to their blogs, websites, or social network profiles to show recent Twitter visitors and follower counts.
History
Twitter Counter started as a self-funded startup based in Amsterdam, Netherlands, on June 12, 2008. It was a third-party application, and the Twitter name was licensed from Twitter, Inc.
Acquisitions
The service received some attention in the past for acquiring Qwitter and the competing company Twitaholic.
Issues and controversies
In 2017, TwitterCounter was hacked by Turkish hackers, who used the service to post advertisements for a "yes" vote in the Turkish constitutional referendum. The hackers posted tweets with the hashtags #nazialmanya and #nazihollanda from the accounts of prominent users who had granted TwitterCounter access to their Twitter accounts.
TwitterCounter took its service offline following the hack and later blocked all user actions until an exhaustive external investigation and audit of the company's upgraded security measures had been performed by Fox-IT of the NRC Group. The auditors found that such abuse of TwitterCounter's service was highly unlikely after the cyber-security work the company had put in.
References
Internet properties established in 2008
Twitter services and applications |
https://en.wikipedia.org/wiki/Dynamic%20contagion%20process | In applied probability, a dynamic contagion process is a point process with stochastic intensity that generalises the Hawkes process and Cox process with exponentially decaying shot noise intensity.
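The exponentially decaying shot-noise structure can be illustrated with a crude grid-based simulation in which the intensity decays toward a baseline and jumps both at events of the process itself (the Hawkes component) and at events of an external Poisson process (the Cox component). This discretization is a sketch, not an exact simulation scheme, and all parameter values and names are illustrative.

```python
import math
import random

def simulate_dynamic_contagion(T=50.0, dt=0.01, a=0.5, delta=1.0,
                               jump_self=0.5, jump_ext=0.8, rho=0.2, seed=7):
    """Grid-based sketch: the intensity lam decays toward the baseline a
    at rate delta, jumps by jump_ext at events of an external Poisson
    process with rate rho (Cox / shot-noise part), and jumps by jump_self
    at events of the simulated process itself (Hawkes part).  In each
    small step an event occurs with probability lam * dt."""
    rng = random.Random(seed)
    lam = a
    events = []
    for k in range(int(T / dt)):
        t = k * dt
        lam = a + (lam - a) * math.exp(-delta * dt)   # exponential decay
        if rng.random() < rho * dt:                   # external excitation
            lam += jump_ext
        if rng.random() < lam * dt:                   # event of the process
            events.append(t)
            lam += jump_self                          # self-excitation
    return events, lam

events, lam = simulate_dynamic_contagion()
print(len(events) > 0, lam >= 0.5)  # events occur; intensity stays above the baseline
```

Setting jump_ext to zero recovers a Hawkes-type process, and setting jump_self to zero recovers a Cox process with shot-noise intensity, which is the sense in which the dynamic contagion process generalizes both.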
See also
Point process
Cox process
Doubly stochastic model
References
Point processes |
https://en.wikipedia.org/wiki/Giuseppe%20Battaglini |
Giuseppe Battaglini (11 January 1826 – 29 April 1894) was an Italian mathematician.
He studied mathematics at the Scuola d'Applicazione di Ponti e Strade (School of Bridges and Roads) of Naples. In 1860 he was appointed professor of Geometria superiore at the University of Naples. Alfredo Capelli and Giovanni Frattini were his Laurea students.
See also
8155 Battaglini
Notes
References
E. D'Ovidio, Commemorazione del Socio Giuseppe Battaglini, Mem. Reale Accad. Lincei Cl. Sci. Fis. 1(5) (1895), 558–610.
A. Capelli, Giuseppe Battaglini, Giornale di matematiche 20 (1894), 205-208.
F.G. Tricomi, Matematici Italiani del Primo Secolo dello Stato Unitario, Mem. Acc. Sci. Torino Cl. Sci. Fis. Mat. Nat., serie VI, t. 1, (1962–66) 1–120.
External links
An Italian short biography of Giuseppe Battaglini at University of Turin
1826 births
1894 deaths
19th-century Italian mathematicians
Geometers
Scientists from Naples
Giornale di matematiche editors |
https://en.wikipedia.org/wiki/Zanran | Zanran is a search engine for data and statistics. Zanran's focus is on finding graphs, charts and tables on the Internet, which distinguishes it from other search engines such as Google, Bing, etc. Unlike a typical search engine, the results—graphs, tables, etc.—can be previewed by mouse-hovering over the thumbnails.
History
In 2006, the founders, Dr Yves Dassas and Jon Goldhill, started developing the technology that makes Zanran possible. A limited beta ran starting in November 2010. The service was launched as a public beta version in May 2011.
Technology
Zanran has developed two technologies specifically for this application:
Image ‘classification’ is the ability for a computer to decide whether an image is a graph, a pie-chart etc. as opposed to a photograph or a cartoon. The Zanran algorithms work to over 95% accuracy. This is important because most images on the web are not graphs, so without accurate classification there would be a large number of false positives.
Text extraction is the process of taking the most appropriate text to describe the graph. This contrasts with a normal search engine where an entire HTML page might be included.
These processes are the subject of Zanran's UK patent. The image processing in particular takes a great deal of computing power. Zanran runs on the Amazon cloud, and uses hundreds of machines at a time.
The service is English-language only as of December 2011.
The company
Zanran Ltd is based in London, UK. It was financed by the founders prior to a private angel investment round in March 2010.
Other data-search on the Internet
Other specialists in the data-search space include WolframAlpha, Infochimps, and Timetric.
References
External links
Official website
Internet search engines |
https://en.wikipedia.org/wiki/Vogan%20diagram | In mathematics, a Vogan diagram, named after David Vogan, is a variation of the Dynkin diagram of a real semisimple Lie algebra that indicates the maximal compact subgroup. Although they resemble Satake diagrams they are a different way of classifying simple Lie algebras.
References
Lie algebras |
https://en.wikipedia.org/wiki/AP%20Physics | There are four Advanced Placement (AP) Physics courses administered by the College Board as part of its Advanced Placement program: the algebra-based Physics 1 and Physics 2 and the calculus-based Physics C: Mechanics and Physics C: Electricity and Magnetism. All are intended to be at the college level. Each AP Physics course has an exam for which high-performing students may receive credit toward their college coursework.
AP Physics 1 and 2
AP Physics 1 and AP Physics 2 were introduced in 2015, replacing AP Physics B. The courses were designed to emphasize critical thinking and reasoning as well as learning through inquiry. They are algebra-based and do not require any calculus knowledge.
AP Physics 1
AP Physics 1 covers Newtonian mechanics, including:
Unit 1: Kinematics
Unit 2: Dynamics
Unit 3: Circular Motion and Gravitation
Unit 4: Energy
Unit 5: Momentum
Unit 6: Simple Harmonic Motion
Unit 7: Torque and Rotational Motion
Until 2020, the course also covered topics in electricity (including Coulomb's Law and resistive DC circuits), mechanical waves, and sound. These units were removed because they are included in AP Physics 2.
AP Physics 2
AP Physics 2 covers the following topics:
Unit 1: Fluids
Unit 2: Thermodynamics
Unit 3: Electric Force, Field, and Potential
Unit 4: Electric Circuits
Unit 5: Magnetism and Electromagnetic Induction
Unit 6: Geometric and Physical Optics
Unit 7: Quantum, Atomic, and Nuclear Physics
AP Physics C
From 1969 to 1972, AP Physics C was a single course with a single exam that covered all standard introductory university physics topics, including mechanics, fluids, electricity and magnetism, optics, and modern physics. In 1973, the College Board split the course into AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism. The exam was also split into two separate 90-minute tests, each equivalent to a semester-length calculus-based college course. Until 2006, both exams could be taken for a single fee; since then, a separate fee is charged for each exam.
The two Physics C courses can be combined to create a year-long Physics C course that prepares students for both exams.
AP Physics C: Mechanics
AP Physics C: Mechanics covers Newtonian mechanics, including:
Unit 1: Kinematics
Unit 2: Newton’s Laws of Motion
Unit 3: Work, Energy, and Power
Unit 4: Systems of Particles and Linear Momentum
Unit 5: Rotation
Unit 6: Oscillations
Unit 7: Gravitation
AP Physics C: Electricity and Magnetism
AP Physics C: Electricity and Magnetism covers electricity and magnetism, including:
Unit 1: Electrostatics
Unit 2: Conductors, Capacitors, Dielectrics
Unit 3: Electric Circuits
Unit 4: Magnetic Fields
Unit 5: Electromagnetism
AP Physics B
Until 1969, only a single AP Physics course existed. In 1969, it was split into AP Physics B and AP Physics C, each having its own exam. AP Physics B was equivalent to an introductory algebra-based college course in physics. The course did n |
https://en.wikipedia.org/wiki/Firdaus%20Faudzi | Mohd Firdaus bin Mohd Faudzi (born 2 August 1987 in Kedah) is a Malaysian professional footballer who plays as a right back for Kuala Lumpur Rovers in the Malaysia M3 League.
Career statistics
Club
References
External links
1987 births
Living people
Malaysian men's footballers
Footballers from Kedah
Malaysian people of Malay descent
Kuala Lumpur City F.C. players
Terengganu FC players
Men's association football defenders
FELDA United F.C. players
Sime Darby F.C. players
Kedah Darul Aman F.C. players
Perlis F.A. players
Kuala Muda Naza F.C. players |
https://en.wikipedia.org/wiki/Robert%20M.%20Anderson%20%28mathematician%29 | Robert Murdoch Anderson (born 1951) is Professor of Economics and of Mathematics at the University of California, Berkeley. He is director of the Center for Risk Management Research, University of California, Berkeley and he was chair of the University of California Academic Senate 2011-12. He is also the Co-Director for the Consortium for Data Analytics in Risk at UC Berkeley.
Research
Anderson’s nonstandard construction of Brownian motion is a single object which, when viewed from a nonstandard perspective, has all the formal properties of a discrete random walk; however, when viewed from a measure-theoretic perspective, it is a standard Brownian motion. This permits a pathwise definition of the Itô Integral and pathwise solutions of stochastic differential equations.
Anderson’s contributions to mathematical economics are primarily within General Equilibrium Theory. Some of this work uses nonstandard analysis, but much of it provides simple elementary treatments that generalize work that had originally been done using sophisticated mathematical machinery. The best known of these papers is the 1978 Econometrica article cited, which establishes by elementary means a very general theorem on the cores of exchange economies.
In the 2008 Econometrica article cited, Anderson and Raimondo provide the first satisfactory proof of existence of equilibrium in a continuous-time securities market with more than one agent. The paper also provides a convergence theorem relating the equilibria of discrete-time securities markets to those of continuous-time securities markets. It uses Anderson’s nonstandard construction of Brownian motion and properties of real analytic functions.
Recently, Anderson has focused on the analysis of investment strategies, and his work relies on both theoretical considerations and empirical analysis. In an article published in the Financial Analysts Journal in 2012 and cited below, Anderson, Bianchi and Goldberg found that long-term returns to risk parity strategies, which have acquired tens of billions of dollars in assets under management in the wake of the global financial crisis, are not materially different from the returns to more transparent strategies once realistic financing and trading costs are taken into account; they do well in some periods and poorly in others. A subsequent investigation by the same research team found that returns to dynamically levered strategies such as risk parity are highly unpredictable due to high sensitivity of strategy performance to a key risk factor: the co-movement of leverage with return to the underlying portfolio that is levered.
Selected publications
Anderson, Robert M.: A nonstandard representation for Brownian motion and Ito integration. Israel Journal of Mathematics 25(1976), 15-46.
Anderson, Robert M.: An elementary core equivalence theorem. Econometrica 46(1978), 1483-1487.
Anderson, Robert M. and Salim Rashid: A Nonstandard Characterization of Weak Convergence, Proceedings |
https://en.wikipedia.org/wiki/Tall%20cardinal | In mathematics, a tall cardinal is a large cardinal κ that is θ-tall for all ordinals θ, where a cardinal is called θ-tall if there is an elementary embedding j : V → M with critical point κ such that j(κ) > θ and M^κ ⊆ M.
Tall cardinals are equiconsistent with strong cardinals.
References
Large cardinals |
https://en.wikipedia.org/wiki/Nevanlinna%20function | In mathematics, in the field of complex analysis, a Nevanlinna function is a complex function that is analytic on the open upper half-plane and has non-negative imaginary part there. A Nevanlinna function maps the upper half-plane to itself or to a real constant, but is not necessarily injective or surjective. Functions with this property are sometimes also known as Herglotz, Pick or R functions.
Integral representation
Every Nevanlinna function $N$ admits a representation

$$N(z) = C + Dz + \int_{\mathbb{R}} \left( \frac{1}{\lambda - z} - \frac{\lambda}{1 + \lambda^2} \right) d\mu(\lambda), \qquad z \in \mathbb{H},$$

where $C$ is a real constant, $D$ is a non-negative constant, $\mathbb{H}$ is the upper half-plane, and $\mu$ is a Borel measure on $\mathbb{R}$ satisfying the growth condition

$$\int_{\mathbb{R}} \frac{d\mu(\lambda)}{1 + \lambda^2} < \infty.$$

Conversely, every function of this form turns out to be a Nevanlinna function.

The constants in this representation are related to the function $N$ via

$$C = \operatorname{Re}\bigl(N(i)\bigr) \qquad \text{and} \qquad D = \lim_{y \to \infty} \frac{N(iy)}{iy},$$

and the Borel measure $\mu$ can be recovered from $N$ by employing the Stieltjes inversion formula (related to the inversion formula for the Stieltjes transformation):

$$\mu\bigl((\lambda_1, \lambda_2]\bigr) = \lim_{\delta \to 0} \lim_{\varepsilon \to 0} \frac{1}{\pi} \int_{\lambda_1 + \delta}^{\lambda_2 + \delta} \operatorname{Im}\bigl(N(\lambda + i\varepsilon)\bigr)\, d\lambda.$$
A very similar representation of functions is also called the Poisson representation.
Examples
Some elementary examples of Nevanlinna functions follow (with appropriately chosen branch cuts in the first three). ($z$ can be replaced by $z - a$ for any real number $a$.)
$z^p$ with $0 \le p \le 1$, and $-z^p$ with $-1 \le p \le 0$. These are injective, but when $p$ does not equal 1 or −1 they are not surjective and can be rotated to some extent around the origin, such as $i z^{1/2}$.
A sheet of such as the one with .
(an example that is surjective but not injective).
A Möbius transformation

$$z \mapsto \frac{az + b}{cz + d}$$

is a Nevanlinna function if (sufficient but not necessary) $ad - bc$ is a positive real number and $a$, $b$, $c$, $d$ are real. This is equivalent to the set of such transformations that map the real axis to itself. One may then add any constant in the upper half-plane, and move the pole into the lower half-plane, giving new values for the parameters. Example:
and are examples which are entire functions. The second is neither injective nor surjective.
If $A$ is a self-adjoint operator in a Hilbert space and $f$ is an arbitrary vector, then the function

$$z \mapsto \langle (A - z)^{-1} f, f \rangle$$

is a Nevanlinna function.
If $M$ and $N$ are both Nevanlinna functions, then the composition $M \circ N$ is a Nevanlinna function as well.
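The Möbius criterion admits a quick numerical spot-check: for real coefficients, $\operatorname{Im} f(z) = (ad - bc)\,\operatorname{Im}(z)/|cz + d|^2$, so $ad - bc > 0$ keeps the upper half-plane inside itself. The coefficients and sample points below are chosen arbitrarily, and the second check mirrors the closure-under-composition property just stated.

```python
import random

def mobius(a, b, c, d):
    """Möbius transformation with real coefficients a, b, c, d."""
    return lambda z: (a * z + b) / (c * z + d)

f = mobius(2.0, 1.0, 1.0, 3.0)   # ad - bc = 5 > 0, so f is a Nevanlinna function

rng = random.Random(0)
pts = [complex(rng.uniform(-5, 5), rng.uniform(0.01, 5.0)) for _ in range(1000)]
print(all(f(z).imag > 0 for z in pts))  # True: the upper half-plane maps into itself
```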
Importance in operator theory
Nevanlinna functions appear in the study of operator monotone functions.
References
General
Complex analysis |
https://en.wikipedia.org/wiki/Holomorphic%20curve | In mathematics, in the field of complex geometry, a holomorphic curve in a complex manifold M is a non-constant holomorphic map f from the complex plane to M.
Nevanlinna theory addresses the question of the distribution of values of a holomorphic curve in the complex projective line.
See also
Pseudoholomorphic curve
Notes
References
Complex manifolds |
https://en.wikipedia.org/wiki/Nepalis%20in%20the%20Netherlands | Nepalese in the Netherlands consist of immigrants, expatriates and international students from Nepal to the Netherlands as well as Dutch people of Nepalese origin. As of 2010, statistics from the Dutch Centraal Bureau voor de Statistiek show that there are about 1,505 people of Nepalese origin living in the country.
Lhotshampa refugees
The Netherlands is home to a number of Lhotshampa (Bhutanese Nepali) refugees who were deported from Bhutan. Since 2009, the Netherlands has resettled around 100 Lhotshampa refugees every year. As of November 2011, around 350 refugees had been resettled in the Netherlands.
Education
Nepalese students have been studying in the Netherlands since the early 1970s. Every year about a hundred students attend an international program in the Netherlands. So far, about 2,000 Nepalese students have graduated from different institutions all over the Netherlands in areas like Engineering, Law, Social Sciences and Management. Many Nepalese students are supported by the Netherlands Fellowship Program (NFP). The Consulate of the Netherlands is the body responsible for helping prospective Nepalese students in contacting an institution that meets their needs.
Organizations
Until the late 1990s, there were no Nepali-run organizations, so almost all Nepal-related programs were organized by Dutch people. The Nepal Samaj Nederlands (NSN) was founded in 1999 as a cultural entity; it promotes various Nepalese festivities among Nepalese and Dutch people who are interested in friendship with Nepalese people, culture, language and food. NSN publishes a news bulletin called Chautrai twice a year in both Nepali and Dutch.
Other organizations include the NRN-NCC Netherlands and the Worldwide Nepalese Students' Organization – Netherlands.
See also
Hinduism in the Netherlands
Buddhism in the Netherlands
References
External links
Nepal Samaj Nederlands
Non Resident Nepali Association, Netherlands
Worldwide Nepalese Students' Organization (Netherlands)
Asian diaspora in the Netherlands
Ethnic groups in the Netherlands
Netherlands |
https://en.wikipedia.org/wiki/Immanuel%20Bonfils | Immanuel ben Jacob Bonfils (c. 1300 – 1377) was a French-Jewish mathematician and astronomer of the medieval period who flourished from 1340 to 1377; a rabbi, he was a pioneer of exponential calculus and is credited with inventing the system of decimal fractions. He taught astronomy and mathematics in Orange and later lived in Tarascon, both towns in the Holy Roman Empire that are now part of modern-day France. Bonfils studied the works of Gersonides (Levi ben Gershom), the father of modern trigonometry, and Al-Battani, and even taught at the academy founded by Gersonides in Orange.
Bonfils preceded any European attempt at a decimal system by 150 years, publishing the treatise Method of Division by Rabbi Immanuel and Other Topics, on the general theory of decimal fractions, around 1350. The treatise, a forerunner of the work of Simon Stevin (the first to widely distribute publications on the topic), employed decimal notation for integers, fractions, and both positive and negative exponents.
While living in Tarascon in 1365, Bonfils published the work for which he would become best known, Sepher Shesh Kenaphayim (Book of Six Wings), a manuscript on eclipses, divided into six parts, featuring astronomical tables that predict future solar and lunar positions. The book included data for every important date on the Jewish calendar, and even the correction factors necessary for those who lived as far away as Constantinople. Breaking the tables into six parts was an allusion to the six wings of the seraphim mentioned in the Bible in Isaiah 6:2, earning Bonfils the nickname "master of the wings".
Bonfils' calculations were used extensively by sailors and explorers for some 300 years, well into the 17th century. The book was translated from Hebrew into Latin in 1406 by Johannes Lucae e Camerino and into Greek in 1435 by Michael Chrysokokkes. The book inspired the historian of science George Sarton to publish his own version of Six Wings nearly 600 years later. Bonfils translated a number of books from Latin to Hebrew. He also wrote a treatise on the relationship between the diameter and circumference of a circle, and on methods of calculating square roots.
Works
Bonfils, Immanuel (1365), The Wings of Eagles, , in six books. Other name: Book of Six Wings, . The main astronomical work of Bonfils.
Bonfils, Immanuel (c. 1350), The Invention of the Decimal Fractions and the Application of the Exponential Calculus by Immanuel Bonfils of Tarascon
Bonfils, Immanuel (c. 1350), Method of Division by Rabbi Immanuel and Other Topics, , a course of decimal arithmetics, including decimal fractions.
References
Gandz, S.: "The invention of the decimal fractions and the application of the exponential calculus by Immanuel Bonfils of Tarascon (c. 1350)", Isis 25 (1936), 16–45.
P. Solon: The Six Wings of J. Bonfils and Michael Chrysokokkes, in: Centaurus, 15 (1970) 1–20
External links
BONFILS, IMMANUEL BEN JACOB in Jewish Encyclopedia.
Six Wings.
1300 births
1377 deaths
Jewish astronomers
14 |
https://en.wikipedia.org/wiki/Weak%20duality | In applied mathematics, weak duality is a concept in optimization which states that the duality gap is always greater than or equal to 0.
This means that for any maximization problem, called the primal problem, the value of any feasible solution to the primal problem is always less than or equal to the value of any feasible solution to the associated dual (minimization) problem.
Weak duality is in contrast to strong duality, which states that the primal optimal objective and the dual optimal objective are equal. Strong duality only holds in certain cases.
Uses
Many primal-dual approximation algorithms are based on the principle of weak duality.
Weak duality theorem
The primal problem:
Maximize c^T x subject to A x ≤ b, x ≥ 0.
The dual problem:
Minimize b^T y subject to A^T y ≥ c, y ≥ 0.
The weak duality theorem states that c^T x ≤ b^T y for every pair of feasible solutions. Namely, if x is a feasible solution for the primal maximization linear program and y is a feasible solution for the dual minimization linear program, then the weak duality theorem can be stated as
c^T x ≤ b^T y, where c and b are the coefficients of the respective objective functions.
Proof: c^T x ≤ (A^T y)^T x = y^T (A x) ≤ y^T b = b^T y. The first inequality uses x ≥ 0 and A^T y ≥ c; the second uses y ≥ 0 and A x ≤ b.
Generalizations
More generally, if x is a feasible solution for the primal maximization problem and y is a feasible solution for the dual minimization problem, then weak duality implies f(x) ≤ g(y), where f and g are the objective functions for the primal and dual problems respectively.
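The inequality can be checked numerically for any pair of feasible points. Below is a minimal Python sketch on a small linear program whose data (A, b, c and the feasible points) is an arbitrary illustration, not taken from the text:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Primal: maximize c^T x  subject to  A x <= b, x >= 0
# Dual:   minimize b^T y  subject to  A^T y >= c, y >= 0
A = [[1.0, 1.0],
     [2.0, 1.0]]
b = [4.0, 5.0]
c = [3.0, 2.0]

x = [1.0, 2.0]   # primal-feasible: x >= 0 and A x = [3, 4] <= b
y = [1.0, 1.0]   # dual-feasible:   y >= 0 and A^T y = [3, 2] >= c

# Check feasibility explicitly.
assert all(xi >= 0 for xi in x) and all(yi >= 0 for yi in y)
assert all(dot(row, x) <= bi for row, bi in zip(A, b))
At = list(map(list, zip(*A)))
assert all(dot(col, y) >= ci for col, ci in zip(At, c))

primal_value = dot(c, x)   # 3*1 + 2*2 = 7
dual_value = dot(b, y)     # 4*1 + 5*1 = 9
assert primal_value <= dual_value   # weak duality: c^T x <= b^T y
```

Any other pair of feasible points for the same data would satisfy the same inequality; equality would require both points to be optimal.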
See also
Convex optimization
Max–min inequality
References
Linear programming
Convex optimization |
https://en.wikipedia.org/wiki/Aldin%20%C5%A0etki%C4%87 | Aldin Šetkić (born 21 December 1987) is a Bosnian professional tennis player.
Career statistics
Singles titles (14)
Doubles titles (16)
Singles performance timeline
Current through the 2015 US Open.
References
External links
1987 births
Living people
Bosnia and Herzegovina expatriate sportspeople in Serbia
Bosnia and Herzegovina male tennis players
Tennis players from Belgrade
Sportspeople from Sarajevo |
https://en.wikipedia.org/wiki/National%20Mathematical%20Society%20of%20Pakistan | The National Mathematics Society of Pakistan (NMSP), is an academic, non-profit, and scientific society of applied mathematicians and engineers dedicated to the development and promotion of applied mathematics at all levels. The society is aim to serve as the national community by organizing seminars, workshops, competitions, meetings and publications.
Established in 2010, the society was initially headquartered at the Abdus Salam School of Mathematical Sciences (AS-SMS) of the Government College University, Lahore. It works to connect mathematicians in Pakistan with the international mathematical community and to exchange ideas and skills with colleagues abroad. It is currently the adhering organization for Pakistan in the International Mathematical Union.
In 2015, NMSP initiated the Abdus Salam Shield of Honor in Mathematics to promote and recognize quality research in mathematics. The first shield was awarded to Prof. Hassan Azad of KFUPM in February 2016.
In 2017, NMSP initiated the Abdus Salam Medal for Mathematics to recognize young and senior mathematicians at the national and international levels for excellent performance. The first recipients were Dr. Ayesha Asloob Qureshi of Sabancı University and Dr. Sahibzada Waleed Noor of the University of Campinas.
References
Educational organisations based in Pakistan
Learned societies of Pakistan
Scientific organisations based in Pakistan |
https://en.wikipedia.org/wiki/Leif%20Arkeryd | Leif O. Arkeryd (born 24 August 1940) is professor emeritus of mathematics at Chalmers University of Technology. He is a specialist on the theory of the Boltzmann equation.
Arkeryd earned his doctorate from Lund University in 1966, under the supervision of Jaak Peetre.
Selected publications
Arkeryd, Leif: On the Boltzmann equation. I. Existence. Arch. Rational Mech. Anal. 45 (1972), 1–16.
Nonstandard analysis. Theory and applications. Proceedings of the NATO Advanced Study Institute on Nonstandard Analysis and its Applications held in Edinburgh, June 30–July 13, 1996. Edited by Leif O. Arkeryd, Nigel J. Cutland and C. Ward Henson. NATO Advanced Science Institutes Series C: Mathematical and Physical Sciences, 493. Kluwer Academic Publishers Group, Dordrecht, 1997.
See also
Influence of non-standard analysis
References
Swedish mathematicians
Living people
Lund University alumni
Academic staff of the Chalmers University of Technology
1940 births |
https://en.wikipedia.org/wiki/Journal%20of%20Computational%20Geometry | The Journal of Computational Geometry (JoCG) is an open access mathematics journal that was established in 2010. It covers research in all aspects of computational geometry. All its papers are published free of charge to both authors and readers, and are made freely available through a Creative Commons Attribution license. The current editors-in-chief are Kenneth L. Clarkson and Günter Rote.
Along with its regularly contributed papers, the journal has since 2014 invited selected papers from the annual Symposium on Computational Geometry to a special issue.
Abstracting and indexing
The Journal of Computational Geometry is abstracted and indexed in MathSciNet, Zentralblatt Math, and the Emerging Sources Citation Index. Long-term preservation of journal contents is ensured by the journal's membership in the Global LOCKSS Network.
References
External links
Carleton University
Computational geometry
Computer science journals
Creative Commons Attribution-licensed journals
Mathematics journals
Open access journals
Academic journals established in 2010 |
https://en.wikipedia.org/wiki/Game%20manager | In American football, a game manager is a quarterback who, despite pedestrian individual statistics such as passing yards and touchdowns, also maintains low numbers of mistakes, such as interceptions and fumbles. Such a quarterback is seen as a major factor in neither his team's wins nor their losses; his performance is good enough to not negatively affect the performances of other players on his team, even if he himself does not have the skills to be considered an elite player. Game managers often benefit from strong defense and rushing offense on their teams.
Arizona Sports said that "game manager" was "a term that often comes with negative connotations of a non-talented, play-it-safe type of quarterback". The New York Times called it a "backhanded compliment". The San Francisco Chronicle wrote, "As consolation ... Quarterbacks are called game managers only if they're winning." The Associated Press opined, "But like any cliche, [game manager is] oversimplified". Former Indianapolis Colts president Bill Polian laughed, "Every quarterback is a game manager, it's what the job is all about". Nick Saban said, "I don't think you can be a good quarterback unless you're a really good game manager". The Los Angeles Times noted that although Trent Dilfer was not an elite quarterback, the 2000 Baltimore Ravens won the Super Bowl with a dominant defense and Dilfer as a game manager. Peyton Manning, who was a five-time NFL Most Valuable Player, transitioned into a game manager role with a defensive-oriented Denver Broncos squad in his final season in 2015, when he won his second championship and became at the time the oldest quarterback to win a Super Bowl, at age 39.
See also
Journeyman quarterback
References
American football terminology |
https://en.wikipedia.org/wiki/Semi-orthogonal%20matrix | In linear algebra, a semi-orthogonal matrix is a non-square matrix with real entries where: if the number of columns exceeds the number of rows, then the rows are orthonormal vectors; but if the number of rows exceeds the number of columns, then the columns are orthonormal vectors.
Equivalently, a non-square matrix A is semi-orthogonal if either A^T A = I or A A^T = I.
In the following, consider the case where A is an m × n matrix for m > n.
Then A^T A = I_n.
The fact that A^T A = I_n implies the isometry property ‖Ax‖ = ‖x‖
for all x in R^n.
For example, the 3 × 2 matrix whose columns are (1, 0, 0) and (0, 1, 0) is a semi-orthogonal matrix.
A semi-orthogonal matrix A is semi-unitary (either A†A = I or AA† = I) and either left-invertible or right-invertible (left-invertible if it has more rows than columns, otherwise right invertible). As a linear transformation applied from the left, a semi-orthogonal matrix with more rows than columns preserves the dot product of vectors, and therefore acts as an isometry of Euclidean space, such as a rotation or reflection.
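These properties are easy to verify numerically. A small Python sketch (the particular 3 × 2 matrix below is an arbitrary example with orthonormal columns) checking A^T A = I and the isometry ‖Ax‖ = ‖x‖:

```python
import math

# A is 3 x 2 with orthonormal columns, so A^T A = I_2 (but A A^T != I_3).
s = 1 / math.sqrt(2)
A = [[s, s],
     [s, -s],
     [0.0, 0.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

At = [list(col) for col in zip(*A)]
AtA = matmul(At, A)
assert all(abs(AtA[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))

# Isometry: ||A x|| = ||x|| for every x in R^2.
x = [3.0, 4.0]
Ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(3)]
norm = lambda v: math.sqrt(sum(t * t for t in v))
assert abs(norm(Ax) - norm(x)) < 1e-12   # both equal 5.0
```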
References
Geometric algebra
Matrices |
https://en.wikipedia.org/wiki/Ramanujan%27s%20master%20theorem | In mathematics, Ramanujan's Master Theorem, named after Srinivasa Ramanujan, is a technique that provides an analytic expression for the Mellin transform of an analytic function.
The result is stated as follows:
If a complex-valued function f(x) has an expansion of the form
f(x) = Σ_{k=0}^∞ (φ(k)/k!) (−x)^k,
then the Mellin transform of f(x) is given by
∫_0^∞ x^(s−1) f(x) dx = Γ(s) φ(−s),
where Γ(s) is the gamma function.
It was widely used by Ramanujan to calculate definite integrals and infinite series.
Higher-dimensional versions of this theorem also appear in quantum physics (through Feynman diagrams).
A similar result was also obtained by Glaisher.
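The theorem can be sanity-checked numerically in the simplest case φ(k) ≡ 1, for which f(x) = e^(−x) and the predicted Mellin transform is Γ(s)·1 = Γ(s). A Python sketch using a crude trapezoidal quadrature (the helper names and numerical parameters are illustrative):

```python
import math

def f(x):
    # f(x) = sum_k phi(k) (-x)^k / k! with phi(k) = 1, i.e. exp(-x)
    return math.exp(-x)

def mellin(func, s, upper=30.0, n=200_000):
    # trapezoidal rule for  integral_0^upper x^(s-1) func(x) dx;
    # the integrand vanishes at x = 0 for s > 1, and the tail beyond
    # `upper` is negligible for this integrand
    h = upper / n
    vals = [(h * k) ** (s - 1) * func(h * k) for k in range(1, n + 1)]
    return h * (sum(vals) - vals[-1] / 2)

s = 1.5
numeric = mellin(f, s)
predicted = math.gamma(s)   # Ramanujan's Master Theorem: Gamma(s) * phi(-s)
assert abs(numeric - predicted) < 1e-3
```

Here Γ(1.5) = √π/2 ≈ 0.8862, and the quadrature reproduces it to well within the stated tolerance.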
Alternative formalism
An alternative formulation of Ramanujan's Master Theorem is as follows:
∫_0^∞ x^(s−1) (λ(0) − x λ(1) + x² λ(2) − ⋯) dx = (π / sin(πs)) λ(−s),
which gets converted to the above form after substituting λ(n) = φ(n)/Γ(1 + n) and using the functional equation for the gamma function.
The integral above is convergent for 0 < Re(s) < 1, subject to growth conditions on φ.
Proof
A proof of Ramanujan's Master Theorem subject to "natural" assumptions (though not the weakest necessary conditions) was provided by G. H. Hardy (chapter XI), employing the residue theorem and the well-known Mellin inversion theorem.
Application to Bernoulli polynomials
The generating function of the Bernoulli polynomials B_n(x) is given by:
z e^(xz) / (e^z − 1) = Σ_{n=0}^∞ B_n(x) z^n / n!
These polynomials are given in terms of the Hurwitz zeta function:
ζ(s, a) = Σ_{n=0}^∞ 1 / (n + a)^s
by ζ(1 − n, a) = −B_n(a)/n for n ≥ 1.
Using the Ramanujan master theorem and the generating function of Bernoulli polynomials one has the following integral representation:
∫_0^∞ x^(s−1) (e^(−ax)/(1 − e^(−x)) − 1/x) dx = Γ(s) ζ(s, a),
which is valid for 0 < Re(s) < 1.
Application to the gamma function
Weierstrass's definition of the gamma function
Γ(x) = (e^(−γx)/x) Π_{n=1}^∞ (1 + x/n)^(−1) e^(x/n)
is equivalent to the expression
log Γ(1 + x) = −γx + Σ_{k=2}^∞ ζ(k) (−x)^k / k,
where ζ(k) is the Riemann zeta function and γ is the Euler–Mascheroni constant.
Then applying Ramanujan master theorem we have:
∫_0^∞ x^(s−1) (γx + log Γ(1 + x)) / x² dx = (π / sin(πs)) ζ(2 − s) / (2 − s),
valid for 0 < Re(s) < 1.
Special cases of and are
Application to Bessel functions
The Bessel function of the first kind has the power series
J_ν(z) = Σ_{k=0}^∞ ((−1)^k / (k! Γ(k + ν + 1))) (z/2)^(2k+ν).
By Ramanujan's Master Theorem, together with some identities for the gamma function and rearranging, we can evaluate the integral
∫_0^∞ z^(s−1) J_ν(z) dz = 2^(s−1) Γ((ν + s)/2) / Γ((ν − s)/2 + 1),
valid for −Re(ν) < Re(s) < 3/2.
Equivalently, if the spherical Bessel function is preferred, the formula becomes
valid for .
The solution is remarkable in that it is able to interpolate across the major identities for the gamma function. In particular, the choice of gives the square of the gamma function, gives the duplication formula, gives the reflection formula, and fixing to the evaluable or gives the gamma function by itself, up to reflection and scaling.
Bracket integration method
The bracket integration method (method of brackets) applies Ramanujan's Master Theorem to a broad range of integrals. The bracket integration method generates an integral of a series expansion, introduces simplifying notations, solves linear equations, and completes the integration using formulas arising from Ramanujan's Master Theorem.
Generate an integral of a series expansion
This method transforms the integral to an integral of a series expansion involving M variables, , and S summation parameters, . A multivariate integral may assume this |
https://en.wikipedia.org/wiki/List%20of%20largest%20Canadian%20cities%20by%20census | This is a list of the largest cities in Canada by census starting with the 1871 census of Canada, the first national census. Only communities that were incorporated as cities (defined by Statistics Canada as CY, as compared to larger census metropolitan areas (CMA) or census agglomerations (CA) aroundand includingthese CYs) at the time of each census are presented. Therefore, this list does not include any incorporated towns (T) that may have been larger than any incorporated cities at each census.
1871
1881
1891
Winnipeg, Manitoba, becomes the first city in Western Canada to appear on the Top 10 list, cutting the Maritimes from three spots on the list (held on both of the previous censuses) to two.
1901
Vancouver, British Columbia, becomes the second city in Western Canada to appear on the Top 10 list, cutting Ontario from five spots on the list (held on all three previous censuses) to four.
1911
Calgary, Alberta, becomes the third city in Western Canada to appear on the Top 10 list, cutting the Maritimes from two spots on the list (held on the two most recent censuses) to one.
1921
Edmonton, Alberta, becomes the fourth city in Western Canada to appear on the Top 10 list, removing all cities in the Maritimes from the list for the first time as of this sixth national census; the Maritimes have never again placed a city in the Top 10 list. Western Canada's four most populous cities (Vancouver, Calgary, Edmonton and Winnipeg) have remained in the Top 10 since 1921, joined briefly in 2001 (only) by Surrey, British Columbia.
1931
1941
1951
1956
1961
1966
1971
1976
1981
After holding two spots on the Top 10 list in all 14 previous censuses, Quebec is reduced to one city on the list. It will briefly return to two positions, in 1996 (19th census) and 2006 (20th census).
Through the 1970s, while a number of Canadian cities suffered population losses, the three Canadian Prairies cities on the Top 10 list (Calgary, Edmonton and Winnipeg) saw significant growth: the two Alberta cities primarily through consistent net migration, with Winnipeg primarily boosted by the amalgamation of its surrounding municipalities prior to the 1976 census.
1986
1991
1996
2001
A wave of amalgamations took place in Ontario during the 1990s and 2000s that affected city population figures.
A significant change is that, after holding the position of largest city in Canada on all 19 previous censuses, covering the first 129 years of the nation of Canada, Montreal drops to second place on the list, displaced by Toronto. These two cities have maintained the same top two positions on all subsequent censuses.
2006
A wave of amalgamations took place in Quebec since the previous census, affecting city population figures. In particular, in 2002, both Montreal and Quebec City combined with a number of smaller surrounding cities, though some later chose to leave the amalgamations.
2011
2016
2021
See also
Census in Canada
List of the largest cities and towns in Canada by area
List o |
https://en.wikipedia.org/wiki/Aaron%20Pixton | Aaron C. Pixton (born January 13, 1986) is an American mathematician at the University of Michigan. He works in enumerative geometry, and is also known for his chess playing, where he is a FIDE Master.
Early life and education
Pixton was born in Binghamton, New York; his father, Dennis Pixton, is a retired professor of mathematics at Binghamton University. He grew up in Vestal, New York. While a student at Vestal Senior High School, he earned a perfect score on the American Mathematics Competition three times from 2002 to 2004. He went on to win consecutive gold medals at the International Mathematical Olympiad in 2003 and 2004.
He received a Bachelor of Arts in 2008 and a Doctor of Philosophy in 2013, both from Princeton University.
While an undergraduate at Princeton University, Pixton was a three-time Putnam Fellow. For his research conducted as an undergraduate, he was awarded the 2009 Morgan Prize. In 2008, he received a Churchill Scholarship to the University of Cambridge. Pixton received his Ph.D. in 2013 from Princeton under the supervision of Rahul Pandharipande; his dissertation was The tautological ring of the moduli space of curves.
Career
Pixton was appointed as a Clay Research Fellow for a term of five years beginning in 2013. After two years as a postdoctoral researcher at Harvard University, he became an assistant professor of mathematics at the Massachusetts Institute of Technology in 2015. In 2017, he received a Sloan Research Fellowship. In 2020, he moved to the University of Michigan as an assistant professor.
Chess
Pixton is also a former child prodigy in chess. He was the 2001 U.S. Cadet Champion and the 2002 US Junior Chess Champion, and had a win against the former US Champion Joel Benjamin in 2003.
Selected publications
References
External links
Home page at MIT
1986 births
Living people
21st-century American mathematicians
Sportspeople from Binghamton, New York
Princeton University alumni
Chess FIDE Masters
American chess players
Putnam Fellows
International Mathematical Olympiad participants
Massachusetts Institute of Technology School of Science faculty
Mathematicians from New York (state)
Sloan Research Fellows
Algebraic geometers
University of Michigan faculty |
https://en.wikipedia.org/wiki/Berkovich%20space | In mathematics, a Berkovich space, introduced by , is a version of an analytic space over a non-Archimedean field (e.g. p-adic field), refining Tate's notion of a rigid analytic space.
Motivation
In the complex case, algebraic geometry begins by defining the complex affine space to be C^n. For each open set U ⊆ C^n, we define O(U), the ring of analytic functions on U, to be the ring of holomorphic functions, i.e. functions on U that can be written as a convergent power series in a neighborhood of each point.
We then define a local model space for f_1, …, f_k ∈ O(U) to be
X := {x ∈ U : f_1(x) = ⋯ = f_k(x) = 0},
with O_X := O_U/(f_1, …, f_k). A complex analytic space is a locally ringed C-space (Y, O_Y) which is locally isomorphic to a local model space.
When k is a complete non-Archimedean field, we have that k is totally disconnected. In such a case, if we continue with the same definition as in the complex case, we would not get a good analytic theory. Berkovich gave a definition which gives nice analytic spaces over such k, and also gives back the usual definition over C.
In addition to defining analytic functions over non-Archimedean fields, Berkovich spaces also have a nice underlying topological space.
Berkovich spectrum
A seminorm on a ring A is a non-constant function |·| : A → R≥0 such that |0| = 0, |1| = 1, |x + y| ≤ |x| + |y|, and |xy| ≤ |x| |y|
for all x, y. It is called multiplicative if |xy| = |x| |y|, and is called a norm if |x| = 0 implies x = 0.
If A is a normed ring with norm ‖·‖, then the Berkovich spectrum of A, denoted M(A), is the set of multiplicative seminorms on A that are bounded by the norm of A.
The Berkovich spectrum is equipped with the weakest topology such that for any f in A the map
M(A) → R sending a seminorm x to its value x(f) on f
is continuous.
The Berkovich spectrum of a normed ring is non-empty if is non-zero and is compact if is complete.
If x is a point of the spectrum of A, then the elements f with x(f) = 0 form a prime ideal of A. The field of fractions of the quotient by this prime ideal is a normed field, whose completion is a complete field with a multiplicative norm; this field is denoted by H(x), and the image of an element f is denoted by f(x). The field H(x) is generated by the image of A.
Conversely, a bounded map from A to a complete normed field with a multiplicative norm that is generated by the image of A gives a point in the spectrum of A.
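As a concrete instance of these definitions, the p-adic absolute value |n|_p = p^(−v_p(n)) is a multiplicative seminorm on the ring Z that is bounded by the usual norm. The Python sketch below (p = 5 is an arbitrary choice; helper names are illustrative) checks the axioms on a range of integers:

```python
import itertools

def vp(n, p):
    # p-adic valuation of a nonzero integer
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def abs_p(n, p):
    # the p-adic seminorm; |0|_p = 0 by convention
    return 0.0 if n == 0 else p ** (-vp(n, p))

p = 5
for a, b in itertools.product(range(-30, 31), repeat=2):
    # multiplicative: |ab| = |a| |b|
    assert abs(abs_p(a * b, p) - abs_p(a, p) * abs_p(b, p)) < 1e-12
    # ultrametric (hence triangle) inequality: |a + b| <= max(|a|, |b|)
    assert abs_p(a + b, p) <= max(abs_p(a, p), abs_p(b, p)) + 1e-12
    # bounded by the usual (archimedean) norm on nonzero integers
    if a != 0:
        assert abs_p(a, p) <= abs(a)
```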
The spectral radius of f,
ρ(f) = lim_{n→∞} ‖f^n‖^(1/n),
is equal to
sup_{x ∈ M(A)} |f(x)|.
Examples
The spectrum of a field complete with respect to a valuation is a single point corresponding to its valuation.
If A is a commutative C*-algebra, then the Berkovich spectrum is the same as the Gelfand spectrum. A point of the Gelfand spectrum is essentially a homomorphism to C, and its absolute value is the corresponding seminorm in the Berkovich spectrum.
Ostrowski's theorem shows that the Berkovich spectrum of the integers (with the usual norm) consists of the powers |·|_p^ε of the usual valuations, for p a prime or ∞. If p is a prime then 0 < ε ≤ ∞, and if p = ∞ then 0 < ε ≤ 1; when ε = 0 these all coincide with the trivial valuation that is 1 on all non-zero elements. For each p (prime or infinity) we get a branch which is homeomorphic to a real interval; the branches meet at the point corresponding to the trivial valuation. The open neighborhoods of the tri
https://en.wikipedia.org/wiki/First-difference%20estimator | In statistics and econometrics, the first-difference (FD) estimator is an estimator used to address the problem of omitted variables with panel data. It is consistent under the assumptions of the fixed effects model. In certain situations it can be more efficient than the standard fixed effects (or "within") estimator.
The estimator requires data on a dependent variable, y_it, and independent variables, x_it, for a set of individual units i = 1, …, N and time periods t = 1, …, T. The estimator is obtained by running a pooled ordinary least squares (OLS) estimation for a regression of Δy_it on Δx_it.
Derivation
The FD estimator avoids bias due to some unobserved, time-invariant variable a_i, using the repeated observations over time:
y_it = x_it β + a_i + u_it, t = 1, …, T.
Differencing the equations gives:
Δy_it = y_it − y_i,t−1 = Δx_it β + Δu_it,
which removes the unobserved a_i.
The FD estimator is then obtained by using the differenced terms for x and u in OLS:
β̂_FD = (ΔX′ ΔX)^(−1) ΔX′ Δy,
where ΔX and Δy are notation for matrices of the relevant differenced variables. Note that the rank condition must be met for ΔX′ ΔX to be invertible (rank(ΔX) = k), where k is the number of regressors.
Let and define analogously. If , by the Central limit theorem, Law of large numbers, and Slutsky's theorem, the estimator is distributed normally with asymptotic variance of .
Under the assumption of homoskedasticity and no serial correlation, the asymptotic variance can be estimated with
where is given by
and .
Properties
To be unbiased, the fixed effects estimator (FE) requires strict exogeneity, E[u_it | x_i1, …, x_iT, a_i] = 0 for all t. The first-difference estimator is also unbiased under this assumption. Under the weaker assumption that E[Δx_it′ Δu_it] = 0, the FD estimator is consistent. Note that this assumption is less restrictive than the assumption of strict exogeneity, which is required for consistency using the FE estimator when T is fixed. If T goes to infinity, then both FE and FD are consistent with the weaker assumption of contemporaneous exogeneity.
Relation to fixed effects estimator
For T = 2, the FD and fixed effects estimators are numerically equivalent.
Under the assumption of homoscedasticity and no serial correlation in u_it, the FE estimator is more efficient than the FD estimator. This is because differencing serially uncorrelated errors induces serial correlation in Δu_it. If u_it follows a random walk, however, the FD estimator is more efficient, as the differenced errors Δu_it are then serially uncorrelated.
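The consistency claim can be illustrated by simulation. The Python sketch below uses a single regressor, so that the FD estimator reduces to the scalar OLS formula β̂ = Σ ΔxΔy / Σ Δx²; the data-generating parameters and panel dimensions are arbitrary choices:

```python
import random

random.seed(0)
N, T, beta = 500, 4, 2.0   # panel dimensions and true coefficient

num = den = 0.0
for i in range(N):
    a_i = random.gauss(0.0, 3.0)   # unobserved time-invariant effect
    x = [random.gauss(0.0, 1.0) for _ in range(T)]
    y = [beta * x[t] + a_i + random.gauss(0.0, 0.5) for t in range(T)]
    for t in range(1, T):
        # first differencing removes a_i from the equation
        dx, dy = x[t] - x[t - 1], y[t] - y[t - 1]
        num += dx * dy
        den += dx * dx

beta_fd = num / den   # pooled OLS on the differenced data
assert abs(beta_fd - beta) < 0.1
```

Even though the individual effect a_i is large relative to the noise, the differenced regression recovers β because a_i cancels in every Δy_it.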
See also
Factor analysis
Panel analysis
References
Estimator
Latent variable models |
https://en.wikipedia.org/wiki/World%20Integrated%20Trade%20Solution | The World Integrated Trade Solution (WITS) is a trade software provided by the World Bank for users to query several international trade databases.
WITS allows the user to query trade statistics (export, import, re-exports and re-imports) from the UN's repository of official international trade statistics and relevant analytical tables (UN COMTRADE), tariff and non-tariff measures data from UNCTAD trade analysis and information system, tariff and bound tariff information from WTO's integrated data base for applied tariffs and imports, and from the WTO's consolidated tariff schedules database for the bound duties of all WTO members. WITS also has a module called global preferential trade agreement to search and browse free trade agreements. It also has modules to calculate several trade indicators and perform tariff cut simulation.
WITS has multiple sections, including summary trade statistics by country on total exports, imports, export/import partners, top product groups exported/imported, top exporters and importers in the World, derived analytical databases, and a WITS application allowing the user to use the underlying data to generate custom trade statistics and indicators and tariff cut simulations.
Summary trade statistics by country
The trade data for each country is divided into several sections. The first section is the country profile summary, and provides a summary of the key indicators in trade, tariffs, trade indicators, top export and import partners of the country, and top exported products. The next section is by trading partner and provides the top export or import partners of the country with the trade value and partner share. The final section is by product group, providing details of exports and imports of the country by various standard product groups, such as by HS sector, SITC revision 2 standard product groups, and UNCTAD's stages of processing.
Advanced analysis
The second section of WITS allows users to perform advanced analysis and select their own set of country and country groups, product and product groups, bulk download data, analyze trade competitiveness of countries and perform tariff cut simulation. The trade outcomes module provides a flexible array of options. These options include the selection of countries of interest, product classifications, the usage of reported or mirrored data, and the years of the analysis. In addition, users can also create ad-hoc country and product groups or—when relevant—investigate specific trading partners. It is also possible to generate only a subset of indicators and get comparative data on peer countries. The user's guideline document provides specific details for these options.
As an alternative to the indicator by indicator analysis, the software offers a built-in set of choices that the user can automatically employ to generate the set of indicators by section for the country and the year of choice. The output is data for each indicator along with a companion visualization.
|
https://en.wikipedia.org/wiki/Optimal%20estimation | In applied statistics, optimal estimation is a regularized matrix inverse method based on Bayes' theorem.
It is used very commonly in the geosciences, particularly for atmospheric sounding.
A matrix inverse problem looks like this:
y = A x.
The essential concept is to transform the matrix A into a conditional probability and the variables y and x into probability distributions, by assuming Gaussian statistics and using empirically-determined covariance matrices.
Derivation
Typically, one expects the statistics of most measurements to be Gaussian. So, for example, for P(y|x) we can write:
P(y|x) = (2π)^(−m/2) |S_ε|^(−1/2) exp(−(1/2) (y − Ax)ᵀ S_ε⁻¹ (y − Ax)),
where m and n are the numbers of elements in y and x respectively, A is the matrix to be solved (the linear or linearised forward model) and S_ε is the covariance matrix of the noise in y. This can be similarly done for x:
P(x) = (2π)^(−n/2) |S_a|^(−1/2) exp(−(1/2) (x − x_a)ᵀ S_a⁻¹ (x − x_a)).
Here P(x) is taken to be the so-called "a-priori" distribution: x_a denotes the a-priori values for x, while S_a is its covariance matrix.
The nice thing about the Gaussian distributions is that only two parameters are needed to describe them, and so the whole problem can be converted once again to matrices. Assuming that P(x|y) takes the following form:
P(x|y) ∝ P(y|x) P(x),
P(y) may be neglected since, for a given value of y, it is simply a constant scaling term. Now it is possible to solve for both the expectation value of x, x̂, and for its covariance matrix Ŝ by equating the corresponding terms in the exponents. This produces the following equations:
x̂ = x_a + S_a Aᵀ (A S_a Aᵀ + S_ε)⁻¹ (y − A x_a),
Ŝ = (Aᵀ S_ε⁻¹ A + S_a⁻¹)⁻¹.
Because we are using Gaussians, the expected value is equivalent to the maximum likely value, and so this is also a form of maximum likelihood estimation.
Typically with optimal estimation, in addition to the vector of retrieved quantities, one extra matrix is returned along with the covariance matrix. This is sometimes called the resolution matrix or the averaging kernel and is calculated as follows:
R = S_a Aᵀ (A S_a Aᵀ + S_ε)⁻¹ A.
This tells us, for a given element of the retrieved vector, how much of the other elements of the vector are mixed in. In the case of a retrieval of profile information, it typically indicates the altitude resolution for a given altitude. For instance, if the resolution vectors for all the altitudes contain non-zero elements (to a numerical tolerance) in their four nearest neighbours, then the altitude resolution is only one fourth that of the actual grid size.
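As a minimal numerical sketch, consider the scalar case, where the Gaussian update that optimal estimation reduces to can be written with ordinary arithmetic. The standard form x̂ = x_a + S_a Aᵀ (A S_a Aᵀ + S_ε)⁻¹ (y − A x_a) is assumed, and all numbers below are purely illustrative:

```python
# Scalar retrieval: one unknown x, one measurement y = A*x + noise.
A = 2.0                   # (linearised) forward model
x_a, S_a = 0.0, 1.0       # a-priori value and its variance
S_eps = 0.25              # measurement-noise variance
y = 2.1                   # observed value

gain = S_a * A / (A * S_a * A + S_eps)     # Kalman-style gain factor
x_hat = x_a + gain * (y - A * x_a)         # retrieved (expected) value
S_hat = 1.0 / (A / S_eps * A + 1.0 / S_a)  # posterior variance
R = gain * A                               # scalar averaging kernel

# The retrieval lies between the prior x_a and the raw inversion y / A,
# and the posterior variance is smaller than both S_a and S_eps / A**2.
assert x_a < x_hat < y / A
assert S_hat < min(S_a, S_eps / A ** 2)
assert 0.0 < R < 1.0
```

An averaging kernel R near 1 means the retrieval is dominated by the measurement; R near 0 means it is dominated by the a-priori.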
References
Inverse problems
Remote sensing |
https://en.wikipedia.org/wiki/Discrete%20Chebyshev%20transform | In applied mathematics, the discrete Chebyshev transform (DCT), named after Pafnuty Chebyshev, is either of two main varieties of DCTs: the discrete Chebyshev transform on the 'roots' grid of the Chebyshev polynomials of the first kind and the discrete Chebyshev transform on the 'extrema' grid of the Chebyshev polynomials of the first kind.
Discrete Chebyshev transform on the roots grid
The discrete Chebyshev transform of u(x) at the points
x_k = −cos(π(k + 1/2)/N), k = 0, …, N − 1
is given by:
a_m = (p_m/N) Σ_{k=0}^{N−1} u(x_k) T_m(x_k),
where p_m = 1 for m = 0 and p_m = 2 otherwise.
Using the definition of x_k, the sums become discrete cosine sums, and the inverse transform is:
u(x_k) = Σ_{m=0}^{N−1} a_m T_m(x_k).
(This happens to be the standard Chebyshev series evaluated on the roots grid.)
This can readily be obtained by manipulating the input arguments to a discrete cosine transform.
This can be demonstrated using the following MATLAB code:
function a = fct(f, l)
% x = -cos(pi/N*((0:N-1)'+1/2));
f = f(end:-1:1,:,:);            % reverse rows so the nodes are in increasing order
A = size(f); N = A(1);
if numel(A) == 3 && A(3) ~= 1   % handle a stack of 2-D arrays
    for i = 1:A(3)
        a(:,:,i) = sqrt(2/N) * dct(f(:,:,i));
        a(1,:,i) = a(1,:,i) / sqrt(2);
    end
else
    a = sqrt(2/N) * dct(f);
    a(1,:) = a(1,:) / sqrt(2);
end
end
The discrete cosine transform (dct) is in fact computed using a fast Fourier transform algorithm in MATLAB.
And the inverse transform is given by the MATLAB code:
function f = ifct(a, l)
% x = -cos(pi/N*((0:N-1)'+1/2))
k = size(a); N = k(1);
f = idct(sqrt(N/2) * [a(1,:) * sqrt(2); a(2:end,:)]);
f = f(end:-1:1,:);   % undo the row reversal applied in fct
end
Discrete Chebyshev transform on the extrema grid
This transform uses the extrema grid:
x_k = −cos(πk/(N − 1)), k = 0, …, N − 1.
This transform is more difficult to implement by use of a fast Fourier transform (FFT). However, it is more widely used because it is on the extrema grid, which tends to be the most useful for boundary value problems, mostly because it is easier to apply boundary conditions on this grid.
A discrete (and in fact fast, because it performs the DCT by using a fast Fourier transform) implementation is available at the MATLAB file exchange, created by Greg von Winckel, so it is omitted here.
In this case the transform and its inverse are
where p_m = 1 for m = 0 and m = N − 1, and p_m = 2 otherwise.
Usage and implementations
The primary uses of the discrete Chebyshev transform are numerical integration, interpolation, and stable numerical differentiation.
An implementation which provides these features is given in the C++ library Boost.
See also
Chebyshev polynomials
Discrete cosine transform
Discrete Fourier transform
List of Fourier-related transforms
References
Transforms
Articles with example MATLAB/Octave code |
https://en.wikipedia.org/wiki/Conformal%20welding | In mathematics, conformal welding (sewing or gluing) is a process in geometric function theory for producing a Riemann surface by joining together two Riemann surfaces, each with a disk removed, along their boundary circles. This problem can be reduced to that of finding univalent holomorphic maps f, g of the unit disk and its complement into the extended complex plane, both admitting continuous extensions to the closure of their domains, such that the images are complementary Jordan domains and such that on the unit circle they differ by a given quasisymmetric homeomorphism. Several proofs are known using a variety of techniques, including the Beltrami equation, the Hilbert transform on the circle and elementary approximation techniques. describe the first two methods of conformal welding as well as providing numerical computations and applications to the analysis of shapes in the plane.
Welding using the Beltrami equation
This method was first proposed by .
If f is a diffeomorphism of the circle, the Alexander extension gives a way of extending f to a diffeomorphism of the unit disk D:
where ψ is a smooth function with values in [0,1], equal to 0 near 0 and 1 near 1, and
with g(θ + 2π) = g(θ) + 2π.
The extension F can be continued to any larger disk |z| < R with R > 1. Accordingly in the unit disc
Now extend μ to a Beltrami coefficient on the whole of C by setting it equal to 0 for |z| ≥ 1. Let G be the corresponding solution of the Beltrami equation:
Let F1(z) = G ∘ F^{-1}(z) for |z| ≤ 1 and F2(z) = G(z) for |z| ≥ 1. Thus F1 and F2 are univalent holomorphic maps of |z| < 1 and |z| > 1 onto the inside and outside of a Jordan curve. They extend continuously to homeomorphisms fi of the unit circle onto the Jordan curve on the boundary. By construction they satisfy the conformal welding condition:

f_1 \circ f = f_2.
Welding using the Hilbert transform on the circle
The use of the Hilbert transform to establish conformal welding was first suggested by the Georgian mathematicians D.G. Mandzhavidze and B.V. Khvedelidze in 1958. A detailed account was given at the same time by F.D. Gakhov and presented in his classic monograph ().
Let en(θ) = einθ be the standard orthonormal basis of L2(T). Let H2(T) be Hardy space, the closed subspace spanned by the en with n ≥ 0. Let P be the orthogonal projection onto Hardy space and set T = 2P - I. The operator H = iT is the Hilbert transform on the circle and can be written as a singular integral operator.
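Since P fixes e_n for n ≥ 0 and annihilates e_n for n < 0, the operator T = 2P − I acts as +1 on nonnegative modes and −1 on negative modes, and H = iT is the corresponding Fourier multiplier. This multiplier action can be sketched numerically with an FFT (an illustration, not from the article; hilbert_circle is an ad hoc name):

```python
import numpy as np

def hilbert_circle(samples):
    """Apply H = iT, T = 2P - I, to samples of a function on the unit circle.
    P projects onto the nonnegative Fourier modes, so T acts as +/-1 per mode."""
    N = len(samples)
    c = np.fft.fft(samples)              # coefficients of e_n in FFT ordering
    n = np.fft.fftfreq(N, d=1.0 / N)     # mode numbers 0, 1, ..., -2, -1
    T = np.where(n >= 0, 1.0, -1.0)      # T e_n = e_n (n >= 0), -e_n (n < 0)
    return np.fft.ifft(1j * T * c)       # H = iT
```

On a single mode this reproduces the definition: H e_1 = i e_1 and H e_{-1} = −i e_{-1}.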
Given a diffeomorphism f of the unit circle, the task is to determine two univalent holomorphic functions
defined in |z| < 1 and |z| > 1 and both extending smoothly to the unit circle, mapping onto a Jordan domain and its complement, such that
Let F be the restriction of f+ to the unit circle. Then
and
Hence
If V(f) denotes the bounded invertible operator on L2 induced by the diffeomorphism f, then the operator
is compact, indeed it is given by an operator with smooth kernel because P and T are given by singular in |
https://en.wikipedia.org/wiki/2-EPT%20probability%20density%20function | In probability theory, a 2-EPT probability density function is a member of a class of probability density functions on the real line. The class contains the density functions of all distributions whose characteristic functions are strictly proper rational functions (i.e., the degree of the numerator is strictly less than the degree of the denominator).
Definition
A 2-EPT probability density function is a probability density function on \mathbf{R} with a strictly proper rational characteristic function. On either [0, +\infty) or (-\infty, 0) these probability density functions are exponential-polynomial-trigonometric (EPT) functions.
Any EPT density function on [0, +\infty) can be represented as

f(x) = c_P e^{A_P x} b_P

where e^{A_P x} represents a matrix exponential, A_N, A_P are square matrices, b_N, b_P are column vectors and c_N, c_P are row vectors. Similarly the EPT density function on (-\infty, 0) is expressed as

f(x) = c_N e^{A_N x} b_N.

The parameterization

(A_N, b_N, c_N, A_P, b_P, c_P)

is the minimal realization of the 2-EPT function.
The general class of probability measures on with (proper) rational characteristic functions are densities corresponding to mixtures of the pointmass at zero ("delta distribution") and 2-EPT densities. Unlike phase-type and matrix geometric distributions, the 2-EPT probability density functions are defined on the whole real line. It has been shown that the class of 2-EPT densities is closed under many operations and using minimal realizations these calculations have been illustrated for the two-sided framework in Sexton and Hanzon. The most involved operation is the convolution of 2-EPT densities using state space techniques. Much of the work centers on the ability to decompose the rational characteristic function into the sum of two rational functions with poles located in either the open left or open right half plane. The variance-gamma distribution density has been shown to be a 2-EPT density under a parameter restriction and the variance gamma process can be implemented to demonstrate the benefits of adopting such an approach for financial modelling purposes.
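As a concrete illustration (not taken from the article), the one-sided Erlang density λ²x e^{−λx} on [0, ∞) is an EPT function with a two-dimensional matrix-exponential realization; the names A, b, c below are ad hoc:

```python
import numpy as np
from scipy.linalg import expm

# An illustrative realization of the Erlang(2, lam) density on [0, inf):
# f(x) = c exp(A x) b = lam^2 * x * exp(-lam * x)
lam = 2.0
A = np.array([[-lam,  lam],
              [ 0.0, -lam]])
b = np.array([0.0, lam])
c = np.array([1.0, 0.0])

def f(x):
    """Evaluate the EPT density through the matrix exponential."""
    return c @ expm(A * x) @ b

# Because A is stable, the total mass has the closed form
# integral_0^inf c e^{Ax} b dx = -c A^{-1} b
mass = -c @ np.linalg.solve(A, b)
```

Evaluating f pointwise matches the Erlang formula, and the closed-form integral confirms the realization describes a probability density.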
It can be shown using Parseval's theorem and an isometry that approximating the discrete time rational transform is equivalent to approximating the 2-EPT density itself in the L2-norm sense. The rational approximation software RARL2 is used to approximate the discrete time rational characteristic function of the density.
Applications
Examples of applications include option pricing, computing the Greeks and risk management calculations. Fitting 2-EPT density functions to empirical data has also been considered.
Notes
External links
2 - Exponential-Polynomial-Trigonometric (2-EPT) Probability Density Functions Website for background and Matlab implementations
Types of probability distributions
https://en.wikipedia.org/wiki/Probability%20bounds%20analysis | Probability bounds analysis (PBA) is a collection of methods of uncertainty propagation for making qualitative and quantitative calculations in the face of uncertainties of various kinds. It is used to project partial information about random variables and other quantities through mathematical expressions. For instance, it computes sure bounds on the distribution of a sum, product, or more complex function, given only sure bounds on the distributions of the inputs. Such bounds are called probability boxes, and constrain cumulative probability distributions (rather than densities or mass functions).
This bounding approach permits analysts to make calculations without requiring overly precise assumptions about parameter values, dependence among variables, or even distribution shape. Probability bounds analysis is essentially a combination of the methods of standard interval analysis and classical probability theory. Probability bounds analysis gives the same answer as interval analysis does when only range information is available. It also gives the same answers as Monte Carlo simulation does when information is abundant enough to precisely specify input distributions and their dependencies. Thus, it is a generalization of both interval analysis and probability theory.
The diverse methods comprising probability bounds analysis provide algorithms to evaluate mathematical expressions when there is uncertainty about the input values, their dependencies, or even the form of mathematical expression itself. The calculations yield results that are guaranteed to enclose all possible distributions of the output variable if the input p-boxes were also sure to enclose their respective distributions. In some cases, a calculated p-box will also be best-possible in the sense that the bounds could be no tighter without excluding some of the possible distributions.
P-boxes are usually merely bounds on possible distributions. The bounds often also enclose distributions that are not themselves possible. For instance, the set of probability distributions that could result from adding random values without the independence assumption from two (precise) distributions is generally a proper subset of all the distributions enclosed by the p-box computed for the sum. That is, there are distributions within the output p-box that could not arise under any dependence between the two input distributions. The output p-box will, however, always contain all distributions that are possible, so long as the input p-boxes were sure to enclose their respective underlying distributions. This property often suffices for use in risk analysis and other fields requiring calculations under uncertainty.
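For the sum of two random variables, the classical Makarov (Fréchet-type) bounds make this concrete: they bound P(X + Y ≤ z) under any dependence between X and Y. A minimal numeric sketch follows (illustrative only, not code from any PBA package; all names are ad hoc):

```python
import numpy as np

def sum_cdf_bounds(z, F, G, grid):
    """Pointwise bounds on P(X + Y <= z) valid for ANY dependence between
    X ~ F and Y ~ G (Makarov / Frechet-type bounds). Using a finite grid can
    only loosen the bounds, so they remain valid enclosures."""
    s = F(grid) + G(z - grid)
    lower = max(np.max(s) - 1.0, 0.0)   # sup_x max(F(x) + G(z-x) - 1, 0)
    upper = min(np.min(s), 1.0)         # inf_x min(F(x) + G(z-x), 1)
    return lower, upper

# Two uniform(0, 1) inputs; any dependence gives a sum CDF inside the bounds.
U = lambda t: np.clip(t, 0.0, 1.0)
grid = np.linspace(-1.0, 2.0, 3001)
lo, hi = sum_cdf_bounds(1.0, U, U, grid)
```

At z = 1 the bounds are the vacuous [0, 1]: without a dependence assumption, P(X + Y ≤ 1) really can be anywhere in that interval, while the independent value 0.5 lies inside it.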
History of bounding probability
The idea of bounding probability has a very long tradition throughout the history of probability theory. Indeed, in 1854 George Boole used the notion of interval bounds on probability in his The Laws of Thought. Also dating from the latter half of the 19th |
https://en.wikipedia.org/wiki/Rizza%20manifold | In differential geometry a Rizza manifold, named after Giovanni Battista Rizza, is an almost complex manifold also supporting a Finsler structure: this kind of manifold is also referred to as an almost Hermitian Finsler manifold.
Historical notes
The history of Rizza manifolds follows the history of the structure that such objects carry. According to , the geometry of complex Finsler structures was first studied in the paper : however, Rizza announced his results nearly two years before, in the short communications and , proving them in the article , nearly one year earlier than the one cited by Kobayashi. Rizza called this differential geometric structure, defined on even-dimensional manifolds, "Struttura di Finsler quasi Hermitiana": his motivation for the introduction of the concept seems to be the aim of comparing two different structures existing on the same manifold. Later started calling this structure "Rizza structure", and manifolds carrying it "Rizza manifolds".
Formal definition
The content of this paragraph closely follows references and , borrowing the scheme of notation equally from both sources. Precisely, given a differentiable manifold M and one of its points x ∈ M
TM is the tangent bundle of M;
TxM is the tangent space at x;
Let M be a 2n-dimensional Finsler manifold, n ≥ 1, and let F : TM → ℝ be its Finsler function. If the condition
holds true, then M is a Rizza manifold.
See also
Almost complex manifold
Complex manifold
Finsler manifold
Hermitian manifold
Notes
References
.
.
. In this paper, Shoshichi Kobayashi acknowledges Giovanni Battista Rizza as the first one to study complex manifolds with Finsler structure, now called Rizza manifolds.
. A tribute to Rizza by his former master Enzo Martinelli: an English translation of the title reads as: "Homage to Giovanni Battista Rizza on his 70th birthday".
. A short research announcement describing briefly the results proved in .
. Another short presentation of the results proved in : the English translation of the title reads as: "Finsler structures on almost complex manifolds".
. The article giving the proofs of the results previously announced in references and : the English translation of the title reads as: "Finsler structures of almost Hermitian type".
. This article is the one Shoshichi Kobayashi cites as the first one in the theory of Rizza manifolds: an English translation of the title reads as: "Hermitian and quadratic F-forms".
Differential geometry
Smooth manifolds |
https://en.wikipedia.org/wiki/Arithmetic%20zeta%20function | In mathematics, the arithmetic zeta function is a zeta function associated with a scheme of finite type over integers. The arithmetic zeta function generalizes the Riemann zeta function and Dedekind zeta function to higher dimensions. The arithmetic zeta function is one of the most-fundamental objects of number theory.
Definition
The arithmetic zeta function ζ_X(s) is defined by an Euler product analogous to the Riemann zeta function:

\zeta_X(s) = \prod_{x} \frac{1}{1 - N(x)^{-s}}

where the product is taken over all closed points x of the scheme X. Equivalently, the product is over all points whose residue field is finite. The cardinality of this field is denoted N(x).
Examples and properties
Varieties over a finite field
If X is the spectrum of a finite field with q elements, then

\zeta_X(s) = \frac{1}{1 - q^{-s}}.
For a variety X over a finite field with q elements, it is known by Grothendieck's trace formula that

\zeta_X(s) = Z(X, q^{-s})

where Z(X, t) is a rational function (i.e., a quotient of polynomials).
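Rationality can be illustrated symbolically in a small case (an illustrative sketch, not from the article): for the projective line over F_q, the point counts over F_{q^m} are N_m = q^m + 1, and the standard zeta series Z(X, t) = exp(Σ_{m≥1} N_m t^m / m) agrees with the rational function 1/((1 − t)(1 − qt)):

```python
import sympy as sp

t, q = sp.symbols('t q')
M = 6  # compare power series through order t^(M-1)

# Point counts of the projective line over F_{q^m}: N_m = q^m + 1
log_Z = sum((q**m + 1) * t**m / sp.Integer(m) for m in range(1, M))
Z_series = sp.series(sp.exp(log_Z), t, 0, M).removeO()

rational = 1 / ((1 - t) * (1 - q * t))
R_series = sp.series(rational, t, 0, M).removeO()

diff = sp.expand(Z_series - R_series)   # vanishes identically through order t^(M-1)
```

The truncated exponential only depends on the point counts up to m = M − 1, so the comparison is exact at every retained order.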
Given two varieties X and Y over a finite field, the zeta function of X \times Y is given by

Z(X \times Y, t) = Z(X, t) \ast Z(Y, t)

where \ast denotes the multiplication in the ring of Witt vectors of the integers.
Ring of integers
If X is the spectrum of the ring of integers, then ζ_X(s) is the Riemann zeta function. More generally, if X is the spectrum of the ring of integers of an algebraic number field, then ζ_X(s) is the Dedekind zeta function.
Zeta functions of disjoint unions
The zeta functions of affine and projective spaces over a scheme X are given by

\zeta_{\mathbf{A}^n_X}(s) = \zeta_X(s - n), \qquad \zeta_{\mathbf{P}^n_X}(s) = \prod_{i=0}^{n} \zeta_X(s - i).

The latter equation can be deduced from the former using that, for any X that is the disjoint union of a closed and open subscheme U and V, respectively,

\zeta_X(s) = \zeta_U(s) \zeta_V(s).

Even more generally, a similar formula holds for infinite disjoint unions. In particular, this shows that the zeta function of X is the product of the ones of the reduction of X modulo the primes p:

\zeta_X(s) = \prod_p \zeta_{X_p}(s).
Such an expression ranging over each prime number is sometimes called Euler product and each factor is called Euler factor. In many cases of interest, the generic fiber is smooth. Then, only finitely many are singular (bad reduction). For almost all primes, namely when has good reduction, the Euler factor is known to agree with the corresponding factor of the Hasse–Weil zeta function of . Therefore, these two functions are closely related.
Main conjectures
There are a number of conjectures concerning the behavior of the zeta function of a regular irreducible equidimensional scheme (of finite type over the integers). Many (but not all) of these conjectures generalize the one-dimensional case of well known theorems about the Euler-Riemann-Dedekind zeta function.
The scheme need not be flat over Z; in this case it is a scheme of finite type over some finite field F_p. This is referred to as the characteristic p case below. In the latter case, many of these conjectures (with the most notable exception of the Birch and Swinnerton-Dyer conjecture, i.e. the study of special values) are known. Very little is known for schemes that are flat over Z and are of dimension two and higher.
Meromorphic continuation and functional equation
Hasse and Weil conjecture |
https://en.wikipedia.org/wiki/Basil%20Gordon | Basil Gordon (December 23, 1931 – January 12, 2012) was a mathematician at UCLA, specializing in number theory and combinatorics. He obtained his Ph.D. at California Institute of Technology under the supervision of Tom Apostol. Ken Ono was one of his students.
Gordon is well known for Göllnitz–Gordon identities, generalizing the Rogers–Ramanujan identities. He also posed the still-unsolved Gaussian moat problem in 1962.
Gordon was drafted into the US Army, where he worked with the former Nazi rocket scientist Wernher von Braun. Gordon's calculations of the gravitational interactions of earth, moon, and satellite contributed to the success and longevity of Explorer I, which launched in 1958 and remained in orbit until 1970. He was the step-grandson of General George Barnett and is a descendant of the Gordon family of British distillers, producers of Gordon's Gin.
References
External links
In memoriam: Basil Gordon,Professor of Mathematics, Emeritus, 1931 – 2012, UCLA Mathematics Department website
Some Tauberian Theorems connected with the Prime Number Theorem, Basil Gordon, PhD thesis, 1956
2012 deaths
20th-century American mathematicians
21st-century American mathematicians
Combinatorialists
California Institute of Technology alumni
1931 births |
https://en.wikipedia.org/wiki/Tuka%20Tisam | Tuka Tisam (born 8 July 1986) is a retired Cook Islands footballer who played as a midfielder.
Career statistics
International
Statistics accurate as of match played 26 November 2011
References
1986 births
Living people
Cook Islands men's international footballers
Men's association football midfielders
People educated at Mount Albert Grammar School
Cook Island men's footballers
Expatriate football managers in Fiji
Expatriate men's soccer players in the United States
Expatriate men's association footballers in New Zealand
Association football managers by women's national team
Cook Islands women's national football team |
https://en.wikipedia.org/wiki/Skew%20gradient | In mathematics, a skew gradient of a harmonic function over a simply connected domain with two real dimensions is a vector field that is everywhere orthogonal to the gradient of the function and that has the same magnitude as the gradient.
Definition
The skew gradient can be defined using complex analysis and the Cauchy–Riemann equations.
Let w = f(x + iy) = u(x, y) + iv(x, y) be a complex-valued analytic function, where u, v are real-valued scalar functions of the real variables x, y.
A skew gradient is defined as:

\nabla^{\perp} u := \nabla v

and from the Cauchy–Riemann equations, it is derived that

\nabla^{\perp} u = \left(-\frac{\partial u}{\partial y}, \frac{\partial u}{\partial x}\right).
Properties
The skew gradient has two interesting properties. It is everywhere orthogonal to the gradient of u, and of the same length:

\nabla u \cdot \nabla^{\perp} u = 0, \qquad |\nabla u| = |\nabla^{\perp} u|.
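Both properties, together with the Cauchy–Riemann link to ∇v, can be verified symbolically for a concrete analytic function such as w = z² (an illustrative check, not from the article; the sign convention ∇⊥u = (−∂u/∂y, ∂u/∂x) is assumed):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
w = sp.expand((x + sp.I * y) ** 2)     # analytic: w = u + i v
u, v = sp.re(w), sp.im(w)              # u = x^2 - y^2, v = 2xy

grad_u = sp.Matrix([sp.diff(u, x), sp.diff(u, y)])
skew_u = sp.Matrix([-sp.diff(u, y), sp.diff(u, x)])   # gradient rotated by +90 degrees
grad_v = sp.Matrix([sp.diff(v, x), sp.diff(v, y)])

orth = sp.simplify(grad_u.dot(skew_u))                        # orthogonality
same_len = sp.simplify(grad_u.norm() ** 2 - skew_u.norm() ** 2)
cr = sp.simplify((skew_u - grad_v).norm())                    # Cauchy-Riemann: skew grad of u = grad of v
```

Here grad_u = (2x, −2y) and skew_u = (2y, 2x) = grad_v, so the dot product and the difference of squared lengths vanish identically.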
References
Peter Olver, Introduction to Partial Differential Equations, ch. 7, p. 232
Differential calculus
Generalizations of the derivative
Linear operators in calculus
Vector calculus |
https://en.wikipedia.org/wiki/List%20of%20Atlas%20launches%20%281960%E2%80%931969%29 |
Launch statistics
Rocket configurations
Launch sites
Launch outcomes
1960
1961
1962
1963
1964
1965
1966
1967
1968
1969
References
Main Page
List of Atlas launches
Atlas |
https://en.wikipedia.org/wiki/Jimmy%20Connors%20career%20statistics | This is a list of the main career statistics of former tennis player Jimmy Connors.
Grand Slam finals
Singles: 15 (8 titles, 7 runners-up)
Doubles: 3 (2 titles, 1 runner-up)
Mixed doubles: 1 (1 runner-up)
Grand Prix year-end championships finals
Singles: 1 (1 title)
WCT year-end championship finals
Singles: 3 (2 titles, 1 runner-up)
ATP Tour singles timeline
Qualifying matches and walkovers are neither official match wins nor losses.
Career finals
Singles titles (109)
Runners-up (55)
** The "Pepsi Grand Slam" was a four-man invitational tournament that did not award ATP ranking points. It is included in the ATP Tour statistics even though it was an ITF event.
Other singles titles
Here are Connors's tournament titles that are not included in the statistics on the Association of Tennis Professionals website. These are mainly special events such as invitational tournaments and exhibitions (24).
Other singles titles (4–8 man fields)
These are non-ATP, exhibition and special events (16)
Challenge matches / Exhibition matches (2 players) / amateur titles (50)
1970: Modesto, California (amateur title) – Final opponent: Robert Potthast 4–6 6–4 6–3
1975: Ilie Năstase – Syracuse, N.Y. 6–4 6–7 6–2
1975: Rod Laver – Las Vegas 6–4, 6–2, 3–6, 7–5.
1975: Vitas Gerulaitis – Ridgefield, Connecticut 6–3 7–6
1975: John Newcombe – Las Vegas 6–3, 4–6, 6–2, 6–4
1976: Manuel Orantes – Las Vegas 6–2 6–2 6–1
1976: Ilie Năstase – Providence 6–4 6–1
1976: Tony Roche – Hartford (Aetna World Cup WCT) 6–4 7–5
1976: John Newcombe – Hartford (Aetna World Cup WCT) 6–2 6–3
1977: John Alexander – Hartford (Aetna World Cup WCT) 6–1 6–4
1977: Tony Roche – Hartford (Aetna World Cup WCT) 6–4 7–5
1977: Ilie Năstase – Puerto Rico 4–6 6–3 7–5 6–3
1978: John Newcombe – New Haven (Aetna World Cup WCT) 6–4 6–4
1978: John Alexander – New Haven (Aetna World Cup WCT) 6–2 6–4
1978: Eddie Dibbs – Toledo, Ohio 6–4 6–4
1979: Hank Pfister – São Paulo (Brazil) 3–6 6–2 6–4 6–1
1979: Guillermo Vilas – Buenos Aires 7–5 6–3 6–3
1980: Adriano Panatta – Copenhagen 6–4 6–1
1980: Björn Borg – Copenhagen 6–4 6–2
1980: Ilie Năstase – Detroit 7–6 6–3
1980: Ilie Năstase – Toronto 6–3 6–4
1980: Eddie Dibbs – Portland 6–4 7–6
1980: Eddie Dibbs – San Diego 6–4 6–3
1980: Roscoe Tanner – Napa Valley (Harvest Cup) 6–4 6–2
1981: Ilie Năstase – San Diego
1981: Ilie Năstase – Portland (Peugeot Tennis Invitational) 6–2 6–2
1982: Björn Borg – Richmond 6–4 3–6 7–5 6–3
1982: Björn Borg – Seattle 6–4 3–6 7–5
1982: Björn Borg – Los Angeles 6–3 2–6 6–2
1982: Björn Borg – Vancouver 6–2 5–7 6–4
1982: Björn Borg – San Francisco 7–5 7–6
1983: Björn Borg – Baton Rouge 6–7 6–4 6–4
1983: Björn Borg – Providence 6–4 6–4
1983: Björn Borg – Seoul 5–7 6–1 4–6 6–4 7–6
1983: Ivan Lendl – San Diego 6–2 5–7 6–1
1983: Kevin Curren – Cape Town (South Africa) 2–6 7–6 7–6 6–4
1983: Vitas Gerulaitis – Portland (Peugeot Tennis Invitational) 6–3 7–5
1983: Ilie Năstase – Tampa 6–2 7–5
1984: John McEnroe |
https://en.wikipedia.org/wiki/Piecewise-deterministic%20Markov%20process | In probability theory, a piecewise-deterministic Markov process (PDMP) is a process whose behaviour is governed by random jumps at points in time, but whose evolution is deterministically governed by an ordinary differential equation between those times. The class of models is "wide enough to include as special cases virtually all the non-diffusion models of applied probability." The process is defined by three quantities: the flow, the jump rate, and the transition measure.
The model was first introduced in a paper by Mark H. A. Davis in 1984.
Examples
Piecewise linear models such as Markov chains, continuous-time Markov chains, the M/G/1 queue, the GI/G/1 queue and the fluid queue can be encapsulated as PDMPs with simple differential equations.
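The three defining quantities (flow, jump rate, transition measure) can be made concrete with a toy "growth-collapse" simulation; the sketch below is illustrative only and all names are ad hoc:

```python
import random

def simulate_pdmp(T=10.0, lam=1.0, x0=1.0, seed=0):
    """Simulate a toy growth-collapse PDMP defined by its three quantities:
       flow:       dx/dt = 1 (deterministic motion between jumps)
       jump rate:  constant lam (so jump times form a Poisson process)
       transition: at a jump, x -> x/2 (a point-mass transition measure)
    Returns the trajectory as a list of (time, state) pairs."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    path = [(t, x)]
    while True:
        tau = rng.expovariate(lam)          # waiting time until the next jump
        if t + tau >= T:
            path.append((T, x + (T - t)))   # follow the flow to the horizon
            return path
        t += tau
        x += tau                            # deterministic flow up to the jump time
        path.append((t, x))                 # state just before the jump
        x /= 2.0                            # apply the transition measure
        path.append((t, x))                 # state just after the jump
```

Between jumps the trajectory is a deterministic solution of the ODE; randomness enters only through the exponential waiting times and the jump map.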
Applications
PDMPs have been shown useful in ruin theory, queueing theory, for modelling biochemical processes such as DNA replication in eukaryotes and subtilin production by the organism B. subtilis, and for modelling earthquakes. Moreover, this class of processes has been shown to be appropriate for biophysical neuron models with stochastic ion channels.
Properties
Löpker and Palmowski have shown conditions under which a time reversed PDMP is a PDMP. General conditions are known for PDMPs to be stable.
Galtier et al. studied the law of the trajectories of PDMPs and provided a reference measure in order to express the density of a trajectory of the PDMP. Their work opens the way to any application using densities of trajectories. (For instance, they used the density of trajectories to perform importance sampling; this work was further developed by Chennetier et al. to estimate the reliability of industrial systems.)
See also
Jump diffusion, a generalization of piecewise-deterministic Markov processes
Hybrid system (in the context of dynamical systems), a broad class of dynamical systems that includes all jump diffusions (and hence all piecewise-deterministic Markov processes)
References
Markov processes |
https://en.wikipedia.org/wiki/Eddie%20Epstein | Eddie Epstein is one of the pioneers of the modern age of baseball analysis, or sabermetrics. He was Director of Research and Statistics for the Baltimore Orioles from 1988 to 1994 and Director of Baseball Operations for the San Diego Padres from 1995 to 1999. He was President of his own baseball consulting company, EBC, Inc., from 2000 to 2011 and in that role consulted on baseball operations and player personnel matters for several major league teams, including the Cleveland Indians, Oakland A's, and Tampa Bay Rays. He wrote the 1995 STATS Minor League Scouting Notebook, co-authored Baseball Dynasties with Rob Neyer, and wrote Dominance - the subject of which was the greatest NFL teams since 1950. The Wall Street Journal review of Dominance claimed that the book was, "Without a doubt the best book on pro football analysis ever written."
References
Year of birth missing (living people)
Living people
Sports scientists
Baseball statisticians |
https://en.wikipedia.org/wiki/List%20of%20tennis%20rivalries | In tennis history there have been a number of notable rivalries. This is a list of some of the greatest rivalries.
For the pre-1991 eras, complete statistics on all matches are difficult to obtain in definitive form. In many years there were significant numbers of minor events and exhibition matches outside the designated tours, some of which were not reported in newspapers or recorded by the respective amateur or professional tour management. The approximate nature of these results should be understood and kept in mind while reading this data.
For the purpose of this article only, the criteria for inclusion are (all must be met):
Both players must have a career high ranking of world No. 3 or better, and at least one of them must have reached No. 1.
Both must have met multiple times in semi-final or final stages of Grand Slam events (or Pro Slam and also WCCC and WHCC count).
At least twelve of the career match-ups between them must be in main (regular) tour or circuit series of tournaments.
Men
Pre-open era
Open Era
* Including walkovers or abandoned matches (not counted in head to heads, same as the official ATP head to heads).
Women
Amateur Era
Open Era
≈ minimum confirmed (early records are incomplete)
See also
List of sports rivalries
Big Three (tennis)
References
rivalries |
https://en.wikipedia.org/wiki/Nonparametric%20skew | In statistics and probability theory, the nonparametric skew is a statistic occasionally used with random variables that take real values. It is a measure of the skewness of a random variable's distribution—that is, the distribution's tendency to "lean" to one side or the other of the mean. Its calculation does not require any knowledge of the form of the underlying distribution—hence the name nonparametric. It has some desirable properties: it is zero for any symmetric distribution; it is unaffected by a scale shift; and it reveals either left- or right-skewness equally well. In some statistical samples it has been shown to be less powerful than the usual measures of skewness in detecting departures of the population from normality.
Properties
Definition
The nonparametric skew is defined as

S = \frac{\mu - \nu}{\sigma}

where the mean (µ), median (ν) and standard deviation (σ) of the population have their usual meanings.
Properties
The nonparametric skew is one third of the Pearson 2 skewness coefficient and lies between −1 and +1 for any distribution. This range is implied by the fact that the mean lies within one standard deviation of any median.
Under an affine transformation of the variable (X), the value of S does not change except for a possible change in sign. In symbols

S(aX + b) = \operatorname{sgn}(a)\, S(X)

where a ≠ 0 and b are constants and S(X) is the nonparametric skew of the variable X.
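A short numeric illustration (not part of the article) of the definition S = (mean − median)/σ, the ±1 bound, and affine invariance, using Python's statistics module; the population form of the standard deviation is assumed:

```python
import statistics

def nonparametric_skew(data):
    """S = (mean - median) / standard deviation (population form assumed)."""
    mu = statistics.fmean(data)
    nu = statistics.median(data)
    sigma = statistics.pstdev(data)
    return (mu - nu) / sigma

data = [1, 2, 2, 3, 10]                              # a right-skewed toy sample
S = nonparametric_skew(data)                         # positive, and |S| <= 1
S2 = nonparametric_skew([3 * x + 7 for x in data])   # affine map with a > 0 leaves S unchanged
```

For this sample the mean (3.6) exceeds the median (2), so S is positive, and rescaling the data by 3x + 7 leaves S untouched.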
Sharper bounds
The bounds of this statistic ( ±1 ) were sharpened by Majindar who showed that its absolute value is bounded by

2\sqrt{pq}

with

p = \Pr(X > \operatorname{E}(X))

and

q = \Pr(X < \operatorname{E}(X))

where X is a random variable with finite variance, E() is the expectation operator and Pr() is the probability of the event occurring.
When p = q = 0.5 the absolute value of this statistic is bounded by 1. With p = 0.1 and p = 0.01, the statistic's absolute value is bounded by 0.6 and 0.199 respectively.
Extensions
It is also known that
where ν0 is any median and E(.) is the expectation operator.
It has been shown that
where xq is the qth quantile. Quantiles lie between 0 and 1: the median (the 0.5 quantile) has q = 0.5. This inequality has also been used to define a measure of skewness.
This latter inequality has been sharpened further.
Another extension for a distribution with a finite mean has been published:
The bounds in this last pair of inequalities are attained when and for fixed numbers a < b.
Finite samples
For a finite sample with sample size n ≥ 2, with x_r the rth order statistic, m the sample mean and s the sample standard deviation corrected for degrees of freedom,
Replacing r with n / 2 gives the result appropriate for the sample median:
where a is the sample median.
Statistical tests
Hotelling and Solomons considered the distribution of the test statistic
where n is the sample size, m is the sample mean, a is the sample median and s is the sample's standard deviation.
Statistical tests of D have assumed that the null hypothesis being tested is that the distribution is symmetric.
Gastwir |
https://en.wikipedia.org/wiki/Jensen%27s%20theorem | In mathematics, Jensen's theorem may refer to:
Johan Jensen's inequality for convex functions
Johan Jensen's formula in complex analysis
Ronald Jensen's covering theorem in set theory |
https://en.wikipedia.org/wiki/Pandurang%20Vasudeo%20Sukhatme | Pandurang Vasudeo Sukhatme (1911–1997) was an Indian statistician. He is known for his pioneering work of applying random sampling methods in agricultural statistics and in biometry, in the 1940s. He was also influential in the establishment of the Indian Agricultural Statistics Research Institute. As a part of his work at the Food and Agriculture Organization in Rome, he developed statistical models for assessing the dimensions of hunger and future food supplies for the world. He also developed methods for measuring the size and nature of the protein gap.
His other major contributions included applying statistical techniques to the study of human nutrition. One of his ideas, the Sukhatme–Margen hypothesis, suggested that at low calorie intake levels, stored energy in the body is used with greater metabolic efficiency and that the metabolic efficiency decreases as the intake increases above the homeostatic range. This involved paying attention to intra-individual variability that was found to be more than the inter-individual variability in protein or calorie intake. He gave a genetic interpretation of the intra-individual variation jointly with P. Narain.
He was awarded the Padma Bhushan by the Government of India in 1971.
Early life
Sukhatme was born in a Deshastha Brahmin family on 27 July 1911 in village Budh, district Satara in the state of Maharashtra in India. He graduated in 1932 from Fergusson College with Mathematics as principal subject and Physics as subsidiary subject.
From 1932 to 1936, he studied at University College London, where he was awarded a Ph.D. in 1936 and a D.Sc. in 1939 for his work on bi-partitional functions. With J. Neyman and E. S. Pearson he made significant contributions to the statistical theory of sampling, which was instrumental in his subsequent research in the sampling theory of surveys and the improvement of agricultural statistics in India. This ushered in what may appropriately be termed the Sukhatme era in the development of agricultural statistics in India.
Professional life
1940–1951: Statistical Adviser, Indian Council of Agricultural Research, New Delhi.
1951–1971: Director, Statistics Division, Food and Agriculture Organization, Rome
Contributions
Some of Dr Sukhatme's significant contributions in the form of research papers and books are:
"Contribution to the Theory of the Representative Method", Journal of the Royal Statistical Society (1935). PhD thesis under the guidance of Jerzy Neyman and Egon Pearson (son of Karl Pearson).
"On Bi-partitional Functions", Philosophical Transactions of the Royal Society (1936). DSc thesis under the guidance of Ronald Fisher.
"Random Sampling for Estimating Rice Yield in Madras Province", Indian Journal of Agricultural Science (1945).
"The Problem of Plot Size in Large-Scale Yield Surveys", Journal of the American Statistical Association (1947).
"Use of Small Size Plots in Yield Surveys", Nature (1947)
(With V.G.Panse). Crop Surveys in India — II, Journal of In |
https://en.wikipedia.org/wiki/Dino%20Marcan | Dino Marcan (born 12 February 1991 in Rijeka, Yugoslavia) is a Croatian professional tennis player. He competes mostly on the ATP Challenger Tour.
Career statistics
Singles titles (2)
References
External links
1991 births
Living people
Croatian male tennis players
Sportspeople from Rijeka
French Open junior champions
Grand Slam (tennis) champions in boys' doubles
21st-century Croatian people |
https://en.wikipedia.org/wiki/Bloch%27s%20principle | Bloch's Principle is a philosophical principle in mathematics stated by André Bloch.
Bloch states the principle in Latin as Nihil est in infinito quod non prius fuerit in finito ("There is nothing in the infinite which was not first in the finite"), and explains it as follows: every proposition in whose statement the actual infinity occurs can always be considered a consequence, almost immediate, of a proposition where it does not occur, a proposition in finite terms.
Bloch mainly applied this principle to the theory of functions of a complex variable. Thus, for example, according to this principle, Picard's theorem corresponds to Schottky's theorem, and Valiron's theorem corresponds to Bloch's theorem.
Based on his principle, Bloch was able to predict or conjecture several important results, such as Ahlfors's five islands theorem, Cartan's theorem on holomorphic curves omitting hyperplanes, and Hayman's result that an exceptional set of radii is unavoidable in Nevanlinna theory.
More recently, several general theorems have been proved which can be regarded as rigorous statements in the spirit of Bloch's principle:
Zalcman's lemma
A family $\mathcal{F}$ of functions meromorphic on the unit disc $\Delta$ is not normal if and only if there exist:
a number $0 < r < 1$,
points $z_n$ with $|z_n| < r$,
functions $f_n \in \mathcal{F}$,
numbers $\rho_n \to 0^+$,
such that $f_n(z_n + \rho_n \zeta) \to g(\zeta)$
spherically uniformly on compact subsets of $\mathbb{C}$, where $g$ is a nonconstant meromorphic function on $\mathbb{C}$.
Zalcman's lemma may be generalized to several complex variables. First, define the following:
A family $\mathcal{F}$ of holomorphic functions on a domain $\Omega \subseteq \mathbb{C}^n$ is normal in $\Omega$ if every sequence of functions $\{f_j\} \subseteq \mathcal{F}$ contains either a subsequence which converges to a limit function $f$ uniformly on each compact subset of $\Omega$, or a subsequence which converges uniformly to $\infty$ on each compact subset.
For every function $u$ of class $C^2(\Omega)$, define at each point $z \in \Omega$ a Hermitian form
$L_u(z, v) := \sum_{k,l=1}^{n} \frac{\partial^2 u}{\partial z_k \partial \bar{z}_l}(z)\, v_k \bar{v}_l \qquad (v \in \mathbb{C}^n),$
and call it the Levi form of the function $u$ at $z$.
If the function $f$ is holomorphic on $\Omega$, set
$f^\sharp(z) := \sup_{|v|=1} \left( L_{\log(1+|f|^2)}(z, v) \right)^{1/2}.$
This quantity is well defined since the Levi form $L_{\log(1+|f|^2)}$ is nonnegative for all $z \in \Omega$.
In particular, for $n = 1$ the above formula takes the form
$f^\sharp(z) = \frac{|f'(z)|}{1 + |f(z)|^2}$
and coincides with the spherical metric on $\mathbb{C}$.
The following characterization of normality can be made based on Marty's theorem, which states that a family is normal if and only if the spherical derivatives are locally bounded:
Suppose that the family $\mathcal{F}$ of functions holomorphic on $\Omega \subseteq \mathbb{C}^n$ is not normal at some point $z_0 \in \Omega$. Then there exist sequences $f_j \in \mathcal{F}$, $z_j \to z_0$, $\rho_j \to 0^+$, such that the sequence $g_j(z) = f_j(z_j + \rho_j z)$ converges locally uniformly in $\mathbb{C}^n$ to a non-constant entire function $g$ satisfying $g^\sharp(z) \le g^\sharp(0) = 1$.
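In one complex variable, the quantity controlled by Marty's theorem is the spherical derivative $f^\sharp(z) = |f'(z)|/(1+|f(z)|^2)$. The following sketch checks the formula numerically for an arbitrarily chosen test function $f(z) = z^2$ (the function and step size are assumptions for illustration):

```python
def spherical_derivative(f, z, h=1e-6):
    """Spherical derivative f#(z) = |f'(z)| / (1 + |f(z)|^2), with f'(z)
    estimated by a central finite difference along the real direction
    (which equals the complex derivative for holomorphic f)."""
    df = (f(z + h) - f(z - h)) / (2 * h)
    return abs(df) / (1 + abs(f(z)) ** 2)

f = lambda z: z * z
z = 1 + 1j
exact = 2 * abs(z) / (1 + abs(z) ** 4)   # |f'(z)| = 2|z|, |f(z)|^2 = |z|^4
print(spherical_derivative(f, z), exact)
```

Both values agree to high accuracy, illustrating that the Levi-form expression reduces to the familiar spherical derivative when $n = 1$.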
Brody's lemma
Let X be a compact complex analytic manifold, such that every holomorphic map from the complex plane
to X is constant. Then there exists a metric on X such that every holomorphic map from the unit disc with the Poincaré metric to X does not increase distances.
References
Mathematical principles
Philosophy of mathematics |
https://en.wikipedia.org/wiki/Entropy%20influence%20conjecture | In mathematics, the entropy influence conjecture is a statement about Boolean functions originally conjectured by Ehud Friedgut and Gil Kalai in 1996.
Statement
For a function $f \colon \{-1,1\}^n \to \{-1,1\}$, note its Fourier expansion
$f(x) = \sum_{S \subseteq [n]} \hat{f}(S)\, x_S, \qquad \text{where } x_S = \prod_{i \in S} x_i.$
The entropy–influence conjecture states that there exists an absolute constant $C$ such that $H(f) \le C\, I(f)$, where the total influence $I(f)$ is defined by
$I(f) = \sum_{S} |S|\, \hat{f}(S)^2,$
and the entropy (of the spectrum) $H(f)$ is defined by
$H(f) = -\sum_{S} \hat{f}(S)^2 \log \hat{f}(S)^2$
(where x log x is taken to be 0 when x = 0).
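Both quantities can be computed exhaustively for small n. The sketch below (an illustration, not from the source; it assumes the ±1-cube convention and base-2 logarithms, which only changes the constant C) evaluates the Walsh–Fourier coefficients of the 3-bit majority function and reports its total influence and spectral entropy:

```python
from itertools import product, combinations
from math import log2, prod

def fourier_coefficients(f, n):
    """Walsh-Fourier coefficients hat f(S) = E_x[ f(x) * prod_{i in S} x_i ]."""
    cube = list(product((-1, 1), repeat=n))
    return {
        S: sum(f(x) * prod(x[i] for i in S) for x in cube) / len(cube)
        for k in range(n + 1)
        for S in combinations(range(n), k)
    }

def total_influence(coeffs):
    return sum(len(S) * c * c for S, c in coeffs.items())

def spectral_entropy(coeffs):
    return -sum(c * c * log2(c * c) for c in coeffs.values() if c != 0.0)

def maj3(x):
    """Majority of three +/-1 bits."""
    return 1 if sum(x) > 0 else -1

coeffs = fourier_coefficients(maj3, 3)
print(total_influence(coeffs), spectral_entropy(coeffs))  # 1.5 2.0
```

For majority on 3 bits the nonzero coefficients are $\hat{f}(\{i\}) = 1/2$ and $\hat{f}(\{1,2,3\}) = -1/2$, giving $I(f) = 1.5$ and $H(f) = 2$.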
See also
Analysis of Boolean functions
References
Unsolved Problems in Number Theory, Logic and Cryptography
The Open Problems Project, discrete and computational geometry problems
Entropy
Conjectures |
https://en.wikipedia.org/wiki/Akaflieg%20Darmstadt%20D-40 | The Akaflieg Darmstadt D-40 is an experimental variable geometry single seat sailplane, fitted with almost full span, camber changing flaps for optimum aerodynamics in weak thermals and integrated into the wing so as to minimise flap tip drag. One flew successfully but the D-40, like other variable geometry sailplanes, was not commercialised.
Design and development
The Akademische Fliegergruppe of the Technical University of Darmstadt (Akaflieg Darmstadt) was first formed in 1921. It was, and is, a group of aeronautical students who design and construct aircraft as part of their studies and with the help and encouragement of their University. Design work on the variable wing geometry D-40 began in 1980 but the first flight did not take place until 15 August 1986.
As understanding of thermal soaring grew in the 1930s, glider pilots and designers became aware of two conflicting requirements for cross country flights. The aircraft needed good climb characteristics and low stalling speeds to enable tight turns within thermals but high speeds in the sinking air between them. These respectively called for low and high wing loadings on wings with high and low camber. Several designs, e.g. the 1938 Akaflieg Hannover AFH-4 and the later LET L-13 Blaník and Beatty-Johl BJ-2, had added large area slotted Fowler flaps on the inner part of the wing to increase camber and add area when extended. These satisfactorily reduced stall speed and with it the turn radius, but disappointed hopes of improving climb rates because of vortex generation (induced drag) at the tips of the flaps, seriously decreasing the lift to drag ratio. A solution to this problem was to extend the whole trailing edge, including the ailerons, and this route was taken by the disappointing, heavy and complicated Operation Sigma Sigma, the more successful but still heavy and complex Akaflieg München Mü27 and the World Championship winning 15 m class Akaflieg Braunschweig SB-11.
These last three designs changed the wing geometry by extending the wing rearwards at right angles to the trailing edge. Akaflieg Darmstadt took a different approach, pivoting the single piece flap near the tip and sliding it out from within the wing trailing edge, earning the mechanism the nickname "penknife wing". As it is extended, a track in the fuselage side guides the thin flap into its high camber position at the wing root. The wing area is increased by 21% with the flaps extended. Although this arrangement avoids the vortexes at the flap tip, like any area increasing method used on a fixed span wing it results in a lower aspect ratio and hence a lower lift to drag ratio.
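The trade-off in the last sentence is simple arithmetic: on a fixed span, a 21% area increase divides the aspect ratio (span squared over area) by 1.21. A quick illustration with hypothetical span and area figures (not the D-40's actual dimensions):

```python
def aspect_ratio(span_m, area_m2):
    """Aspect ratio = span^2 / area."""
    return span_m ** 2 / area_m2

span = 15.0            # m -- hypothetical span, for illustration only
area_clean = 10.0      # m^2 -- hypothetical clean-wing area
area_flapped = area_clean * 1.21   # flaps extended: +21% area, span unchanged

print(aspect_ratio(span, area_clean))    # 22.5
print(aspect_ratio(span, area_flapped))  # ~18.6: lower AR, more induced drag
```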
The D-40 is constructed from a mixture of plastic-fibre composites, glass reinforced plastic (GRP), carbon fibre reinforced plastic (CRP) and aramid reinforced plastic (ARP) with some use of balsa wood. The shoulder mounted wing has a spar built from CRP rovings and GRP-balsa webs. The skin of both wing and flaps is an ARP/CRP/ |
https://en.wikipedia.org/wiki/Eckhard%20Meinrenken | Eckhard Meinrenken is a German-Canadian mathematician working in differential geometry and mathematical physics. He is a professor at University of Toronto.
Education and career
Meinrenken studied Physics at Albert-Ludwigs-Universität Freiburg, where he obtained a Diplom in 1990 and a PhD in 1994, with a thesis entitled Vielfachheitsformeln für die Quantisierung von Phasenräumen (Multiplicity formulas for the quantization of phase spaces), under the supervision of .
He was a postdoc at the Massachusetts Institute of Technology from 1995 to 1997, and then joined the University of Toronto Department of Mathematics in 1998 as an assistant professor. In 2000 he became Associate Professor, and since 2004 he has been Full Professor at the same university.
Meinrenken was awarded in 2001 an André Aisenstadt Prize, in 2003 a McLean Award and in 2007 a NSERC Steacie Memorial Fellowship.
In 2002 he was invited speaker at the International Congress of Mathematicians in Beijing and in 2008 he was elected Fellow of the Royal Society of Canada.
Research
Meinrenken's research interests lie in the fields of differential geometry and mathematical physics. In particular, he works on symplectic geometry, Lie theory and Poisson geometry.
Among his most important contributions, in 1998 he proved, together with Reyer Sjamaar, the conjecture that "quantisation commutes with reduction", originally formulated in 1982 by Guillemin and Sternberg. In the same year, together with Anton Alekseev and Anton Malkin, he introduced Lie group-valued moment maps in symplectic geometry.
Meinrenken is author of more than 50 research papers in peer-reviewed journals, as well as a monograph on Clifford algebras. He has supervised 9 PhD students as of 2021.
References
Living people
German mathematicians
Canadian mathematicians
Academic staff of the University of Toronto
Year of birth missing (living people)
University of Freiburg alumni
Fellows of the Royal Society of Canada |
https://en.wikipedia.org/wiki/William%20Brown%20%28psychologist%29 | William Brown FRCP (5 December 1881 – 17 May 1952) was a British psychologist and psychiatrist.
Biography
Brown was born in Slinfold, Sussex. He studied mathematics and philosophy at Christ Church, Oxford. He took medical training at King's College London and graduated MB BCh from Oxford in 1914. He worked as a neurologist in France during the First World War, where he helped develop a treatment for shell shock, and later returned to his post at King's College London, where he earned a DM in 1918, became MRCP in 1921, and was elected FRCP in 1930.
In 1936 he became the director of the Institute of Experimental Psychology at Oxford University. Brown, along with May Smith, Cyril Burt, and John Flügel, was a student of William McDougall while McDougall was a professor at Oxford. Brown was a Christian and had a lifelong interest in parapsychology. He served on the board of the Society for Psychical Research from 1923 to 1940.
Brown was associated with Harry Price and his National Laboratory of Psychical Research. He attended séances with the medium Helen Duncan at the laboratory and concluded she was fraudulent.
Publications
Mind and Personality: An Essay in Psychology and Philosophy (1970)
Personality and Religion (1946)
War and the Psychological Conditions of Peace (1942)
Psychological Methods of Healing; An Introduction to Psychotherapy (1938)
Mind, Medicine and Metaphysics: The Philosophy of a Physician (1936)
Science and Personality (1929)
Suggestion and Mental Analysis: An Outline of the Theory and Practice of Mind Cure (1922)
William Brown, 'The psychologist in war-time', Lancet (1939), 1: 1288.
William Brown, 'The treatment of cases of shell shock in an advanced neurological centre', Lancet (1918), 2: 197.
William Brown, 'War neuroses', Lancet (1919), 2: 833.
References
Further reading
Burt, C. (1952). Dr. William Brown Obituary. British Journal of Psychology: Statistical Section, 5, 137–138.
Sutherland, J.D. (1953). William Brown, D.M., D.Sc., F.R.C.P. Obituary. British Journal of Medical Psychology, 26, 1.
1881 births
1952 deaths
Alumni of King's College London
Fellows of the Royal College of Physicians
British parapsychologists
People educated at The College of Richard Collyer
Presidents of the British Psychological Society
20th-century British psychologists
People from Slinfold |
https://en.wikipedia.org/wiki/Nikolai%20Smirnov%20%28mathematician%29 | Nikolai Vasilyevich Smirnov () (17 October 1900 – 2 June 1966) was a Soviet Russian mathematician noted for his work in various fields including probability theory and statistics.
Smirnov's principal works in mathematical statistics and probability theory were devoted to the investigation of limit distributions by means of the asymptotic behaviour of multiple integrals as the multiplicity increases without limit. He was one of the creators of nonparametric methods in mathematical statistics and of the theory of limit distributions of order statistics.
Biography
Smirnov was born October 17, 1900, in Moscow into the family of a church clerk who was also employed as a clerk in the office of the Bolshoi Theater. He completed his gymnasium education during the First World War, during which he served in various medical units of the military.
After the October Revolution Smirnov joined the ranks of the Red Army. During this time he took an interest in philosophy and philology, which shaped his later interests in mathematics. It is suggested that Smirnov was influenced by the works of Velimir Khlebnikov, who had emphasized that the most fruitful results in the arts and humanities could be achieved only after a thorough understanding of the natural sciences. According to the testimony of a friend, the artist S. P. Isakov, Smirnov, following this advice, entered Moscow State University after his discharge from the army in 1921 and focused his attention on the study of mathematics.
Smirnov graduated from the Faculty of Physics and Mathematics at Moscow State University, and, beginning in 1926, taught mathematics for many years at Timiryazev Agricultural Academy, Moscow City Pedagogical Institute, and Moscow State University. During this time, Smirnov narrowed his research foci to the fields of probability theory and mathematical statistics. Smirnov's initial period of research ended in 1938 with the defense of his doctoral dissertation "On approximation of the distribution laws of random variables" (Russian: Об аппроксимации законов распределения случайных величин), which served as the foundation for his work on nonparametric tests for which he was later renowned.
After his dissertation, Smirnov took up research with the Steklov Institute of Mathematics in 1938, where he worked for the remainder of his life. While at the institute he obtained new fundamental results in nonparametric statistics, and also studied the limit distributions of nonparametric criteria, the theory of large deviations, and the limit distributions for terms of variational series. For these series of works Smirnov was awarded the State Prize in 1951. In 1957 he was made the Head of Mathematical Statistics at the Steklov Institute.
In 1960 Smirnov was elected to the Academy of Sciences of the USSR as a corresponding member, in recognition for his contributions to the advancement of mathematical statistics.
Smirnov died on June 2, 1966. His sudden, unexpected death left him unable t |
https://en.wikipedia.org/wiki/Antieigenvalue%20theory | In applied mathematics, antieigenvalue theory was developed by Karl Gustafson from 1966 to 1968. The theory is applicable to numerical analysis, wavelets, statistics, quantum mechanics, finance and optimization.
The antieigenvectors are the vectors most turned by a matrix or operator $A$, that is to say those for which the angle between the original vector and its transformed image is greatest. The corresponding antieigenvalue is the cosine of the maximal turning angle. The maximal turning angle is $\phi(A)$ and is called the angle of the operator. Just like the eigenvalues, which may be ordered as a spectrum from smallest to largest, the theory of antieigenvalues orders the antieigenvalues of an operator $A$ from the smallest to the largest turning angles.
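For a symmetric positive definite matrix, Gustafson's theory gives the first antieigenvalue in closed form as $2\sqrt{\lambda_{\min}\lambda_{\max}}/(\lambda_{\min}+\lambda_{\max})$. The sketch below (an illustration with an assumed 2×2 example) compares a direct numerical minimization of the cosine of the turning angle against that formula:

```python
import numpy as np

# First antieigenvalue of a symmetric positive definite matrix A:
# mu_1 = min over x of <Ax, x> / (||Ax|| ||x||), the cosine of the largest
# turning angle.  Gustafson's formula for SPD matrices:
# mu_1 = 2*sqrt(l_min*l_max) / (l_min + l_max).
A = np.diag([1.0, 4.0])                         # eigenvalues 1 and 4

theta = np.linspace(0.0, np.pi / 2, 200_001)
X = np.vstack([np.cos(theta), np.sin(theta)])   # unit vectors in the plane
AX = A @ X
cosines = (AX * X).sum(axis=0) / np.linalg.norm(AX, axis=0)
mu1_numeric = cosines.min()

lmin, lmax = 1.0, 4.0
mu1_formula = 2 * np.sqrt(lmin * lmax) / (lmin + lmax)
print(mu1_numeric, mu1_formula)                 # both close to 0.8
```

The grid search over unit vectors recovers the closed-form value 0.8, corresponding to a maximal turning angle of about 36.9 degrees.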
References
.
Operator theory
Matrix theory |
https://en.wikipedia.org/wiki/Kim%20Oh-gyu | Kim Oh-gyu (; born 20 June 1989) is a South Korean footballer who plays as a defender for Gangwon FC.
Club career statistics
External links
1989 births
Living people
Men's association football defenders
South Korean men's footballers
Gangwon FC players
Gimcheon Sangmu FC players
K League 1 players
K League 2 players
People from Gangneung |
https://en.wikipedia.org/wiki/Locally%20finite%20variety | In universal algebra, a variety of algebras means the class of all algebraic structures of a given signature satisfying a given set of identities. One calls a variety locally finite if every finitely generated algebra has finite cardinality, or equivalently, if every finitely generated free algebra has finite cardinality.
The variety of Boolean algebras constitutes a famous example. The free Boolean algebra on n generators has cardinality $2^{2^n}$, consisting of the n-ary operations $2^n \to 2$.
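The count $2^{2^n}$ can be verified by brute force for small n: representing each element of the free algebra as a truth table, the projection functions close under complement, meet, and join to give every n-ary Boolean operation. A small Python sketch (the function name is ours, for illustration):

```python
from itertools import product

def free_boolean_algebra_size(n):
    """Close the n projection functions (as truth tables over {0,1}^n)
    under NOT, AND, OR; since {NOT, AND, OR} is functionally complete,
    the closure is the free Boolean algebra on n generators."""
    elems = {tuple(x[i] for x in product((0, 1), repeat=n)) for i in range(n)}
    changed = True
    while changed:
        changed = False
        new = set()
        for a in elems:
            new.add(tuple(1 - v for v in a))                 # NOT
            for b in elems:
                new.add(tuple(p & q for p, q in zip(a, b)))  # AND
                new.add(tuple(p | q for p, q in zip(a, b)))  # OR
        if not new <= elems:
            elems |= new
            changed = True
    return len(elems)

print(free_boolean_algebra_size(2))  # 16 = 2^(2^2)
```

Running this for n = 1, 2, 3 yields 4, 16, 256, matching $2^{2^n}$ in each case.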
The variety of sets constitutes a degenerate example: the free set on n generators has cardinality n, consisting of just the generators themselves.
The variety of pointed sets constitutes a trivial example: the free pointed set on n generators has cardinality n+1, consisting of the generators along with the basepoint.
The variety of graphs defined as follows constitutes a combinatorial example. Define a graph G = (E,s,t) to be a set E of edges and unary operations s, t of source and target satisfying s(s(e)) = t(s(e)) and s(t(e)) = t(t(e)). Vertices are those edges in the (common) image of s and t. The free graph on n generators has cardinality 3n and consists of n edges e each with two endpoints s(e) and t(e). Graphs with nontrivial incidence relations arise as quotients of free graphs, most usefully by identifying vertices.
The variety of sets and the variety of graphs so defined each forms a presheaf category and hence a topos. This is not the case for the variety of Boolean algebras or of pointed sets.
References
http://www.math.mcmaster.ca/~matt/publications/novo.pdf
Universal algebra |
https://en.wikipedia.org/wiki/Least-squares%20function%20approximation | In mathematics, least squares function approximation applies the principle of least squares to function approximation, by means of a weighted sum of other functions. The best approximation can be defined as that which minimizes the difference between the original function and the approximation; for a least-squares approach the quality of the approximation is measured in terms of the squared differences between the two.
Functional analysis
A generalization to the approximation of a data set is the approximation of a function by a sum of other functions, usually an orthogonal set:
$f(x) \approx f_n(x) = a_1 \phi_1(x) + a_2 \phi_2(x) + \cdots + a_n \phi_n(x),$
with the set of functions $\{\phi_j(x)\}$ an orthonormal set over the interval of interest, $[a, b]$: see also Fejér's theorem. The coefficients $\{a_j\}$ are selected to make the magnitude of the difference $\|f - f_n\|^2$ as small as possible. For example, the magnitude, or norm, of a function $g(x)$ over the interval $[a, b]$ can be defined by:
$\|g\| = \left( \int_a^b g^*(x)\, g(x)\, dx \right)^{1/2},$
where the '*' denotes complex conjugate in the case of complex functions. The extension of Pythagoras' theorem in this manner leads to function spaces and the notion of Lebesgue measure, an idea of "space" more general than the original basis of Euclidean geometry. The $\{\phi_j(x)\}$ satisfy orthonormality relations:
$\int_a^b \phi_i(x)^*\, \phi_j(x)\, dx = \delta_{ij},$
where $\delta_{ij}$ is the Kronecker delta. Substituting the function $f_n$ into these equations then leads to
the n-dimensional Pythagorean theorem:
$\|f_n\|^2 = |a_1|^2 + |a_2|^2 + \cdots + |a_n|^2.$
The coefficients $\{a_j\}$ making $\|f - f_n\|^2$ as small as possible are found to be:
$a_j = \int_a^b \phi_j^*(x)\, f(x)\, dx.$
The generalization of the n-dimensional Pythagorean theorem to infinite-dimensional real inner product spaces is known as Parseval's identity or Parseval's equation. Particular examples of such a representation of a function are the Fourier series and the generalized Fourier series.
Further discussion
Using linear algebra
It follows that one can find a "best" approximation of another function by minimizing the area between two functions, a continuous function $f$ on $[a, b]$ and a function $g \in W$, where $W$ is a subspace of $C[a, b]$:
$\text{Area} = \int_a^b |f(x) - g(x)|\, dx,$
all within the subspace $W$. Due to the frequent difficulty of evaluating integrands involving absolute value, one can instead define
$\int_a^b [f(x) - g(x)]^2\, dx$
as an adequate criterion for obtaining the least squares approximation, function $g$, of $f$ with respect to the inner product space $W$.
As such, $\|f - g\|^2$ or, equivalently, $\|f - g\|$, can thus be written in vector form:
$\int_a^b [f(x) - g(x)]^2\, dx = \langle f - g, f - g \rangle = \|f - g\|^2.$
In other words, the least squares approximation of $f$ is the function $g \in W$ closest to $f$ in terms of the inner product $\langle f, g \rangle$. Furthermore, this can be applied with a theorem:
Let $f$ be continuous on $[a, b]$, and let $W$ be a finite-dimensional subspace of $C[a, b]$. The least squares approximating function of $f$ with respect to $W$ is given by
$g = \langle f, \vec{w}_1 \rangle \vec{w}_1 + \langle f, \vec{w}_2 \rangle \vec{w}_2 + \cdots + \langle f, \vec{w}_n \rangle \vec{w}_n,$
where $B = \{\vec{w}_1, \vec{w}_2, \ldots, \vec{w}_n\}$ is an orthonormal basis for $W$.
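The theorem can be illustrated numerically. The sketch below (an illustration, using normalized Legendre polynomials as the orthonormal basis and $e^x$ as an assumed example function) computes the coefficients $\langle f, \phi_j \rangle$ by Gauss–Legendre quadrature and measures the uniform error of the resulting least squares approximation:

```python
import numpy as np
from numpy.polynomial import legendre

# Least squares approximation of f(x) = e^x on [-1, 1] in the orthonormal
# Legendre basis phi_j(x) = sqrt((2j+1)/2) * P_j(x); the coefficients are
# the inner products a_j = <f, phi_j>, evaluated by quadrature.
f = np.exp
nodes, weights = legendre.leggauss(50)   # Gauss-Legendre quadrature rule

def phi(j, x):
    c = np.zeros(j + 1)
    c[j] = 1.0
    return np.sqrt((2 * j + 1) / 2) * legendre.legval(x, c)

n = 5
a = [np.sum(weights * f(nodes) * phi(j, nodes)) for j in range(n + 1)]

x = np.linspace(-1.0, 1.0, 201)
fn = sum(a[j] * phi(j, x) for j in range(n + 1))
print(np.max(np.abs(f(x) - fn)))   # uniform error of the degree-5 approximation
```

Six orthonormal basis functions already approximate $e^x$ to roughly four decimal places, and $a_0 = \sqrt{2}\sinh 1$ can be checked against the quadrature value.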
References
Least squares
Approximation theory |
https://en.wikipedia.org/wiki/Michel%20Goemans | Michel Xavier Goemans (born December 1964) is a Belgian-American professor of applied mathematics and the RSA Professor of Mathematics at MIT, working in discrete mathematics and combinatorial optimization at CSAIL and the MIT Operations Research Center.
Career
Goemans earned his doctorate in 1990 from MIT. Goemans is the "Leighton Family Professor" of Applied Mathematics at MIT and an adjunct professor at the University of Waterloo. He was also a professor at the University of Louvain and a visiting professor at the RIMS of the University of Kyoto.
Recognition
In 1991 he received the A.W. Tucker Prize. From 1995 to 1997 he was a Sloan Research Fellow. In 1998 he was an Invited Speaker of the International Congress of Mathematicians in Berlin. For the academic year 2007–2008 he was a Guggenheim Fellow.
Goemans is a Fellow of the Association for Computing Machinery (2008), a fellow of the American Mathematical Society (2012), and a fellow of the Society for Industrial and Applied Mathematics (2013). In 2000 he was awarded the MOS-AMS Fulkerson Prize for joint work with David P. Williamson on the semidefinite programming approximation algorithm for the maximum cut problem. In 2012 Goemans was awarded the Farkas Prize. In 2022 he received the AMS Steele Prize for Seminal Contribution to Research.
Personal life
His hobby is sailing. Goemans has Belgian and US citizenship.
References
1964 births
Living people
Belgian mathematicians
Fellows of the American Mathematical Society
Massachusetts Institute of Technology School of Science faculty
Fellows of the Society for Industrial and Applied Mathematics
Theoretical computer scientists
Combinatorialists |
https://en.wikipedia.org/wiki/Henk%20J.%20M.%20Bos | Hendrik Jan Maarten "Henk" Bos (born 17 July 1940, Enschede) is a Dutch historian of mathematics.
Career
Hendrik was a student of Hans Freudenthal and Jerome Ravetz at Utrecht University and in 1973 wrote a thesis "Differentials, higher order differentials, and the derivative in the Leibnizian calculus" for his doctorate.
Bos worked at Utrecht University for most of his career. In 1985 he became professor of history of mathematics.
He took an interest in the tractrix as a mathematical stimulus.
Bos retired in 2005. Since his retirement he has been honorary professor of the history of mathematics at the Faculty of Science at the University of Aarhus. He is married to Kirsti Andersen.
At his Valedictory Symposium when he retired, Henk spoke on fluid concepts in mathematics in a talk titled "Loose Ends". He was awarded the Kenneth O. May Medal for 2005.
Selected publications
Bos has contributed to the study of the mathematical works of the seventeenth-century philosopher René Descartes, including Descartes’ contribution to the development of algebra and geometry.
1974: "Differentials, higher-order differentials and the derivative in the Leibnizian calculus", Archive for History of Exact Sciences 14: 1–90,
1980: "Newton, Leibnitz and the Leibnizian tradition", chapter 2, pages 49–93, in From the Calculus to Set Theory, 1630 – 1910: An Introductory History, edited by Ivor Grattan-Guinness, Duckworth
1981: (with Herbert Mehrtens & Ivo Schneider) "Mathematics and Revolution from Lacroix to Cauchy", pages 50–71 in Social History of Nineteenth Century Mathematics, Birkhäuser
1984: "The closure theorem of Poncelet", Rend. Sem. Mat. Fis. Milano 54, 145–158 (1987).
1987: (with Kers, C.; Oort, F.; Raven, D. W.) "Poncelet's closure theorem", Exposition. Math. 5 no. 4, 289–364.
Joseph Harris wrote for Mathematical Reviews, "The authors trace very carefully the history of the problem, describing various approaches culminating in a modern proof. The paper is fascinating from both a historical and a mathematical point of view, and should serve as the definitive source of information about Poncelet's problem in the future" here
1993: Lectures in the History of Mathematics, History of Mathematics #7, American Mathematical Society & London Mathematical Society
2001: Redefining Geometrical Exactness. Descartes' transformation of the early modern concept of construction, Sources and Studies in the History of Mathematics and Physical Sciences. Springer-Verlag, New York.
2012: Huygens, Christiaan (Also Huyghens, Christian), Complete Dictionary of Scientific Biography, Encyclopedia.com
References
External links
University of Aarhus Staff
155 Notes on Contributors
1940 births
Living people
20th-century Dutch historians
Historians of mathematics
Academic staff of Utrecht University
Utrecht University alumni
People from Enschede
21st-century Dutch historians |
https://en.wikipedia.org/wiki/Akaflieg%20Braunschweig%20SB-11 | The Akaflieg Braunschweig SB-11 is an experimental, single seat, variable geometry sailplane designed by aeronautical students in Germany. It won the 15 m span class at the World Gliding Championships of 1978 but its advances over the best, more conventional, opposition were not sufficient to lead to widespread imitation.
Design and development
The Akaflieg Braunschweig or Akademische Fliegergruppe Braunschweig () is one of some fourteen German student flying groups attached to and supported by their home Technical University. Several have designed and built aircraft, often technically advanced and leading the development of gliders in particular. The announcement, in 1975, of a new, unrestricted 15 m glider class led the Brunswick group to the design of the SB-11, a variable geometry aircraft.
A long-standing challenge for the designers of competition sailplanes was posed by the conflicting requirements of gaining height in sometimes weak and narrow thermals, calling for low stalling speeds for small radius turns, and of rapid penetration of the cool, sinking air between thermals. In thermals, wings should ideally be of high camber and be lightly loaded; between thermals, low camber wings with high wing loading would fly faster. Large area, camber-changing flaps were one solution, but vortexes generated at their extremities added significantly to the drag, decreasing climb rates. Akaflieg Brunswick decided to follow the example of the disappointing British Sigma by providing the SB-11 with Wortmann flaps along the whole of the trailing edge of the wing, including the ailerons. This avoided the flap-associated vortexes, though any increase in wing area, however implemented, will lower the aspect ratio and raise the induced drag caused by wingtip vortexes.
The SB-11 is almost entirely built of weight-saving CRP. In order to concentrate design effort on the wing, the Brunswick students blended together the front fuselage of the Schleicher ASW 20 with the rear fuselage and empennage of the Schempp-Hirth Janus. This gave the SB-11 a conventional fuselage, deeper forward of the shoulder wing, with a long, single piece, forward hinged canopy and a monowheel undercarriage under the wing. There is provision for water ballast. The rear fuselage is slender, with a T-tail with straight edged fin, rudder and swept, all moving tailplane.
The wings of the SB-11 have 2.3° of dihedral and are unswept. The inner 60% of span has constant chord. The outer panels, carrying the ailerons, are straight tapered with a taper ratio of 0.4. Manually driven full-span Fowler type Wortmann flaps emerge from the trailing edge between rollers, increasing the chord by 200 mm (7.9 in) over the untapered inner wing and by 80 mm (3.2 in) at the tip. Overall the flaps produce a 25% increase in wing area. Their trailing edges carry ailerons outboard and plain camber-changing flaps inboard. Mid-chord Schempp-Hirth airbrakes are placed at about mid span on the upp |
https://en.wikipedia.org/wiki/2011%E2%80%9312%20FK%20Vojvodina%20season | The 2011–12 season was FK Vojvodina's 6th season in Serbian SuperLiga. This article shows player statistics and all matches (official and friendly) that the club played during the 2011–12 season.
Players
Squad information
Squad statistics
Matches
Serbian SuperLiga
Serbian Cup
UEFA Europa League
External links
Official website
FK Vojvodina seasons
Vojvodina |
https://en.wikipedia.org/wiki/Higman%20group | In mathematics, the Higman group, introduced by , was the first example of an infinite finitely presented group with no non-trivial finite quotients.
The quotient by the maximal proper normal subgroup is a finitely generated infinite simple group. later found some finitely presented infinite groups that are simple if is even and have a simple subgroup of index 2 if is odd, one of which is one of the Thompson groups.
Higman's group is generated by 4 elements $a, b, c, d$ with the relations
$a^{-1}ba = b^2, \quad b^{-1}cb = c^2, \quad c^{-1}dc = d^2, \quad d^{-1}ad = a^2.$
References
Group theory |
https://en.wikipedia.org/wiki/IT%2B%2B | IT++ is a C++ library of classes and functions for linear algebra, numerical optimization, signal processing, communications, and statistics. It is being developed by researchers in these areas and is widely used by researchers, both in the communications industry and universities. The IT++ library originates from the former Department of Information Theory at the Chalmers University of Technology, Gothenburg, Sweden.
The kernel of the IT++ library consists of templated vector and matrix classes and a set of accompanying functions. Such a kernel makes the IT++ library similar to Matlab/Octave. For increased functionality, speed and accuracy, IT++ can make extensive use of existing free and open-source libraries, especially the BLAS, LAPACK and FFTW libraries. Instead of BLAS and LAPACK, optimized platform-specific libraries can be used as well, e.g.:
ATLAS (Automatically Tuned Linear Algebra Software) - includes optimised BLAS, CBLAS and a limited set of LAPACK routines;
MKL (Intel Math Kernel Library) - includes all required BLAS, CBLAS, LAPACK and FFT routines (FFTW not required);
ACML (AMD Core Math Library) - includes BLAS, LAPACK and FFT routines (FFTW not required).
It is possible to compile and use IT++ without any of the above-listed libraries, but the functionality will be reduced. IT++ works on Linux, Solaris, Windows (with Cygwin, MinGW/MSYS, or Microsoft Visual C++) and OS X operating systems.
Example
Here is a trivial example demonstrating the IT++ functionality similar to Matlab/Octave,
#include <iostream>
#include <itpp/itbase.h>
using namespace std;
using namespace itpp;
int main()
{
vec a = linspace(0.0, 2.0, 2);
vec b = "1.0 2.0";
vec c = 2*a + 3*b;
cout << "c =\n" << c << endl;
mat A = "1.0 2.0; 3.0 4.0";
mat B = "0.0 1.0; 1.0 0.0";
mat C = A*B + 2*A;
cout << "C =\n" << C << endl;
cout << "inverse of B =\n" << inv(B) << endl;
return 0;
}
See also
List of numerical analysis software
List of numerical libraries
Numerical linear algebra
Scientific computing
References
External links
C++ numerical libraries
Chalmers University of Technology
Free science software
Software using the GPL license |
https://en.wikipedia.org/wiki/Lill%27s%20method | In mathematics, Lill's method is a visual method of finding the real roots of a univariate polynomial of any degree. It was developed by Austrian engineer Eduard Lill in 1867. A later paper by Lill dealt with the problem of complex roots.
Lill's method involves drawing a path of straight line segments making right angles, with lengths equal to the coefficients of the polynomial. The roots of the polynomial can then be found as the slopes of other right-angle paths, also connecting the start to the terminus, but with vertices on the lines of the first path.
Description of the method
To employ the method a diagram is drawn starting at the origin. A line segment is drawn rightwards by the magnitude of the first coefficient (the coefficient of the highest-power term) (so that with a negative coefficient the segment will end left of the origin). From the end of the first segment another segment is drawn upwards by the magnitude of the second coefficient, then left by the magnitude of the third, and down by the magnitude of the fourth, and so on. The sequence of directions (not turns) is always rightward, upward, leftward, downward, then repeating itself. Thus each turn is counterclockwise. The process continues for every coefficient of the polynomial including zeroes, with negative coefficients "walking backwards". The final point reached, at the end of the segment corresponding to the equation's constant term, is the terminus.
A line is then launched from the origin at some angle $\theta$, reflected off each line segment at a right angle (not necessarily the "natural" angle of reflection), and refracted at a right angle through the line through each segment (including a line for the zero coefficients) when the angled path does not hit the line segment on that line. The vertical and horizontal lines are reflected off or refracted through in the following sequence: the line containing the segment corresponding to the coefficient of $x^{n-1}$, then of $x^{n-2}$, etc. Choosing $\theta$ so that the path lands on the terminus, the negative of the tangent of $\theta$ is a root of this polynomial. For every real zero of the polynomial there will be one unique initial angle and path that will land on the terminus. A quadratic with two real roots, for example, will have exactly two angles that satisfy the above conditions.
For complex roots, one also needs to find a series of similar triangles, but with the vertices of the root path displaced from the polynomial path by a distance equal to the imaginary part of the root. In this case the root path will not be rectangular.
Explanation
The construction in effect evaluates the polynomial according to Horner's method. For the polynomial a_n x^n + a_(n−1) x^(n−1) + … + a_0 evaluated at x = −tan θ, the partial values a_n, a_n x + a_(n−1), (a_n x + a_(n−1)) x + a_(n−2), … are successively generated as distances between the vertices of the polynomial and root paths. For a root of the polynomial the final value is zero, so the last vertex coincides with the polynomial path terminus.
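The Horner connection can be checked numerically. The sketch below (function name and example polynomial are illustrative, not from the source) evaluates the successive partial values at x = −tan θ and confirms that for a launch angle θ = arctan(−r), with r a root, the final value is (numerically) zero, i.e. the angled path lands on the terminus:

```python
import math

def lill_partials(coeffs, theta):
    """Horner evaluation of the polynomial at x = -tan(theta).
    The successive partial values are the distances between corresponding
    vertices of Lill's polynomial path and the angled root path."""
    x = -math.tan(theta)
    value = coeffs[0]
    partials = [value]
    for c in coeffs[1:]:
        value = value * x + c
        partials.append(value)
    return partials

# x^2 - 3x + 2 = (x - 1)(x - 2): launch angles arctan(-1) and arctan(-2)
# make the final Horner value vanish, so the path lands on the terminus.
for root in (1.0, 2.0):
    final = lill_partials([1.0, -3.0, 2.0], math.atan(-root))[-1]
    print(abs(final) < 1e-9)   # → True
```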
Additional properties
A solution line giving a root is similar to the Lill's con |
https://en.wikipedia.org/wiki/Albert%20Crumeyrolle | Albert J. Crumeyrolle (1919–1992) was a French mathematician and professor of mathematics at the Paul Sabatier University, known for his contributions to spinor structures and Clifford algebra.
Work
Crumeyrolle was a student of André Lichnerowicz under whose supervision he completed a thesis in 1961.
His first important paper after completing his doctorate addressed spinor structures using methods of Clifford algebras developed by Claude Chevalley.
Crumeyrolle is known for his major contributions to theories of Clifford algebras and spinor structures. In 1975 he laid the foundations for symplectic Clifford algebra and the symplectic spinor. An earlier publication by two other authors, Nouazé and Revoy, had appeared three years before in which Weyl algebras were treated from a Cliffordian point of view. Crumeyrolle however drew more attention to the topic, and, as emphasized by Jacques Helmstetter, he contributed original ideas of his own. His work on symplectic Clifford algebras however came under serious critique on mathematical grounds.
The mathematician Artibano Micali recalled Crumeyrolle stating that periodicity of Clifford algebras should play a similar role for elementary particle physics as the periodic classification of elements by Dmitri Mendeleev has played for the periodic table of elements.
Crumeyrolle taught in Iran in 1966, in several European countries and, in 1973, at the Stanford University summer school.
Publications
Books
Orthogonal and symplectic Clifford algebras: Spinor Structures, 1990
Albert Crumeyrolle & J. Grifone: Symplectic geometry, Pitman Advanced Publishing Program, 1983
Algèbres de Clifford et spineurs, 1974
Bases géométriques de la topologie algébrique, 1970
Compléments d'algèbre moderne, 1969
Notions fondamentales d'algèbre moderne, 1967
Further reading
Rafał Abłamowicz, Pertti Lounesto (eds.): Clifford algebras and spinor structures: a special volume dedicated to the memory of Albert Crumeyrolle (1919–1992), Kluwer Academic Publishers, 1995
Z. Ozievicz, Cz. Sitarczyk: Parallel treatment of Riemannian and symplectic Clifford algebras. In: A. Micali, R. Boudet, J. Helmstetter (eds.): Clifford Algebras and their Applications in Mathematical Physics: Workshop Proceedings: 2nd (Fundamental Theories of Physics), Kluwer Academic Publishers, 1992, pp. 83–96
References
1919 births
1992 deaths
20th-century French mathematicians |
https://en.wikipedia.org/wiki/List%20of%20Lafayette%20Leopards%20head%20football%20coaches |
Key
Head coaches
Statistics correct as of the end of the 2022 college football season.
Notes
References
Lists of college football head coaches
Pennsylvania sports-related lists |
https://en.wikipedia.org/wiki/Color%20moments | Color moments are measures that characterise color distribution in an image in the same way that central moments uniquely describe a probability distribution. Color moments are mainly used for color indexing purposes as features in image retrieval applications in order to compare how similar two images are based on color. Usually one image is compared to a database of digital images with pre-computed features in order to find and retrieve a similar image. Each comparison between images results in a similarity score, and the lower this score is, the more similar the two images are supposed to be.
Overview
Color moments are scaling and rotation invariant. It is usually the case that only the first three color moments are used as features in image retrieval applications as most of the color distribution information is contained in the low-order moments. Since color moments encode both shape and color information they are a good feature to use under changing lighting conditions, but they cannot handle occlusion very successfully. Color moments can be computed for any color model. Three color moments are computed per channel (e.g. 9 moments if the color model is RGB and 12 moments if the color model is CMYK). Computing color moments is done in the same way as computing moments of a probability distribution.
Mean
The first color moment can be interpreted as the average color in the image, and it can be calculated by using the following formula:

E_i = (1/N) Σ_(j=1..N) p_ij

where N is the number of pixels in the image and p_ij is the value of the j-th pixel of the image at the i-th color channel.
Standard Deviation
The second color moment is the standard deviation, which is obtained by taking the square root of the variance of the color distribution:

σ_i = sqrt( (1/N) Σ_(j=1..N) (p_ij − E_i)² )

where E_i is the mean value, or first color moment, for the i-th color channel of the image.
Skewness
The third color moment is the skewness. It measures how asymmetric the color distribution is, and thus it gives information about the shape of the color distribution. Skewness can be computed with the following formula:

s_i = ( (1/N) Σ_(j=1..N) (p_ij − E_i)³ )^(1/3)
Kurtosis
Kurtosis is the fourth color moment, and, similarly to skewness, it provides information about the shape of the color distribution. More specifically, kurtosis is a measure of how extreme the tails are in comparison to the normal distribution.
Higher-order color moments
Higher-order color moments are usually not part of the color moments feature set in image retrieval tasks as they require more data in order to obtain a good estimate of their value, and also the lower-order moments generally provide enough information.
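The per-channel computation described above can be sketched as follows; the function name and the random test image are illustrative, and the signed cube root for skewness is one common convention:

```python
import numpy as np

def color_moments(image):
    """First three color moments (mean, std dev, skewness) per channel.
    `image` is an (H, W, C) array; returns a feature vector of length 3*C."""
    pixels = image.reshape(-1, image.shape[-1]).astype(float)  # (N, C)
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    std = np.sqrt((centered ** 2).mean(axis=0))
    # Signed cube root preserves the sign of the third central moment.
    third = (centered ** 3).mean(axis=0)
    skew = np.sign(third) * np.abs(third) ** (1 / 3)
    return np.concatenate([mean, std, skew])

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32, 3))  # toy RGB image
features = color_moments(img)
print(features.shape)   # → (9,): three moments per RGB channel
```

Comparing two images then reduces to a distance between their two feature vectors, which is far cheaper than comparing full color histograms.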
Applications
Color moments have significant applications in image retrieval. They can be used in order to compare how similar two images are. This is a relatively new approach to color indexing. The greatest advantage of using color moments comes from the fact that there is no need to store the complete color distribution. This greatly speeds up image retrieval since there are les |
https://en.wikipedia.org/wiki/Leigh%20Mercer | Leigh Mercer (1893–1977) was a British wordplay and recreational mathematics expert.
Career
Palindrome
Mercer is best known for devising the palindrome "A man, a plan, a canal: Panama!".
Mathematical limerick
The following mathematical limerick is attributed to him:
(12 + 144 + 20 + 3 × √4) / 7 + (5 × 11) = 9² + 0

This is read as follows:
A dozen, a gross, and a score
Plus three times the square root of four
Divided by seven
Plus five times eleven
Is nine squared and not a bit more.
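The arithmetic of the verse checks out exactly:

```python
from math import sqrt

# A dozen (12), a gross (144), and a score (20), plus three times √4,
# divided by seven, plus five times eleven, equals nine squared (and no more).
total = (12 + 144 + 20 + 3 * sqrt(4)) / 7 + 5 * 11
print(total)   # → 81.0
```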
References
1893 births
1977 deaths
Recreational cryptographers
Word games
Word play
Mathematical humor
Palindromists |
https://en.wikipedia.org/wiki/Projective | Projective may refer to
Mathematics
Projective geometry
Projective space
Projective plane
Projective variety
Projective linear group
Projective module
Projective line
Projective object
Projective transformation
Projective hierarchy
Projective connection
Projective Hilbert space
Projective morphism
Projective polyhedron
Projective resolution
Psychology
Projective test
Projective techniques
See also
Projection (disambiguation)
Projector (disambiguation)
Project (disambiguation)
Proform, which covers proadjective
Adjective
Injective
Surjective |
https://en.wikipedia.org/wiki/Modern%20elementary%20mathematics | Modern elementary mathematics is the theory and practice of teaching elementary mathematics according to contemporary research and thinking about learning. This can include pedagogical ideas, mathematics education research frameworks, and curricular material.
In practicing modern elementary mathematics, teachers may use new and emerging media and technologies like social media and video games, as well as applying new teaching techniques based on the individualization of learning, in-depth study of the psychology of mathematics education, and integrating mathematics with science, technology, engineering and the arts.
General practice
Areas of mathematics
Making all areas of mathematics accessible to young children is a key goal of modern elementary mathematics. Author and academic Liping Ma calls for "profound understanding of fundamental mathematics" by elementary teachers and parents of learners, as well as learners themselves.
Algebra: Early algebra covers the approach to elementary mathematics which helps children generalize number and set ideas.
Probability and statistics: Modern technologies make probability and statistics accessible to elementary learners with tools such as computer-assisted data visualization.
Geometry: Specially developed physical and virtual manipulatives, as well as interactive geometry software, can make geometry (beyond basic sorting and measuring) available to elementary learners.
Calculus: New innovations, such as Don Cohen's map to calculus, which was developed using children's work and levels of understanding, are making calculus accessible to elementary learners.
Problem solving: Creative problem solving, which contrasts with exercises in arithmetic, such as adding or multiplying numbers, is now a major part of elementary mathematics.
Other areas of mathematics such as logical reasoning and paradoxes, which used to be reserved for advanced groups of learners, are now being integrated into more mainstream curricula.
Use of psychology
Psychology in mathematics education is an applied research domain, with many recent developments relevant to elementary mathematics. A major aspect is the study of motivation; while most young children enjoy some mathematical practices, by the age of seven to ten many lose interest and begin to experience mathematical anxiety. Constructivism and other learning theories consider the ways young children learn mathematics, taking child developmental psychology into account.
Both practitioners and researchers focus on children's memory, mnemonic devices, and computer-assisted techniques such as spaced repetition. There is an ongoing discussion of relationships between memory, procedural fluency with algorithms, and conceptual understanding of elementary mathematics. Sharing songs, rhymes, visuals and other mnemonics is popular in teacher social networks.
The understanding that young children benefit from hands-on learning is more than a century old, going back to the wo |
https://en.wikipedia.org/wiki/Primitive%20polynomial | In different branches of mathematics, primitive polynomial may refer to:
Primitive polynomial (field theory), a minimal polynomial of an extension of finite fields
Primitive polynomial (ring theory), a polynomial with coprime coefficients |
https://en.wikipedia.org/wiki/Weak%20equivalence%20%28homotopy%20theory%29 | In mathematics, a weak equivalence is a notion from homotopy theory that in some sense identifies objects that have the same "shape". This notion is formalized in the axiomatic definition of a model category.
A model category is a category with classes of morphisms called weak equivalences, fibrations, and cofibrations, satisfying several axioms. The associated homotopy category of a model category has the same objects, but the morphisms are changed in order to make the weak equivalences into isomorphisms. It is a useful observation that the associated homotopy category depends only on the weak equivalences, not on the fibrations and cofibrations.
Topological spaces
Model categories were defined by Quillen as an axiomatization of homotopy theory that applies to topological spaces, but also to many other categories in algebra and geometry. The example that started the subject is the category of topological spaces with Serre fibrations as fibrations and weak homotopy equivalences as weak equivalences (the cofibrations for this model structure can be described as the retracts of relative cell complexes X ⊆ Y). By definition, a continuous mapping f: X → Y of spaces is called a weak homotopy equivalence if the induced function on sets of path components
π0(f): π0(X) → π0(Y)

is bijective, and for every point x in X and every n ≥ 1, the induced homomorphism

f*: πn(X, x) → πn(Y, f(x))
on homotopy groups is bijective. (For X and Y path-connected, the first condition is automatic, and it suffices to state the second condition for a single point x in X.)
For simply connected topological spaces X and Y, a map f: X → Y is a weak homotopy equivalence if and only if the induced homomorphism f*: Hn(X,Z) → Hn(Y,Z) on singular homology groups is bijective for all n. Likewise, for simply connected spaces X and Y, a map f: X → Y is a weak homotopy equivalence if and only if the pullback homomorphism f*: Hn(Y,Z) → Hn(X,Z) on singular cohomology is bijective for all n.
Example: Let X be the set of natural numbers {0, 1, 2, ...} and let Y be the set {0} ∪ {1, 1/2, 1/3, ...}, both with the subspace topology from the real line. Define f: X → Y by mapping 0 to 0 and n to 1/n for positive integers n. Then f is continuous, and in fact a weak homotopy equivalence, but it is not a homotopy equivalence.
The homotopy category of topological spaces (obtained by inverting the weak homotopy equivalences) greatly simplifies the category of topological spaces. Indeed, this homotopy category is equivalent to the category of CW complexes with morphisms being homotopy classes of continuous maps.
Many other model structures on the category of topological spaces have also been considered. For example, in the Strøm model structure on topological spaces, the fibrations are the Hurewicz fibrations and the weak equivalences are the homotopy equivalences.
Chain complexes
Some other important model categories involve chain complexes. Let A be a Grothendieck abelian category, for example the category of modules over a ring or the ca |
https://en.wikipedia.org/wiki/Antonio%20Carannante | Antonio Carannante is an Italian former football defender who has played for Napoli, Ascoli, Lecce, Piacenza, Avellino and Nola.
Career statistics
External links
1965 births
Living people
Italian men's footballers
Italy men's under-21 international footballers
Serie A players
Serie B players
Serie C players
SSC Napoli players
Ascoli Calcio 1898 FC players
US Lecce players
Piacenza Calcio 1919 players
US Avellino 1912 players
UEFA Cup winning players
Men's association football defenders
People from Nola
Footballers from Campania |
https://en.wikipedia.org/wiki/Mukhopadhyaya%20theorem | In geometry Mukhopadhyaya's theorem may refer to one of several closely related theorems about the number of vertices of a curve, due to Syamadas Mukhopadhyaya. One version, called the four-vertex theorem, states that a simple convex curve in the plane has at least four vertices, and another version states that a simple convex curve in the affine plane has at least six affine vertices.
References
Theorems in plane geometry |
https://en.wikipedia.org/wiki/Syamadas%20Mukhopadhyaya | Syamadas Mukhopadhyaya (22 June 1866 – 8 May 1937) was an Indian mathematician who introduced the four-vertex theorem and Mukhopadhyaya's theorem in plane geometry.
Biography
Syamadas Mukhopadhyaya was born at Haripal, Hooghly district, in Bengal Presidency, British India. He graduated from Hooghly College, received his M.A. degree from Presidency College in Calcutta, and his Ph.D. degree from Calcutta University in 1910. He also took classes from the Indian Association for the Cultivation of Science.
Mukhopadhyaya was appointed by Asutosh Mookerjee as professor of mathematics in the Rajabazar Science College, University of Calcutta. Jacques Hadamard corresponded with Mukhopadhyaya about the latter's work on the geometry of a plane arc, and Wilhelm Blaschke's book on geometry included a reference to Mukhopadhyaya.
He worked at Bangabasi College and then at Bethune College in Calcutta, where he lectured in Mathematics, English Literature, and Philosophy. In 1932, he was elected president of the Calcutta Mathematical Society, and he served in this capacity until his death from heart failure in 1937. Mukhopadhyaya went to Europe on a Ghose Travelling Fellowship in 1933 to study methods of education, and he gave lectures at the University of Paris.
References
"Syamadas Mukhopadhyaya", Bulletin of the Calcutta Mathematical Society, Vol. 29 (1937), pages 115-120.
D. DeTurck, H. Gluck, D. Pomerleano, D.S. Vick, The four vertex theorem and its converse, Notices of the AMS, 54 (2007), no. 2, 192–207.
Differential geometers
19th-century Indian mathematicians
20th-century Indian mathematicians
University of Calcutta alumni
Academic staff of the University of Calcutta
Scientists from Kolkata
1937 deaths
1866 births
People from the Bengal Presidency
Mathematicians from British India |
https://en.wikipedia.org/wiki/Monothetic%20group | In mathematics, a monothetic group is a topological group with a dense cyclic subgroup. They were introduced by van Dantzig. An example is the additive group of p-adic integers, in which the integers are dense.
A monothetic group is necessarily abelian.
References
Topological groups
Properties of groups |
https://en.wikipedia.org/wiki/Tukey%20depth | In statistics and computational geometry, the Tukey depth is a measure of the depth of a point in a fixed set of points. The concept is named after its inventor, John Tukey. Given a set of n points in d-dimensional space, Tukey's depth of a point x is the smallest fraction (or number) of points in any closed halfspace that contains x.
Tukey's depth measures how extreme a point is with respect to a point cloud. It is used to define the bagplot, a bivariate generalization of the boxplot.
For example, for any extreme point of the convex hull there is always a (closed) halfspace that contains only that point, and hence its Tukey depth as a fraction is 1/n.
Definitions
Sample Tukey's depth of a point x, or Tukey's depth of x with respect to the point cloud X_1, …, X_n, is defined as

D(x; X_1, …, X_n) = inf_{‖v‖ = 1} (1/n) Σ_(i=1..n) 1{⟨v, X_i − x⟩ ≥ 0}

where 1{·} is the indicator function that equals 1 if its argument holds true or 0 otherwise.
Population Tukey's depth of x with respect to a distribution P is

D(x; P) = inf_{‖v‖ = 1} P(⟨v, X − x⟩ ≥ 0)

where X is a random variable following distribution P.
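A brute-force sketch for the planar case follows; the function name and the direction count are illustrative. Since the minimum over all unit vectors is discretized to a finite scan over directions, the result is an upper bound on the true depth (exact here for generic small examples):

```python
import numpy as np

def tukey_depth_2d(x, points, n_dirs=360):
    """Approximate sample Tukey depth of x for a 2D point cloud by scanning
    closed halfspaces whose boundary passes through x, one per direction."""
    angles = np.linspace(0, 2 * np.pi, n_dirs, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (n_dirs, 2)
    proj = (points - x) @ dirs.T            # projections, shape (n, n_dirs)
    counts = (proj >= 0).sum(axis=0)        # points in each closed halfspace
    return counts.min() / len(points)

pts = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
print(tukey_depth_2d(np.array([0.5, 0.5]), pts))  # → 0.6 (deep central point)
print(tukey_depth_2d(np.array([0.0, 0.0]), pts))  # → 0.2 (hull vertex: 1/n)
```

The hull vertex attaining depth 1/n matches the example above: some closed halfspace contains only that point itself.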
Tukey mean and relation to centerpoint
A centerpoint c of a point set of size n is precisely a point with Tukey depth at least n/(d + 1).
See also
Centerpoint (geometry)
References
Computational geometry |
https://en.wikipedia.org/wiki/Conformal%20geometric%20algebra | Conformal geometric algebra (CGA) is the geometric algebra constructed over the resultant space of a map from points in an n-dimensional base space ℝ^(p,q) to null vectors in ℝ^(p+1,q+1). This allows operations on the base space, including reflections, rotations and translations, to be represented using versors of the geometric algebra; and it is found that points, lines, planes, circles and spheres gain particularly natural and computationally amenable representations.
The effect of the mapping is that generalized (i.e. including zero curvature) k-spheres in the base space map onto (k + 2)-blades, and so that the effect of a translation (or any conformal mapping) of the base space corresponds to a rotation in the higher-dimensional space. In the algebra of this space, based on the geometric product of vectors, such transformations correspond to the algebra's characteristic sandwich operations, similar to the use of quaternions for spatial rotation in 3D, which combine very efficiently. A consequence of rotors representing transformations is that the representations of spheres, planes, circles and other geometrical objects, and equations connecting them, all transform covariantly. A geometric object (a k-sphere) can be synthesized as the wedge product of linearly independent vectors representing points on the object; conversely, the object can be decomposed as the repeated wedge product of vectors representing distinct points on its surface. Some intersection operations also acquire a tidy algebraic form: for example, for the Euclidean base space ℝ³, applying the wedge product to the dual of the tetravectors representing two spheres produces the dual of the trivector representation of their circle of intersection.
As this algebraic structure lends itself directly to effective computation, it facilitates exploration of the classical methods of projective geometry and inversive geometry in a concrete, easy-to-manipulate setting. It has also been used as an efficient structure to represent and facilitate calculations in screw theory. CGA has particularly been applied in connection with the projective mapping of everyday Euclidean space ℝ³ into a five-dimensional vector space ℝ^(4,1), which has been investigated for applications in robotics and computer vision. It can be applied generally to any pseudo-Euclidean space – for example, Minkowski space ℝ^(3,1) maps to the space ℝ^(4,2).
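The null-vector embedding at the heart of the construction can be illustrated numerically. The sketch below uses one common coordinate convention for ℝ^(4,1) (the basis split into e+ and e−, and the names embed/inner, are choices made here, not fixed by the source) and checks two characteristic identities: embedded points are null vectors, and the inner product of two embedded points encodes their squared Euclidean distance.

```python
import numpy as np

# Basis of R^{4,1}: e1, e2, e3, e+ square to +1; e- squares to -1.
# Null basis: e_o = (e- - e+)/2 and e_inf = e- + e+ (both square to 0).
METRIC = np.diag([1.0, 1.0, 1.0, 1.0, -1.0])

def embed(x):
    """Map a Euclidean point x to F(x) = x + (1/2)|x|^2 e_inf + e_o."""
    x = np.asarray(x, dtype=float)
    e_o = np.array([0, 0, 0, -0.5, 0.5])
    e_inf = np.array([0, 0, 0, 1.0, 1.0])
    return np.concatenate([x, [0.0, 0.0]]) + 0.5 * x.dot(x) * e_inf + e_o

def inner(a, b):
    """Pseudo-Euclidean inner product of signature (4,1)."""
    return a @ METRIC @ b

p = embed([1.0, 2.0, 3.0])
q = embed([4.0, 6.0, 3.0])
print(inner(p, p))       # → 0.0 : embedded points are null vectors
print(-2 * inner(p, q))  # → 25.0 : squared Euclidean distance |p - q|^2
```

The second identity, F(a) · F(b) = −½|a − b|², is what makes distances (and hence spheres) linear objects in the representation space.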
Construction of CGA
Notation and terminology
In this article, the focus is on the algebra of the five-dimensional space ℝ^(4,1), modelling three-dimensional Euclidean space, as it is this particular algebra that has been the subject of most attention over time; other cases are briefly covered in a separate section.
The space containing the objects being modelled is referred to here as the base space, and the algebraic space used to model these objects as the representation or conformal space. A homogeneous subspace refers to a linear subspace of the algebraic space.
The terms for objects: point, line, circle, sphere, quasi-sphere etc. are used to mean either the geometric object in the base |
https://en.wikipedia.org/wiki/Avner%20Magen | Avner Magen (March 30, 1968 – May 29, 2010) was an associate professor of computer science at the University of Toronto whose research focused on the theory of metric embeddings, discrete geometry and computational geometry. He completed his undergraduate and graduate studies at the Hebrew University of Jerusalem, and received his Ph.D. in Computer Science in 2002, under the supervision of Nati Linial. He held a postdoctoral fellowship at NEC Research in Princeton, New Jersey, from 2000 until 2002. He joined the University of Toronto in 2002, first as a postdoctoral fellow, and then as an assistant professor in 2004. He was promoted to associate professor in 2009.
His major contributions include an algorithm for approximating the weight of the Euclidean minimum spanning tree in sublinear time, and a tight integrality gap for the vertex cover problem obtained using the Frankl–Rödl graphs. With his coauthors he proved, essentially, that a large class of semidefinite programming algorithms for the famous vertex cover problem will not achieve a solution of value less than the value of the optimal solution times a factor of two. With Nati Linial and Michael Saks, he showed how to embed trees into Euclidean metrics with low distortion. In a later result, he showed how to do Johnson–Lindenstrauss-style embeddings that preserve not only distances, but also higher-order volumes.
He died in a climbing accident in Alaska on May 29, 2010, along with his good friend Andrew Herzenberg. He is survived by his wife, Ayelet, and three children, Noa, Ofri, and Roy.
References
External links
Avner Magen's home page at University of Toronto.
1968 births
2010 deaths
Mountaineering deaths
Canadian computer scientists
Theoretical computer scientists
Hebrew University of Jerusalem School of Computer Science & Engineering alumni
Academic staff of the University of Toronto |
https://en.wikipedia.org/wiki/Nakayama%27s%20conjecture | In mathematics, Nakayama's conjecture is a conjecture about Artinian rings, introduced by Tadasi Nakayama. The generalized Nakayama conjecture is an extension to more general rings, introduced by Maurice Auslander and Idun Reiten, and some of its cases have since been proved.
Nakayama's conjecture states that if all the modules of a minimal injective resolution of an Artin algebra R are injective and projective, then R is self-injective.
References
Ring theory
Conjectures |
https://en.wikipedia.org/wiki/Mehdi%20Eslami%20%28footballer%2C%20born%201985%29 | Seyed Mehdi Eslami (, born May 5, 1985, in Iran) is an Iranian former footballer.
Club career
Club career statistics
Honours
Club
Esteghlal
Hazfi Cup (1): 2011–12
References
External links
Seyed Mehdi Eslami at PersianLeague
Iranian men's footballers
1985 births
Living people
PAS Hamedan F.C. players
F.C. Shahrdari Bandar Abbas players
Esteghlal F.C. players
Machine Sazi F.C. players
Men's association football goalkeepers
Footballers from Tehran |
https://en.wikipedia.org/wiki/Oka%27s%20lemma | In mathematics, Oka's lemma, proved by Kiyoshi Oka, states that in a domain of holomorphy in ℂⁿ, the function −log d is plurisubharmonic, where d is the distance to the boundary. This property shows that the domain is pseudoconvex. Historically, the lemma was first shown for the Hartogs domain in the case of two variables. Oka's lemma is also the converse of Levi's problem (for unramified Riemann domains over ℂⁿ), which is presumably why Oka referred to Levi's problem as the "problème inverse de Hartogs"; Levi's problem is occasionally called the Hartogs inverse problem.
References
Further reading
Theorems in complex analysis
Lemmas in analysis |
https://en.wikipedia.org/wiki/Lalbaba%20College | Lalbaba College, established in 1964, is an undergraduate college in Howrah, India. It is affiliated with the University of Calcutta.
Departments
Science
Chemistry
Physics
Mathematics
Geography
Arts and Commerce
Bengali
Hindi
Sanskrit
English
Urdu
History
Political Science
Philosophy
Education
Economics
Commerce
Online Admission
Admission and the merit list for the 2022–23 academic year are available online.
Accreditation
The college is recognized by the University Grants Commission (UGC). In 2004 it was accredited by the National Assessment and Accreditation Council (NAAC) and awarded a B+ grade; the college is now preparing for its next re-accreditation.
See also
List of colleges affiliated to the University of Calcutta
Education in India
Education in West Bengal
References
External links
http://lalbabacollege.net
Educational institutions established in 1964
University of Calcutta affiliates
Universities and colleges in Howrah district
1964 establishments in West Bengal |
https://en.wikipedia.org/wiki/Archive%20for%20History%20of%20Exact%20Sciences | Archive for History of Exact Sciences is a peer-reviewed academic journal currently published bimonthly by Springer Science+Business Media, covering the history of mathematics and of astronomy observations and techniques, epistemology of science, and philosophy of science from Antiquity until now. It was established in 1960 and the current editors-in-chief are Jed Z. Buchwald and Jeremy Gray.
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2020 impact factor of 0.594.
References
External links
History of science journals
Springer Science+Business Media academic journals
Bimonthly journals
English-language journals
Academic journals established in 1960 |
https://en.wikipedia.org/wiki/Maplet | A maplet or maplet arrow (symbol: ↦, commonly pronounced "maps to") is a symbol consisting of a vertical line with a rightward-facing arrow. It is used in mathematics and in computer science to denote functions (the expression x ↦ y is also called a maplet). One example of use of the maplet is in Z notation, a formal specification language used in software development.
In the Unicode character set, the maplet is encoded at code point U+21A6 (RIGHTWARDS ARROW FROM BAR).
See also
Arrow notation, also known as map
References
Mathematical symbols |
https://en.wikipedia.org/wiki/Pontryagin%20product | In mathematics, the Pontryagin product, introduced by Lev Pontryagin, is a product on the homology of a topological space induced by a product on the topological space. Special cases include the Pontryagin product on the homology of an abelian group, the Pontryagin product on an H-space, and the Pontryagin product on a loop space.
Cross product
In order to define the Pontryagin product we first need a map which sends the direct product of the m-th and n-th homology groups to the (m+n)-th homology group of a space. We therefore define the cross product, starting on the level of singular chains. Given two topological spaces X and Y and two singular simplices f: Δm → X and g: Δn → Y, we can define the product map f × g: Δm × Δn → X × Y; the only difficulty is showing that this defines a singular (m+n)-chain in X × Y. To do this one can subdivide Δm × Δn into (m+n)-simplices. It is then easy to show that this map induces a map on homology of the form

Hm(X; R) ⊗ Hn(Y; R) → Hm+n(X × Y; R)

by proving that if f and g are cycles then so is f × g, and if either f or g is a boundary then so is the product.
Definition
Given an H-space X with multiplication μ: X × X → X, we define the Pontryagin product on homology by the following composition of maps

Hp(X; R) ⊗ Hq(X; R) → Hp+q(X × X; R) → Hp+q(X; R)

where the first map is the cross product defined above and the second map, μ*, is obtained by applying the homology functor to the multiplication of the H-space. The Pontryagin product of classes a and b is then a · b = μ*(a × b).
References
Homology theory
Group theory |
https://en.wikipedia.org/wiki/HSE%20Faculty%20of%20Mathematics | The Faculty of Mathematics (FM) at the National Research University Higher School of Economics (Russian: факультет математики Национального Исследовательского университета «Высшая Школа Экономики») was founded in 2008 jointly by the Higher School of Economics (HSE) and the Independent University of Moscow (IUM). It offers the Bachelor of Science program “Mathematics” (in Russian), the Master of Science program “Mathematics” (in English), and the Master of Science program “Mathematics and Mathematical Physics” (in Russian). The faculty also plays a key role in the HSE Graduate School of Mathematics (open to domestic and international students). Since the creation of the FM, new faculty members have been hired on the international market, and researchers from the USA, Japan, Canada, France, the UK, etc., have joined the team. The Faculty of Mathematics has joint departments with distinguished research institutes of the Russian Academy of Sciences: Steklov Institute of Mathematics, Kharkevich Institute for Information Transmission Problems, Lebedev Physical Institute. Associated with the FM are three international research groups, the so-called laboratories: the Laboratory of Algebraic Geometry and its Applications, the Laboratory of Representation Theory and Mathematical Physics, and the Laboratory of Mirror Symmetry and Automorphic Forms.
Jointly with Moscow Center for Pedagogical Mastership, the FM announced two new programs in 2017: one at the Bachelor and one at the Master level.
History
In 2007, the HSE central administration contacted the Independent University of Moscow (IUM) with a suggestion that it join HSE as one of its faculties. Although this suggestion in its initial form was rejected, the IUM decided to help establish a new Faculty of Mathematics. In particular, several IUM professors were hired by HSE in 2008 and formed the initial composition of the faculty. All further hiring was conducted through open international competitions. In September 2008, the FM admitted its first Bachelor of Science students.
The idea behind HSE Faculty of Mathematics was to create the first mathematics department in the former USSR that would be internationally competitive in the following sense:
Employment terms and conditions (including salary range and research funding) should be attractive for mathematicians from developed countries;
A hybrid strategy of instruction should be employed that combines Konstantinov's system and the best practices of leading Western mathematics departments (such as teaching assistantships, a cumulative grading scheme, an external English language proficiency evaluation, etc.).
The new faculty should inherit the main strengths of the IUM (the integration of study and research, the teaching of modern mathematics, and the high level of both professors and students) while avoiding its weaknesses: the lack of the right to grant state diplomas and, most seriously, the small number of holders of an IUM degree.
In 2010, a Master of Science pr |
https://en.wikipedia.org/wiki/Projective%20range | In mathematics, a projective range is a set of points in projective geometry considered in a unified fashion. A projective range may be a projective line or a conic. A projective range is the dual of a pencil of lines on a given point. For instance, a correlation interchanges the points of a projective range with the lines of a pencil. A projectivity is said to act from one range to another, though the two ranges may coincide as sets.
A projective range expresses projective invariance of the relation of projective harmonic conjugates. Indeed, three points on a projective line determine a fourth by this relation. Application of a projectivity to this quadruple results in four points likewise in the harmonic relation. Such a quadruple of points is termed a harmonic range. In 1940 Julian Coolidge described this structure and identified its originator:
Two fundamental one-dimensional forms such as point ranges, pencils of lines, or of planes are defined as projective, when their members are in one-to-one correspondence, and a harmonic set of one ... corresponds to a harmonic set of the other. ... If two one-dimensional forms are connected by a train of projections and intersections, harmonic elements will correspond to harmonic elements, and they are projective in the sense of Von Staudt.
Conic ranges
When a conic is chosen for a projective range, and a particular point E on the conic is selected as origin, then addition of points may be defined as follows:
Let A and B be in the range (conic) and AB the line connecting them. Let L be the line through E and parallel to AB. The "sum of points A and B", A + B, is the intersection of L with the range.
The circle and hyperbola are instances of a conic and the summation of angles on either can be generated by the method of "sum of points", provided points are associated with angles on the circle and hyperbolic angles on the hyperbola.
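This construction is easy to check numerically. The sketch below (an illustration, not from the article; `add_points` is a hypothetical helper name) works on the unit circle with the point E at angle 0 and verifies that the "sum of points" corresponds to the sum of the associated angles:

```python
import numpy as np

def point(theta):
    """Point on the unit circle at angle theta."""
    return np.array([np.cos(theta), np.sin(theta)])

def add_points(a, b):
    """Sum of the points at angles a and b, taking E = (1, 0) (angle 0)
    as origin: intersect the circle with the line through E parallel to
    the chord AB, per the definition above."""
    A, B = point(a), point(b)
    E = point(0.0)
    d = B - A                              # direction of the chord AB
    # E + t*d lies on the circle when t^2 |d|^2 + 2 t (E . d) = 0; the
    # nonzero root gives the second intersection point.
    t = -2.0 * E.dot(d) / d.dot(d)
    P = E + t * d
    return np.arctan2(P[1], P[0])          # angle of the sum point

a, b = 0.7, 1.1
s = add_points(a, b)
# Angles add modulo 2*pi:
assert abs((s - (a + b) + np.pi) % (2 * np.pi) - np.pi) < 1e-12
```

The check reflects the classical fact that chords subtending angles a→b and 0→(a+b) on a circle are parallel, which is exactly why this "sum of points" reproduces angle addition.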
References
H. S. M. Coxeter (1955) The Real Projective Plane, University of Toronto Press, p 20 for line, p 101 for conic.
Projective geometry |
https://en.wikipedia.org/wiki/Rosati%20involution | In mathematics, a Rosati involution, named after Carlo Rosati, is an involution of the rational endomorphism ring of an abelian variety induced by a polarisation.
Let A be an abelian variety, let Â = Pic^0(A) be the dual abelian variety, and for a ∈ A, let T_a : A → A be the translation-by-a map, T_a(x) = x + a. Then each divisor D on A defines a map φ_D : A → Â via φ_D(a) = [T_a^* D − D]. The map φ_D is a polarisation if D is ample. The Rosati involution of End(A) ⊗ Q relative to the polarisation φ = φ_D sends a map ψ ∈ End(A) ⊗ Q to the map ψ′ = φ^−1 ∘ ψ̂ ∘ φ, where ψ̂ : Â → Â is the dual map induced by the action of ψ^* on Pic(A).
Let NS(A) denote the Néron–Severi group of A. The polarisation φ = φ_D also induces an inclusion Φ : NS(A) ⊗ Q → End(A) ⊗ Q via Φ(E) = φ_D^−1 ∘ φ_E. The image of Φ is equal to the set of endomorphisms fixed by the Rosati involution, End(A)^sym. The operation E ⋆ F = ½ Φ^−1(Φ(E)Φ(F) + Φ(F)Φ(E)) then gives NS(A) ⊗ Q the structure of a formally real Jordan algebra.
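Two standard facts, added here as a hedged aside (they come from Mumford's Abelian Varieties, §21, rather than from the text above), help situate the involution: it is positive, and in the simplest case it is just complex conjugation.

```latex
% Positivity (Mumford, "Abelian Varieties", §21): the symmetric pairing
% (psi, chi) -> Tr(psi' . chi) is positive definite; in particular
\operatorname{Tr}(\psi' \circ \psi) > 0
  \quad \text{for all nonzero } \psi \in \operatorname{End}(A) \otimes \mathbb{Q}.

% Example: for an elliptic curve E with its canonical principal
% polarisation, End(E) tensor Q is Q or an imaginary quadratic field, the
% Rosati involution acts as complex conjugation, and its fixed subalgebra is
\{\psi : \psi' = \psi\} = \mathbb{Q} \cong \operatorname{NS}(E) \otimes \mathbb{Q}.
```

This positivity is what makes the fixed part a formally real Jordan algebra.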
References
Algebraic geometry
Ring theory |
https://en.wikipedia.org/wiki/Carlo%20Rosati | Carlo Rosati (Livorno, 24 April 1876 – Pisa, 19 August 1929) was an Italian mathematician working on algebraic geometry who introduced the Rosati involution.
Notes
References
External links
Carlo Rosati in Mathematica Italiana
Carlo Rosati
1876 births
1929 deaths
Italian mathematicians
Academic staff of the University of Pisa |
https://en.wikipedia.org/wiki/Reiss%20relation | In algebraic geometry, the Reiss relation, introduced by Reiss, is a condition on the second-order elements of the points of a plane algebraic curve meeting a given line.
Statement
If C is a complex plane curve given by the zeros of a polynomial f(x,y) of two variables, and L is a line meeting C transversely and not meeting C at infinity, then, in affine coordinates chosen so that L is the x-axis,

∑ (f_xx f_y^2 − 2 f_xy f_x f_y + f_yy f_x^2) / f_x^3 = 0,

where the sum is over the points of intersection of C and L, and f_x, f_xy, and so on stand for partial derivatives of f.

This can also be written as

∑ κ / sin^3 θ = 0,

where κ is the curvature of the curve C and θ is the angle its tangent line makes with L, and the sum is again over the points of intersection of C and L.
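As an illustration (not part of the article), the relation can be checked symbolically for a cubic meeting the x-axis in three points; here coordinates are chosen so that L is the x-axis, and the summand is (f_xx f_y^2 − 2 f_xy f_x f_y + f_yy f_x^2)/f_x^3:

```python
import sympy as sp

x, y = sp.symbols('x y')
# Cubic curve y = x(x - 1)(x - 3); it meets the x-axis (the line L)
# transversely at x = 0, 1, 3.
f = y - x * (x - 1) * (x - 3)

fx, fy = sp.diff(f, x), sp.diff(f, y)
fxx = sp.diff(f, x, 2)
fxy = sp.diff(f, x, y)
fyy = sp.diff(f, y, 2)
term = (fxx * fy**2 - 2 * fxy * fx * fy + fyy * fx**2) / fx**3

# Sum the expression over the intersection points of C and L (y = 0).
total = sum(term.subs({x: r, y: 0}) for r in sp.solve(f.subs(y, 0), x))
assert sp.simplify(total) == 0   # the Reiss relation holds exactly
```

For this curve the three terms are −8/27, 1/4 and 10/216, which indeed sum to zero.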
References
Akivis, M. A.; Goldberg, V. V.: Projective differential geometry of submanifolds. North-Holland Mathematical Library, 49. North-Holland Publishing Co., Amsterdam, 1993 (chapter 8).
Theorems in algebraic geometry
Algebraic curves |
https://en.wikipedia.org/wiki/Sonine%20formula | In mathematics, Sonine's formula is any of several formulas involving Bessel functions found by Nikolay Yakovlevich Sonin.
One such formula is the following integral formula involving a product of three Bessel functions:

∫_0^∞ J_0(at) J_0(bt) J_0(ct) t dt = 1/(2πΔ),

valid when a, b and c can be the sides of a triangle, where Δ is the area of that triangle.
References
Special hypergeometric functions |
https://en.wikipedia.org/wiki/1%20%E2%88%92%201%20%2B%202%20%E2%88%92%206%20%2B%2024%20%E2%88%92%20120%20%2B%20%E2%8B%AF | In mathematics,
is a divergent series, first considered by Euler, that sums the factorials of the natural numbers with alternating signs. Despite being divergent, it can be assigned a value of approximately 0.596347 by Borel summation.
Euler and Borel summation
This series was first considered by Euler, who applied summability methods to assign a finite value to the series. The series is a sum of factorials that are alternately added or subtracted. One way to assign a value to this divergent series is by using Borel summation, where one formally writes

∑_{k=0}^∞ (−1)^k k! = ∑_{k=0}^∞ (−1)^k ∫_0^∞ x^k e^{−x} dx.

If summation and integration are interchanged (ignoring that neither side converges), one obtains:

∑_{k=0}^∞ (−1)^k k! = ∫_0^∞ [ ∑_{k=0}^∞ (−x)^k ] e^{−x} dx.

The summation in the square brackets converges when |x| < 1, and for those values equals 1/(1 + x). The analytic continuation of 1/(1 + x) to all positive real x leads to a convergent integral for the summation:

∑_{k=0}^∞ (−1)^k k! = ∫_0^∞ (e^{−x}/(1 + x)) dx = e E_1(1) ≈ 0.596347,

where E_1(z) is the exponential integral. This is by definition the Borel sum of the series.
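The Borel sum is easy to evaluate numerically; the following sketch (an illustration, assuming SciPy) compares the integral of e^(−x)/(1+x) with the closed form e·E_1(1):

```python
import numpy as np
from scipy.special import exp1
from scipy.integrate import quad

# Borel sum of sum_k (-1)^k k!, i.e. the integral of e^(-x)/(1+x):
borel, _ = quad(lambda t: np.exp(-t) / (1 + t), 0, np.inf)

# Closed form e * E_1(1) via the exponential integral:
closed_form = np.e * exp1(1.0)

assert abs(borel - closed_form) < 1e-10
assert abs(borel - 0.596347) < 1e-6
```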
Connection to differential equations
Consider the coupled system of differential equations

ẋ(t) = x(t) − y(t),   ẏ(t) = −y(t)^2,

where dots denote derivatives with respect to t.

The solution with stable equilibrium at (x, y) = (0, 0) as t → ∞ has y(t) = 1/t, and substituting it into the first equation gives a formal series solution

x(t) = ∑_{n=1}^∞ (−1)^{n+1} (n − 1)!/t^n.

Observe that x(1) is precisely Euler's series.

On the other hand, the system of differential equations has a solution

x(t) = e^t ∫_t^∞ (e^{−u}/u) du.

By successively integrating by parts, the formal power series is recovered as an asymptotic approximation to this expression for x(t). Euler argues (more or less) that since the formal series and the integral both describe the same solution to the differential equations, they should equal each other at t = 1, giving

1 − 1 + 2 − 6 + 24 − 120 + ⋯ = e ∫_1^∞ (e^{−u}/u) du = e E_1(1) ≈ 0.596347.
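The asymptotic character of the series can also be seen numerically (a sketch, assuming SciPy): at moderate t, the series truncated near its smallest term approximates x(t) = e^t E_1(t) well, even though it diverges if summed in full.

```python
import math
from scipy.special import exp1

t = 10.0
exact = math.exp(t) * exp1(t)   # x(t) = e^t * integral_t^inf e^(-u)/u du

# Truncate the formal series sum_{n>=1} (-1)^(n+1) (n-1)! / t^n near its
# smallest term (roughly n = t terms), the usual optimal truncation.
series = sum((-1)**(n + 1) * math.factorial(n - 1) / t**n
             for n in range(1, 11))

assert abs(series - exact) < 1e-4   # agreement to about 4 decimal places
```

Adding more terms beyond this point makes the approximation worse, which is the hallmark of an asymptotic (rather than convergent) series.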
See also
Alternating factorial
1 + 1 + 1 + 1 + ⋯
1 − 1 + 1 − 1 + ⋯ (Grandi's series)
1 + 2 + 3 + 4 + ⋯
1 + 2 + 4 + 8 + ⋯
1 − 2 + 3 − 4 + ⋯
1 − 2 + 4 − 8 + ⋯
References
Further reading
Divergent series |
https://en.wikipedia.org/wiki/Method%20of%20mean%20weighted%20residuals | In applied mathematics, methods of mean weighted residuals (MWR) are methods for solving differential equations. The solutions of these differential equations are assumed to be well approximated by a finite sum of basis functions φ_i. In such cases, the selected method of weighted residuals is used to find the coefficient value of each corresponding basis function. The resulting coefficients are chosen to minimize the error between the linear combination of basis functions and the actual solution, in a chosen norm.
Notation of this page
To avoid confusion, it is important to first fix the notation used before presenting how this method is executed.
u shall be used to denote the solution to the differential equation that the MWR method is being applied to.
Solving the differential equation mentioned shall be accomplished by setting a function R, called the "residual function", to zero.
Every method of mean weighted residuals involves some "test functions" that shall be denoted by w_i.
The degrees of freedom shall be denoted by a_i.
If the assumed form of the solution to the differential equation is linear (in the degrees of freedom), then the basis functions used in said form shall be denoted by φ_i.
Mathematical statement of method
The method of mean weighted residuals solves R = 0 by imposing that the degrees of freedom a_i are such that

(R, w_i) = 0 for each test function w_i

is satisfied. Here the inner product (f, g) = ∫ f(x) g(x) ω(x) dx is the standard function inner product with respect to some weighting function ω(x), which is determined usually by the basis function set or arbitrarily according to whichever weighting function is most convenient. For instance, when the basis set is just the Chebyshev polynomials of the first kind, the weighting function is typically ω(x) = 1/√(1 − x²), because inner products can then be more easily computed using a Chebyshev transform.
Additionally, all these methods enforce boundary conditions in one of two ways. Either the basis functions (in the case of a linear combination) individually satisfy the boundary conditions of the original BVP; this works directly only when the boundary conditions are homogeneous, but it can be applied to problems with inhomogeneous boundary conditions by writing u(x) = v(x) + L(x), where L(x) is a known function that satisfies the boundary conditions imposed on u, substituting this expression into the original differential equation, and imposing homogeneous boundary conditions on the new unknown v(x). Or the boundary conditions are imposed explicitly, by removing n rows from the matrix representing the discretised problem, where n is the order of the differential equation, and substituting them with rows that represent the boundary conditions.
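As a concrete sketch (an illustration, not from the article): a Galerkin instance of MWR for −u″ = 1 on (0, 1) with u(0) = u(1) = 0, using the sine basis φ_i(x) = sin(iπx), which individually satisfies the homogeneous boundary conditions, and taking the test functions equal to the basis functions:

```python
import numpy as np
from scipy.integrate import quad

N = 5  # number of degrees of freedom a_1, ..., a_N

# Assemble <L phi_j, phi_i> and <f, phi_i> with L = -d^2/dx^2 and f = 1;
# note L phi_j = (j*pi)^2 phi_j for phi_j(x) = sin(j*pi*x).
A = np.zeros((N, N))
b = np.zeros(N)
for i in range(1, N + 1):
    b[i - 1] = quad(lambda x: np.sin(i * np.pi * x), 0, 1)[0]
    for j in range(1, N + 1):
        A[i - 1, j - 1] = quad(
            lambda x: (j * np.pi)**2 * np.sin(j * np.pi * x)
                      * np.sin(i * np.pi * x), 0, 1)[0]

a = np.linalg.solve(A, b)

def u(xv):
    """Approximate solution: linear combination of the basis functions."""
    return sum(a[i - 1] * np.sin(i * np.pi * xv) for i in range(1, N + 1))

# Exact solution is u(x) = x(1 - x)/2, so u(0.5) = 0.125.
assert abs(u(0.5) - 0.125) < 1e-3
```

With this orthogonal basis the matrix A is diagonal, but the assembly above follows the general MWR recipe and works unchanged for any other choice of basis and test functions.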
Choice of test functions
The choice of test function, as mentioned earlier, depends on the specific method used (under the general heading of mean weighted residual methods). Here is a list of commonly used specific MWR methods and their corresponding test functions roughly ac |
https://en.wikipedia.org/wiki/Atkinson%E2%80%93Mingarelli%20theorem | In applied mathematics, the Atkinson–Mingarelli theorem, named after Frederick Valentine Atkinson and A. B. Mingarelli, concerns eigenvalues of certain Sturm–Liouville differential operators.
In the simplest of formulations let p, q, w be real-valued piecewise continuous functions defined on a closed bounded real interval I = [a, b]. The function w(x), which is sometimes denoted by r(x), is called the "weight" or "density" function. Consider the Sturm–Liouville differential equation

−(p(x) y′)′ + q(x) y = λ w(x) y,
where y is a function of the independent variable x. In this case, y is called a solution if it is continuously differentiable on (a,b) and (p y′)(x) is piecewise continuously differentiable and y satisfies the equation above at all except a finite number of points in (a,b). The unknown function y is typically required to satisfy some boundary conditions at a and b.
The boundary conditions under consideration here are usually called separated boundary conditions and they are of the form:

α_1 y(a) + α_2 (p y′)(a) = 0,
β_1 y(b) + β_2 (p y′)(b) = 0,

where the α_i, β_i are real numbers, not both zero at each endpoint. We define the positive and negative parts of a function f by f_+(x) = max{f(x), 0} and f_−(x) = max{−f(x), 0}.
The theorem
Assume that p(x) has a finite number of sign changes and that the positive part (p/w)_+ and the negative part (p/w)_− of the function p(x)/w(x) are not identically zero functions over I. Then the eigenvalue problem above has an infinite number of real positive eigenvalues λ_0^+ < λ_1^+ < λ_2^+ < ⋯, with λ_n^+ → ∞, and an infinite number of real negative eigenvalues λ_0^− > λ_1^− > λ_2^− > ⋯, with λ_n^− → −∞, whose spectral asymptotics are given by their solution [2] of Jörgens' conjecture [3].
For more information on the general theory behind this equation, see the article on Sturm–Liouville theory. The stated theorem is actually valid more generally for coefficient functions p, q, w that are Lebesgue integrable over I.
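For the classical definite case (p = w = 1, q = 0, Dirichlet conditions on [0, π]) the eigenvalues are λ_n = n², and their growth can be seen in a quick finite-difference sketch (an illustration of the definite Sturm–Liouville setting, not of the indefinite case the theorem addresses):

```python
import numpy as np

# -y'' = lambda * y on [0, pi], y(0) = y(pi) = 0, discretised by central
# differences on an interior grid of n points.
n = 500
h = np.pi / (n + 1)
T = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

eigs = np.linalg.eigvalsh(T)[:5]
# Exact eigenvalues are 1, 4, 9, 16, 25.
assert np.allclose(eigs, [1, 4, 9, 16, 25], rtol=1e-3)
```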
References
F. V. Atkinson, A. B. Mingarelli, Multiparameter Eigenvalue Problems – Sturm–Liouville Theory, CRC Press, Taylor and Francis, 2010.
F. V. Atkinson, A. B. Mingarelli, Asymptotics of the number of zeros and of the eigenvalues of general weighted Sturm–Liouville problems, J. für die Reine und Ang. Math. (Crelle), 375/376 (1987), 380–393. See also free download of the original paper.
K. Jörgens, Spectral theory of second-order ordinary differential operators, Lectures delivered at Aarhus Universitet, 1962/63.
Ordinary differential equations
Theorems in analysis |