Wikipedia:Tuna Altınel#0
Tuna Altınel is a Turkish mathematician, born February 12, 1966, in Istanbul, who has worked at the University Lyon 1 in France since 1996. He is a specialist in group theory and mathematical logic. With Alexandre Borovik and Gregory Cherlin, he proved a major case of the Cherlin–Zilber conjecture. Altınel is active in the Academics for Peace movement, which supports a peaceful resolution of the conflict in south-eastern Turkey and calls for the human rights of the civilian population to be respected. Accused by the Turkish authorities of membership in a terrorist organization, Altınel has been imprisoned since May 11, 2019, at the Kepsut prison in Turkey. == Education and career == After undergraduate studies in mathematics and computer science at Boğaziçi University, Istanbul, Altınel received his doctorate from Rutgers University (New Jersey, United States) under the direction of Gregory Cherlin. In 1996 he joined the mathematics department of the University Lyon 1 as maître de conférences, and completed his French habilitation in 2001. Altınel has written 26 mathematical articles, principally on groups in model theory, more particularly groups of finite Morley rank and the Cherlin–Zilber algebraicity conjecture, which concerns the structure of the simple groups of finite Morley rank. He is joint author, with Alexandre Borovik and Gregory Cherlin, of a book in which this conjecture is proved in the case of infinite 2-rank, after the development of a body of machinery analogous to certain chapters of finite simple group theory. Altınel's doctoral advisees include Éric Jaligot, winner of the 2000 Sacks Prize, awarded annually for an outstanding doctoral thesis in mathematical logic (Jaligot's thesis was supervised jointly by Tuna Altınel and Bruno Poizat). 
He is active in the domain of scientific cooperation with Turkey; in particular, he was an organizer of an international mathematics conference held in Istanbul in 2016 in honor of Alexandre Borovik and Ali Nesin (winner of the 2018 Leelavati Prize). == Political activities == === Overview === Altınel has been an active supporter of a peaceful resolution of the conflict in southeastern Turkey and of human rights and civil liberties in Turkey. With regard to the Kurdish conflict in southeastern Turkey, he was one of 116 academics who signed a 2003 letter in support of a peaceful resolution of that conflict, among the first group of signatories of a similar peace petition in January 2016 that garnered 1128 signatures at the time of its promulgation under the title "We will not be parties to this crime," among the 132 intellectuals calling for assistance to those wounded in the conflict at Cizre, and one of 170 academics to sign a letter in 2018 opposing the Afrin operation. On February 21, 2019, he acted as translator for a former member of parliament of the Peoples' Democratic Party (HDP) at a public meeting in Lyon, France, at which a documentary on the Cizre massacres was shown, followed by a discussion. With the resumption of active conflict in August 2015 following a period of relative calm, Altınel reached out to the affected community and began to visit the areas involved in September 2015. His own account of these activities, from subsequent court testimony, is quoted below. With the trials of the signatories of the January 2016 petition and the broader wave of repression following the attempted coup of July 2016, described in more detail below, questions of academic freedom and freedom of speech became more prominent. 
Altınel's actions in this direction include: a petition responding to the suicide of Mehmet Fatih Traş, an academic fired for his involvement with the peace petition (February 2017); denunciation of the role of the Turkish research council TÜBİTAK in the state of emergency following the attempted coup d'état of 2016 (April 2017), after which the CNRS Scientific Council voted unanimously to recommend that the CNRS reconsider its agreements concerning collaboration with TÜBİTAK (April 24–25, 2017); publication of a review article on the trials of the Academics for Peace entitled "Les procès contre les Universitaires pour la paix : extraits d'une comédie politico-juridique" ("The trials of the Academics for Peace: scenes from a politico-juridical spectacle"); and a petition in support of Academic for Peace Füsün Üstel. These activities have led to two separate court cases against Altınel in Turkey, and his social media postings have been used to justify the second of these cases. === January 2016 petition and Academics for Peace === Altınel was one of the first signatories of the January 2016 peace petition entitled "We will not be parties to this crime!", which was promulgated by the Academics for Peace on January 11, 2016. The following day, President Erdoğan publicly criticized the signatories, and within a few days 27 had been arrested. At the same time, foreign reaction was strongly supportive of the signatories. The peace petition ultimately garnered 2212 signatures of academics, largely in Turkey. Altınel is one of over 750 signatories, from the first group of 1128, who had been prosecuted or sentenced as individuals for that act under Turkish anti-terrorism legislation through June 2019, on a charge of "propaganda in support of a terrorist organization." Since 2016 Altınel has been an active and vocal supporter both of the content of this petition and of the civil rights of its signers. 
In the second hearing in his case, February 28, 2019, at the 29th Central Criminal Court, Çağlayan Courthouse, Istanbul, Altınel testified that he had aided civilian victims of military operations that took place in the towns placed under military curfew: Since September 2015, I have traveled several times to a number of provinces, including some of those mentioned in the Peace Petition which I signed. ... I carried bag upon bag of provisions to help the victims of destruction and forced migration, I spoke with those who had lost their homes and relatives. I did all of this on my own initiative, and my principle was as follows: If every Turkish citizen will do what I do, we will come closer to peace. You can find the traces of my efforts where I sojourned in the towns of Sur, Nusaybin, Cizre, Hakkari, and Yüksekova. The Prosecutor may use this as evidence against me. ... I did not simply sign the Peace Petition. I thought about it, felt it, lived it. I wrote that text. I stand behind every sentence. The sentencing hearing for Altınel's trial for "propaganda on behalf of a terrorist organization" in the context of the Academics for Peace Trials is scheduled for July 16, 2019. === 2019 charge and imprisonment === On April 12, 2019, on arriving for a visit to Turkey, Altınel's passport was confiscated at the airport. On May 10 he requested a new passport at the Balıkesir prefecture and was taken into custody for interrogation and placed in pre-trial detention on the following day. It was learned later that a new charge had been filed against him on April 30, 2019, at the prosecutor general's office in Balıkesir. This new charge is "membership in a terrorist organization", based on his participation on February 21, 2019, at a public meeting in Villeurbanne, near Lyon, France. 
This meeting was organized by the local Kurdish Society; a documentary was shown on the subject of the Cizre massacres and a discussion was held with a former member of the Turkish parliament, Faysal Sarıyıldız (HDP), now in exile. At that public meeting, Altınel acted as translator for the former MP. On May 8 Füsun Üstel was incarcerated and began serving a 15-month sentence for signing the peace petition of January 2016. Altınel was arrested on May 11. After his first hearing on the new charge was scheduled for July 30, 2019, he was released. === Reactions === ==== Press reports ==== Altınel's May 11 arrest was widely reported in the press, notably in France and in Turkey. Some early reports of the arrest in Turkey, quoting variously Altınel's lawyer or Academics for Peace, put the case in the context of the Academics for Peace trials and the conference held in Lyon, France. Other reports, originating with the İhlas News Agency and carried on Habertürk and elsewhere, described the case as the capture of a wanted terrorist; one of these reports stated that an anti-terrorist operation had captured five members of the Gülen movement and the Kurdistan Workers' Party (PKK), listing Altınel's arrest as the fifth. The first article in France, in Mediapart, appeared that same day and was followed rapidly by articles in Le Progrès, Le Monde, 20 minutes, Lyon Capitale, Lyon Mag, Le Figaro Étudiant, Le Figaro, Le Canard enchaîné, Libération, and L'Humanité. Altınel was featured as L'Humanité's Man of the Day on May 16, 2019. Euronews TV reported on the case on May 30, 2019. ==== Official reactions ==== Less than two weeks after the confiscation of Altınel's passport, on April 23, 2019, the French Applied Mathematics Society and the French Mathematical Society wrote jointly to President Macron of France. 
On May 11, the day of Altınel's arrest, the Turkish Consul General in Lyon, Mehmet Özgür Çakar, stated "Tuna Altınel organized, and moderated, a meeting in Lyon consisting entirely of propaganda in favor of the PKK. ... It is possible that this had a negative effect on his situation." The consul also noted that the PKK remains classified as a terrorist group by Ankara, the United States, and the European Union. The French Ministry of Europe and Foreign Affairs expressed its "disquiet" on May 13, 2019. A support committee formed in Lyon created a website to document the evolution of the affair, and on May 23 the committee launched a petition in favor of the liberation of Altınel, with over 6000 signatories as of June 2019, predominantly academics, along with approximately 60 members of the French National Assembly. Professional societies from a number of countries, including mathematics societies in the United States, France, Great Britain, Germany, Austria, Italy, and Belgium, as well as the European Mathematical Society, the Association for Symbolic Logic, and the Committee of Concerned Scientists, have issued statements in support of Altınel. ==== National Assembly, France ==== On June 11, 2019, the French mathematician and politician Cédric Villani (LREM), Member of Parliament for Essonne's fifth district and Fields medalist, a colleague and outspoken supporter of Altınel, posed a question on the subject during a session of the National Assembly to the Minister for Europe and Foreign Affairs, Jean-Yves Le Drian, who stated that the government was committed to doing "everything in its power" in favor of Altınel's liberation, notably on the occasion of Le Drian's June 13 visit to Turkey to consult his counterpart there. 
== See also == Stable group; Presidency of Recep Tayyip Erdoğan: State of emergency and purges; Censorship in Turkey: Article 301; Kurdish–Turkish conflict (2015–present). == References == == External links == Tuna Altınel: CV. Altinel, Tuna; Borovik, Alexandre; Cherlin, Gregory (2008), Simple Groups of Finite Morley Rank, Mathematical Surveys and Monographs, vol. 145, Providence, RI: American Mathematical Society, pp. xx+556, doi:10.1090/surv/145, ISBN 978-0-8218-4305-5, MR 2400564. Altinel Support Committee, Lyon. Webpage, Academics for Peace. "Observations from 28th February 2019, in the Çağlayan Courts ('The Turkish State vs. Academics for Peace')", David Bradley-Williams, April/May 2019. Translation of statement by Altınel, Feb. 28, 2019, Çağlayan Courthouse.
Wikipedia:Turgay Uzer#0
Ahmet Turgay Uzer is a Turkish-born American theoretical physicist and nature photographer. He is Regents' Professor Emeritus at the Georgia Institute of Technology, succeeding Joseph Ford. He has contributed significantly to atomic and molecular physics, nonlinear dynamics, and chaos. His research on the interplay between quantum dynamics and classical mechanics in the context of chaos is considered novel in molecular and theoretical physics and chemistry. == Academic career == Turgay Uzer completed his bachelor's degree at Turkey's Middle East Technical University. According to the Harvard University Library, his doctoral thesis was entitled "Photon and electron interactions with diatomic molecules." He defended his dissertation and graduated from Harvard University in 1979. Before joining Georgia Tech in 1985 as an associate professor, he was a research fellow at the University of Oxford (1979–81) and Caltech (1982–83), and a research associate at the University of Colorado (1983–85). Currently, he is a faculty member with the Center for Nonlinear Science and full professor of physics at Georgia Tech. His research areas are quite broad, but he has focused on the dynamics of intermolecular energy transfer, reaction dynamics, quantal manifestations of classical mechanics, quantization of nonlinear systems, computational physics, molecular physics, and applied mathematics. == Awards == Uzer was an Alexander von Humboldt Foundation Fellow in 1993–1994 at the Max Planck Institute in Munich. In 1998 he was awarded the Science Award of Turkey's Scientific and Technological Research Council (TÜBİTAK) for his contributions to physics. 
== Selected publications == === Books === The Physics and Chemistry of Wave Packets, with John Yeazell (Google Books). Lecture Notes on Atomic and Molecular Physics, with Şakir Erkoç (Google Books). === Some of the seminal papers === Uzer has more than 80 refereed journal articles in a number of highly respected scientific journals, including "Chaotic billiards with neutral boundaries" (PRE), "Celestial Mechanics on a Microscopic Scale" (Science), and "Quantization with operators appropriate to shapes of trajectories and classical perturbation theory" (JCP). == References == == External links == Georgia Tech homepage. "Physics Professor Turgay Uzer has been named Regents' Professor".
Wikipedia:Tutte matrix#0
In graph theory, the Tutte matrix A of a graph G = (V, E) is a matrix used to determine the existence of a perfect matching: that is, a set of edges which is incident with each vertex exactly once. If the set of vertices is V = {1, 2, …, n}, then the Tutte matrix is an n-by-n matrix A with entries {\displaystyle A_{ij}={\begin{cases}x_{ij}&{\mbox{if }}(i,j)\in E{\mbox{ and }}i<j\\-x_{ji}&{\mbox{if }}(i,j)\in E{\mbox{ and }}i>j\\0&{\mbox{otherwise}}\end{cases}}} where the x_ij are indeterminates. The determinant of this skew-symmetric matrix is then a polynomial (in the variables x_ij, i < j): this coincides with the square of the Pfaffian of the matrix A, and is non-zero (as a polynomial) if and only if a perfect matching exists. (This polynomial is not the Tutte polynomial of G.) The Tutte matrix is named after W. T. Tutte, and is a generalisation of the Edmonds matrix for a balanced bipartite graph. == References == R. Motwani, P. Raghavan (1995). Randomized Algorithms. Cambridge University Press. p. 167. Allen B. Tucker (2004). Computer Science Handbook. CRC Press. p. 12.19. ISBN 1-58488-360-X. W. T. Tutte (April 1947). "The factorization of linear graphs". J. London Math. Soc. 22 (2): 107–111. doi:10.1112/jlms/s1-22.2.107.
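The determinant criterion yields a simple randomized test for the existence of a perfect matching, in the spirit of the Motwani–Raghavan reference above: substitute random field elements for the indeterminates and compute the determinant; by the Schwartz–Zippel lemma, a nonzero polynomial rarely evaluates to zero. A minimal Python sketch (the function names are mine, not from the literature):

```python
import random

def det_mod(A, p):
    """Determinant mod prime p by Gaussian elimination."""
    A = [row[:] for row in A]
    n, det = len(A), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if A[r][c] % p), None)
        if piv is None:
            return 0
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            det = -det
        det = det * A[c][c] % p
        inv = pow(A[c][c], p - 2, p)       # modular inverse of the pivot
        for r in range(c + 1, n):
            f = A[r][c] * inv % p
            for k in range(c, n):
                A[r][k] = (A[r][k] - f * A[c][k]) % p
    return det % p

def tutte_matrix_test(n, edges, trials=5, p=10**9 + 7):
    """Substitute random values for the indeterminates x_ij and check
    det != 0 mod p. A nonzero determinant certifies a perfect matching;
    a zero answer is wrong only with small probability (Schwartz-Zippel)."""
    edges = {tuple(sorted(e)) for e in edges}
    for _ in range(trials):
        A = [[0] * n for _ in range(n)]
        for i, j in edges:                 # vertices numbered 0..n-1, i < j
            x = random.randrange(1, p)
            A[i][j] = x                    # +x_ij above the diagonal
            A[j][i] = -x % p               # -x_ij below: skew-symmetric
        if det_mod(A, p) != 0:
            return True
    return False

# A 4-cycle has a perfect matching; a path on 3 vertices cannot.
print(tutte_matrix_test(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True (w.h.p.)
print(tutte_matrix_test(3, [(0, 1), (1, 2)]))                  # False
```

Note that for an odd number of vertices the matrix is skew-symmetric of odd order, so its determinant vanishes identically and the test correctly reports that no perfect matching exists.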
Wikipedia:Twin circles#0
In geometry, the twin circles are two special circles associated with an arbelos. An arbelos is determined by three collinear points A, B, and C, and is the curvilinear triangular region between the three semicircles that have AB, BC, and AC as their diameters. If the arbelos is partitioned into two smaller regions by a line segment through B, the middle of the three points, perpendicular to the line ABC, then each of the two twin circles lies within one of these two regions, tangent to its two semicircular sides and to the splitting segment. These circles first appeared in the Book of Lemmas, which showed (Proposition V) that the two circles are congruent. Thābit ibn Qurra, who translated this book into Arabic, attributed it to the Greek mathematician Archimedes. Based on this claim, the twin circles, and several other circles in the arbelos congruent to them, have also been called Archimedes's circles. However, this attribution has been questioned by later scholarship. == Construction == Specifically, let A, B, and C be the three corners of the arbelos, with B between A and C. Let D be the point where the larger semicircle intersects the line perpendicular to AC through the point B. The segment BD divides the arbelos into two parts. The twin circles are the two circles inscribed in these parts, each tangent to one of the two smaller semicircles, to the segment BD, and to the largest semicircle. Each of the two circles is uniquely determined by its three tangencies; constructing it is a special case of the Problem of Apollonius. Alternative approaches to constructing two circles congruent to the twin circles have also been found. These circles have also been called Archimedean circles; they include the Bankoff circle, Schoch circles, and Woo circles. 
== Properties == Let a and b be the diameters of the two inner semicircles, so that the outer semicircle has diameter a + b. The diameter of each twin circle is then d = ab/(a + b). Alternatively, if the outer semicircle has unit diameter and the inner circles have diameters s and 1 − s, the diameter of each twin circle is d = s(1 − s). The smallest circle that encloses both twin circles has the same area as the arbelos. == See also == Schoch line == References ==
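As a quick consistency check, the two diameter formulas agree once the outer diameter is rescaled to 1; a short Python sketch with exact rational arithmetic (the helper name is my own):

```python
from fractions import Fraction

def twin_diameter(a, b):
    """Diameter of each twin circle, given inner semicircle diameters a and b."""
    return Fraction(a * b, a + b)

a, b = 3, 5
d = twin_diameter(a, b)
s = Fraction(a, a + b)              # first inner diameter after rescaling to unit outer diameter
assert d / (a + b) == s * (1 - s)   # d rescales to s(1 - s)
print(d)                            # 15/8
```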
Wikipedia:Two-element Boolean algebra#0
In mathematics and abstract algebra, the two-element Boolean algebra is the Boolean algebra whose underlying set (or universe or carrier) B is the Boolean domain. The elements of the Boolean domain are 1 and 0 by convention, so that B = {0, 1}. Paul Halmos's name for this algebra, "2", has some following in the literature, and will be employed here. == Definition == B is a partially ordered set and the elements of B are also its bounds. An operation of arity n is a mapping from Bⁿ to B. Boolean algebra consists of two binary operations and unary complementation. The binary operations have been named and notated in various ways. Here they are called 'sum' and 'product', and notated by infix '+' and '∙', respectively. Sum and product commute and associate, as in the usual algebra of real numbers. As for the order of operations, brackets are decisive if present; otherwise '∙' precedes '+'. Hence A ∙ B + C is parsed as (A ∙ B) + C and not as A ∙ (B + C). Complementation is denoted by writing an overbar over its argument. The numerical analog of the complement of X is 1 − X. In the language of universal algebra, a Boolean algebra is an algebra ⟨B, +, ∙, ¯, 1, 0⟩ of type ⟨2, 2, 1, 0, 0⟩. Either one-to-one correspondence between {0, 1} and {True, False} yields classical bivalent logic in equational form, with complementation read as NOT. If 1 is read as True, '+' is read as OR, and '∙' as AND, and vice versa if 1 is read as False. These two operations define a commutative semiring, known as the Boolean semiring. 
== Some basic identities == 2 can be seen as grounded in the following trivial "Boolean" arithmetic: {\displaystyle {\begin{aligned}&1+1=1+0=0+1=1\\&0+0=0\\&0\cdot 0=0\cdot 1=1\cdot 0=0\\&1\cdot 1=1\\&{\overline {1}}=0\\&{\overline {0}}=1\end{aligned}}} Note that: '+' and '∙' work exactly as in numerical arithmetic, except that 1 + 1 = 1. '+' and '∙' are derived by analogy from numerical arithmetic; simply set any nonzero number to 1. Swapping 0 and 1, and '+' and '∙', preserves truth; this is the essence of the duality pervading all Boolean algebras. This Boolean arithmetic suffices to verify any equation of 2, including the axioms, by examining every possible assignment of 0s and 1s to each variable (see decision procedure). The following equations may now be verified: {\displaystyle {\begin{aligned}&A+A=A\\&A\cdot A=A\\&A+0=A\\&A+1=1\\&A\cdot 0=0\\&{\overline {\overline {A}}}=A\end{aligned}}} Each of '+' and '∙' distributes over the other: {\displaystyle \ A\cdot (B+C)=A\cdot B+A\cdot C;} {\displaystyle \ A+(B\cdot C)=(A+B)\cdot (A+C).} That '∙' distributes over '+' agrees with elementary algebra, but '+' over '∙' does not. For this and other reasons, a sum of products (leading to a NAND synthesis) is more commonly employed than a product of sums (leading to a NOR synthesis). Each of '+' and '∙' can be defined in terms of the other and complementation: {\displaystyle A\cdot B={\overline {{\overline {A}}+{\overline {B}}}}} {\displaystyle A+B={\overline {{\overline {A}}\cdot {\overline {B}}}}.} We only need one binary operation, and concatenation suffices to denote it. Hence concatenation and overbar suffice to notate 2. This notation is also that of Quine's Boolean term schemata. 
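The decision procedure mentioned above (checking every possible assignment of 0s and 1s) is easy to sketch in Python; the operator names here are my own labels for the operations of 2:

```python
from itertools import product

# the operations of 2 on B = {0, 1}
OR  = lambda a, b: a | b       # '+'
AND = lambda a, b: a & b       # '.' (product)
NOT = lambda a: 1 - a          # overbar; numerical analog 1 - X

def holds(identity, nvars):
    """Decision procedure for 2: verify an identity by examining every
    possible assignment of 0s and 1s to its variables."""
    return all(identity(*v) for v in product((0, 1), repeat=nvars))

assert holds(lambda a: OR(a, a) == a, 1)               # A + A = A
assert holds(lambda a: OR(a, 1) == 1, 1)               # A + 1 = 1
assert holds(lambda a: NOT(NOT(a)) == a, 1)            # double complement
# '+' distributes over '.', unlike ordinary arithmetic:
assert holds(lambda a, b, c:
             OR(a, AND(b, c)) == AND(OR(a, b), OR(a, c)), 3)
# '.' defined from '+' and complementation (De Morgan):
assert holds(lambda a, b: AND(a, b) == NOT(OR(NOT(a), NOT(b))), 2)
```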
Letting (X) denote the complement of X and "()" denote either 0 or 1 yields the syntax of the primary algebra of G. Spencer-Brown's Laws of Form. A basis for 2 is a set of equations, called axioms, from which all of the above equations (and more) can be derived. There are many known bases for all Boolean algebras and hence for 2. An elegant basis notated using only concatenation and overbar is: (1) {\displaystyle \ ABC=BCA} (concatenation commutes and associates); (2) {\displaystyle {\overline {A}}A=1} (2 is a complemented lattice, with an upper bound of 1); (3) {\displaystyle \ A0=A} (0 is the lower bound); (4) {\displaystyle A{\overline {AB}}=A{\overline {B}}} (2 is a distributive lattice). Here concatenation = OR, 1 = true, and 0 = false, or concatenation = AND, 1 = false, and 0 = true; the overbar is negation in both cases. If 0 = 1, (1)–(3) are the axioms for an abelian group. (1) serves only to prove that concatenation commutes and associates: first assume that (1) associates from either the left or the right, then prove commutativity; then prove association from the other direction. Associativity is simply association from the left and right combined. This basis makes for an easy approach to proof, called "calculation" in Laws of Form, that proceeds by simplifying expressions to 0 or 1 by invoking axioms (2)–(4), the elementary identities {\displaystyle AA=A,{\overline {\overline {A}}}=A,1+A=1}, and the distributive law. == Metatheory == De Morgan's theorem states that if one does the following, in the given order, to any Boolean function: (i) complement every variable; (ii) swap '+' and '∙' operators (taking care to add brackets to ensure the order of operations remains the same); (iii) complement the result; then the result is logically equivalent to what you started with. Repeated application of De Morgan's theorem to parts of a function can be used to drive all complements down to the individual variables. 
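The complement-driving process can be sketched as a small recursive transform on expression trees; the tuple encoding below is my own illustration, not a standard representation:

```python
# expressions as nested tuples:
# ('var', name), ('not', e), ('or', e1, e2), ('and', e1, e2)
def push_complements(e, negate=False):
    """Drive all complements down to the variables by repeated De Morgan:
    not(A + B) = (not A).(not B) and not(A.B) = (not A) + (not B)."""
    op = e[0]
    if op == 'var':
        return ('not', e) if negate else e
    if op == 'not':
        return push_complements(e[1], not negate)
    # 'or' / 'and': under a pending complement, swap the operator
    swapped = {'or': 'and', 'and': 'or'}[op]
    return ((swapped if negate else op),
            push_complements(e[1], negate),
            push_complements(e[2], negate))

# not((A + B) . C)  becomes  (not A . not B) + not C
expr = ('not', ('and', ('or', ('var', 'A'), ('var', 'B')), ('var', 'C')))
print(push_complements(expr))
```

The `negate` flag carries the pending complement downward, so each variable ends up complemented exactly when an odd number of complements enclosed it.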
A powerful and nontrivial metatheorem states that any identity of 2 holds for all Boolean algebras. Conversely, an identity that holds for an arbitrary nontrivial Boolean algebra also holds in 2. Hence all identities of Boolean algebra are captured by 2. This theorem is useful because any equation in 2 can be verified by a decision procedure. Logicians refer to this fact as "2 is decidable". All known decision procedures require a number of steps that is an exponential function of the number of variables N appearing in the equation to be verified. Whether there exists a decision procedure whose steps are a polynomial function of N falls under the P = NP conjecture. The above metatheorem does not hold if we consider the validity of more general first-order logic formulas instead of only atomic positive equalities. As an example consider the formula (x = 0) ∨ (x = 1). This formula is always true in a two-element Boolean algebra. In a four-element Boolean algebra whose domain is the powerset of {0, 1}, this formula corresponds to the statement (x = ∅) ∨ (x = {0, 1}) and is false when x is {1}. The decidability of the first-order theory of many classes of Boolean algebras can still be shown, using quantifier elimination or the small model property (with the domain size computed as a function of the formula, and generally larger than 2). == See also == Boolean algebra Bounded set Lattice (order) Order theory == References == == Further reading == Many elementary texts on Boolean algebra were published in the early years of the computer era. Perhaps the best of the lot, and one still in print, is: Mendelson, Elliott, 1970. Schaum's Outline of Boolean Algebra. McGraw–Hill. The following items reveal how the two-element Boolean algebra is mathematically nontrivial. Stanford Encyclopedia of Philosophy: "The Mathematics of Boolean Algebra," by J. Donald Monk. Burris, Stanley N., and Sankappanavar, H. P., 1981. 
A Course in Universal Algebra. Springer-Verlag. ISBN 3-540-90578-2.
Wikipedia:Two-graph#0
In mathematics, a two-graph is a set of unordered triples chosen from a finite vertex set X, such that every unordered quadruple from X contains an even number of triples of the two-graph. A regular two-graph has the property that every pair of vertices lies in the same number of triples of the two-graph. Two-graphs have been studied because of their connection with equiangular lines and, for regular two-graphs, strongly regular graphs, and also finite groups because many regular two-graphs have interesting automorphism groups. A two-graph is not a graph and should not be confused with other objects called 2-graphs in graph theory, such as 2-regular graphs. == Examples == On the set of vertices {1,...,6} the following collection of unordered triples is a two-graph: 123 124 135 146 156 236 245 256 345 346 This two-graph is a regular two-graph since each pair of distinct vertices appears together in exactly two triples. Given a simple graph G = (V,E), the set of triples of the vertex set V whose induced subgraph has an odd number of edges forms a two-graph on the set V. Every two-graph can be represented in this way. This example is referred to as the standard construction of a two-graph from a simple graph. As a more complex example, let T be a tree with edge set E. The set of all triples of E that are not contained in a path of T form a two-graph on the set E. == Switching and graphs == A two-graph is equivalent to a switching class of graphs and also to a (signed) switching class of signed complete graphs. Switching a set of vertices in a (simple) graph means reversing the adjacencies of each pair of vertices, one in the set and the other not in the set: thus the edge set is changed so that an adjacent pair becomes nonadjacent and a nonadjacent pair becomes adjacent. The edges whose endpoints are both in the set, or both not in the set, are not changed. Graphs are switching equivalent if one can be obtained from the other by switching. 
An equivalence class of graphs under switching is called a switching class. Switching was introduced by van Lint & Seidel (1966) and developed by Seidel; it has been called graph switching or Seidel switching, partly to distinguish it from switching of signed graphs. In the standard construction of a two-graph from a simple graph given above, two graphs will yield the same two-graph if and only if they are equivalent under switching, that is, if and only if they are in the same switching class. Let Γ be a two-graph on the set X. For any element x of X, define a graph with vertex set X having vertices y and z adjacent if and only if {x, y, z} is in Γ. In this graph, x will be an isolated vertex. This construction is reversible; given a simple graph G, adjoin a new element x to the set of vertices of G, retaining the same edge set, and apply the standard construction above. This two-graph is called the extension of G by x in design-theoretic language. For a regular two-graph and a given x, let Γx be the unique graph in the corresponding switching class that has x as an isolated vertex (such a graph always exists: take any graph in the class and switch the open neighborhood of x), taken without the vertex x. That is, the two-graph is the extension of Γx by x. In the first example above of a regular two-graph, Γx is a 5-cycle for any choice of x. To a graph G there corresponds a signed complete graph Σ on the same vertex set, whose edges are signed negative if in G and positive if not in G. Conversely, G is the subgraph of Σ that consists of all vertices and all negative edges. The two-graph of G can also be defined as the set of triples of vertices that support a negative triangle (a triangle with an odd number of negative edges) in Σ. Two signed complete graphs yield the same two-graph if and only if they are equivalent under switching. Switching of G and of Σ are related: switching the same vertices in both yields a graph H and its corresponding signed complete graph. 
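The standard construction and the defining even-quadruple condition can be checked mechanically. A Python sketch (the helper names are mine) that also confirms the claims about the six-vertex example:

```python
from itertools import combinations

def two_graph(vertices, edges):
    """Standard construction: the triples of vertices whose induced
    subgraph has an odd number of edges."""
    E = {frozenset(e) for e in edges}
    return {t for t in map(frozenset, combinations(vertices, 3))
            if sum(frozenset(p) in E for p in combinations(t, 2)) % 2 == 1}

def is_two_graph(vertices, triples):
    """Defining condition: every 4-subset contains an even number of triples."""
    return all(sum(frozenset(t) in triples for t in combinations(q, 3)) % 2 == 0
               for q in combinations(vertices, 4))

# the example two-graph on {1,...,6}
T = {frozenset(map(int, t)) for t in
     "123 124 135 146 156 236 245 256 345 346".split()}
V = range(1, 7)
assert is_two_graph(V, T)
# regular: each pair of distinct vertices lies in exactly two triples
assert all(sum({x, y} <= t for t in T) == 2 for x, y in combinations(V, 2))
# a 5-cycle on {1,...,5} plus the isolated vertex 6 reproduces T,
# consistent with the claim that Gamma_x is a 5-cycle
assert two_graph(V, [(1, 4), (4, 3), (3, 2), (2, 5), (5, 1)]) == T
```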
== Adjacency matrix == The adjacency matrix of a two-graph is the adjacency matrix of the corresponding signed complete graph; thus it is symmetric, is zero on the diagonal, and has entries ±1 off the diagonal. If G is the graph corresponding to the signed complete graph Σ, this matrix is called the (0, −1, 1)-adjacency matrix or Seidel adjacency matrix of G. The Seidel matrix has zero entries on the main diagonal, −1 entries for adjacent vertices, and +1 entries for non-adjacent vertices. If graphs G and H are in the same switching class, the multisets of eigenvalues of the two Seidel adjacency matrices of G and H coincide, since the matrices are similar. A two-graph on a set V is regular if and only if its adjacency matrix has just two distinct eigenvalues, say ρ1 > 0 > ρ2, where ρ1ρ2 = 1 − |V|. == Equiangular lines == Every two-graph is equivalent to a set of lines in some Euclidean space, each pair of which meet in the same angle. The set of lines constructed from a two-graph on n vertices is obtained as follows. Let −ρ be the smallest eigenvalue of the Seidel adjacency matrix A of the two-graph, and suppose that it has multiplicity n − d. Then the matrix ρI + A is positive semi-definite of rank d, and thus can be represented as the Gram matrix of the inner products of n vectors in Euclidean d-space. As these vectors have the same norm (namely, √ρ) and mutual inner products ±1, any pair of the n lines spanned by them meet in the same angle φ, where cos φ = 1/ρ. Conversely, any set of non-orthogonal equiangular lines in a Euclidean space can give rise to a two-graph (see equiangular lines for the construction). With the notation as above, the maximum cardinality n satisfies n ≤ d(ρ² − 1)/(ρ² − d), and the bound is achieved if and only if the two-graph is regular. 
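These eigenvalue facts can be verified for the six-vertex example in pure Python using the matrix identity S² = 5I: it forces every eigenvalue of S to be ±√5, and since S has zero trace both signs occur, giving exactly two distinct eigenvalues with product −5 = 1 − |V|. The helper names below are mine:

```python
def seidel_matrix(n, edges):
    """Seidel (0, -1, +1)-adjacency matrix: -1 for adjacent pairs,
    +1 for distinct non-adjacent pairs, 0 on the diagonal."""
    adj = {frozenset(e) for e in edges}
    return [[0 if i == j else (-1 if frozenset((i, j)) in adj else 1)
             for j in range(n)] for i in range(n)]

def switch(S, subset):
    """Seidel switching as a similarity transformation: conjugate by the
    diagonal matrix with -1 on the switched vertices (eigenvalues unchanged)."""
    d = [-1 if v in subset else 1 for v in range(len(S))]
    return [[d[i] * S[i][j] * d[j] for j in range(len(S))] for i in range(len(S))]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# a 5-cycle plus an isolated vertex lies in the switching class
# of the regular two-graph on 6 points
S = seidel_matrix(6, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
I5 = [[5 if i == j else 0 for j in range(6)] for i in range(6)]

# S^2 = 5I, so the Seidel eigenvalues are +sqrt(5) and -sqrt(5):
# two distinct values with product -5 = 1 - |V|, confirming regularity
assert matmul(S, S) == I5

# switching is a similarity, so the identity (and the spectrum) is preserved
assert matmul(switch(S, {0, 2}), switch(S, {0, 2})) == I5
```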
== Strongly regular graphs == The two-graphs on X consisting of all possible triples of X, and of no triples of X, are regular two-graphs and are considered to be trivial. For non-trivial two-graphs on the set X, the two-graph is regular if and only if for some x in X the graph Γx is a strongly regular graph with k = 2μ (that is, the degree of any vertex is twice the number of vertices adjacent to both vertices of any non-adjacent pair). If this condition holds for one x in X, it holds for all the elements of X. It follows that a non-trivial regular two-graph has an even number of points. If G is a regular graph whose two-graph extension is Γ having n points, then Γ is a regular two-graph if and only if G is a strongly regular graph with eigenvalues k, r and s satisfying n = 2(k − r) or n = 2(k − s). == Notes == == References == Brouwer, A. E.; Cohen, A. M.; Neumaier, A. (1989), Distance-Regular Graphs. Springer-Verlag, Berlin. Sections 1.5, 3.8, 7.6C. Cameron, P. J.; van Lint, J. H. (1991), Designs, Graphs, Codes and their Links, London Mathematical Society Student Texts 22, Cambridge University Press, ISBN 978-0-521-42385-4. Colbourn, Charles J.; Corneil, Derek G. (1980). "On deciding switching equivalence of graphs". Disc. Appl. Math. 2 (3): 181–184. doi:10.1016/0166-218X(80)90038-4. Colbourn, Charles J.; Dinitz, Jeffrey H. (2007), Handbook of Combinatorial Designs (2nd ed.), Boca Raton: Chapman & Hall/CRC, pp. 875–882, ISBN 1-58488-506-8. Godsil, Chris; Royle, Gordon (2001), Algebraic Graph Theory. Graduate Texts in Mathematics, Vol. 207. Springer-Verlag, New York. Chapter 11. Mallows, C. L.; Sloane, N. J. A. (1975). "Two-graphs, switching classes, and Euler graphs are equal in number". SIAM J. Appl. Math. 28 (4): 876–880. CiteSeerX 10.1.1.646.5464. JSTOR 2100368. Seidel, J. J. (1976), A survey of two-graphs. In: Colloquio Internazionale sulle Teorie Combinatorie (Proceedings, Rome, 1973), Vol. I, pp. 481–511. Atti dei Convegni Lincei, No. 17. 
Accademia Nazionale dei Lincei, Rome. Reprinted in Seidel (1991), pp. 146–176. Seidel, J. J. (1991), Geometry and Combinatorics: Selected Works of J.J. Seidel, ed. D. G. Corneil and R. Mathon. Academic Press, Boston, 1991. Taylor, D. E. (1977), Regular 2-graphs. Proceedings of the London Mathematical Society (3), vol. 35, pp. 257–274. van Lint, J. H.; Seidel, J. J. (1966), "Equilateral point sets in elliptic geometry", Indagationes Mathematicae, Proc. Koninkl. Ned. Akad. Wetenschap. Ser. A 69, 28: 335–348
Wikipedia:Tytus Babczyński#0
Tytus Babczyński (1832 – 1910) was a Polish mathematician and physicist. He graduated from the School of Fine Arts in Warsaw, then studied physics and mathematics. In 1872, he received a doctorate from the University of St. Petersburg. From 1857 to 1862 he was a professor of higher mathematics and mechanics at the School of Fine Arts in Warsaw, and subsequently at the University of Warsaw (1862–1887). == Early life == Babczyński was born in Warsaw on 4 January 1832. He graduated from a provincial school in 1847. He then entered the School of Fine Arts in Warsaw, graduating with a degree in architecture in 1850. That same year, he moved to St. Petersburg to study mathematical sciences. == Selected works == "On the phenomena of induction", master's dissertation, written between 1850 and 1854 "Course of Higher Algebra", 1864–65 and 1865–66 at the Main School in Warsaw "Differential and Integral Calculus", 1867–68 "Introduction to Higher Dynamics", doctoral dissertation, 1872 "On the multiplication of symmetric algebraic, rational integer functions", Zeit. Math. Physik 17 (1872), 147–158. == Awards and honors == His master's dissertation, "On the phenomena of induction", was awarded a gold medal at the University of St. Petersburg. == References ==
Wikipedia:Tõnu Möls#0
Tõnu Möls (12 June 1939 in Tartu – 1 December 2019 ) was an Estonian mathematician and biologist. In 1965 he described the moth Epirrhoe tartuensis. From 1994 until 2004, he was the president of Estonian Naturalists' Society. In 2001, he was awarded with Order of the White Star, V class. == References ==
Wikipedia:Udayadivākara#0
Udayadivākara (c. 1073 CE) was an Indian astronomer and mathematician who authored an influential and elaborate commentary, called Sundari, on Laghu-bhāskarīya of Bhāskara I. No personal details about Udayadivākara are known. Since the commentary Sundari takes the year 1073 CE as its epoch, the commentary was probably completed around that year. Sundari has not yet been published and is available only in manuscript form. Some of these manuscripts are preserved in the manuscript depositories in Thiruvananthapuram. According to K. V. Sarma, historian of the astronomy and mathematics of the Kerala school, Udayadivākara probably hailed from Kerala, India. == Historical significance of Sundari == Apart from the fact that Sundari is an elaborate commentary, it has some historical significance. It has quoted extensively from a now-lost work by the little-known mathematician Jayadeva. The quotations relate to the cakravala method for solving indeterminate integral equations of the form N x 2 + 1 = y 2 {\displaystyle Nx^{2}+1=y^{2}} . This shows that the method predates Bhāskara II, contrary to generally held belief. Another important reference to Jayadeva’s work is the solution of the indeterminate equation of the form N x 2 + C = y 2 {\displaystyle Nx^{2}+C=y^{2}} , C {\displaystyle C} being positive or negative. == A problem and its solution == Udayadivākara used his method for solving the equation N x 2 + C = y 2 {\displaystyle Nx^{2}+C=y^{2}} to obtain some particular solutions to a certain algebraic problem. The problem and Udayadivākara's solution to the problem are presented below only to illustrate the techniques used by Indian astronomers for solving algebraic equations. === Problem === Find positive integers x {\displaystyle x} and y {\displaystyle y} satisfying the following conditions: x + y = a perfect square , x − y = a perfect square , x y + 1 = a perfect square . 
{\displaystyle {\begin{aligned}x+y&={\text{a perfect square}},\\x-y&={\text{a perfect square}},\\xy+1&={\text{a perfect square}}.\end{aligned}}} === Solution === To solve the problem, Udayadivākara makes a series of apparently arbitrary assumptions, all aimed at reducing the problem to one of solving an indeterminate equation of the form N x 2 + C = y 2 {\displaystyle Nx^{2}+C=y^{2}} . Udayadivākara begins by assuming that x y + 1 = ( 2 y + 1 ) 2 {\displaystyle xy+1=(2y+1)^{2}} which can be written in the form x − y = 3 y + 4 {\displaystyle x-y=3y+4} . He next assumes that 3 y + 4 = ( 3 z + 2 ) 2 {\displaystyle 3y+4=(3z+2)^{2}} which, together with the earlier equation, yields x = 12 z 2 + 16 z + 4 , y = 3 z 2 + 4 z , x + y = 15 z 2 + 20 z + 4. {\displaystyle {\begin{aligned}x&=12z^{2}+16z+4,\\y&=3z^{2}+4z,\\x+y&=15z^{2}+20z+4.\end{aligned}}} Now, Udayadivākara puts 15 z 2 + 20 z + 4 = u 2 {\displaystyle 15z^{2}+20z+4=u^{2}} which is then transformed to the equation ( 30 z + 20 ) 2 = 60 u 2 + 160. {\displaystyle (30z+20)^{2}=60u^{2}+160.} This equation is of the form N x 2 + C = λ 2 {\displaystyle Nx^{2}+C=\lambda ^{2}} with N = 60 {\displaystyle N=60} , C = 160 {\displaystyle C=160} and λ = 30 z + 20 {\displaystyle \lambda =30z+20} . Using the method for solving the equation N x 2 + C = y 2 {\displaystyle Nx^{2}+C=y^{2}} , Udayadivākara finds the following solutions ( u = 2 , λ = 20 ) {\displaystyle (u=2,\lambda =20)} , ( u = 18 , λ = 140 ) {\displaystyle (u=18,\lambda =140)} and ( u = 8802 , λ = 68180 ) {\displaystyle (u=8802,\lambda =68180)} from which the values of x {\displaystyle x} and y {\displaystyle y} are obtained by back substitution. == See also == List of astronomers and mathematicians of the Kerala school == References ==
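As a modern check (a sketch, not part of the historical text), the solution corresponding to (u, λ) = (18, 140), i.e. z = 4, can be verified by back substitution:

```python
from math import isqrt

def is_square(m):
    # exact integer perfect-square test
    return m >= 0 and isqrt(m) ** 2 == m

z = 4                       # from 30z + 20 = 140
x = 12 * z**2 + 16 * z + 4  # = 260
y = 3 * z**2 + 4 * z        # = 64
assert 60 * 18**2 + 160 == 140**2  # the intermediate relation (30z+20)^2 = 60u^2 + 160
assert is_square(x + y)            # 324 = 18^2
assert is_square(x - y)            # 196 = 14^2
assert is_square(x * y + 1)        # 16641 = 129^2
```

The solution (u, λ) = (2, 20) gives z = 0 and hence y = 0, which fails the positivity requirement; (18, 140) is the smallest of the listed solutions yielding positive x and y.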
Wikipedia:Udo of Aachen#0
Udo of Aachen (c.1200–1270) is a fictional monk, a creation of British technical writer Ray Girvan, who introduced him in an April Fool's hoax article in 1999. According to the article, Udo was an illustrator and theologian who discovered the Mandelbrot set some 700 years before Benoit Mandelbrot. Udo's works were allegedly discovered by the also-fictional Bob Schipke, a Harvard mathematician, who supposedly saw a picture of the Mandelbrot set in an illumination for a 13th-century carol. Girvan also presented Udo as a mystic and poet whose verse, including the haunting O Fortuna, was set to music by Carl Orff in Carmina Burana. Schipke later uncovered writings in which Udo described how he had arrived at this kind of design while working on a method of determining whether one's soul would reach heaven. == Aspects of the hoax == The poetry of O Fortuna was actually the work of itinerant goliards, found in the German Benedictine monastery of Benediktbeuern Abbey. The hoax was lent an air of credibility because medieval monks did sometimes discover scientific and mathematical theories, only to have them hidden or shelved due to persecution, or simply ignored because publication before the invention of the printing press was difficult at best. Girvan adds to this suggestion by associating Udo with several more legitimate discoveries in which an author, considered ahead of his time, proposed what was then fringe science but is now an established mainstream theory. Another aspect of the deception was that it was very common for pre-20th century mathematicians to spend enormous amounts of time on hand calculations, such as tables of logarithms or trigonometric functions. Calculating the points of a Mandelbrot set is a comparable activity that would seem tedious today but would have been routine for people of the time. == References == Ray Girvan (1999-04-01). "The Mandelbrot Monk". Archived from the original on 2002-10-27. 
John Allen Paulos (1999-04-01). "Monk's "Startling" Math Discovery". Who's Counting, ABC News. == External links == Garry J. Tee (August 2001). "Mandelbrot Monk". Newsletter of the New Zealand Mathematical Society, number 82. Jeff "Hemos" Bates (2001-03-22). "Mandelbrot Set Originally Found In 13th Century (Early April's Fool)". Slashdot. John Armstrong (March 2008). "Hoax!". The Unapologetic Mathematician. Archived from the original on 2008-04-02.
Wikipedia:Uffe Haagerup#0
Uffe Valentin Haagerup (19 December 1949 – 5 July 2015) was a mathematician from Denmark. == Biography == Uffe Haagerup was born in Kolding, but grew up on the island of Funen, in the small town of Fåborg. Mathematics interested him from early on, encouraged and inspired by his older brother; in fourth grade he was already doing trigonometric and logarithmic calculations. He graduated from Svendborg Gymnasium in 1968, whereupon he relocated to Copenhagen and immediately began his studies of mathematics and physics at the University of Copenhagen, again inspired by his older brother, who studied the same subjects at the same university. Early university studies in Einstein's general theory of relativity and quantum mechanics sparked a lasting interest in the mathematical field of operator algebras, in particular von Neumann algebras and Tomita–Takesaki theory. In 1974 he received his Candidate's degree (cand. scient.) from the University of Copenhagen, and in 1981, at the age of 31, he was appointed at the University of Odense (now the University of Southern Denmark) as the youngest professor of mathematics (dr. scient.) in the country at the time. Summer schools at the university and, later, extended professional research stays abroad helped him build a diverse and lasting international network of colleagues. Haagerup accidentally drowned on 5 July 2015, aged 65, while swimming in the Baltic Sea close to Fåborg, where his family owned a cabin. == Work == Uffe Haagerup's mathematical focus was on the fields of operator algebras, group theory and geometry, but his publications have a broader scope, also involving free probability theory and random matrices. He participated in many international mathematical groups and networks from early on, working as contributor, organizer, lecturer and editor. 
Following his appointment as professor at Odense, Haagerup became acquainted with Vaughan Jones during research stays in Philadelphia and later at UCLA in Los Angeles. Jones inspired him to take up work on subfactor theory. Uffe Haagerup did extensive work with fellow mathematician Alain Connes on von Neumann algebras. His solution to the so-called "Champagne Problem" secured him the Samuel Friedman Award in April 1985, although it was first published in Acta Mathematica in 1987. Haagerup considered this his best work. An early contact and collaboration was established with Swedish colleagues at the Mittag-Leffler Institute and with the Norwegian group on operator algebras; Haagerup had a long history of collaboration with Erling Størmer, for example. In the mathematical literature, Uffe Haagerup is known for the Haagerup property, the Haagerup subfactor, the Asaeda–Haagerup subfactor and the Haagerup list. From 2000 to 2006 he served as editor-in-chief of the journal Acta Mathematica. He was a member of the Royal Danish Academy of Sciences and Letters and the Norwegian Academy of Science and Letters. He worked at the Department of Mathematics at the University of Copenhagen from 2010 to 2014, where he was involved in the Centre for Symmetry and Deformation (SYM), and was appointed professor of mathematics in 2015 at the University of Southern Denmark in Odense. == Prizes and honors == Uffe Haagerup received several awards and honours throughout his academic career. Amongst the most academically prestigious were the Danish Ole Rømer Medal, the international Humboldt Research Award and the European Latsis Prize. 1985. The Samuel Friedman Award (UCLA and Copenhagen) 1986. Invited speaker at ICM1986 (Berkeley) 1989. The Ole Rømer Medal (Copenhagen). The Ole Rømer Medal (est. 1944) is a Danish medal awarded by the University of Copenhagen and the municipality of Copenhagen, for outstanding research. 
It is considered amongst the most honourable scientific awards in the country, established in commemoration of Ole Rømer on his 300th anniversary. 2002. Plenary speaker at ICM2002 (Beijing) 2007. Distinguished lecturer at the Fields Institute of Mathematical Research (Toronto) 2008. The Humboldt Research Award (Münster) 2010–2014 European Research Council Advanced Grant 2012. Plenary speaker at International Congress on Mathematical Physics ICMP12 (Aalborg) 2012. 14th European Latsis Prize from the European Science Foundation, ESF (Brussels) 2013. Honorary Doctorate from East China Normal University, ECNU (Shanghai) == Works (selection) == Uffe Haagerup: Principal graphs and subfactors in the index range 4 < M:N < 3 + sqrt{2}; pp. 1–38 in Subfactors – Proceedings of the Taniguchi Symposium Katata (1994). == See also == Approximately finite-dimensional C*-algebra Khintchine inequality Planar algebra Quasitrace == References == == Sources == European Science Foundation (ESF): ESF awards 14th European Latsis Prize to Professor Uffe Haagerup for ground-breaking and important contributions to the theory of operator algebras 26 November 2012 Curriculum Vitae (Uffe Haagerup) University of Copenhagen Jacob Hjelmborg: Interview with Uffe Haagerup, Matilde (2002), DMF Aarhus University (in Danish)
Wikipedia:Ulam–Warburton automaton#0
The Ulam–Warburton cellular automaton (UWCA) is a 2-dimensional fractal pattern that grows on a regular grid of cells consisting of squares. Starting with one square initially ON and all others OFF, successive iterations are generated by turning ON all squares that share precisely one edge with an ON square. This is the von Neumann neighborhood. The automaton is named after the Polish-American mathematician and scientist Stanislaw Ulam and the Scottish engineer, inventor and amateur mathematician Mike Warburton. == Properties and relations == The UWCA is a 2D 5-neighbor outer totalistic cellular automaton using rule 686. The number of cells turned ON in each iteration is denoted u ( n ) , {\displaystyle u(n),} with an explicit formula: u ( 0 ) = 0 , u ( 1 ) = 1 , {\displaystyle u(0)=0,u(1)=1,} and for n ≥ 2 {\displaystyle n\geq 2} u ( n ) = 4 ⋅ 3 w t ( n − 1 ) − 1 {\displaystyle u(n)=4\cdot 3^{wt(n-1)-1}} where w t ( n ) {\displaystyle wt(n)} is the Hamming weight function which counts the number of 1's in the binary expansion of n {\displaystyle n} w t ( n ) = n − ∑ k = 1 ∞ ⌊ n 2 k ⌋ {\displaystyle wt(n)=n-\sum _{k=1}^{\infty }\left\lfloor {\frac {n}{2^{k}}}\right\rfloor } The sum is effectively finite, since its terms vanish for all k {\displaystyle k} with 2 k > n {\displaystyle 2^{k}>n} The total number of cells turned ON is denoted U ( n ) {\displaystyle U(n)} U ( n ) = ∑ i = ⁡ 0 n u ( i ) = 4 3 ∑ i = ⁡ 0 n − 1 3 w t ( i ) − 1 3 {\displaystyle U(n)=\sum _{i\mathop {=} 0}^{n}u(i)={\frac {4}{3}}\sum _{i\mathop {=} 0}^{n-1}3^{wt(i)}-{\frac {1}{3}}} === Table of wt(n), u(n) and U(n) === The table shows that different inputs to w t ( n ) {\displaystyle wt(n)} can lead to the same output. This many-to-one behaviour emerges from the simple rule of growth – a new cell is born if it shares exactly one edge with an existing ON cell – the process appears disorderly and is modeled by functions involving w t ( n ) {\displaystyle wt(n)} , but within the chaos there is regularity. 
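As an illustrative sketch of the formulas above, the growth rule can be simulated directly and the number of cells born each generation compared with the closed form u(n) = 4·3^(wt(n−1)−1):

```python
def wt(n):
    # Hamming weight: number of 1s in the binary expansion of n
    return bin(n).count("1")

def u_formula(n):
    if n <= 1:
        return n  # u(0) = 0, u(1) = 1
    return 4 * 3 ** (wt(n - 1) - 1)

def simulate(generations):
    # grow the automaton: a cell turns ON when it shares
    # exactly one edge with an ON cell (von Neumann neighborhood)
    on, counts = {(0, 0)}, [0, 1]
    for _ in range(2, generations + 1):
        seen = {}
        for x, y in on:
            for c in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if c not in on:
                    seen[c] = seen.get(c, 0) + 1
        born = {c for c, k in seen.items() if k == 1}
        on |= born
        counts.append(len(born))
    return counts

counts = simulate(16)
assert counts == [u_formula(n) for n in range(17)]
```

The totals U(n) then follow by partial sums; for example U(2) = 1 + 4 = 5 agrees with (4/3)(3^0 + 3^1) − 1/3.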
U ( n ) {\displaystyle U(n)} is OEIS sequence A147562 and u ( n ) {\displaystyle u(n)} is OEIS sequence A147582 === Counting cells with quadratics === For all integer sequences of the form n m = m ⋅ 2 k {\displaystyle n_{m}=m\cdot 2^{k}} where m ≥ 1 {\displaystyle m\geq 1} and k ≥ 0 {\displaystyle k\geq 0} Let a m = ∑ i = ⁡ 0 m − 1 3 w t ( i ) {\displaystyle a_{m}=\sum _{i\mathop {=} 0}^{m-1}3^{wt(i)}} ( a m {\displaystyle a_{m}} is OEIS sequence A130665) Then the total number of ON cells in the integer sequence n m {\displaystyle n_{m}} is given by U m ( n m ) = a m m 2 4 3 n m 2 − 1 3 {\displaystyle U_{m}(n_{m})={\frac {a_{m}}{m^{2}}}{\frac {4}{3}}n_{m}^{2}-{\frac {1}{3}}} Or in terms of k {\displaystyle k} we have U m ( k ) = a m 4 3 2 2 k − 1 3 {\displaystyle U_{m}(k)=a_{m}{\frac {4}{3}}2^{2k}-{\frac {1}{3}}} === Table of integer sequences nm and Um === == Upper and lower bounds == U ( n ) {\displaystyle U(n)} has fractal-like behavior with a sharp upper bound for n ≥ 1 {\displaystyle n\geq 1} given by U sub ( n ) = 4 3 n 2 − 1 3 {\displaystyle U_{\text{sub}}(n)={\frac {4}{3}}n^{2}-{\frac {1}{3}}} The upper bound only contacts U ( n ) {\displaystyle U(n)} at 'high-water' points when n = 2 k {\displaystyle n=2^{k}} . These are also the generations at which the UWCA based on squares, the Hex–UWCA based on hexagons and the Sierpinski triangle return to their base shape. === Limit superior and limit inferior === We have 0.9026116569... 
= lim inf n → ∞ U ( n ) n 2 < lim sup n → ∞ U ( n ) n 2 = 4 3 {\displaystyle 0.9026116569...=\liminf _{n\to \infty }{\frac {U(n)}{n^{2}}}<\limsup _{n\to \infty }{\frac {U(n)}{n^{2}}}={\frac {4}{3}}} The lower limit was obtained by Robert Price (OEIS sequence A261313); the computation took several weeks, and the value is believed to be twice the lower limit of T ( n ) n 2 {\displaystyle {\frac {T(n)}{n^{2}}}} where T ( n ) {\displaystyle T(n)} is the total number of toothpicks in the toothpick sequence up to generation n {\displaystyle n} == Relationship to == === Hexagonal UWCA === The Hexagonal-Ulam–Warburton cellular automaton (Hex-UWCA) is a 2-dimensional fractal pattern that grows on a regular grid of cells consisting of hexagons. The same growth rule as for the UWCA applies, and the pattern returns to a hexagon in generations n = 2 k {\displaystyle n=2^{k}} , when the first hexagon is considered as generation 1 {\displaystyle 1} . The UWCA has two reflection lines that pass through the corners of the initial cell, dividing the square into four quadrants; similarly, the Hex-UWCA has three reflection lines dividing the hexagon into six sections, and the growth rule follows the symmetries. Cells whose centers lie on a line of reflection symmetry are never born. The Hex-UWCA pattern can be explored interactively online. === Sierpinski triangle === The Sierpinski triangle appears in 13th-century Italian floor mosaics. Wacław Sierpiński described the triangle in 1915. If we consider the growth of the triangle, with each row corresponding to a generation and the top row, generation 1 {\displaystyle 1} , being a single triangle, then like the UWCA and the Hex-UWCA it returns to its starting shape in generations n = 2 k . {\displaystyle n=2^{k}.} === Toothpick sequence === The toothpick pattern is constructed by placing a single toothpick of unit length on a square grid, aligned with the vertical axis. 
At each subsequent stage, for every exposed toothpick end, place a perpendicular toothpick centred at that end. The resulting structure has a fractal-like appearance. The toothpick and UWCA structures are examples of cellular automata defined on a graph, and when considered as a subgraph of the infinite square grid the structure is a tree. The toothpick sequence returns to its base rotated ‘H’ shape in generations n = 2 k {\displaystyle n=2^{k}} where k ≥ 1 {\displaystyle k\geq 1} The toothpick sequence and various toothpick-like sequences can be explored interactively online. === Combinatorial game theory === A subtraction game called LIM, in which two players alternately modify three piles of tokens by taking an equal amount of tokens from two of the piles and adding the same amount to the third pile, has a set of winning positions that can be described using the Ulam–Warburton automaton. == History == The beginnings of automata go back to a conversation Ulam had with Stanislaw Mazur in a coffee house in Lwów, Poland, in 1929, when Ulam was twenty. Ulam worked with John von Neumann during the war years, when they became good friends and discussed cellular automata. Von Neumann used these ideas in his concept of a universal constructor and the digital computer. Ulam focussed on biological and ‘crystal like’ patterns, publishing a sketch of the growth of a square-based cell structure using a simple rule in 1962. Mike Warburton is an amateur mathematician working in probabilistic number theory who was educated at George Heriot's School in Edinburgh. His son's mathematics GCSE coursework involved investigating the growth of equilateral triangles or squares in the Euclidean plane with the rule that a new generation is born if and only if it is connected to the last by exactly one edge. That coursework concluded with a recursive formula for the number of ON cells born in each generation. 
Later, Warburton found the sharp upper bound formula which he wrote up as a note in the Open University’s M500 magazine in 2002. David Singmaster read the article, analysed the structure and named the object the Ulam-Warburton cellular automaton in his 2003 article. Since then it has given rise to numerous integer sequences. == References == == External links == Explore the UWCA, Hex-UWCA and related integer sequence animations Neil Sloane: Terrific Toothpick Patterns - Numberphile. (The UWCA starts at time 8:20)
Wikipedia:Ulf Grenander#0
Ulf Grenander (23 July 1923 – 12 May 2016) was a Swedish statistician and professor of applied mathematics at Brown University. His early research was in probability theory, stochastic processes, time series analysis, and statistical theory (particularly the order-constrained estimation of cumulative distribution functions using his sieve estimator). In later decades, Grenander contributed to computational statistics, image processing, pattern recognition, and artificial intelligence. He coined the term pattern theory to distinguish it from pattern recognition. == Honors == In 1966 Grenander was elected to the Royal Academy of Sciences of Sweden, and in 1996 to the US National Academy of Sciences. In 1998 he was an Invited Speaker of the International Congress of Mathematicians in Berlin. He received an honorary doctorate in 1994 from the University of Chicago, and in 2005 from the Royal Institute of Technology of Stockholm, Sweden. == Education == Grenander earned his undergraduate degree at Uppsala University. He earned his Ph.D. at Stockholm University in 1950 under the supervision of Harald Cramér. == Appointments == He was an associate professor at Stockholm University (1950–1951), the University of Chicago (1951–1952), and the University of California, Berkeley (1952–1953), then at Stockholm University (1953–1957), Brown University (1957–1958), and again Stockholm University (1958–1966), where in 1959 he succeeded Harald Cramér as the professor of actuarial science and mathematical statistics. From 1966 until his retirement, Grenander was L. Herbert Ballou University Professor at Brown University. From 1969 to 1974 he was also professor of applied mathematics at the Royal Institute of Technology. == Selected works == Grenander, Ulf (2012). A Calculus of Ideas: A Mathematical Study of Human Thought. World Scientific Publishing. ISBN 978-9814383189. Grenander, Ulf; Miller, Michael (2007). Pattern Theory: From Representation to Inference. Oxford University Press. 
ISBN 978-0199297061. Grenander, Ulf (1996). Elements of Pattern Theory. Johns Hopkins University Press. ISBN 978-0801851889. Grenander, Ulf (1994). General Pattern Theory. Oxford Science Publications. ISBN 978-0198536710. Grenander, Ulf (1982). Mathematical Experiments on the Computer. Academic Press. ISBN 9780123017505. Grenander, Ulf (1981). Abstract Inference. Wiley. ISBN 978-0471082675. Grenander, Ulf (1963). Probabilities on Algebraic Structures. Wiley. Grenander, Ulf (1959). Probability and Statistics: The Harald Cramér Volume. Wiley. Szegő, Gábor; Grenander, Ulf (1958). Toeplitz forms and their applications. Chelsea. Grenander, Ulf; Rosenblatt, M (1957). Statistical Analysis of Stationary Time Series. American Mathematical Society. ISBN 978-0-8284-0320-7. == Notes == == References == Mukhopadhyay, Nitis (2006). "A conversation with Ulf Grenander". Statistical Science. 21 (3): 404–426. arXiv:math/0701092. Bibcode:2007math......1092M. doi:10.1214/088342305000000313. ISSN 0883-4237. MR 2339138. S2CID 62516244. == External links == Homepage of Ulf Grenander at Brown University Pattern Theory: Grenander's Ideas and Examples – a video lecture by David Mumford Ulf Grenander at the Mathematics Genealogy Project
Wikipedia:Ulla Dinger#0
Ulla Margarete Dinger (born 1955) is a Swedish mathematician specializing in mathematical analysis. She was the first woman to earn a doctorate in mathematics at the University of Gothenburg. Dinger completed her doctorate at the University of Gothenburg in 1989. Her dissertation, On the ball problem and the Laguerre maximal operators, was jointly supervised by Christer Borell (of the Borell–Brascamp–Lieb inequality) and Peter Sjögren. She is a senior lecturer in mathematics at the Chalmers University of Technology, where she has taught real analysis and heads the program for the Preparatory Year in Natural Sciences. == References ==
Wikipedia:Ulla Pursiheimo#0
Ulla Irmeli Pursiheimo (born May 4, 1944) is a Finnish mathematician who became the first female mathematics professor in Finland. Her areas of interest in mathematics include mathematical optimization, control theory, search games, and later in her career mathematics education. Pursiheimo earned her doctorate from the University of Turku in 1971. Her dissertation, Optimization of Search With Constant Spreading Speed of Effort, was supervised by Olavi Hellman. She became a full professor of mathematics at the University of Turku in 1974, and retired to become a professor emerita in 1999. == References ==
Wikipedia:Ulrike Leopold-Wildburger#0
Ulrike Leopold-Wildburger (born 1949) is an Austrian mathematical economist, applied mathematician, and operations researcher. She is a professor emeritus at the University of Graz, where she headed the department of statistics and operations research, and is a former president of the Austrian Society of Operations Research. == Education and career == Leopold-Wildburger studied mathematics, philosophy, and logic at the University of Graz from 1967 to 1972, earning a master of science in 1971 and a master of philosophy in 1972. She completed a Ph.D. at the University of Graz in 1975, and earned a habilitation in operations research and mathematical economics there in 1982. She joined the teaching staff at the University of Graz as a lecturer in mathematical economics in 1972, and became an assistant professor in 1983. She became a professor of mathematics and informatics at the University of Klagenfurt in 1986, a professor of operations research at the University of Zurich in 1988, and a professor of mathematical economics at the University of Minnesota in 1991, before returning to Graz as a professor of statistics and operations research. She headed the department from 1996 to 1998, and was dean of studies in the faculty of economics and social sciences from 2001 to 2004. She returned to her position as head of department in 2010. She was president of the Austrian Society of Operations Research from 1993 to 1997. == Books == With Gerald A. Heuer, Leopold-Wildburger is the coauthor of the books Balanced Silverman Games on General Discrete Sets (1991) and Silverman’s Game: A Special Class of Two-Person Zero-Sum Games (1995), concerning Silverman's game. Leopold-Wildburger is a coauthor of The Knowledge Ahead Approach to Risk: Theory and Experimental Evidence (With Robin Pope and Johannes Leitner, 2007). 
She is also a coauthor of two German-language textbooks, Einführung in die Wirtschaftsmathematik (Introduction to Mathematical Economics, with Jochen Hülsmann, Wolf Gamerith, and Werner Steindl, 1998; 5th ed., 2010) and Verfassen und Vortragen: Wissenschaftliche Arbeiten und Vorträge leicht gemacht (with Jörg Schütze, 2002). == Recognition == Leopold-Wildburger was given the Austrian Cross of Honour for Science and Art, First Class in 2010. She became a member of the Academia Europaea in 2011. == References ==
Wikipedia:Ulrike Meier Yang#0
Ulrike Meier Yang (born 1959) is a German-American applied mathematician and computer scientist specializing in numerical algorithms for scientific computing. She directs the Mathematical Algorithms & Computing group in the Center for Applied Scientific Computing at the Lawrence Livermore National Laboratory, and is one of the developers of the Hypre library of parallel methods for solving linear systems. == Education and career == Meier Yang did her undergraduate studies in mathematics at Ruhr University Bochum in Germany, and worked in the Central Institute of Applied Mathematics of the Forschungszentrum Jülich in Germany from 1983 to 1985 and at the National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign from 1985 to 1995. She completed her doctorate through the University of Illinois in 1995 with the dissertation A Family of Preconditioned Iterative Solvers for Sparse Linear Systems, supervised by Kyle Gallivan. She joined the Lawrence Livermore National Laboratory research staff in 1998. On January 1, 2023, Yang took office as a member of the SIAM Board of Trustees. == Recognition == She is a SIAM Fellow in the 2024 class of fellows, elected for "pioneering work on parallel algebraic multigrid and software, and broad impact on high-performance computing". == References == == External links == Ulrike Meier Yang publications indexed by Google Scholar
Wikipedia:Ultradistribution#0
In functional analysis, an ultradistribution (also called an ultra-distribution) is a generalized function that extends the concept of a distribution by allowing test functions whose Fourier transforms have compact support. Ultradistributions are elements of the dual space 𝒵′, where 𝒵 is the space of test functions whose Fourier transforms belong to 𝒟, the space of infinitely differentiable functions with compact support. == See also == Distribution (mathematics) Generalized function == References == Vilela Mendes, Rui (2012). "Stochastic solutions of nonlinear PDE's and an extension of superprocesses". arXiv:1209.3263.
Wikipedia:Ultrahyperbolic equation#0
In the mathematical field of differential equations, the ultrahyperbolic equation is a partial differential equation (PDE) for an unknown scalar function u of 2n variables x1, ..., xn, y1, ..., yn of the form ∂ 2 u ∂ x 1 2 + ⋯ + ∂ 2 u ∂ x n 2 − ∂ 2 u ∂ y 1 2 − ⋯ − ∂ 2 u ∂ y n 2 = 0. {\displaystyle {\frac {\partial ^{2}u}{\partial x_{1}^{2}}}+\cdots +{\frac {\partial ^{2}u}{\partial x_{n}^{2}}}-{\frac {\partial ^{2}u}{\partial y_{1}^{2}}}-\cdots -{\frac {\partial ^{2}u}{\partial y_{n}^{2}}}=0.} More generally, if a is any quadratic form in 2n variables with signature (n, n), then any PDE whose principal part is a i j u x i x j {\displaystyle a_{ij}u_{x_{i}x_{j}}} is said to be ultrahyperbolic. Any such equation can be put in the form above by means of a change of variables. The ultrahyperbolic equation has been studied from a number of viewpoints. On the one hand, it resembles the classical wave equation. This has led to a number of developments concerning its characteristics, one of which is due to Fritz John: the John equation. In 2008, Walter Craig and Steven Weinstein proved that under a nonlocal constraint, the initial value problem is well-posed for initial data given on a codimension-one hypersurface. And later, in 2022, a research team at the University of Michigan extended the conditions for solving ultrahyperbolic wave equations to complex-time (kime), demonstrated space-kime dynamics, and showed data science applications using tensor-based linear modeling of functional magnetic resonance imaging data. The equation has also been studied from the point of view of symmetric spaces, and elliptic differential operators. In particular, the ultrahyperbolic equation satisfies an analog of the mean value theorem for harmonic functions. == Notes == == References == Richard Courant; David Hilbert (1962). Methods of Mathematical Physics, Vol. 2. Wiley-Interscience. pp. 744–752. ISBN 978-0-471-50439-9. 
Lars Hörmander (20 August 2001). "Asgeirsson's Mean Value Theorem and Related Identities". Journal of Functional Analysis. 184 (2): 377–401. doi:10.1006/jfan.2001.3743. Lars Hörmander (1990). The Analysis of Linear Partial Differential Operators I. Springer-Verlag. Theorem 7.3.4. ISBN 978-3-540-52343-7. Sigurdur Helgason (2000). Groups and Geometric Analysis. American Mathematical Society. pp. 319–323. ISBN 978-0-8218-2673-7. Fritz John (1938). "The Ultrahyperbolic Differential Equation with Four Independent Variables". Duke Math. J. 4 (2): 300–322. doi:10.1215/S0012-7094-38-00423-5.
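The signature (n, n) structure described above means that any sufficiently smooth function of x1 + y1 is a solution, since its x- and y- second derivatives cancel. A quick symbolic check for n = 2, sketched here with sympy:

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')

# A candidate solution for n = 2: any function of x1 + y1 works,
# because u_{x1 x1} and u_{y1 y1} are equal and enter with opposite signs.
u = sp.sin(x1 + y1)

lhs = (sp.diff(u, x1, 2) + sp.diff(u, x2, 2)
       - sp.diff(u, y1, 2) - sp.diff(u, y2, 2))

print(sp.simplify(lhs))  # 0
```

The same check works for any function f(x1 + y1), or more generally f(a·x + b·y) whenever the quadratic form gives a(a)ᵀ = b(b)ᵀ under the signature.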
Wikipedia:Ultrapolynomial#0
In mathematics, an ultrapolynomial is a power series in several variables whose coefficients are bounded in some specific sense. == Definition == Let d ∈ N {\displaystyle d\in \mathbb {N} } and let K {\displaystyle K} be a field (typically R {\displaystyle \mathbb {R} } or C {\displaystyle \mathbb {C} } ) equipped with a norm (typically the absolute value). Then a function P : K d → K {\displaystyle P:K^{d}\rightarrow K} of the form P ( x ) = ∑ α ∈ N d c α x α {\displaystyle P(x)=\sum _{\alpha \in \mathbb {N} ^{d}}c_{\alpha }x^{\alpha }} is called an ultrapolynomial of class { M p } {\displaystyle \left\{M_{p}\right\}} (resp. of class ( M p ) {\displaystyle \left(M_{p}\right)} ), if the coefficients c α {\displaystyle c_{\alpha }} satisfy | c α | ≤ C L | α | / M α {\displaystyle \left|c_{\alpha }\right|\leq CL^{\left|\alpha \right|}/M_{\alpha }} for all α ∈ N d {\displaystyle \alpha \in \mathbb {N} ^{d}} , for some L > 0 {\displaystyle L>0} and C > 0 {\displaystyle C>0} (resp. for every L > 0 {\displaystyle L>0} and some C ( L ) > 0 {\displaystyle C(L)>0} ). == References ==
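As a concrete illustration (not from the article itself): in d = 1, taking the weight sequence M_p = p! and the exponential series with coefficients c_α = 1/α!, the bound |c_α| ≤ C L^|α| / M_α holds with C = L = 1, so exp is an ultrapolynomial of that class. A small numerical check:

```python
from math import factorial

# Coefficients of exp(x): c_alpha = 1/alpha!; weight sequence M_alpha = alpha!.
# Check |c_alpha| <= C * L**alpha / M_alpha with C = L = 1 for alpha = 0..50.
C, L = 1.0, 1.0
ok = all(
    abs(1.0 / factorial(a)) <= C * L**a / factorial(a)
    for a in range(51)
)
print(ok)  # True
```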
Wikipedia:Ultraviolet fixed point#0
In a quantum field theory, one may calculate an effective or running coupling constant that defines the coupling of the theory measured at a given momentum scale. One example of such a coupling constant is the electric charge. In approximate calculations in several quantum field theories, notably quantum electrodynamics and theories of the Higgs particle, the running coupling appears to become infinite at a finite momentum scale. This is sometimes called the Landau pole problem. It is not known whether the appearance of these inconsistencies is an artifact of the approximation, or a real fundamental problem in the theory. However, the problem can be avoided if an ultraviolet or UV fixed point appears in the theory. A quantum field theory has a UV fixed point if its renormalization group flow approaches a fixed point in the ultraviolet (i.e. short length scale/large energy) limit. This is related to zeroes of the beta-function appearing in the Callan–Symanzik equation. The large length scale/small energy limit counterpart is the infrared fixed point. == Specific cases and details == Among other things, it means that a theory possessing a UV fixed point may not be an effective field theory, because it is well-defined at arbitrarily small distance scales. At the UV fixed point itself, the theory can behave as a conformal field theory. The converse statement, that any QFT which is valid at all distance scales (i.e. isn't an effective field theory) has a UV fixed point is false. See, for example, cascading gauge theory. Noncommutative quantum field theories have a UV cutoff even though they are not effective field theories. Physicists distinguish between trivial and nontrivial fixed points. If a UV fixed point is trivial (generally known as Gaussian fixed point), the theory is said to be asymptotically free. On the other hand, a scenario, where a non-Gaussian (i.e. nontrivial) fixed point is approached in the UV limit, is referred to as asymptotic safety. 
Asymptotically safe theories may be well defined at all scales despite being nonrenormalizable in the perturbative sense (according to the classical scaling dimensions). == Asymptotic safety scenario in quantum gravity == Steven Weinberg has proposed that the problematic UV divergences appearing in quantum theories of gravity may be cured by means of a nontrivial UV fixed point. Such an asymptotically safe theory is renormalizable in a nonperturbative sense, and, due to the fixed point, physical quantities are free from divergences. As yet, a general proof for the existence of the fixed point is still lacking, but there is mounting evidence for this scenario. == See also == Ultraviolet divergence Landau pole Quantum triviality Asymptotic safety in quantum gravity Asymptotic freedom == References ==
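The Landau-pole behaviour mentioned above can be illustrated with the standard one-loop running of the QED coupling, α(μ) = α(μ₀) / (1 − (2α(μ₀)/3π) ln(μ/μ₀)): the denominator vanishes at a finite scale, where the approximate running coupling diverges. A rough numerical sketch (one-loop with a single fermion only; full QED includes higher orders and more charged particles):

```python
import math

alpha0 = 1 / 137.036          # coupling at the reference scale mu0
b = 2 * alpha0 / (3 * math.pi)

def alpha(log_mu_ratio):
    """One-loop running coupling as a function of ln(mu/mu0)."""
    return alpha0 / (1 - b * log_mu_ratio)

# The one-loop Landau pole sits where the denominator vanishes:
log_pole = 1 / b              # an astronomically large scale, ln(mu/mu0) ~ 646
print(f"ln(mu_pole/mu0) = {log_pole:.1f}")

for t in (0.0, 100.0, 0.99 * log_pole):
    print(f"ln(mu/mu0) = {t:8.1f}  ->  alpha = {alpha(t):.6f}")
```

The coupling grows slowly and then blows up as the pole is approached, which is the behaviour a nontrivial UV fixed point would tame.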
Wikipedia:Ulugh Beg#0
Mīrzā Muhammad Tarāghāy bin Shāhrukh (Chagatay: میرزا محمد تراغای بن شاهرخ; Persian: میرزا محمد طارق بن شاهرخ), better known as Ulugh Beg (Persian: الغ‌بیک; 22 March 1394 – 27 October 1449), was a Timurid sultan, as well as an astronomer and mathematician. Ulugh Beg was notable for his work in astronomy-related mathematics, such as trigonometry and spherical geometry, as well as his general interests in the arts and intellectual activities. It is thought that he spoke five languages: Arabic, Persian, Chaghatai Turkic, Mongolian, and a small amount of Chinese. During his rule (first as a governor, then outright) the Timurid Empire achieved the cultural peak of the Timurid Renaissance through his attention and patronage. Samarkand was captured and given to Ulugh Beg by his father Shah Rukh. He built the great Ulugh Beg Observatory in Samarkand between 1424 and 1429. It was considered by scholars to have been one of the finest observatories in the Islamic world at the time and the largest in Central Asia. Ulugh Beg was subsequently recognized as the most important observational astronomer from the 15th century by many scholars. He also built the Ulugh Beg Madrasah (1417–1420) in Samarkand and Bukhara, transforming the cities into cultural centers of learning in Central Asia. However, Ulugh Beg's scientific expertise was not matched by his skills in governance. During his short reign, he failed to establish his power and authority. As a result, other rulers, including his family, took advantage of his lack of control, and he was subsequently overthrown and assassinated. == Early life == He was a grandson of the great conqueror and king, Timur (Tamerlane) (1336–1405), and the oldest son of Shah Rukh, both of whom came from the Turkicized Mongol Barlas tribe of Transoxiana (now Uzbekistan). His mother was a noblewoman named Gawhar Shad, daughter of a member of the representative Turkic tribal aristocracy, Ghiyasuddin Tarkhan. 
Ulugh Beg was born in Sultaniyeh during his grandfather's invasion of Persia. He was given the name Mīrzā Muhammad Tāraghay. Ulugh Beg, the name he was most commonly known by, was not truly a personal name, but rather a moniker, which can be loosely translated as "Great Ruler" (compare modern Turkish ulu, "great", and bey, "chief") and is the Turkic equivalent of Timur's Perso-Arabic title Amīr-e Kabīr. As a child he wandered through a substantial part of the Middle East and India as his grandfather expanded his conquests in those areas. After Timur's death, Shah Rukh moved the empire's capital to Herat (in modern Afghanistan). Sixteen-year-old Ulugh Beg subsequently became the governor of the former capital of Samarkand in 1409. In 1411, he was named the sovereign ruler of the whole of Mavarannahr. == Science == The teenage ruler set out to turn the city into an intellectual center for the empire. Between 1417 and 1420, he built a madrasa ("university" or "institute") on Registan Square in Samarkand (currently in Uzbekistan), and he invited numerous Islamic astronomers and mathematicians to study there. The madrasa building still survives. Ulugh Beg's most famous pupil in astronomy was Ali Qushchi (died in 1474). Qadi Zada al-Rumi was the most notable teacher at Ulugh Beg's madrasa and Jamshid al-Kashi, an astronomer, later came to join the staff. === Astronomy === Astronomy piqued Ulugh Beg's interest when he visited the Maragheh Observatory at a young age. This observatory, located in Maragheh, Iran, is where the well-known astronomer Nasir al-Din al-Tusi practised. In 1428, Ulugh Beg built an enormous observatory, similar to Tycho Brahe's later Uraniborg as well as Taqi al-Din's observatory in Constantinople. Lacking telescopes to work with, he increased his accuracy by increasing the length of his sextant; the so-called Fakhri sextant had a radius of about 36 meters (118 feet) and the optical separability of 180" (seconds of arc). 
The Fakhri sextant was the largest instrument at the observatory in Samarkand (an image of the sextant is on the side of this article). There were many other astronomical instruments located at the observatory, but the Fakhri sextant is the most well-known instrument there. The purpose of the Fakhri sextant was to measure the transit altitudes of the stars: the maximum altitude of a star above the horizon. It was only possible to use this device to measure the declination of celestial objects. The image, which can be found in this article, shows the remaining portion of the instrument, which consists of the underground, lower portion of the instrument that was not destroyed. The observatory built by Ulugh Beg was the most renowned observatory in the Islamic world. With the instruments located in the observatory in Samarkand, Ulugh Beg composed a star catalogue consisting of 1018 stars, which is eleven fewer stars than are present in the star catalogue of Ptolemy. Ulugh Beg utilized dimensions from al-Sufi and based his star catalogue on a new analysis, independent of the data used by Ptolemy. Throughout his life as an astronomer, Ulugh Beg came to realize that there were multiple mistakes in the work and subsequent data of Ptolemy that had been in use for many years. Using these observations, he compiled the 1437 Zij-i-Sultani of 994 stars, generally considered the greatest star catalogue between those of Ptolemy and Tycho Brahe, a work that stands alongside Abd al-Rahman al-Sufi's Book of Fixed Stars. The serious errors which he found in previous Arabian star catalogues (many of which had simply updated Ptolemy's work, adding the effect of precession to the longitudes) induced him to redetermine the positions of 992 fixed stars, to which he added 27 stars from Abd al-Rahman al-Sufi's catalogue Book of Fixed Stars from the year 964, which were too far south for observation from Samarkand.
This catalogue, one of the most original of the Middle Ages, was first edited by Thomas Hyde at Oxford in 1665 under the title Jadāvil-i Mavāzi' S̱avābit, sive, Tabulae Long. ac Lat. Stellarum Fixarum ex Observatione Ulugh Beighi and reprinted in 1767 by G. Sharpe. More recent editions are those by Francis Baily in 1843 in Vol. XIII of the Memoirs of the Royal Astronomical Society, and by Edward Ball Knobel in Ulugh Beg's Catalogue of Stars, Revised from all Persian Manuscripts Existing in Great Britain, with a Vocabulary of Persian and Arabic Words (1917). In 1437, Ulugh Beg determined the length of the sidereal year as 365.2570370...d = 365d 6h 10m 8s (an error of +58 seconds). In his measurements over the course of many years he used a 50 m high gnomon. This value was improved by 28 seconds in 1525 by Nicolaus Copernicus, who appealed to the estimation of Thabit ibn Qurra (826–901), which had an error of +2 seconds. However, Ulugh Beg later measured another more precise value of the tropical year as 365d 5h 49m 15s, which has an error of +25 seconds, making it more accurate than Copernicus's estimate which had an error of +30 seconds. Ulugh Beg also determined the Earth's axial tilt as 23°30'17" in the sexagesimal system of degrees, minutes and seconds of arc, which in decimal notation converts to 23.5047°. === Mathematics === In mathematics, Ulugh Beg wrote accurate trigonometric tables of sine and tangent values correct to at least eight decimal places. == Foreign relations == Once Ulugh Beg became governor of Samarqand, he fostered diplomatic relations with the Yongle emperor of the Ming dynasty. In 1416, Ming envoys Chen Cheng and Lu An presented silk and silver stuffs to Ulugh Beg on behalf of the Yongle emperor. In 1419, the Timurid sent his own emissaries, Sultan-Shah and Muhammad Bakhshi, to the Ming court.
Ulugh Beg's emissaries came across Ghiyāth al-dīn Naqqāsh and other envoys representing Shah Rukh, Prince Baysunghur, and other Timurid authorities in Beijing; however, they stayed at separate hostelries. Ghiyāth al-dīn Naqqāsh even saw the Yongle emperor riding a black horse with white feet which had been gifted by Ulugh Beg. Ulugh Beg led two major campaigns against his neighbours. The first one took place in 1425 and was directed against Moghulistan and its ruler Shir Muhammad. He was victorious but the impact of the campaign was limited and Shir Muhammad remained in power. A year later, Baraq, Khan of the Golden Horde and former protégé of Ulugh Beg, laid claim to Timurid possessions around the Syr Darya, including the town of Sighnaq. In response to that, in 1427 Ulugh Beg, accompanied by his brother Muhammad Juki, marched against Baraq. On a hill close to Sighnaq the Timurid army was surprised by a smaller enemy force and was soundly defeated. The humiliation suffered at the hands of Baraq was to have a lasting effect on Ulugh Beg. His campaign against the Golden Horde would be the last he would undertake against a neighbouring power. The armies he later sent against them would not win any resounding victories and by the end of his reign his territories would be raided by his northern and easterly foes. In 1439, the Zhengtong emperor ordered an artist to produce a painting of a black horse with white feet and a white forehead that had been sent by Ulugh Beg. Six years later, the Ming emperor sent a letter to Ulugh Beg in order to express his gratitude for all the "tribute" from Samarqand. The emperor sent "vessels made of gold and jade, a spear with a dragon's head, a fine horse with saddle, and variegated gold-embroidered silk stuffs" to Ulugh Beg, as well as silk stuffs and garments for the Timurid prince's family. == War of succession and death == In 1447, upon learning of the death of his father Shah Rukh, Ulugh Beg went to Balkh.
Here, he heard that Ala al-Dawla, the son of his late brother Baysunghur, had claimed the rulership of the Timurid Empire in Herat. Consequently, Ulugh Beg marched against Ala al-Dawla and met him in battle at Murghab. He defeated his nephew and advanced toward Herat, massacring its people in 1448. However, Abul-Qasim Babur Mirza, Ala al-Dawla's brother, came to the latter's aid and defeated Ulugh Beg. Ulugh Beg retreated to Balkh, where he found that its governor, his oldest son Abdal-Latif Mirza, had rebelled against him. Another civil war ensued. Abdal-Latif recruited troops to meet his father's army on the banks of the Amu Darya river. However, Ulugh Beg was forced to retreat to Samarkand before any fighting took place, having heard news of turmoil in the city. Abdal-Latif soon reached Samarkand and Ulugh Beg involuntarily surrendered to his son. Abdal-Latif released his father from custody, allowing him to make a pilgrimage to Mecca. However, he ensured Ulugh Beg never reached his destination, having him, as well as his brother Abdal-Aziz, assassinated in 1449. Eventually, Ulugh Beg's reputation was rehabilitated by his nephew, Abdallah Mirza (1450–1451), who placed his remains at Timur's feet in the Gur-e-Amir in Samarkand, where they were found by Soviet archaeologists in 1941.
== Marriages == Ulugh Beg had thirteen wives: Aka Begi Begum, daughter of Muhammad Sultan Mirza bin Jahangir Mirza and Khan Sultan Khanika, mother of Habiba Sultan known as Khanzada Begum and another Khanzada Begum; Sultan Badi al-mulk Begum, daughter of Khalil Sultan bin Miran Shah and Shad Malik Agha; Aqi Sultan Khanika, daughter of Sultan Mahmud Khan Ogeday; Husn Nigar Khanika, daughter of Shams-i-Jahan Khan Chaghatay; Shukr Bīka Khanika, daughter of Darwīsh Khan of the Golden Horde; Rukaiya Sultan Agha, an Arlat lady, and mother of Abdal-Latif Mirza, Ak Bash Begum and Sultan Bakht Begum; Mihr Sultan Agha, daughter of Tukal bin Sarbuka; Sa'adat Bakht Agha, daughter of Bayan Kukaltash, mother of Qutlugh Turkhan Agha; Daulat Sultan Agha, daughter of Khawand Sa'id; Bakhti Bi Agha, daughter of Aka Sufi Uzbek; Daulat Bakht Agha, daughter of Sheikh Muhammad Barlas; Sultanim Agha, mother of Abdul Hamid Mirza and Abdul Jabrar Mirza; Sultan Malik Agha, daughter of Nasir-al-Din, mother of Ubaydullah Mirza, Abdullah Mirza and another Abdullah Mirza. == Legacy == The crater Ulugh Beigh on the Moon was named after him by the German astronomer Johann Heinrich von Mädler on his 1830 map of the Moon. 2439 Ulugbek, a main-belt asteroid which was discovered on 21 August 1977 by N. Chernykh at Nauchnyj, was named after him. The dinosaur Ulughbegsaurus was named after him in 2021. == Exhumation == Soviet anthropologist Mikhail M. Gerasimov reconstructed the face of Ulugh Beg. Like his grandfather Tamerlane, Ulugh Beg was close to the Mongoloid type with slightly Europoid features. His father Shah Rukh had predominantly Caucasoid features, with no obvious Mongoloid feature.
== See also == Aryabhata, ancient Indian astronomer Ulugh Beg Observatory and Museum Ulugh Beg Madrasa in Samarkand Ulugh Beg Madrasa in Bukhara == Notes == == References == == Bibliography == O'Connor, John J.; Robertson, Edmund F., "Ulugh Beg", MacTutor History of Mathematics Archive, University of St Andrews 1839. L. P. E. A. Sedillot (1808–1875). Tables astronomiques d’Oloug Beg, commentees et publiees avec le texte en regard, Tome I, 1 fascicule, Paris. A very rare work, but referenced in the Bibliographie generale de l’astronomie jusqu’en 1880, by J. 1847. L. P. E. A. Sedillot (1808–1875). Prolegomenes des Tables astronomiques d’Oloug Beg, publiees avec Notes et Variantes, et precedes d’une Introduction. Paris: F. Didot. 1853. L. P. E. A. Sedillot (1808–1875). Prolegomenes des Tables astronomiques d’Oloug Beg, traduction et commentaire. Paris. Le Prince Savant annexe les étoiles, Frédérique Beaupertuis-Bressand, in Samarcande 1400–1500, La cité-oasis de Tamerlan : coeur d'un Empire et d'une Renaissance, book directed by Vincent Fourniau, éditions Autrement, 1995, ISSN 1157-4488. L'âge d'or de l'astronomie ottomane, Antoine Gautier, in L'Astronomie, (Monthly magazine created by Camille Flammarion in 1882), December 2005, volume 119. L'observatoire du prince Ulugh Beg, Antoine Gautier, in L'Astronomie, (Monthly magazine created by Camille Flammarion in 1882), October 2008, volume 122. Le recueil de calendriers du prince timouride Ulug Beg (1394–1449), Antoine Gautier, in Le Bulletin, n° spécial Les calendriers, Institut National des Langues et Civilisations Orientales, June 2007, pp. 117–123. Jean-Marie Thiébaud, Personnages marquants d'Asie centrale, du Turkestan et de l'Ouzbékistan, Paris, éditions L'Harmattan, 2004. ISBN 2-7475-7017-7. == Further reading == Dalen, Benno van (2007). "Ulugh Beg: Muḥammad Ṭaraghāy ibn Shāhrukh ibn Tīmūr". In Thomas Hockey; et al. (eds.). The Biographical Encyclopedia of Astronomers. New York: Springer. pp. 1157–9.
ISBN 978-0-387-31022-0. (PDF version) == External links == Chisholm, Hugh, ed. (1911). "Ulugh Beg". Encyclopædia Britannica (11th ed.). Cambridge University Press. Ulugh Beg: a short biography, March 18, 2025. The observatory and memorial museum of Ulugbek. Bukhara Ulugbek Madrasah. Registan, the heart of ancient Samarkand. Biography by School of Mathematics and Statistics, University of St Andrews, Scotland. Legacy of Ulug Beg Archived May 19, 2019, at the Wayback Machine. BBC's History of the World in 100 Objects, jade dragon cup, discusses its patronage by Ulugh Beg
Wikipedia:Umbral calculus#0
The term umbral calculus has two related but distinct meanings. In mathematics, before the 1970s, umbral calculus referred to the surprising similarity between seemingly unrelated polynomial equations and certain shadowy techniques used to prove them. These techniques were introduced in 1861 by John Blissard and are sometimes called Blissard's symbolic method. They are often attributed to Édouard Lucas (or James Joseph Sylvester), who used the technique extensively. The use of shadowy techniques was put on a solid mathematical footing starting in the 1970s, and the resulting mathematical theory is also referred to as "umbral calculus". == History == In the 1930s and 1940s, Eric Temple Bell attempted to set the umbral calculus on a rigorous footing; however, his attempt to make this kind of argument logically rigorous was unsuccessful. The combinatorialist John Riordan, in his book Combinatorial Identities published in the 1960s, used techniques of this sort extensively. In the 1970s, Steven Roman, Gian-Carlo Rota, and others developed the umbral calculus by means of linear functionals on spaces of polynomials. Currently, umbral calculus refers to the study of Sheffer sequences, including polynomial sequences of binomial type and Appell sequences, but may also encompass the systematic correspondence techniques of the calculus of finite differences. == 19th-century umbral calculus == The method is a notational procedure used for deriving identities involving indexed sequences of numbers by pretending that the indices are exponents. Construed literally, it is absurd, and yet it is successful: identities derived via the umbral calculus can also be properly derived by more complicated methods that can be taken literally without logical difficulty. An example involves the Bernoulli polynomials.
Consider, for example, the ordinary binomial expansion (which contains a binomial coefficient): ( y + x ) n = ∑ k = 0 n ( n k ) y n − k x k {\displaystyle (y+x)^{n}=\sum _{k=0}^{n}{n \choose k}y^{n-k}x^{k}} and the remarkably similar-looking relation on the Bernoulli polynomials: B n ( y + x ) = ∑ k = 0 n ( n k ) B n − k ( y ) x k . {\displaystyle B_{n}(y+x)=\sum _{k=0}^{n}{n \choose k}B_{n-k}(y)x^{k}.} Compare also the ordinary derivative d d x x n = n x n − 1 {\displaystyle {\frac {d}{dx}}x^{n}=nx^{n-1}} to a very similar-looking relation on the Bernoulli polynomials: d d x B n ( x ) = n B n − 1 ( x ) . {\displaystyle {\frac {d}{dx}}B_{n}(x)=nB_{n-1}(x).} These similarities allow one to construct umbral proofs, which on the surface cannot be correct, but seem to work anyway. Thus, for example, by pretending that the subscript n − k is an exponent: B n ( x ) = ∑ k = 0 n ( n k ) b n − k x k = ( b + x ) n , {\displaystyle B_{n}(x)=\sum _{k=0}^{n}{n \choose k}b^{n-k}x^{k}=(b+x)^{n},} and then differentiating, one gets the desired result: B n ′ ( x ) = n ( b + x ) n − 1 = n B n − 1 ( x ) . {\displaystyle B_{n}'(x)=n(b+x)^{n-1}=nB_{n-1}(x).} In the above, the variable b is an "umbra" (Latin for shadow). See also Faulhaber's formula. == Umbral Taylor series == In differential calculus, the Taylor series of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. That is, a real or complex-valued function f (x) that is analytic at a {\displaystyle a} can be written as: f ( x ) = ∑ n = 0 ∞ f ( n ) ( a ) n ! ( x − a ) n {\displaystyle f(x)=\sum _{n=0}^{\infty }{\frac {f^{(n)}(a)}{n!}}(x-a)^{n}} Similar relationships were also observed in the theory of finite differences. The umbral version of the Taylor series is given by a similar expression involving the k-th forward differences Δ k [ f ] {\displaystyle \Delta ^{k}[f]} of a polynomial function f, f ( x ) = ∑ k = 0 ∞ Δ k [ f ] ( a ) k ! 
( x − a ) k {\displaystyle f(x)=\sum _{k=0}^{\infty }{\frac {\Delta ^{k}[f](a)}{k!}}(x-a)_{k}} where ( x − a ) k = ( x − a ) ( x − a − 1 ) ( x − a − 2 ) ⋯ ( x − a − k + 1 ) {\displaystyle (x-a)_{k}=(x-a)(x-a-1)(x-a-2)\cdots (x-a-k+1)} is the Pochhammer symbol used here for the falling sequential product. A similar relationship holds for the backward differences and rising factorial. This series is also known as the Newton series or Newton's forward difference expansion. The analogy to Taylor's expansion is utilized in the calculus of finite differences. == Modern umbral calculus == Another combinatorialist, Gian-Carlo Rota, pointed out that the mystery vanishes if one considers the linear functional L on polynomials in z defined by L ( z n ) = B n ( 0 ) = B n . {\displaystyle L(z^{n})=B_{n}(0)=B_{n}.} Then, using the definition of the Bernoulli polynomials and the definition and linearity of L, one can write B n ( x ) = ∑ k = 0 n ( n k ) B n − k x k = ∑ k = 0 n ( n k ) L ( z n − k ) x k = L ( ∑ k = 0 n ( n k ) z n − k x k ) = L ( ( z + x ) n ) {\displaystyle {\begin{aligned}B_{n}(x)&=\sum _{k=0}^{n}{n \choose k}B_{n-k}x^{k}\\&=\sum _{k=0}^{n}{n \choose k}L\left(z^{n-k}\right)x^{k}\\&=L\left(\sum _{k=0}^{n}{n \choose k}z^{n-k}x^{k}\right)\\&=L\left((z+x)^{n}\right)\end{aligned}}} This enables one to replace occurrences of B n ( x ) {\displaystyle B_{n}(x)} by L ( ( z + x ) n ) {\displaystyle L((z+x)^{n})} , that is, move the n from a subscript to a superscript (the key operation of umbral calculus). For instance, we can now prove that: ∑ k = 0 n ( n k ) B n − k ( y ) x k = ∑ k = 0 n ( n k ) L ( ( z + y ) n − k ) x k = L ( ∑ k = 0 n ( n k ) ( z + y ) n − k x k ) = L ( ( z + x + y ) n ) = B n ( x + y ) . 
{\displaystyle {\begin{aligned}\sum _{k=0}^{n}{n \choose k}B_{n-k}(y)x^{k}&=\sum _{k=0}^{n}{n \choose k}L\left((z+y)^{n-k}\right)x^{k}\\&=L\left(\sum _{k=0}^{n}{n \choose k}(z+y)^{n-k}x^{k}\right)\\&=L\left((z+x+y)^{n}\right)\\&=B_{n}(x+y).\end{aligned}}} Rota later stated that much confusion resulted from the failure to distinguish between three equivalence relations that occur frequently in this topic, all of which were denoted by "=". In a paper published in 1964, Rota used umbral methods to establish the recursion formula satisfied by the Bell numbers, which enumerate partitions of finite sets. In the paper of Roman and Rota cited below, the umbral calculus is characterized as the study of the umbral algebra, defined as the algebra of linear functionals on the vector space of polynomials in a variable x, with a product L1L2 of linear functionals defined by ⟨ L 1 L 2 | x n ⟩ = ∑ k = 0 n ( n k ) ⟨ L 1 | x k ⟩ ⟨ L 2 | x n − k ⟩ . {\displaystyle \left\langle L_{1}L_{2}|x^{n}\right\rangle =\sum _{k=0}^{n}{n \choose k}\left\langle L_{1}|x^{k}\right\rangle \left\langle L_{2}|x^{n-k}\right\rangle .} When polynomial sequences replace sequences of numbers as images of yn under the linear mapping L, then the umbral method is seen to be an essential component of Rota's general theory of special polynomials, and that theory is the umbral calculus by some more modern definitions of the term. A small sample of that theory can be found in the article on polynomial sequences of binomial type. Another is the article titled Sheffer sequence. Rota later applied umbral calculus extensively in his paper with Shen to study the various combinatorial properties of the cumulants. == See also == Bernoulli umbra Umbral composition of polynomial sequences Calculus of finite differences Pidduck polynomials Symbolic method in invariant theory Narumi polynomials == Notes == == References == Bell, E. T. 
(1938), "The History of Blissard's Symbolic Method, with a Sketch of its Inventor's Life", The American Mathematical Monthly, 45 (7), Mathematical Association of America: 414–421, doi:10.1080/00029890.1938.11990829, ISSN 0002-9890, JSTOR 2304144 Roman, Steven M.; Rota, Gian-Carlo (1978), "The umbral calculus", Advances in Mathematics, 27 (2): 95–188, doi:10.1016/0001-8708(78)90087-7, ISSN 0001-8708, MR 0485417 G.-C. Rota, D. Kahaner, and A. Odlyzko, "Finite Operator Calculus," Journal of Mathematical Analysis and its Applications, vol. 42, no. 3, June 1973. Reprinted in the book with the same title, Academic Press, New York, 1975. Roman, Steven (1984), The umbral calculus, Pure and Applied Mathematics, vol. 111, London: Academic Press Inc. [Harcourt Brace Jovanovich Publishers], ISBN 978-0-12-594380-2, MR 0741185. Reprinted by Dover, 2005. Roman, S. (2001) [1994], "Umbral calculus", Encyclopedia of Mathematics, EMS Press == External links == Weisstein, Eric W. "Umbral Calculus". MathWorld. A. Di Bucchianico, D. Loeb (2000). "A Selected Survey of Umbral Calculus" (PDF). Electronic Journal of Combinatorics. Dynamic Surveys. DS3. Archived from the original (PDF) on 2012-02-24. Roman, S. (1982), The Theory of the Umbral Calculus, I
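The Bernoulli-polynomial identities that motivate the umbral method are easy to verify symbolically; a sketch using sympy's bernoulli:

```python
import sympy as sp

x, y = sp.symbols('x y')

for n in range(1, 8):
    Bn = sp.bernoulli(n, x)

    # Derivative identity: B_n'(x) = n * B_{n-1}(x)
    assert sp.expand(sp.diff(Bn, x) - n * sp.bernoulli(n - 1, x)) == 0

    # Umbral addition theorem: B_n(x + y) = sum_k C(n, k) * B_{n-k}(y) * x^k
    rhs = sum(sp.binomial(n, k) * sp.bernoulli(n - k, y) * x**k
              for k in range(n + 1))
    assert sp.expand(sp.bernoulli(n, x + y) - rhs) == 0

print("identities verified for n = 1..7")
```

This is, of course, exactly the computation the linear functional L renders rigorous: the check treats B_{n-k} as a coefficient, just as the umbral method treats b^{n-k}.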
Wikipedia:Unary function#0
In mathematics, a unary function is a function that takes one argument. A unary operator is a unary function whose codomain coincides with its domain; in general, a unary function's domain need not coincide with its codomain. == Examples == The successor function, denoted succ {\displaystyle \operatorname {succ} } , is a unary operator. Its domain and codomain are the natural numbers; its definition is as follows: succ : N → N n ↦ ( n + 1 ) {\displaystyle {\begin{aligned}\operatorname {succ} :\quad &\mathbb {N} \rightarrow \mathbb {N} \\&n\mapsto (n+1)\end{aligned}}} In some programming languages such as C, executing this operation is denoted by postfixing ++ to the operand, i.e. the use of n++ is equivalent to executing the assignment n := succ ⁡ ( n ) {\displaystyle n:=\operatorname {succ} (n)} . Many of the elementary functions are unary functions, including the trigonometric functions, logarithm with a specified base, exponentiation to a particular power or base, and hyperbolic functions. == See also == Arity Binary function Binary operation Iterated binary operation Ternary operation Unary operation == Bibliography == Foundations of Genetic Programming
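The successor operator above can be written directly in Python (which, unlike C, has no ++ operator, so the assignment form is spelled out):

```python
def succ(n: int) -> int:
    """Unary operator on the natural numbers: domain and codomain coincide."""
    return n + 1

# The C idiom n++ corresponds to an explicit assignment n := succ(n):
n = 41
n = succ(n)
print(n)  # 42
```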
Wikipedia:Unary operation#0
In mathematics, a binary operation or dyadic operation is a rule for combining two elements (called operands) to produce another element. More formally, a binary operation is an operation of arity two. More specifically, a binary operation on a set is a binary function that maps every pair of elements of the set to an element of the set. Examples include the familiar arithmetic operations such as addition, subtraction, and multiplication, and set operations such as union, intersection, and complement. Other examples are readily found in different areas of mathematics, such as vector addition, matrix multiplication, and conjugation in groups. A binary function that involves several sets is sometimes also called a binary operation. For example, scalar multiplication of vector spaces takes a scalar and a vector to produce a vector, and the scalar product takes two vectors to produce a scalar. Binary operations are the keystone of most structures that are studied in algebra, in particular in semigroups, monoids, groups, rings, fields, and vector spaces. == Terminology == More precisely, a binary operation on a set S {\displaystyle S} is a mapping of the elements of the Cartesian product S × S {\displaystyle S\times S} to S {\displaystyle S} : f : S × S → S . {\displaystyle \,f\colon S\times S\rightarrow S.} If f {\displaystyle f} is not a function but a partial function, then f {\displaystyle f} is called a partial binary operation. For instance, division is a partial binary operation on the set of all real numbers, because one cannot divide by zero: a 0 {\displaystyle {\frac {a}{0}}} is undefined for every real number a {\displaystyle a} . In both model theory and classical universal algebra, binary operations are required to be defined on all elements of S × S {\displaystyle S\times S} . However, partial algebras generalize universal algebras to allow partial operations. Sometimes, especially in computer science, the term binary operation is used for any binary function.
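Division as a partial binary operation can be modelled as a function that is undefined (here: raises an error) on the pairs with second coordinate zero; a minimal Python sketch:

```python
def div(a: float, b: float) -> float:
    """Partial binary operation: defined on R x (R minus {0}), undefined when b == 0."""
    if b == 0:
        raise ValueError("a/0 is undefined: division is only a partial operation")
    return a / b

print(div(1.0, 4.0))  # 0.25
try:
    div(1.0, 0.0)
except ValueError as e:
    print(e)
```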
== Properties and examples == Typical examples of binary operations are the addition ( + {\displaystyle +} ) and multiplication ( × {\displaystyle \times } ) of numbers and matrices as well as composition of functions on a single set. For instance, On the set of real numbers R {\displaystyle \mathbb {R} } , f ( a , b ) = a + b {\displaystyle f(a,b)=a+b} is a binary operation since the sum of two real numbers is a real number. On the set of natural numbers N {\displaystyle \mathbb {N} } , f ( a , b ) = a + b {\displaystyle f(a,b)=a+b} is a binary operation since the sum of two natural numbers is a natural number. This is a different binary operation than the previous one since the sets are different. On the set M ( 2 , R ) {\displaystyle M(2,\mathbb {R} )} of 2 × 2 {\displaystyle 2\times 2} matrices with real entries, f ( A , B ) = A + B {\displaystyle f(A,B)=A+B} is a binary operation since the sum of two such matrices is a 2 × 2 {\displaystyle 2\times 2} matrix. On the set M ( 2 , R ) {\displaystyle M(2,\mathbb {R} )} of 2 × 2 {\displaystyle 2\times 2} matrices with real entries, f ( A , B ) = A B {\displaystyle f(A,B)=AB} is a binary operation since the product of two such matrices is a 2 × 2 {\displaystyle 2\times 2} matrix. For a given set C {\displaystyle C} , let S {\displaystyle S} be the set of all functions h : C → C {\displaystyle h\colon C\rightarrow C} . Define f : S × S → S {\displaystyle f\colon S\times S\rightarrow S} by f ( h 1 , h 2 ) ( c ) = ( h 1 ∘ h 2 ) ( c ) = h 1 ( h 2 ( c ) ) {\displaystyle f(h_{1},h_{2})(c)=(h_{1}\circ h_{2})(c)=h_{1}(h_{2}(c))} for all c ∈ C {\displaystyle c\in C} , the composition of the two functions h 1 {\displaystyle h_{1}} and h 2 {\displaystyle h_{2}} in S {\displaystyle S} . Then f {\displaystyle f} is a binary operation since the composition of the two functions is again a function on the set C {\displaystyle C} (that is, a member of S {\displaystyle S} ). 
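The last example above, composition on the set S of functions C → C, is easy to model; the key point is closure: composing two elements of S yields another element of S. A sketch in Python, with C taken to be the integers:

```python
def compose(h1, h2):
    """Binary operation f(h1, h2) = h1 o h2 on functions C -> C."""
    return lambda c: h1(h2(c))

# Two elements of S:
double = lambda c: 2 * c
inc = lambda c: c + 1

g = compose(double, inc)  # g(c) = 2 * (c + 1), again a function C -> C
print(g(3))               # 8
```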
Many binary operations of interest in both algebra and formal logic are commutative, satisfying f ( a , b ) = f ( b , a ) {\displaystyle f(a,b)=f(b,a)} for all elements a {\displaystyle a} and b {\displaystyle b} in S {\displaystyle S} , or associative, satisfying f ( f ( a , b ) , c ) = f ( a , f ( b , c ) ) {\displaystyle f(f(a,b),c)=f(a,f(b,c))} for all a {\displaystyle a} , b {\displaystyle b} , and c {\displaystyle c} in S {\displaystyle S} . Many also have identity elements and inverse elements. The first three examples above are commutative and all of the above examples are associative. On the set of real numbers R {\displaystyle \mathbb {R} } , subtraction, that is, f ( a , b ) = a − b {\displaystyle f(a,b)=a-b} , is a binary operation which is not commutative since, in general, a − b ≠ b − a {\displaystyle a-b\neq b-a} . It is also not associative, since, in general, a − ( b − c ) ≠ ( a − b ) − c {\displaystyle a-(b-c)\neq (a-b)-c} ; for instance, 1 − ( 2 − 3 ) = 2 {\displaystyle 1-(2-3)=2} but ( 1 − 2 ) − 3 = − 4 {\displaystyle (1-2)-3=-4} . On the set of natural numbers N {\displaystyle \mathbb {N} } , the binary operation exponentiation, f ( a , b ) = a b {\displaystyle f(a,b)=a^{b}} , is not commutative since, a b ≠ b a {\displaystyle a^{b}\neq b^{a}} (cf. Equation xy = yx), and is also not associative since f ( f ( a , b ) , c ) ≠ f ( a , f ( b , c ) ) {\displaystyle f(f(a,b),c)\neq f(a,f(b,c))} . For instance, with a = 2 {\displaystyle a=2} , b = 3 {\displaystyle b=3} , and c = 2 {\displaystyle c=2} , f ( 2 3 , 2 ) = f ( 8 , 2 ) = 8 2 = 64 {\displaystyle f(2^{3},2)=f(8,2)=8^{2}=64} , but f ( 2 , 3 2 ) = f ( 2 , 9 ) = 2 9 = 512 {\displaystyle f(2,3^{2})=f(2,9)=2^{9}=512} . By changing the set N {\displaystyle \mathbb {N} } to the set of integers Z {\displaystyle \mathbb {Z} } , this binary operation becomes a partial binary operation since it is now undefined when a = 0 {\displaystyle a=0} and b {\displaystyle b} is any negative integer. 
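The failures of commutativity and associativity above can be verified numerically; this sketch reproduces the values quoted in the text (1 − (2 − 3) = 2, (1 − 2) − 3 = −4, 8² = 64, 2⁹ = 512):

```python
sub = lambda a, b: a - b
exp_ = lambda a, b: a ** b

# subtraction on the reals: neither commutative nor associative
assert sub(1, 2) != sub(2, 1)
assert sub(1, sub(2, 3)) == 2
assert sub(sub(1, 2), 3) == -4

# exponentiation on the naturals: f(f(2, 3), 2) = 64 but f(2, f(3, 2)) = 512
assert exp_(exp_(2, 3), 2) == 64
assert exp_(2, exp_(3, 2)) == 512
print("counterexamples check out")
```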
For either set, this operation has a right identity (which is 1 {\displaystyle 1} ) since f ( a , 1 ) = a {\displaystyle f(a,1)=a} for all a {\displaystyle a} in the set, which is not an identity (two sided identity) since f ( 1 , b ) ≠ b {\displaystyle f(1,b)\neq b} in general. Division ( ÷ {\displaystyle \div } ), a partial binary operation on the set of real or rational numbers, is not commutative or associative. Tetration ( ↑↑ {\displaystyle \uparrow \uparrow } ), as a binary operation on the natural numbers, is not commutative or associative and has no identity element. == Notation == Binary operations are often written using infix notation such as a ∗ b {\displaystyle a\ast b} , a + b {\displaystyle a+b} , a ⋅ b {\displaystyle a\cdot b} or (by juxtaposition with no symbol) a b {\displaystyle ab} rather than by functional notation of the form f ( a , b ) {\displaystyle f(a,b)} . Powers are usually also written without operator, but with the second argument as superscript. Binary operations are sometimes written using prefix or (more frequently) postfix notation, both of which dispense with parentheses. They are also called, respectively, Polish notation ∗ a b {\displaystyle \ast ab} and reverse Polish notation a b ∗ {\displaystyle ab\ast } . == Binary operations as ternary relations == A binary operation f {\displaystyle f} on a set S {\displaystyle S} may be viewed as a ternary relation on S {\displaystyle S} , that is, the set of triples ( a , b , f ( a , b ) ) {\displaystyle (a,b,f(a,b))} in S × S × S {\displaystyle S\times S\times S} for all a {\displaystyle a} and b {\displaystyle b} in S {\displaystyle S} . == Other binary operations == For example, scalar multiplication in linear algebra. Here K {\displaystyle K} is a field and S {\displaystyle S} is a vector space over that field. 
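The ternary-relation view above is also easy to realize in code: on a finite set, the operation is exactly the set of triples (a, b, f(a, b)). A sketch using addition modulo 3 (an illustrative choice of operation):

```python
S = {0, 1, 2}
op = lambda a, b: (a + b) % 3          # a binary operation on S (addition mod 3)

# the same operation viewed as a ternary relation: a subset of S x S x S
relation = {(a, b, op(a, b)) for a in S for b in S}

print(len(relation))                    # 9: exactly one triple per ordered pair
assert all(c in S for (_, _, c) in relation)   # closure: results stay in S
assert (1, 2, 0) in relation            # since (1 + 2) mod 3 = 0
```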
Also the dot product of two vectors maps S × S {\displaystyle S\times S} to K {\displaystyle K} , where K {\displaystyle K} is a field and S {\displaystyle S} is a vector space over K {\displaystyle K} . Whether such a map is considered a binary operation depends on the author. == See also == Category:Properties of binary operations Iterated binary operation – Repeated application of an operation to a sequence Magma (algebra) – Algebraic structure with a binary operation Operator (programming) – Basic programming language construct Ternary operation – Mathematical operation that combines three elements to produce another element Truth table § Binary operations Unary operation – Mathematical operation with only one operand == Notes == == References == Fraleigh, John B. (1976), A First Course in Abstract Algebra (2nd ed.), Reading: Addison-Wesley, ISBN 0-201-01984-1 Hall, Marshall Jr. (1959), The Theory of Groups, New York: Macmillan Hardy, Darel W.; Walker, Carol L. (2002), Applied Algebra: Codes, Ciphers and Discrete Algorithms, Upper Saddle River, NJ: Prentice-Hall, ISBN 0-13-067464-8 Rotman, Joseph J. (1973), The Theory of Groups: An Introduction (2nd ed.), Boston: Allyn and Bacon == External links == Weisstein, Eric W. "Binary Operation". MathWorld.
Wikipedia:Uncertainty exponent#0
In mathematics, the uncertainty exponent is a method of measuring the fractal dimension of a basin boundary. In a chaotic scattering system, the invariant set of the system is usually not directly accessible because it is non-attracting and typically of measure zero. Therefore, the only way to infer the presence of members and to measure the properties of the invariant set is through the basins of attraction. Note that in a scattering system, basins of attraction are not limit cycles and therefore do not constitute members of the invariant set. Suppose we start with a random trajectory and perturb it by a small amount, ϵ {\displaystyle \epsilon } , in a random direction. If the new trajectory ends up in a different basin from the old one, then it is called epsilon uncertain. If we take a large number of such trajectories, then the fraction of them that are epsilon uncertain is the uncertainty fraction, f ( ϵ ) {\displaystyle f(\epsilon )} , and we expect it to scale as a power of ε {\displaystyle \varepsilon } : f ( ε ) ∼ ε γ {\displaystyle f(\varepsilon )\sim \varepsilon ^{\gamma }\,} Thus the uncertainty exponent, γ {\displaystyle \gamma } , is defined as follows: γ = lim ε → 0 ln ⁡ f ( ε ) ln ⁡ ε {\displaystyle \gamma =\lim _{\varepsilon \to 0}{\frac {\ln f(\varepsilon )}{\ln \varepsilon }}} The uncertainty exponent can be shown to approximate the box-counting dimension as follows: D 0 = N − γ {\displaystyle D_{0}=N-\gamma \,} where N is the embedding dimension. Please refer to the article on chaotic mixing for an example of numerical computation of the uncertainty dimension compared with that of a box-counting dimension. == References == C. Grebogi, S. W. McDonald, E. Ott and J. A. Yorke, Final state sensitivity: An obstruction to predictability, Phys. Letters 99A: 415-418 (1983). Edward Ott (1993). Chaos in Dynamical Systems. Cambridge University Press.
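The definition above suggests a direct numerical procedure: sample random initial conditions, perturb each by ε, and count how often the basin changes. The sketch below is an illustrative toy (not from the article), using the basins of Newton's method for z³ = 1, whose basin boundary is fractal:

```python
import cmath
import random

ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def basin(z, steps=60):
    """Index of the cube root of unity that Newton's method converges to from z."""
    for _ in range(steps):
        if z == 0:
            return -1          # derivative vanishes; abandon this point
        z = z - (z**3 - 1) / (3 * z**2)
    return min(range(3), key=lambda k: abs(z - ROOTS[k]))

def uncertainty_fraction(eps, trials=2000, seed=0):
    """Fraction of random points whose basin changes under a size-eps perturbation."""
    rng = random.Random(seed)
    uncertain = 0
    for _ in range(trials):
        z = complex(rng.uniform(-2, 2), rng.uniform(-2, 2))
        dz = eps * cmath.exp(2j * cmath.pi * rng.random())
        if basin(z) != basin(z + dz):
            uncertain += 1
    return uncertain / trials

# f(eps) shrinks as eps -> 0; the slope of log f against log eps estimates gamma
for eps in (0.1, 0.01, 0.001):
    print(eps, uncertainty_fraction(eps))
```

Fitting a line to log f(ε) versus log ε over several decades of ε would give the uncertainty exponent itself.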
Wikipedia:Unconditional convergence#0
In mathematics, specifically functional analysis, a series is unconditionally convergent if all reorderings of the series converge to the same value. In contrast, a series is conditionally convergent if it converges but different orderings do not all converge to that same value. Unconditional convergence is equivalent to absolute convergence in finite-dimensional vector spaces, but is a weaker property in infinite dimensions. == Definition == Let X {\displaystyle X} be a topological vector space. Let I {\displaystyle I} be an index set and x i ∈ X {\displaystyle x_{i}\in X} for all i ∈ I . {\displaystyle i\in I.} The series ∑ i ∈ I x i {\displaystyle \textstyle \sum _{i\in I}x_{i}} is called unconditionally convergent to x ∈ X , {\displaystyle x\in X,} if the indexing set I 0 := { i ∈ I : x i ≠ 0 } {\displaystyle I_{0}:=\left\{i\in I:x_{i}\neq 0\right\}} is countable, and for every permutation (bijection) σ : I 0 → I 0 {\displaystyle \sigma :I_{0}\to I_{0}} of I 0 = { i k } k = 1 ∞ {\displaystyle I_{0}=\left\{i_{k}\right\}_{k=1}^{\infty }} the following relation holds: ∑ k = 1 ∞ x σ ( i k ) = x . {\displaystyle \sum _{k=1}^{\infty }x_{\sigma \left(i_{k}\right)}=x.} == Alternative definition == Unconditional convergence is often defined in an equivalent way: A series is unconditionally convergent if for every sequence ( ε n ) n = 1 ∞ , {\displaystyle \left(\varepsilon _{n}\right)_{n=1}^{\infty },} with ε n ∈ { − 1 , + 1 } , {\displaystyle \varepsilon _{n}\in \{-1,+1\},} the series ∑ n = 1 ∞ ε n x n {\displaystyle \sum _{n=1}^{\infty }\varepsilon _{n}x_{n}} converges. If X {\displaystyle X} is a Banach space, every absolutely convergent series is unconditionally convergent, but the converse implication does not hold in general. Indeed, if X {\displaystyle X} is an infinite-dimensional Banach space, then by Dvoretzky–Rogers theorem there always exists an unconditionally convergent series in this space that is not absolutely convergent. 
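In the scalar case the failure of unconditional convergence is concrete: the alternating harmonic series converges (to ln 2) but not absolutely, and a rearrangement changes its sum. A numerical sketch (the cutoffs are illustrative):

```python
import math

# alternating harmonic series 1 - 1/2 + 1/3 - ... converges to ln 2,
# but only conditionally: the series of absolute values is harmonic and diverges.
N = 200000
s = sum((-1) ** (n + 1) / n for n in range(1, N + 1))

# rearrangement "one positive term, then two negative terms":
# 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ...  converges to (ln 2)/2 instead.
r = 0.0
p, q = 1, 2   # next positive (odd) and negative (even) denominators
for _ in range(N // 3):
    r += 1 / p; p += 2
    r -= 1 / q; q += 2
    r -= 1 / q; q += 2

print(s, math.log(2))       # close to ln 2
print(r, math.log(2) / 2)   # same terms in a different order, half the sum
```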
However, when X = R n , {\displaystyle X=\mathbb {R} ^{n},} by the Riemann series theorem, the series ∑ n x n {\textstyle \sum _{n}x_{n}} is unconditionally convergent if and only if it is absolutely convergent. == See also == Absolute convergence – Mode of convergence of an infinite series Modes of convergence (annotated index) – Annotated index of various modes of convergence Rearrangements and unconditional convergence/Dvoretzky–Rogers theorem – Mode of convergence of an infinite series Riemann series theorem – Unconditionally convergent series converge absolutely == References == Ch. Heil: A Basis Theory Primer Knopp, Konrad (1956). Infinite Sequences and Series. Dover Publications. ISBN 9780486601533. Knopp, Konrad (1990). Theory and Application of Infinite Series. Dover Publications. ISBN 9780486661650. Wojtaszczyk, P. (1996). Banach spaces for analysts. Cambridge University Press. ISBN 9780521566759. This article incorporates material from Unconditional convergence on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Wikipedia:Underdetermined system#0
In mathematics, a system of linear equations or a system of polynomial equations is considered underdetermined if there are fewer equations than unknowns (in contrast to an overdetermined system, where there are more equations than unknowns). The terminology can be explained using the concept of constraint counting. Each unknown can be seen as an available degree of freedom. Each equation introduced into the system can be viewed as a constraint that restricts one degree of freedom. Therefore, the critical case (between overdetermined and underdetermined) occurs when the number of equations and the number of free variables are equal. For every variable giving a degree of freedom, there exists a corresponding constraint removing a degree of freedom. An indeterminate system has additional constraints that are not equations, such as restricting the solutions to integers. The underdetermined case, by contrast, occurs when the system has been underconstrained—that is, when the unknowns outnumber the equations. == Solutions of underdetermined systems == An underdetermined linear system has either no solution or infinitely many solutions. For example, x + y + z = 1 x + y + z = 0 {\displaystyle {\begin{aligned}x+y+z&=1\\x+y+z&=0\end{aligned}}} is an underdetermined system without any solution; any system of equations having no solution is said to be inconsistent. On the other hand, the system x + y + z = 1 x + y + 2 z = 3 {\displaystyle {\begin{aligned}x+y+z&=1\\x+y+2z&=3\end{aligned}}} is consistent and has an infinitude of solutions, such as (x, y, z) = (1, −2, 2), (2, −3, 2), and (3, −4, 2). All of these solutions can be characterized by first subtracting the first equation from the second, to show that all solutions obey z = 2; using this in either equation shows that any value of y is possible, with x = −1 − y.
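The consistent example above can be checked computationally. The sketch below (assuming NumPy is available) compares the ranks of the coefficient and augmented matrices, counts the free parameters, and also exhibits the minimum-norm member of the solution family:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])          # coefficient matrix of the consistent system
b = np.array([1.0, 3.0])

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
assert rank_A == rank_Ab == 2            # equal ranks: the system is consistent

k = A.shape[1] - rank_A                  # number of free parameters in the general solution
print(k)                                 # 1: solutions are x = -1 - y, z = 2, y free

# a few members of the solution family, as in the text
for y in (-2.0, -3.0, -4.0):
    assert np.allclose(A @ np.array([-1.0 - y, y, 2.0]), b)

# the minimum-Euclidean-norm solution, via the pseudoinverse
x_min = np.linalg.pinv(A) @ b
assert np.allclose(A @ x_min, b)
print(x_min)                             # (-0.5, -0.5, 2.0)
```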
More specifically, according to the Rouché–Capelli theorem, any system of linear equations (underdetermined or otherwise) is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution; since in an underdetermined system this rank is necessarily less than the number of unknowns, there are indeed an infinitude of solutions, with the general solution having k free parameters where k is the difference between the number of variables and the rank. There are algorithms to decide whether an underdetermined system has solutions, and if it has any, to express all solutions as linear functions of k of the variables (same k as above). The simplest one is Gaussian elimination. See System of linear equations for more details. == Homogeneous case == The homogeneous (with all constant terms equal to zero) underdetermined linear system always has non-trivial solutions (in addition to the trivial solution where all the unknowns are zero). There are an infinity of such solutions, which form a vector space, whose dimension is the difference between the number of unknowns and the rank of the matrix of the system. == Underdetermined polynomial systems == The main property of linear underdetermined systems, of having either no solution or infinitely many, extends to systems of polynomial equations in the following way. A system of polynomial equations which has fewer equations than unknowns is said to be underdetermined. It has either infinitely many complex solutions (or, more generally, solutions in an algebraically closed field) or is inconsistent. It is inconsistent if and only if 0 = 1 is a linear combination (with polynomial coefficients) of the equations (this is Hilbert's Nullstellensatz). 
If an underdetermined system of t equations in n variables (t < n) has solutions, then the set of all complex solutions is an algebraic set of dimension at least n - t. If the underdetermined system is chosen at random the dimension is equal to n - t with probability one. == Underdetermined systems with other constraints and in optimization problems == In general, an underdetermined system of linear equations has an infinite number of solutions, if any. However, in optimization problems that are subject to linear equality constraints, only one of the solutions is relevant, namely the one giving the highest or lowest value of an objective function. Some problems specify that one or more of the variables are constrained to take on integer values. An integer constraint leads to integer programming and Diophantine equations problems, which may have only a finite number of solutions. Another kind of constraint, which appears in coding theory, especially in error correcting codes and signal processing (for example compressed sensing), consists in an upper bound on the number of variables which may be different from zero. In error correcting codes, this bound corresponds to the maximal number of errors that may be corrected simultaneously. == See also == Overdetermined system Regularization (mathematics) == References ==
Wikipedia:Undergraduate Ambassadors Scheme#0
The Undergraduate Ambassadors Scheme (UAS) is a program in the United Kingdom devised to encourage students enrolled in science, technology, engineering and mathematics (STEM) programs to enter teaching by awarding them degree course credits. == History == Noting the declining enrollment in STEM subjects at UK universities, a team including author Simon Singh devised the idea with three aims: to encourage undergraduates in those fields to go into teaching, to support teachers and to provide role models for school students who might otherwise never meet a young person who had chosen to study a STEM subject. UAS was set up to provide a structure to get undergraduates into the classroom, based on a model pioneered at Imperial College London, but adding the incentive of academic credit for program participants. After receiving approval to pilot UAS from the University of Surrey, Singh backed a launch of the program with his own money, with the assistance of Ravi Kapur and others. Student interest in the program was high. Singh indicated that in the pilot year of the program 10 of 13 math undergraduates who participated at the University of Southampton subsequently entered teacher training. By the midpoint of its second year, in February 2004, the program was being described by the Times Educational Supplement (TES) as a success, with nine universities on board and an additional 30 expressing interest. In October 2005, Singh wrote in The Guardian that UAS was established in "over 50 university departments, mainly mathematics, science and engineering, with more coming on board each year." In the 2007–2008 academic year, involvement had risen to 107 university departments, with 750 undergraduate participants. == Function == According to TES, undergraduates involved first participate in a one-day program to give them basic information on instructing students in math and science.
After this training, they observe a local classroom and then put together a project for the students in a class. The UAS website indicates that the program, available in the last two years of a student's undergraduate career, carries ten to 30 credits for ten weeks of work in the classroom alongside the classroom's regular teacher, who helps evaluate the undergraduate's performance. == References == == External links == Official site
Wikipedia:Unfolding (functions)#0
In mathematics, an unfolding of a smooth real-valued function ƒ on a smooth manifold, is a certain family of functions that includes ƒ. == Definition == Let M {\displaystyle M} be a smooth manifold and consider a smooth mapping f : M → R . {\displaystyle f:M\to \mathbb {R} .} Let us assume that for given x 0 ∈ M {\displaystyle x_{0}\in M} and y 0 ∈ R {\displaystyle y_{0}\in \mathbb {R} } we have f ( x 0 ) = y 0 {\displaystyle f(x_{0})=y_{0}} . Let N {\displaystyle N} be a smooth k {\displaystyle k} -dimensional manifold, and consider the family of mappings (parameterised by N {\displaystyle N} ) given by F : M × N → R . {\displaystyle F:M\times N\to \mathbb {R} .} We say that F {\displaystyle F} is a k {\displaystyle k} -parameter unfolding of f {\displaystyle f} if F ( x , 0 ) = f ( x ) {\displaystyle F(x,0)=f(x)} for all x . {\displaystyle x.} In other words the functions f : M → R {\displaystyle f:M\to \mathbb {R} } and F : M × { 0 } → R {\displaystyle F:M\times \{0\}\to \mathbb {R} } are the same: the function f {\displaystyle f} is contained in, or is unfolded by, the family F . {\displaystyle F.} == Example == Let f : R 2 → R {\displaystyle f:\mathbb {R} ^{2}\to \mathbb {R} } be given by f ( x , y ) = x 2 + y 5 . {\displaystyle f(x,y)=x^{2}+y^{5}.} An example of an unfolding of f {\displaystyle f} would be F : R 2 × R 3 → R {\displaystyle F:\mathbb {R} ^{2}\times \mathbb {R} ^{3}\to \mathbb {R} } given by F ( ( x , y ) , ( a , b , c ) ) = x 2 + y 5 + a y + b y 2 + c y 3 . {\displaystyle F((x,y),(a,b,c))=x^{2}+y^{5}+ay+by^{2}+cy^{3}.} As is the case with unfoldings, x {\displaystyle x} and y {\displaystyle y} are called variables, and a , {\displaystyle a,} b , {\displaystyle b,} and c {\displaystyle c} are called parameters, since they parameterise the unfolding. == Well-behaved unfoldings == In practice we require that the unfoldings have certain properties. 
In R {\displaystyle \mathbb {R} } , f {\displaystyle f} is a smooth mapping from M {\displaystyle M} to R {\displaystyle \mathbb {R} } and so belongs to the function space C ∞ ( M , R ) . {\displaystyle C^{\infty }(M,\mathbb {R} ).} As we vary the parameters of the unfolding, we get different elements of the function space. Thus, the unfolding induces a function Φ : N → C ∞ ( M , R ) . {\displaystyle \Phi :N\to C^{\infty }(M,\mathbb {R} ).} The space diff ⁡ ( M ) × diff ⁡ ( R ) {\displaystyle \operatorname {diff} (M)\times \operatorname {diff} (\mathbb {R} )} , where diff ⁡ ( M ) {\displaystyle \operatorname {diff} (M)} denotes the group of diffeomorphisms of M {\displaystyle M} etc., acts on C ∞ ( M , R ) . {\displaystyle C^{\infty }(M,\mathbb {R} ).} The action is given by ( ϕ , ψ ) ⋅ f = ψ ∘ f ∘ ϕ − 1 . {\displaystyle (\phi ,\psi )\cdot f=\psi \circ f\circ \phi ^{-1}.} If g {\displaystyle g} lies in the orbit of f {\displaystyle f} under this action then there is a diffeomorphic change of coordinates in M {\displaystyle M} and R {\displaystyle \mathbb {R} } , which takes g {\displaystyle g} to f {\displaystyle f} (and vice versa). One property that we can impose is that Im ⁡ ( Φ ) ⋔ orb ⁡ ( f ) {\displaystyle \operatorname {Im} (\Phi )\pitchfork \operatorname {orb} (f)} where " ⋔ {\displaystyle \pitchfork } " denotes "transverse to". This property ensures that as we vary the unfolding parameters we can predict – by knowing how the orbit foliates C ∞ ( M , R ) {\displaystyle C^{\infty }(M,\mathbb {R} )} – how the resulting functions will vary. == Versal unfoldings == There is an idea of a versal unfolding. Every versal unfolding has the property that Im ⁡ ( Φ ) ⋔ orb ⁡ ( f ) {\displaystyle \operatorname {Im} (\Phi )\pitchfork \operatorname {orb} (f)} , but the converse is false. 
Let x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} be local coordinates on M {\displaystyle M} , and let O ( x 1 , … , x n ) {\displaystyle {\mathcal {O}}(x_{1},\ldots ,x_{n})} denote the ring of smooth functions. We define the Jacobian ideal of f {\displaystyle f} , denoted by J f {\displaystyle J_{f}} , as follows: J f := ⟨ ∂ f ∂ x 1 , … , ∂ f ∂ x n ⟩ . {\displaystyle J_{f}:=\left\langle {\frac {\partial f}{\partial x_{1}}},\ldots ,{\frac {\partial f}{\partial x_{n}}}\right\rangle .} Then a basis for a versal unfolding of f {\displaystyle f} is given by the quotient O ( x 1 , … , x n ) J f {\displaystyle {\frac {{\mathcal {O}}(x_{1},\ldots ,x_{n})}{J_{f}}}} . This quotient is known as the local algebra of f {\displaystyle f} . The dimension of the local algebra is called the Milnor number of f {\displaystyle f} . The minimum number of unfolding parameters for a versal unfolding is equal to the Milnor number; that is not to say that every unfolding with that many parameters will be versal. Consider the function f ( x , y ) = x 2 + y 5 {\displaystyle f(x,y)=x^{2}+y^{5}} . A calculation shows that O ( x , y ) ⟨ 2 x , 5 y 4 ⟩ = { y , y 2 , y 3 } . {\displaystyle {\frac {{\mathcal {O}}(x,y)}{\langle 2x,5y^{4}\rangle }}=\{y,y^{2},y^{3}\}\ .} This means that { y , y 2 , y 3 } {\displaystyle \{y,y^{2},y^{3}\}} give a basis for a versal unfolding, and that F ( ( x , y ) , ( a , b , c ) ) = x 2 + y 5 + a y + b y 2 + c y 3 {\displaystyle F((x,y),(a,b,c))=x^{2}+y^{5}+ay+by^{2}+cy^{3}} is a versal unfolding. A versal unfolding with the minimum possible number of unfolding parameters is called a miniversal unfolding. == Bifurcations sets of unfoldings == An important object associated to an unfolding is its bifurcation set. This set lives in the parameter space of the unfolding, and gives all parameter values for which the resulting function has degenerate singularities. 
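Since the Jacobian ideal here is generated by the monomials x and y⁴ (up to nonzero scalar factors), a monomial basis of the quotient can be found by simple enumeration. The sketch below is illustrative; note that it also lists the constant monomial 1, which corresponds to adding a constant to f and is omitted from the unfolding in the text:

```python
# Jacobian ideal of f(x, y) = x^2 + y^5 is generated by 2x and 5y^4,
# i.e. (up to scalars) by the monomials x and y^4, given as exponent vectors:
generators = [(1, 0), (0, 4)]

def divisible(m, g):
    """Monomial m = x^i y^j is divisible by g, componentwise on exponents."""
    return m[0] >= g[0] and m[1] >= g[1]

# monomials x^i y^j surviving in the quotient: divisible by no generator
max_deg = 6   # illustrative bound; high enough for this example
basis = sorted((i, j) for i in range(max_deg) for j in range(max_deg)
               if not any(divisible((i, j), g) for g in generators))
print(basis)  # [(0, 0), (0, 1), (0, 2), (0, 3)], i.e. 1, y, y^2, y^3
```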
== Other terminology == Sometimes unfoldings are called deformations, versal unfoldings are called versal deformations, etc. == References == V. I. Arnold, S. M. Gussein-Zade & A. N. Varchenko, Singularities of differentiable maps, Volume 1, Birkhäuser, (1985). J. W. Bruce & P. J. Giblin, Curves & singularities, second edition, Cambridge University press, (1992).
Wikipedia:Uniform absolute-convergence#0
In mathematics, uniform absolute-convergence is a type of convergence for series of functions. Like absolute-convergence, it has the useful property that it is preserved when the order of summation is changed. == Motivation == A convergent series of numbers can often be reordered in such a way that the new series diverges. This is not possible for series of nonnegative numbers, however, so the notion of absolute-convergence precludes this phenomenon. When dealing with uniformly convergent series of functions, the same phenomenon occurs: the series can potentially be reordered into a non-uniformly convergent series, or a series which does not even converge pointwise. This is impossible for series of nonnegative functions, so the notion of uniform absolute-convergence can be used to rule out these possibilities. == Definition == Given a set X and functions f n : X → C {\displaystyle f_{n}:X\to \mathbb {C} } (or to any normed vector space), the series ∑ n = 0 ∞ f n ( x ) {\displaystyle \sum _{n=0}^{\infty }f_{n}(x)} is called uniformly absolutely-convergent if the series of nonnegative functions ∑ n = 0 ∞ | f n ( x ) | {\displaystyle \sum _{n=0}^{\infty }|f_{n}(x)|} is uniformly convergent. == Distinctions == A series can be uniformly convergent and absolutely convergent without being uniformly absolutely-convergent. For example, if f_n(x) = x^n/n on the open interval (−1,0), then the series Σ f_n(x) converges uniformly by comparison of the partial sums to those of Σ (−1)^n/n, and the series Σ |f_n(x)| converges absolutely at each point by the geometric series test, but Σ |f_n(x)| does not converge uniformly. Intuitively, this is because the absolute-convergence gets slower and slower as x approaches −1, where convergence holds but absolute convergence fails. == Generalizations == If a series of functions is uniformly absolutely-convergent on some neighborhood of each point of a topological space, it is locally uniformly absolutely-convergent.
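The counterexample in the Distinctions section can be probed numerically: for the series of absolute values Σ |x|ⁿ/n, the tail beyond N terms is tiny at a fixed interior point but stays bounded away from zero (indeed grows) as |x| → 1, so convergence of the absolute series is not uniform. A sketch with illustrative cutoffs:

```python
def abs_tail(x_abs, N, M=200000):
    """Tail sum_{n=N+1}^{M} |x|^n / n of the series of absolute values."""
    return sum(x_abs ** n / n for n in range(N + 1, M + 1))

N = 100
# at a fixed interior point the tail is negligible ...
print(abs_tail(0.5, N))
# ... but it stays order-one (and grows) as |x| approaches 1,
# so the supremum over x of the tail does not go to zero: not uniform
print(abs_tail(0.999, N), abs_tail(0.9999, N))
```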
If a series is uniformly absolutely-convergent on all compact subsets of a topological space, it is compactly (uniformly) absolutely-convergent. If the topological space is locally compact, these notions are equivalent. == Properties == If a series of functions into C (or any Banach space) is uniformly absolutely-convergent, then it is uniformly convergent. Uniform absolute-convergence is independent of the ordering of a series. This is because, for a series of nonnegative functions, uniform convergence is equivalent to the property that, for any ε > 0, there are finitely many terms of the series such that excluding these terms results in a series with total sum less than the constant function ε, and this property does not refer to the ordering. == See also == Modes of convergence (annotated index) == References ==
Wikipedia:Uniform boundedness#0
In mathematics, the uniform boundedness principle or Banach–Steinhaus theorem is one of the fundamental results in functional analysis. Together with the Hahn–Banach theorem and the open mapping theorem, it is considered one of the cornerstones of the field. In its basic form, it asserts that for a family of continuous linear operators (and thus bounded operators) whose domain is a Banach space, pointwise boundedness is equivalent to uniform boundedness in operator norm. The theorem was first published in 1927 by Stefan Banach and Hugo Steinhaus, but it was also proven independently by Hans Hahn. == Theorem == The first inequality (that is, sup T ∈ F ‖ T ( x ) ‖ < ∞ {\textstyle \sup _{T\in F}\|T(x)\|<\infty } for all x {\displaystyle x} ) states that the functionals in F {\displaystyle F} are pointwise bounded while the second states that they are uniformly bounded. The second supremum always equals sup T ∈ F ‖ T ‖ B ( X , Y ) = sup ‖ x ‖ ≤ 1 T ∈ F ‖ T ( x ) ‖ Y = sup T ∈ F sup ‖ x ‖ ≤ 1 ‖ T ( x ) ‖ Y {\displaystyle \sup _{T\in F}\|T\|_{B(X,Y)}=\sup _{\stackrel {T\in F}{\|x\|\leq 1}}\|T(x)\|_{Y}=\sup _{T\in F}\sup _{\|x\|\leq 1}\|T(x)\|_{Y}} and if X {\displaystyle X} is not the trivial vector space (or if the supremum is taken over [ 0 , ∞ ] {\displaystyle [0,\infty ]} rather than [ − ∞ , ∞ ] {\displaystyle [-\infty ,\infty ]} ) then closed unit ball can be replaced with the unit sphere sup T ∈ F ‖ T ‖ B ( X , Y ) = sup ‖ x ‖ = 1 T ∈ F , ‖ T ( x ) ‖ Y . {\displaystyle \sup _{T\in F}\|T\|_{B(X,Y)}=\sup _{\stackrel {T\in F,}{\|x\|=1}}\|T(x)\|_{Y}.} The completeness of the Banach space X {\displaystyle X} enables the following short proof, using the Baire category theorem. There are also simple proofs not using the Baire theorem (Sokal 2011). == Corollaries == The above corollary does not claim that T n {\displaystyle T_{n}} converges to T {\displaystyle T} in operator norm, that is, uniformly on bounded sets. 
However, since { T n } {\displaystyle \left\{T_{n}\right\}} is bounded in operator norm, and the limit operator T {\displaystyle T} is continuous, a standard " 3 ε {\displaystyle 3\varepsilon } " estimate shows that T n {\displaystyle T_{n}} converges to T {\displaystyle T} uniformly on compact sets. Indeed, the elements of S {\displaystyle S} define a pointwise bounded family of continuous linear forms on the Banach space X := Y ′ , {\displaystyle X:=Y',} which is the continuous dual space of Y . {\displaystyle Y.} By the uniform boundedness principle, the norms of elements of S , {\displaystyle S,} as functionals on X , {\displaystyle X,} that is, norms in the second dual Y ″ , {\displaystyle Y'',} are bounded. But for every s ∈ S , {\displaystyle s\in S,} the norm in the second dual coincides with the norm in Y , {\displaystyle Y,} by a consequence of the Hahn–Banach theorem. Let L ( X , Y ) {\displaystyle L(X,Y)} denote the continuous operators from X {\displaystyle X} to Y , {\displaystyle Y,} endowed with the operator norm. If the collection F {\displaystyle F} is unbounded in L ( X , Y ) , {\displaystyle L(X,Y),} then the uniform boundedness principle implies: R = { x ∈ X : sup T ∈ F ‖ T x ‖ Y = ∞ } ≠ ∅ . {\displaystyle R=\left\{x\in X\ :\ \sup \nolimits _{T\in F}\|Tx\|_{Y}=\infty \right\}\neq \varnothing .} In fact, R {\displaystyle R} is dense in X . {\displaystyle X.} The complement of R {\displaystyle R} in X {\displaystyle X} is the countable union of closed sets ⋃ X n . {\textstyle \bigcup X_{n}.} By the argument used in proving the theorem, each X n {\displaystyle X_{n}} is nowhere dense, i.e. the subset ⋃ X n {\textstyle \bigcup X_{n}} is of first category. Therefore R {\displaystyle R} is the complement of a subset of first category in a Baire space. By definition of a Baire space, such sets (called comeagre or residual sets) are dense. 
Such reasoning leads to the principle of condensation of singularities, which can be formulated as follows: == Example: pointwise convergence of Fourier series == Let T {\displaystyle \mathbb {T} } be the circle, and let C ( T ) {\displaystyle C(\mathbb {T} )} be the Banach space of continuous functions on T , {\displaystyle \mathbb {T} ,} with the uniform norm. Using the uniform boundedness principle, one can show that there exists an element in C ( T ) {\displaystyle C(\mathbb {T} )} for which the Fourier series does not converge pointwise. For f ∈ C ( T ) , {\displaystyle f\in C(\mathbb {T} ),} its Fourier series is defined by ∑ k ∈ Z f ^ ( k ) e i k x = ∑ k ∈ Z 1 2 π ( ∫ 0 2 π f ( t ) e − i k t d t ) e i k x , {\displaystyle \sum _{k\in \mathbb {Z} }{\hat {f}}(k)e^{ikx}=\sum _{k\in \mathbb {Z} }{\frac {1}{2\pi }}\left(\int _{0}^{2\pi }f(t)e^{-ikt}dt\right)e^{ikx},} and the N-th symmetric partial sum is S N ( f ) ( x ) = ∑ k = − N N f ^ ( k ) e i k x = 1 2 π ∫ 0 2 π f ( t ) D N ( x − t ) d t , {\displaystyle S_{N}(f)(x)=\sum _{k=-N}^{N}{\hat {f}}(k)e^{ikx}={\frac {1}{2\pi }}\int _{0}^{2\pi }f(t)D_{N}(x-t)\,dt,} where D N {\displaystyle D_{N}} is the N {\displaystyle N} -th Dirichlet kernel. Fix x ∈ T {\displaystyle x\in \mathbb {T} } and consider the convergence of { S N ( f ) ( x ) } . {\displaystyle \left\{S_{N}(f)(x)\right\}.} The functional φ N , x : C ( T ) → C {\displaystyle \varphi _{N,x}:C(\mathbb {T} )\to \mathbb {C} } defined by φ N , x ( f ) = S N ( f ) ( x ) , f ∈ C ( T ) , {\displaystyle \varphi _{N,x}(f)=S_{N}(f)(x),\qquad f\in C(\mathbb {T} ),} is bounded. The norm of φ N , x , {\displaystyle \varphi _{N,x},} in the dual of C ( T ) , {\displaystyle C(\mathbb {T} ),} is the norm of the signed measure ( 2 π ) − 1 D N ( x − t ) d t , {\displaystyle (2\pi )^{-1}D_{N}(x-t)dt,} namely ‖ φ N , x ‖ = 1 2 π ∫ 0 2 π | D N ( x − t ) | d t = 1 2 π ∫ 0 2 π | D N ( s ) | d s = ‖ D N ‖ L 1 ( T ) . 
{\displaystyle \left\|\varphi _{N,x}\right\|={\frac {1}{2\pi }}\int _{0}^{2\pi }\left|D_{N}(x-t)\right|\,dt={\frac {1}{2\pi }}\int _{0}^{2\pi }\left|D_{N}(s)\right|\,ds=\left\|D_{N}\right\|_{L^{1}(\mathbb {T} )}.} It can be verified that 1 2 π ∫ 0 2 π | D N ( t ) | d t ≥ 1 2 π ∫ 0 2 π | sin ⁡ ( ( N + 1 2 ) t ) | t / 2 d t → ∞ . {\displaystyle {\frac {1}{2\pi }}\int _{0}^{2\pi }|D_{N}(t)|\,dt\geq {\frac {1}{2\pi }}\int _{0}^{2\pi }{\frac {\left|\sin \left((N+{\tfrac {1}{2}})t\right)\right|}{t/2}}\,dt\to \infty .} So the collection ( φ N , x ) {\displaystyle \left(\varphi _{N,x}\right)} is unbounded in C ( T ) ∗ , {\displaystyle C(\mathbb {T} )^{\ast },} the dual of C ( T ) . {\displaystyle C(\mathbb {T} ).} Therefore, by the uniform boundedness principle, for any x ∈ T , {\displaystyle x\in \mathbb {T} ,} the set of continuous functions whose Fourier series diverges at x {\displaystyle x} is dense in C ( T ) . {\displaystyle C(\mathbb {T} ).} More can be concluded by applying the principle of condensation of singularities. Let ( x m ) {\displaystyle \left(x_{m}\right)} be a dense sequence in T . {\displaystyle \mathbb {T} .} Define φ N , x m {\displaystyle \varphi _{N,x_{m}}} in a similar way to the above. The principle of condensation of singularities then says that the set of continuous functions whose Fourier series diverges at each x m {\displaystyle x_{m}} is dense in C ( T ) {\displaystyle C(\mathbb {T} )} (however, the Fourier series of a continuous function f {\displaystyle f} converges to f ( x ) {\displaystyle f(x)} for almost every x ∈ T , {\displaystyle x\in \mathbb {T} ,} by Carleson's theorem). == Generalizations == In a topological vector space (TVS) X , {\displaystyle X,} "bounded subset" refers specifically to the notion of a von Neumann bounded subset. 
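The unboundedness of the norms ‖ D N ‖ L 1 ( T ) {\displaystyle \left\|D_{N}\right\|_{L^{1}(\mathbb {T} )}} (the Lebesgue constants) invoked in the Fourier-series example above can be checked numerically. The following is an illustrative sketch only, not part of the proof; the midpoint-rule step count is an arbitrary choice:

```python
import math

def dirichlet_l1_norm(N, steps=100000):
    """Approximate (1/2pi) * integral over [0, 2pi] of |D_N(t)| dt by a
    midpoint Riemann sum, where D_N(t) = sin((N + 1/2) t) / sin(t / 2)."""
    h = 2 * math.pi / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * h  # midpoints avoid the singularity of sin(t/2) at 0
        total += abs(math.sin((N + 0.5) * t) / math.sin(t / 2)) * h
    return total / (2 * math.pi)

# The values grow without bound (asymptotically like (4/pi^2) log N),
# so sup over N of the functional norms is infinite.
for N in (1, 10, 100, 1000):
    print(N, round(dirichlet_l1_norm(N), 3))
```

The printed values increase with N, which is exactly the hypothesis needed to apply the uniform boundedness principle in the argument above.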
If X {\displaystyle X} happens to also be a normed or seminormed space, say with (semi)norm ‖ ⋅ ‖ , {\displaystyle \|\cdot \|,} then a subset B {\displaystyle B} is (von Neumann) bounded if and only if it is norm bounded, which by definition means sup b ∈ B ‖ b ‖ < ∞ . {\textstyle \sup _{b\in B}\|b\|<\infty .} === Barrelled spaces === Attempts to find classes of locally convex topological vector spaces on which the uniform boundedness principle holds eventually led to barrelled spaces. That is, the least restrictive setting for the uniform boundedness principle is a barrelled space, where the following generalized version of the theorem holds (Bourbaki 1987, Theorem III.2.1): === Uniform boundedness in topological vector spaces === A family B {\displaystyle {\mathcal {B}}} of subsets of a topological vector space Y {\displaystyle Y} is said to be uniformly bounded in Y , {\displaystyle Y,} if there exists some bounded subset D {\displaystyle D} of Y {\displaystyle Y} such that B ⊆ D for every B ∈ B , {\displaystyle B\subseteq D\quad {\text{ for every }}B\in {\mathcal {B}},} which happens if and only if ⋃ B ∈ B B {\displaystyle \bigcup _{B\in {\mathcal {B}}}B} is a bounded subset of Y {\displaystyle Y} ; if Y {\displaystyle Y} is a normed space then this happens if and only if there exists some real M ≥ 0 {\displaystyle M\geq 0} such that sup B ∈ B b ∈ B ‖ b ‖ ≤ M . 
{\textstyle \sup _{\stackrel {b\in B}{B\in {\mathcal {B}}}}\|b\|\leq M.} In particular, if H {\displaystyle H} is a family of maps from X {\displaystyle X} to Y {\displaystyle Y} and if C ⊆ X {\displaystyle C\subseteq X} then the family { h ( C ) : h ∈ H } {\displaystyle \{h(C):h\in H\}} is uniformly bounded in Y {\displaystyle Y} if and only if there exists some bounded subset D {\displaystyle D} of Y {\displaystyle Y} such that h ( C ) ⊆ D for all h ∈ H , {\displaystyle h(C)\subseteq D{\text{ for all }}h\in H,} which happens if and only if H ( C ) := ⋃ h ∈ H h ( C ) {\textstyle H(C):=\bigcup _{h\in H}h(C)} is a bounded subset of Y . {\displaystyle Y.} === Generalizations involving nonmeager subsets === Although the notion of a nonmeager set is used in the following version of the uniform boundedness principle, the domain X {\displaystyle X} is not assumed to be a Baire space. Every proper vector subspace of a TVS X {\displaystyle X} has an empty interior in X . {\displaystyle X.} So in particular, every proper vector subspace that is closed is nowhere dense in X {\displaystyle X} and thus of the first category (meager) in X {\displaystyle X} (and the same is thus also true of all its subsets). Consequently, any vector subspace of a TVS X {\displaystyle X} that is of the second category (nonmeager) in X {\displaystyle X} must be a dense subset of X {\displaystyle X} (since otherwise its closure in X {\displaystyle X} would be a closed proper vector subspace of X {\displaystyle X} and thus of the first category). === Sequences of continuous linear maps === The following theorem establishes conditions for the pointwise limit of a sequence of continuous linear maps to be itself continuous. If in addition the domain is a Banach space and the codomain is a normed space then ‖ h ‖ ≤ lim inf n → ∞ ‖ h n ‖ < ∞ . 
{\displaystyle \|h\|\leq \liminf _{n\to \infty }\left\|h_{n}\right\|<\infty .} ==== Complete metrizable domain ==== Dieudonné (1970) proves a weaker form of this theorem with Fréchet spaces rather than the usual Banach spaces. == See also == Barrelled space – Type of topological vector space Ursescu theorem – Generalization of closed graph, open mapping, and uniform boundedness theorem == Notes == == Citations == == Bibliography == Banach, Stefan; Steinhaus, Hugo (1927), "Sur le principe de la condensation de singularités" (PDF), Fundamenta Mathematicae, 9: 50–61, doi:10.4064/fm-9-1-50-61. (in French) Banach, Stefan (1932). Théorie des Opérations Linéaires [Theory of Linear Operations] (PDF). Monografie Matematyczne (in French). Vol. 1. Warszawa: Subwencji Funduszu Kultury Narodowej. Zbl 0005.20901. Archived from the original (PDF) on 2014-01-11. Retrieved 2020-07-11. Bourbaki, Nicolas (1987) [1981]. Topological Vector Spaces: Chapters 1–5. Éléments de mathématique. Translated by Eggleston, H.G.; Madan, S. Berlin New York: Springer-Verlag. ISBN 3-540-13627-4. OCLC 17499190. Dieudonné, Jean (1970), Treatise on analysis, Volume 2, Academic Press. Husain, Taqdir; Khaleelulla, S. M. (1978). Barrelledness in Topological and Ordered Vector Spaces. Lecture Notes in Mathematics. Vol. 692. Berlin, New York, Heidelberg: Springer-Verlag. ISBN 978-3-540-09096-0. OCLC 4493665. Khaleelulla, S. M. (1982). Counterexamples in Topological Vector Spaces. Lecture Notes in Mathematics. Vol. 936. Berlin, Heidelberg, New York: Springer-Verlag. ISBN 978-3-540-11565-6. OCLC 8588370. Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. Rudin, Walter (1966), Real and complex analysis, McGraw-Hill. Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. 
ISBN 978-0-07-054236-5. OCLC 21163277. Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. Schechter, Eric (1996). Handbook of Analysis and Its Foundations. San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365. Shtern, A.I. (2001) [1994], "Uniform boundedness principle", Encyclopedia of Mathematics, EMS Press. Sokal, Alan (2011), "A really simple elementary proof of the uniform boundedness theorem", Amer. Math. Monthly, 118 (5): 450–452, arXiv:1005.1585, doi:10.4169/amer.math.monthly.118.05.450, S2CID 41853641. Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322. Wilansky, Albert (2013). Modern Methods in Topological Vector Spaces. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-49353-4. OCLC 849801114.
Wikipedia:Uniform continuity#0
In mathematics, a real function f {\displaystyle f} of real numbers is said to be uniformly continuous if there is a positive real number δ {\displaystyle \delta } such that function values over any function domain interval of the size δ {\displaystyle \delta } are as close to each other as we want. In other words, for a uniformly continuous real function of real numbers, if we want function value differences to be less than any positive real number ε {\displaystyle \varepsilon } , then there is a positive real number δ {\displaystyle \delta } such that | f ( x ) − f ( y ) | < ε {\displaystyle |f(x)-f(y)|<\varepsilon } for any x {\displaystyle x} and y {\displaystyle y} in any interval of length δ {\displaystyle \delta } within the domain of f {\displaystyle f} . The difference between uniform continuity and (ordinary) continuity is that, in uniform continuity there is a globally applicable δ {\displaystyle \delta } (the size of a function domain interval over which function value differences are less than ε {\displaystyle \varepsilon } ) that depends on only ε {\displaystyle \varepsilon } , while in (ordinary) continuity there is a locally applicable δ {\displaystyle \delta } that depends on both ε {\displaystyle \varepsilon } and x {\displaystyle x} . So uniform continuity is a stronger continuity condition than continuity; a function that is uniformly continuous is continuous but a function that is continuous is not necessarily uniformly continuous. The concepts of uniform continuity and continuity can be expanded to functions defined between metric spaces. Continuous functions can fail to be uniformly continuous if they are unbounded on a bounded domain, such as f ( x ) = 1 x {\displaystyle f(x)={\tfrac {1}{x}}} on ( 0 , 1 ) {\displaystyle (0,1)} , or if their slopes become unbounded on an infinite domain, such as f ( x ) = x 2 {\displaystyle f(x)=x^{2}} on the real (number) line. 
However, any Lipschitz map between metric spaces is uniformly continuous, in particular any isometry (distance-preserving map). Although continuity can be defined for functions between general topological spaces, defining uniform continuity requires more structure. The concept relies on comparing the sizes of neighbourhoods of distinct points, so it requires a metric space, or more generally a uniform space. == Definition for functions on metric spaces == For a function f : X → Y {\displaystyle f:X\to Y} with metric spaces ( X , d 1 ) {\displaystyle (X,d_{1})} and ( Y , d 2 ) {\displaystyle (Y,d_{2})} , the following definitions of uniform continuity and (ordinary) continuity hold. === Definition of uniform continuity === f {\displaystyle f} is called uniformly continuous if for every real number ε > 0 {\displaystyle \varepsilon >0} there exists a real number δ > 0 {\displaystyle \delta >0} such that for every x , y ∈ X {\displaystyle x,y\in X} with d 1 ( x , y ) < δ {\displaystyle d_{1}(x,y)<\delta } , we have d 2 ( f ( x ) , f ( y ) ) < ε {\displaystyle d_{2}(f(x),f(y))<\varepsilon } . The set { y ∈ X : d 1 ( x , y ) < δ } {\displaystyle \{y\in X:d_{1}(x,y)<\delta \}} for each x {\displaystyle x} is a neighbourhood of x {\displaystyle x} and the set { x ∈ X : d 1 ( x , y ) < δ } {\displaystyle \{x\in X:d_{1}(x,y)<\delta \}} for each y {\displaystyle y} is a neighbourhood of y {\displaystyle y} by the definition of a neighbourhood in a metric space. 
If X {\displaystyle X} and Y {\displaystyle Y} are subsets of the real line, then d 1 {\displaystyle d_{1}} and d 2 {\displaystyle d_{2}} can be the standard one-dimensional Euclidean distance, yielding the following definition: for every real number ε > 0 {\displaystyle \varepsilon >0} there exists a real number δ > 0 {\displaystyle \delta >0} such that for every x , y ∈ X {\displaystyle x,y\in X} , | x − y | < δ ⟹ | f ( x ) − f ( y ) | < ε {\displaystyle |x-y|<\delta \implies |f(x)-f(y)|<\varepsilon } (where A ⟹ B {\displaystyle A\implies B} is a material conditional statement saying "if A {\displaystyle A} , then B {\displaystyle B} "). Equivalently, f {\displaystyle f} is said to be uniformly continuous if ∀ ε > 0 ∃ δ > 0 ∀ x ∈ X ∀ y ∈ X : d 1 ( x , y ) < δ ⇒ d 2 ( f ( x ) , f ( y ) ) < ε {\displaystyle \forall \varepsilon >0\;\exists \delta >0\;\forall x\in X\;\forall y\in X:\,d_{1}(x,y)<\delta \,\Rightarrow \,d_{2}(f(x),f(y))<\varepsilon } . Here quantifications ( ∀ ε > 0 {\displaystyle \forall \varepsilon >0} , ∃ δ > 0 {\displaystyle \exists \delta >0} , ∀ x ∈ X {\displaystyle \forall x\in X} , and ∀ y ∈ X {\displaystyle \forall y\in X} ) are used. Equivalently, f {\displaystyle f} is uniformly continuous if it admits a modulus of continuity. === Definition of (ordinary) continuity === f {\displaystyle f} is called continuous at x _ {\displaystyle {\underline {{\text{at }}x}}} if for every real number ε > 0 {\displaystyle \varepsilon >0} there exists a real number δ > 0 {\displaystyle \delta >0} such that for every y ∈ X {\displaystyle y\in X} with d 1 ( x , y ) < δ {\displaystyle d_{1}(x,y)<\delta } , we have d 2 ( f ( x ) , f ( y ) ) < ε {\displaystyle d_{2}(f(x),f(y))<\varepsilon } . The set { y ∈ X : d 1 ( x , y ) < δ } {\displaystyle \{y\in X:d_{1}(x,y)<\delta \}} is a neighbourhood of x {\displaystyle x} . Thus, (ordinary) continuity is a local property of the function at the point x {\displaystyle x} . 
Equivalently, a function f {\displaystyle f} is said to be continuous if ∀ x ∈ X ∀ ε > 0 ∃ δ > 0 ∀ y ∈ X : d 1 ( x , y ) < δ ⇒ d 2 ( f ( x ) , f ( y ) ) < ε {\displaystyle \forall x\in X\;\forall \varepsilon >0\;\exists \delta >0\;\forall y\in X:\,d_{1}(x,y)<\delta \,\Rightarrow \,d_{2}(f(x),f(y))<\varepsilon } . Alternatively, a function f {\displaystyle f} is said to be continuous if there is a function of all positive real numbers ε {\displaystyle \varepsilon } and x ∈ X {\displaystyle x\in X} , δ ( ε , x ) {\displaystyle \delta (\varepsilon ,x)} representing the maximum positive real number, such that at each x {\displaystyle x} if y ∈ X {\displaystyle y\in X} satisfies d 1 ( x , y ) < δ ( ε , x ) {\displaystyle d_{1}(x,y)<\delta (\varepsilon ,x)} then d 2 ( f ( x ) , f ( y ) ) < ε {\displaystyle d_{2}(f(x),f(y))<\varepsilon } . At every x {\displaystyle x} , δ ( ε , x ) {\displaystyle \delta (\varepsilon ,x)} is a monotonically non-decreasing function. == Local continuity versus global uniform continuity == In the definitions, the difference between uniform continuity and continuity is that, in uniform continuity there is a globally applicable δ {\displaystyle \delta } (the size of a neighbourhood in X {\displaystyle X} over which values of the metric for function values in Y {\displaystyle Y} are less than ε {\displaystyle \varepsilon } ) that depends on only ε {\displaystyle \varepsilon } while in continuity there is a locally applicable δ {\displaystyle \delta } that depends on the both ε {\displaystyle \varepsilon } and x {\displaystyle x} . Continuity is a local property of a function — that is, a function f {\displaystyle f} is continuous, or not, at a particular point x {\displaystyle x} of the function domain X {\displaystyle X} , and this can be determined by looking at only the values of the function in an arbitrarily small neighbourhood of that point. 
When we speak of a function being continuous on an interval, we mean that the function is continuous at every point of the interval. In contrast, uniform continuity is a global property of f {\displaystyle f} , in the sense that the standard definition of uniform continuity refers to every point of X {\displaystyle X} . On the other hand, it is possible to give a definition that is local in terms of the natural extension f ∗ {\displaystyle f^{*}} (the characteristics of which at nonstandard points are determined by the global properties of f {\displaystyle f} ), although it is not possible to give a local definition of uniform continuity for an arbitrary hyperreal-valued function, see below. A mathematical definition that a function f {\displaystyle f} is continuous on an interval I {\displaystyle I} and a definition that f {\displaystyle f} is uniformly continuous on I {\displaystyle I} are structurally similar as shown in the following. Continuity of a function f : X → Y {\displaystyle f:X\to Y} for metric spaces ( X , d 1 ) {\displaystyle (X,d_{1})} and ( Y , d 2 ) {\displaystyle (Y,d_{2})} at every point x {\displaystyle x} of an interval I ⊆ X {\displaystyle I\subseteq X} (i.e., continuity of f {\displaystyle f} on the interval I {\displaystyle I} ) is expressed by a formula starting with quantifications ∀ x ∈ I ∀ ε > 0 ∃ δ > 0 ∀ y ∈ I : d 1 ( x , y ) < δ ⇒ d 2 ( f ( x ) , f ( y ) ) < ε {\displaystyle \forall x\in I\;\forall \varepsilon >0\;\exists \delta >0\;\forall y\in I:\,d_{1}(x,y)<\delta \,\Rightarrow \,d_{2}(f(x),f(y))<\varepsilon } , (metrics d 1 ( x , y ) {\displaystyle d_{1}(x,y)} and d 2 ( f ( x ) , f ( y ) ) {\displaystyle d_{2}(f(x),f(y))} are | x − y | {\displaystyle |x-y|} and | f ( x ) − f ( y ) | {\displaystyle |f(x)-f(y)|} for f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } for the set of real numbers R {\displaystyle \mathbb {R} } ). 
For uniform continuity, the order of the first, second, and third quantifications ( ∀ x ∈ I {\displaystyle \forall x\in I} , ∀ ε > 0 {\displaystyle \forall \varepsilon >0} , and ∃ δ > 0 {\displaystyle \exists \delta >0} ) are rotated: ∀ ε > 0 ∃ δ > 0 ∀ x ∈ I ∀ y ∈ I : d 1 ( x , y ) < δ ⇒ d 2 ( f ( x ) , f ( y ) ) < ε {\displaystyle \forall \varepsilon >0\;\exists \delta >0\;\forall x\in I\;\forall y\in I:\,d_{1}(x,y)<\delta \,\Rightarrow \,d_{2}(f(x),f(y))<\varepsilon } . Thus for continuity on the interval, one takes an arbitrary point x {\displaystyle x} of the interval, and then there must exist a distance δ {\displaystyle \delta } , ⋯ ∀ x ∃ δ ⋯ , {\displaystyle \cdots \forall x\,\exists \delta \cdots ,} while for uniform continuity, a single δ {\displaystyle \delta } must work uniformly for all points x {\displaystyle x} of the interval, ⋯ ∃ δ ∀ x ⋯ . {\displaystyle \cdots \exists \delta \,\forall x\cdots .} == Properties == Every uniformly continuous function is continuous, but the converse does not hold. Consider for instance the continuous function f : R → R , x ↦ x 2 {\displaystyle f\colon \mathbb {R} \rightarrow \mathbb {R} ,x\mapsto x^{2}} where R {\displaystyle \mathbb {R} } is the set of real numbers. Given a positive real number ε {\displaystyle \varepsilon } , uniform continuity requires the existence of a positive real number δ {\displaystyle \delta } such that for all x 1 , x 2 ∈ R {\displaystyle x_{1},x_{2}\in \mathbb {R} } with | x 1 − x 2 | < δ {\displaystyle |x_{1}-x_{2}|<\delta } , we have | f ( x 1 ) − f ( x 2 ) | < ε {\displaystyle |f(x_{1})-f(x_{2})|<\varepsilon } . 
But f ( x + δ ) − f ( x ) = 2 x ⋅ δ + δ 2 , {\displaystyle f\left(x+\delta \right)-f(x)=2x\cdot \delta +\delta ^{2},} and as x {\displaystyle x} grows larger, δ {\displaystyle \delta } must be taken smaller and smaller to satisfy | f ( x + β ) − f ( x ) | < ε {\displaystyle |f(x+\beta )-f(x)|<\varepsilon } for positive real numbers β < δ {\displaystyle \beta <\delta } and the given ε {\displaystyle \varepsilon } . This means that no single positive real number δ {\displaystyle \delta } , however small, satisfies the condition for f {\displaystyle f} to be uniformly continuous, so f {\displaystyle f} is not uniformly continuous. Any absolutely continuous function (over a compact interval) is uniformly continuous. On the other hand, the Cantor function is uniformly continuous but not absolutely continuous. The image of a totally bounded subset under a uniformly continuous function is totally bounded. However, the image of a bounded subset of an arbitrary metric space under a uniformly continuous function need not be bounded: as a counterexample, consider the identity function from the integers endowed with the discrete metric to the integers endowed with the usual Euclidean metric. The Heine–Cantor theorem asserts that every continuous function on a compact set is uniformly continuous. In particular, if a function is continuous on a closed bounded interval of the real line, it is uniformly continuous on that interval. The Darboux integrability of continuous functions follows almost immediately from this theorem. If a real-valued function f {\displaystyle f} is continuous on [ 0 , ∞ ) {\displaystyle [0,\infty )} and lim x → ∞ f ( x ) {\displaystyle \lim _{x\to \infty }f(x)} exists (and is finite), then f {\displaystyle f} is uniformly continuous. 
In particular, every element of C 0 ( R ) {\displaystyle C_{0}(\mathbb {R} )} , the space of continuous functions on R {\displaystyle \mathbb {R} } that vanish at infinity, is uniformly continuous. This is a generalization of the Heine-Cantor theorem mentioned above, since C c ( R ) ⊂ C 0 ( R ) {\displaystyle C_{c}(\mathbb {R} )\subset C_{0}(\mathbb {R} )} . == Examples and nonexamples == === Examples === Linear functions x ↦ a x + b {\displaystyle x\mapsto ax+b} are the simplest examples of uniformly continuous functions. Any continuous function on the interval [ 0 , 1 ] {\displaystyle [0,1]} is also uniformly continuous, since [ 0 , 1 ] {\displaystyle [0,1]} is a compact set. If a function is differentiable on an open interval and its derivative is bounded, then the function is uniformly continuous on that interval. Every Lipschitz continuous map between two metric spaces is uniformly continuous. More generally, every Hölder continuous function is uniformly continuous. The absolute value function is uniformly continuous, despite not being differentiable at x = 0 {\displaystyle x=0} . This shows uniformly continuous functions are not always differentiable. Despite being nowhere differentiable, the Weierstrass function is uniformly continuous. Every member of a uniformly equicontinuous set of functions is uniformly continuous. === Nonexamples === Functions that are unbounded on a bounded domain are not uniformly continuous. The tangent function is continuous on the interval ( − π / 2 , π / 2 ) {\displaystyle (-\pi /2,\pi /2)} but is not uniformly continuous on that interval, as it goes to infinity as x → π / 2 {\displaystyle x\to \pi /2} . Functions whose derivative tends to infinity as x {\displaystyle x} grows large cannot be uniformly continuous. 
The exponential function x ↦ e x {\displaystyle x\mapsto e^{x}} is continuous everywhere on the real line but is not uniformly continuous on the line, since its derivative is e x {\displaystyle e^{x}} , and e x → ∞ {\displaystyle e^{x}\to \infty } as x → ∞ {\displaystyle x\to \infty } . == Visualization == For a uniformly continuous function, for every positive real number ε > 0 {\displaystyle \varepsilon >0} there is a positive real number δ > 0 {\displaystyle \delta >0} such that the function values f ( x ) {\displaystyle f(x)} and f ( y ) {\displaystyle f(y)} are within distance ε {\displaystyle \varepsilon } of each other whenever x {\displaystyle x} and y {\displaystyle y} are within distance δ {\displaystyle \delta } . Thus at each point ( x , f ( x ) ) {\displaystyle (x,f(x))} of the graph, if we draw a rectangle with height slightly less than 2 ε {\displaystyle 2\varepsilon } and width slightly less than 2 δ {\displaystyle 2\delta } around that point, then the graph lies completely within the height of the rectangle, i.e., the graph does not pass through the top or the bottom side of the rectangle. For functions that are not uniformly continuous, this is not possible; for these functions, the graph might lie inside the height of the rectangle at some point on the graph but there is a point on the graph where the graph lies above or below the rectangle (that is, the graph penetrates the top or bottom side of the rectangle). == History == The first published definition of uniform continuity was by Heine in 1870, and in 1872 he published a proof that a continuous function on an open interval need not be uniformly continuous. The proofs are almost verbatim given by Dirichlet in his lectures on definite integrals in 1854. The definition of uniform continuity appears earlier in the work of Bolzano where he also proved that continuous functions on an open interval do not need to be uniformly continuous. 
In addition he also states that a continuous function on a closed interval is uniformly continuous, but he does not give a complete proof. == Other characterizations == === Non-standard analysis === In non-standard analysis, a real-valued function f {\displaystyle f} of a real variable is microcontinuous at a point a {\displaystyle a} precisely if the difference f ∗ ( a + δ ) − f ∗ ( a ) {\displaystyle f^{*}(a+\delta )-f^{*}(a)} is infinitesimal whenever δ {\displaystyle \delta } is infinitesimal. Thus f {\displaystyle f} is continuous on a set A {\displaystyle A} in R {\displaystyle \mathbb {R} } precisely if f ∗ {\displaystyle f^{*}} is microcontinuous at every real point a ∈ A {\displaystyle a\in A} . Uniform continuity can be expressed as the condition that (the natural extension of) f {\displaystyle f} is microcontinuous not only at real points in A {\displaystyle A} , but at all points in its non-standard counterpart (natural extension) ∗ A {\displaystyle ^{*}A} in ∗ R {\displaystyle ^{*}\mathbb {R} } . Note that there exist hyperreal-valued functions which meet this criterion but are not uniformly continuous, as well as uniformly continuous hyperreal-valued functions which do not meet this criterion; however, such functions cannot be expressed in the form f ∗ {\displaystyle f^{*}} for any real-valued function f {\displaystyle f} (see non-standard calculus for more details and examples). === Cauchy continuity === For a function between metric spaces, uniform continuity implies Cauchy continuity (Fitzpatrick 2006). More specifically, let A {\displaystyle A} be a subset of R n {\displaystyle \mathbb {R} ^{n}} . If a function f : A → R n {\displaystyle f:A\to \mathbb {R} ^{n}} is uniformly continuous then for every pair of sequences x n {\displaystyle x_{n}} and y n {\displaystyle y_{n}} such that lim n → ∞ | x n − y n | = 0 {\displaystyle \lim _{n\to \infty }|x_{n}-y_{n}|=0} we have lim n → ∞ | f ( x n ) − f ( y n ) | = 0. 
{\displaystyle \lim _{n\to \infty }|f(x_{n})-f(y_{n})|=0.} == Relations with the extension problem == Let X {\displaystyle X} be a metric space, S {\displaystyle S} a subset of X {\displaystyle X} , R {\displaystyle R} a complete metric space, and f : S → R {\displaystyle f:S\rightarrow R} a continuous function. A question to answer: When can f {\displaystyle f} be extended to a continuous function on all of X {\displaystyle X} ? If S {\displaystyle S} is closed in X {\displaystyle X} , the answer is given by the Tietze extension theorem. So it is necessary and sufficient to extend f {\displaystyle f} to the closure of S {\displaystyle S} in X {\displaystyle X} : that is, we may assume without loss of generality that S {\displaystyle S} is dense in X {\displaystyle X} , and this has the further pleasant consequence that if the extension exists, it is unique. A sufficient condition for f {\displaystyle f} to extend to a continuous function f : X → R {\displaystyle f:X\rightarrow R} is that it is Cauchy-continuous, i.e., the image under f {\displaystyle f} of a Cauchy sequence remains Cauchy. If X {\displaystyle X} is complete (and thus the completion of S {\displaystyle S} ), then every continuous function from X {\displaystyle X} to a metric space Y {\displaystyle Y} is Cauchy-continuous. Therefore when X {\displaystyle X} is complete, f {\displaystyle f} extends to a continuous function f : X → R {\displaystyle f:X\rightarrow R} if and only if f {\displaystyle f} is Cauchy-continuous. It is easy to see that every uniformly continuous function is Cauchy-continuous and thus extends to X {\displaystyle X} . The converse does not hold, since the function f : R → R , x ↦ x 2 {\displaystyle f:R\rightarrow R,x\mapsto x^{2}} is, as seen above, not uniformly continuous, but it is continuous and thus Cauchy continuous. In general, for functions defined on unbounded spaces like R {\displaystyle R} , uniform continuity is a rather strong condition. 
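The distinction just drawn — x ↦ x 2 {\displaystyle x\mapsto x^{2}} is Cauchy-continuous but not uniformly continuous — can be illustrated numerically. In the sketch below (an illustration only; the particular sequences are my choice), pairs x n = n {\displaystyle x_{n}=n} and y n = n + 1 / n {\displaystyle y_{n}=n+1/n} drift together, yet their images under squaring stay a fixed distance apart, violating the sequential criterion from the Cauchy-continuity section; an actual Cauchy sequence, by contrast, maps to a Cauchy sequence:

```python
# Sequential criterion for uniform continuity: if f is uniformly continuous
# and |x_n - y_n| -> 0, then |f(x_n) - f(y_n)| -> 0.
# For f(x) = x^2 this fails: with x_n = n and y_n = n + 1/n,
# (n + 1/n)^2 - n^2 = 2 + 1/n^2, which tends to 2, not 0.
gaps = [(n + 1 / n) ** 2 - n**2 for n in range(1, 1001)]
print(min(gaps))  # every image gap exceeds 2, although |x_n - y_n| = 1/n -> 0

# f is nevertheless Cauchy-continuous: the Cauchy sequence x_n = 1 + 1/n
# maps to f(x_n) = (1 + 1/n)^2, again a Cauchy sequence (converging to 1).
images = [(1 + 1 / n) ** 2 for n in range(1, 1001)]
print(abs(images[-1] - 1.0))  # the image sequence settles down near 1
```

The failing pair of sequences is not a Cauchy sequence paired with itself, which is why Cauchy continuity survives even though uniform continuity does not.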
It is desirable to have a weaker condition from which to deduce extendability. For example, suppose a > 1 {\displaystyle a>1} is a real number. At the precalculus level, the function f : x ↦ a x {\displaystyle f:x\mapsto a^{x}} can be given a precise definition only for rational values of x {\displaystyle x} (assuming the existence of qth roots of positive real numbers, an application of the Intermediate Value Theorem). One would like to extend f {\displaystyle f} to a function defined on all of R {\displaystyle R} . The identity f ( x + δ ) − f ( x ) = a x ( a δ − 1 ) {\displaystyle f(x+\delta )-f(x)=a^{x}\left(a^{\delta }-1\right)} shows that f {\displaystyle f} is not uniformly continuous on the set Q {\displaystyle Q} of all rational numbers; however, for any bounded interval I {\displaystyle I} the restriction of f {\displaystyle f} to Q ∩ I {\displaystyle Q\cap I} is uniformly continuous, hence Cauchy-continuous, hence f {\displaystyle f} extends to a continuous function on I {\displaystyle I} . But since this holds for every I {\displaystyle I} , there is then a unique extension of f {\displaystyle f} to a continuous function on all of R {\displaystyle R} . More generally, a continuous function f : S → R {\displaystyle f:S\rightarrow R} whose restriction to every bounded subset of S {\displaystyle S} is uniformly continuous is extendable to X {\displaystyle X} , and the converse holds if X {\displaystyle X} is locally compact. A typical application of the extendability of a uniformly continuous function is the proof of the inverse Fourier transformation formula. We first prove that the formula is true for test functions, which are dense in the space. We then extend the inverse map to the whole space using the fact that a continuous linear map is uniformly continuous. 
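The extension of x ↦ a x {\displaystyle x\mapsto a^{x}} from the rationals can be watched converging numerically. In the sketch below (an illustration only; Python's floating-point `**` stands in for the qth-root definition of a rational power), decimal truncations of 2 {\displaystyle {\sqrt {2}}} — a Cauchy sequence of rationals — are fed through f ( x ) = 2 x {\displaystyle f(x)=2^{x}} , and the image values form a Cauchy sequence whose limit is the value the extension assigns to 2 2 {\displaystyle 2^{\sqrt {2}}} :

```python
from fractions import Fraction
import math

# Decimal truncations of sqrt(2): r_k = floor(sqrt(2) * 10^k) / 10^k,
# a Cauchy sequence of rational numbers.
approx = [Fraction(math.isqrt(2 * 10 ** (2 * k)), 10**k) for k in range(1, 8)]

# Evaluate f(r) = 2**r at each rational r (float ** as a stand-in for the
# elementary qth-root definition of a rational power).
values = [2.0 ** float(r) for r in approx]
diffs = [abs(b - a) for a, b in zip(values, values[1:])]

print(values[-1])   # the image sequence settles on the value defining 2**sqrt(2)
print(diffs[-1] < diffs[0])  # successive image gaps shrink: the images are Cauchy
```

Because the restriction of f to a bounded rational interval is uniformly continuous, every such Cauchy sequence of rationals produces a Cauchy sequence of images, and all sequences converging to the same irrational produce the same limit.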
== Generalization to topological vector spaces == In the special case of two topological vector spaces V {\displaystyle V} and W {\displaystyle W} , the notion of uniform continuity of a map f : V → W {\displaystyle f:V\to W} becomes: for any neighborhood B {\displaystyle B} of zero in W {\displaystyle W} , there exists a neighborhood A {\displaystyle A} of zero in V {\displaystyle V} such that v 1 − v 2 ∈ A {\displaystyle v_{1}-v_{2}\in A} implies f ( v 1 ) − f ( v 2 ) ∈ B . {\displaystyle f(v_{1})-f(v_{2})\in B.} For linear transformations f : V → W {\displaystyle f:V\to W} , uniform continuity is equivalent to continuity. This fact is frequently used implicitly in functional analysis to extend a linear map off a dense subspace of a Banach space. == Generalization to uniform spaces == Just as the most natural and general setting for continuity is topological spaces, the most natural and general setting for the study of uniform continuity is that of uniform spaces. A function f : X → Y {\displaystyle f:X\to Y} between uniform spaces is called uniformly continuous if for every entourage V {\displaystyle V} in Y {\displaystyle Y} there exists an entourage U {\displaystyle U} in X {\displaystyle X} such that for every ( x 1 , x 2 ) {\displaystyle (x_{1},x_{2})} in U {\displaystyle U} we have ( f ( x 1 ) , f ( x 2 ) ) {\displaystyle (f(x_{1}),f(x_{2}))} in V {\displaystyle V} . In this setting, it is also true that uniformly continuous maps transform Cauchy sequences into Cauchy sequences. Each compact Hausdorff space possesses exactly one uniform structure compatible with the topology. A consequence is a generalization of the Heine-Cantor theorem: each continuous function from a compact Hausdorff space to a uniform space is uniformly continuous. 
== See also == Contraction mapping – Function reducing distance between all points Uniform convergence – Mode of convergence of a function sequence Uniform isomorphism – Uniformly continuous homeomorphism == References == == Further reading == Bourbaki, Nicolas (1989). General Topology: Chapters 1–4 [Topologie Générale]. Springer. ISBN 0-387-19374-X. Chapter II is a comprehensive reference of uniform spaces. Dieudonné, Jean (1960). Foundations of Modern Analysis. Academic Press. Fitzpatrick, Patrick (2006). Advanced Calculus. Brooks/Cole. ISBN 0-534-92612-6. Kelley, John L. (1955). General topology. Graduate Texts in Mathematics. Springer-Verlag. ISBN 0-387-90125-6. Kudryavtsev, L.D. (2001) [1994], "Uniform continuity", Encyclopedia of Mathematics, EMS Press Rudin, Walter (1976). Principles of Mathematical Analysis. New York: McGraw-Hill. ISBN 978-0-07-054235-8. Rusnock, P.; Kerr-Lawson, A. (2005), "Bolzano and uniform continuity", Historia Mathematica, 32 (3): 303–311, doi:10.1016/j.hm.2004.11.003
Wikipedia:Unimodality#0
In mathematics, unimodality means possessing a unique mode. More generally, unimodality means there is only a single highest value, somehow defined, of some mathematical object. == Unimodal probability distribution == In statistics, a unimodal probability distribution or unimodal distribution is a probability distribution which has a single peak. The term "mode" in this context refers to any peak of the distribution, not just to the strict definition of mode which is usual in statistics. If there is a single mode, the distribution function is called "unimodal". If it has more modes, it is "bimodal" (2), "trimodal" (3), etc., or in general, "multimodal". Figure 1 illustrates normal distributions, which are unimodal. Other examples of unimodal distributions include the Cauchy distribution, Student's t-distribution, the chi-squared distribution and the exponential distribution. Among discrete distributions, the binomial distribution and the Poisson distribution can be seen as unimodal, though for some parameters they can have two adjacent values with the same probability. Figure 2 and Figure 3 illustrate bimodal distributions. === Other definitions === Other definitions of unimodality in distribution functions also exist. In continuous distributions, unimodality can be defined through the behavior of the cumulative distribution function (cdf). If the cdf is convex for x < m and concave for x > m, then the distribution is unimodal, m being the mode. Note that under this definition the uniform distribution is unimodal, as is any other distribution in which the maximum is achieved over a whole range of values, e.g. the trapezoidal distribution. This definition also allows for a discontinuity at the mode: in a continuous distribution the probability of any single value is zero, while this definition allows for a non-zero probability, or an "atom of probability", at the mode.
Criteria for unimodality can also be defined through the characteristic function of the distribution or through its Laplace–Stieltjes transform. Another way to define a unimodal discrete distribution is by the occurrence of sign changes in the sequence of differences of the probabilities. A discrete distribution with a probability mass function, { p n : n = … , − 1 , 0 , 1 , … } {\displaystyle \{p_{n}:n=\dots ,-1,0,1,\dots \}} , is called unimodal if the sequence … , p − 2 − p − 1 , p − 1 − p 0 , p 0 − p 1 , p 1 − p 2 , … {\displaystyle \dots ,p_{-2}-p_{-1},p_{-1}-p_{0},p_{0}-p_{1},p_{1}-p_{2},\dots } has exactly one sign change (when zeroes don't count). === Uses and results === One reason for the importance of distribution unimodality is that it allows for several important results. Several inequalities are given below which are only valid for unimodal distributions. Thus, it is important to assess whether or not a given data set comes from a unimodal distribution. Several tests for unimodality are given in the article on multimodal distribution. === Inequalities === ==== Gauss's inequality ==== A first important result is Gauss's inequality. Gauss's inequality gives an upper bound on the probability that a value lies more than any given distance from its mode. This inequality depends on unimodality. ==== Vysochanskiï–Petunin inequality ==== A second is the Vysochanskiï–Petunin inequality, a refinement of the Chebyshev inequality. The Chebyshev inequality guarantees that in any probability distribution, "nearly all" the values are "close to" the mean value. The Vysochanskiï–Petunin inequality refines this to even nearer values, provided that the distribution function is continuous and unimodal. Further results were shown by Sellke and Sellke. 
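The sign-change criterion for discrete distributions can be implemented directly. A minimal sketch (the function name is illustrative) counts sign changes in the sequence of consecutive differences of a finite probability mass function, ignoring zero differences:

```python
from math import comb

def is_unimodal_pmf(p, tol=1e-12):
    """Test unimodality of a finite pmf via the sign-change criterion:
    the differences p[k+1] - p[k] may change sign at most once, and only
    from positive to negative (zero differences don't count)."""
    diffs = [b - a for a, b in zip(p, p[1:])]
    signs = [1 if d > tol else -1 for d in diffs if abs(d) > tol]
    changes = sum(1 for s, t in zip(signs, signs[1:]) if s != t)
    return changes <= 1 and (not signs or signs[0] >= signs[-1])

# Binomial(10, 0.3) pmf: rises to the mode at k = 3, then falls
binom = [comb(10, k) * 0.3**k * 0.7**(10 - k) for k in range(11)]
print(is_unimodal_pmf(binom))   # True

# A pmf with two separated peaks is not unimodal
bimodal = [0.3, 0.05, 0.3, 0.05, 0.3]
print(is_unimodal_pmf(bimodal))  # False
```

The extra check that the single sign change runs from positive to negative rules out "valley" shapes, which have two modes at the endpoints.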
==== Mode, median and mean ==== Gauss also showed in 1823 that for a unimodal distribution σ ≤ ω ≤ 2 σ {\displaystyle \sigma \leq \omega \leq 2\sigma } and | ν − μ | ≤ 3 4 ω , {\displaystyle |\nu -\mu |\leq {\sqrt {\frac {3}{4}}}\omega ,} where the median is ν, the mean is μ and ω is the root mean square deviation from the mode. It can be shown for a unimodal distribution that the median ν and the mean μ lie within (3/5)1/2 ≈ 0.7746 standard deviations of each other. In symbols, | ν − μ | σ ≤ 3 5 {\displaystyle {\frac {|\nu -\mu |}{\sigma }}\leq {\sqrt {\frac {3}{5}}}} where | . | is the absolute value. In 2020, Bernard, Kazzi, and Vanduffel generalized the previous inequality by deriving the maximum distance between the symmetric quantile average q α + q ( 1 − α ) 2 {\displaystyle {\frac {q_{\alpha }+q_{(1-\alpha )}}{2}}} and the mean, | q α + q ( 1 − α ) 2 − μ | σ ≤ { 4 9 ( 1 − α ) − 1 + 1 − α 1 / 3 + α 2 for α ∈ [ 5 6 , 1 ) , 3 α 4 − 3 α + 1 − α 1 / 3 + α 2 for α ∈ ( 1 6 , 5 6 ) , 3 α 4 − 3 α + 4 9 α − 1 2 for α ∈ ( 0 , 1 6 ] . {\displaystyle {\frac {\left|{\frac {q_{\alpha }+q_{(1-\alpha )}}{2}}-\mu \right|}{\sigma }}\leq \left\{{\begin{array}{cl}{\frac {{\sqrt[{}]{{\frac {4}{9(1-\alpha )}}-1}}{\text{ }}+{\text{ }}{\sqrt[{}]{\frac {1-\alpha }{1/3+\alpha }}}}{2}}&{\text{for }}\alpha \in \left[{\frac {5}{6}},1\right)\!,\\{\frac {{\sqrt[{}]{\frac {3\alpha }{4-3\alpha }}}{\text{ }}+{\text{ }}{\sqrt[{}]{\frac {1-\alpha }{1/3+\alpha }}}}{2}}&{\text{for }}\alpha \in \left({\frac {1}{6}},{\frac {5}{6}}\right)\!,\\{\frac {{\sqrt[{}]{\frac {3\alpha }{4-3\alpha }}}{\text{ }}+{\text{ }}{\sqrt[{}]{{\frac {4}{9\alpha }}-1}}}{2}}&{\text{for }}\alpha \in \left(0,{\frac {1}{6}}\right]\!.\end{array}}\right.} The maximum distance is minimized at α = 0.5 {\displaystyle \alpha =0.5} (i.e., when the symmetric quantile average is equal to q 0.5 = ν {\displaystyle q_{0.5}=\nu } ), which indeed motivates the common choice of the median as a robust estimator for the mean. 
Moreover, when α = 0.5 {\displaystyle \alpha =0.5} , the bound is equal to 3 / 5 {\displaystyle {\sqrt {3/5}}} , which is the maximum distance between the median and the mean of a unimodal distribution. A similar relation holds between the median and the mode θ: they lie within 31/2 ≈ 1.732 standard deviations of each other: | ν − θ | σ ≤ 3 . {\displaystyle {\frac {|\nu -\theta |}{\sigma }}\leq {\sqrt {3}}.} It can also be shown that the mean and the mode lie within 31/2 standard deviations of each other: | μ − θ | σ ≤ 3 . {\displaystyle {\frac {|\mu -\theta |}{\sigma }}\leq {\sqrt {3}}.} ==== Skewness and kurtosis ==== Rohatgi and Szekely claimed that the skewness and kurtosis of a unimodal distribution are related by the inequality: γ 2 − κ ≤ 6 5 = 1.2 {\displaystyle \gamma ^{2}-\kappa \leq {\frac {6}{5}}=1.2} where κ is the kurtosis and γ is the skewness. Klaassen, Mokveld, and van Es showed that this only applies in certain settings, such as the set of unimodal distributions where the mode and mean coincide. They derived a weaker inequality which applies to all unimodal distributions: γ 2 − κ ≤ 186 125 = 1.488 {\displaystyle \gamma ^{2}-\kappa \leq {\frac {186}{125}}=1.488} This bound is sharp, as it is reached by the equal-weights mixture of the uniform distribution on [0,1] and the discrete distribution at {0}. == Unimodal function == As the term "modal" applies to data sets and probability distributions, and not in general to functions, the definitions above do not apply. The definition of "unimodal" has been extended to functions of real numbers as well. A common definition is as follows: a function f(x) is a unimodal function if for some value m, it is monotonically increasing for x ≤ m and monotonically decreasing for x ≥ m. In that case, the maximum value of f(x) is f(m) and there are no other local maxima. Proving unimodality is often hard. One way consists in using the definition of that property directly, but this turns out to be suitable only for simple functions.
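The inequalities of the preceding subsection can be checked on a concrete unimodal law. For the exponential distribution with rate 1 all the relevant quantities are known exactly (mean μ = 1, median ν = ln 2, mode θ = 0, standard deviation σ = 1, skewness γ = 2, kurtosis κ = 9), so the bounds can be verified in a few lines:

```python
import math

# Exponential(1): mean mu = 1, median nu = ln 2, mode theta = 0, sigma = 1
mu, nu, theta, sigma = 1.0, math.log(2), 0.0, 1.0

assert abs(nu - mu) / sigma <= math.sqrt(3 / 5)   # median-mean bound
assert abs(nu - theta) / sigma <= math.sqrt(3)    # median-mode bound
assert abs(mu - theta) / sigma <= math.sqrt(3)    # mean-mode bound

# Skewness gamma = 2 and kurtosis kappa = 9 for Exponential(1)
gamma, kappa = 2.0, 9.0
assert gamma**2 - kappa <= 186 / 125              # Klaassen-Mokveld-van Es
print("all unimodal-distribution bounds hold for Exponential(1)")
```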
A general method based on derivatives exists, but it does not succeed for every function despite its simplicity. Examples of unimodal functions include quadratic polynomial functions with a negative quadratic coefficient, tent map functions, and more. The above is sometimes referred to as strong unimodality, from the fact that the monotonicity implied is strong monotonicity. A function f(x) is a weakly unimodal function if there exists a value m for which it is weakly monotonically increasing for x ≤ m and weakly monotonically decreasing for x ≥ m. In that case, the maximum value f(m) can be reached for a continuous range of values of x. An example of a weakly unimodal function which is not strongly unimodal is every other row in Pascal's triangle. Depending on context, unimodal function may also refer to a function that has only one local minimum, rather than maximum. For example, local unimodal sampling, a method for doing numerical optimization, is often demonstrated with such a function. It can be said that a unimodal function under this extension is a function with a single local extremum. One important property of unimodal functions is that the extremum can be found using search algorithms such as golden section search, ternary search or successive parabolic interpolation. == Other extensions == A function f(x) is "S-unimodal" (often referred to as "S-unimodal map") if its Schwarzian derivative is negative for all x ≠ c {\displaystyle x\neq c} , where c {\displaystyle c} is the critical point. In computational geometry, if a function is unimodal it permits the design of efficient algorithms for finding the extrema of the function. A more general definition, applicable to a function f(X) of a vector variable X, is that f is unimodal if there is a one-to-one differentiable mapping X = G(Z) such that f(G(Z)) is convex. Usually one would want G(Z) to be continuously differentiable with nonsingular Jacobian matrix.
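The search property mentioned above can be sketched concretely. The following is a minimal golden section search for the maximizer of a unimodal function on an interval; at each step the subinterval that cannot contain the maximum is discarded, shrinking the bracket by the inverse golden ratio:

```python
import math

def golden_section_max(f, a, b, tol=1e-8):
    """Locate the maximiser of a unimodal function f on [a, b]."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):                      # maximum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
        else:                                # maximum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
    return (a + b) / 2

# f(x) = -(x - 2)^2 is unimodal with its maximum at x = 2
x_star = golden_section_max(lambda x: -(x - 2) ** 2, 0.0, 5.0)
print(round(x_star, 6))  # 2.0
```

This sketch re-evaluates f at reused points for clarity; a production version would cache the two interior function values so each iteration costs only one evaluation.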
Quasiconvex functions and quasiconcave functions extend the concept of unimodality to functions whose arguments belong to higher-dimensional Euclidean spaces. == See also == Bimodal distribution Read's conjecture == References ==
Wikipedia:Unique homomorphic extension theorem#0
The unique homomorphic extension theorem is a result in mathematical logic which formalizes the intuition that the truth or falsity of a statement can be deduced from the truth values of its parts. == The lemma == Let A be a non-empty set, X a subset of A, F a set of functions on A, and X + {\displaystyle X_{+}} the inductive closure of X under F. Let B be any non-empty set and let G be a set of functions on B, together with a function d : F → G {\displaystyle d:F\to G} that assigns to each function f of arity n in F a function d ( f ) : B n → B {\displaystyle d(f):B^{n}\to B} in G (d need not be a bijection). From this lemma we can now build the concept of unique homomorphic extension. == The theorem == If X + {\displaystyle X_{+}} is a free set generated by X and F, then for each function h : X → B {\displaystyle h:X\to B} there is a unique function h ^ : X + → B {\displaystyle {\hat {h}}:X_{+}\to B} such that: ∀ x ∈ X , h ^ ( x ) = h ( x ) ; ( 1 ) {\displaystyle \forall x\in X,{\hat {h}}(x)=h(x);\qquad (1)} For each function f of arity n > 0, for each x 1 , … , x n ∈ X + n , {\displaystyle x_{1},\ldots ,x_{n}\in X_{+}^{n},} h ^ ( f ( x 1 , … , x n ) ) = g ( h ^ ( x 1 ) , … , h ^ ( x n ) ) , where g = d ( f ) ( 2 ) {\displaystyle {\hat {h}}(f(x_{1},\ldots ,x_{n}))=g({\hat {h}}(x_{1}),\ldots ,{\hat {h}}(x_{n})),{\text{ where }}g=d(f)\qquad (2)} == Consequence == The identities (1) and (2) show that h ^ {\displaystyle {\hat {h}}} is a homomorphism, specifically named the unique homomorphic extension of h {\displaystyle h} . To prove the theorem, two things must be shown: that the extension h ^ {\displaystyle {\hat {h}}} exists, and that it is unique. === Proof of the theorem === We define a sequence of functions h i : X i → B {\displaystyle h_{i}:X_{i}\to B} inductively, satisfying conditions (1) and (2) restricted to X i {\displaystyle X_{i}} .
For this, we define h 0 = h {\displaystyle h_{0}=h} , and given h i {\displaystyle h_{i}} then h i + 1 {\displaystyle h_{i+1}} shall have the following graph: { ( f ( x 1 , … , x n ) , g ( h i ( x 1 ) , … , h i ( x n ) ) ) ∣ ( x 1 , … , x n ) ∈ X i n − X i − 1 n , f ∈ F } ∪ graph ⁡ ( h i ) with g = d ( f ) {\displaystyle {\{(f(x_{1},\ldots ,x_{n}),g(h_{i}(x_{1}),\ldots ,h_{i}(x_{n})))\mid (x_{1},\ldots ,x_{n})\in X_{i}^{n}-X_{i-1}^{n},f\in F\}}\cup {\operatorname {graph} (h_{i})}{\text{ with }}g=d(f)} First we must be certain the graph actually has functionality, since X + {\displaystyle X_{+}} is a free set, from the lemma we have f ( x 1 , … , x n ) ∈ X i + 1 − X i {\displaystyle f(x_{1},\ldots ,x_{n})\in X_{i+1}-X_{i}} when ( x 1 , … , x n ) ∈ X i n − X i − 1 n , ( i ≥ 0 ) {\displaystyle (x_{1},\ldots ,x_{n})\in X_{i}^{n}-X_{i-1}^{n},(i\geq 0)} , so we only have to determine the functionality for the left side of the union. Knowing that the elements of G are functions(again, as defined by the lemma), the only instance where ( x , y ) ∈ g r a p h ( h i ) {\displaystyle (x,y)\in graph(h_{i})} and ( x , z ) ∈ g r a p h ( h i ) {\displaystyle (x,z)\in graph(h_{i})} for some x ∈ X i + 1 − X i {\displaystyle x\in X_{i+1}-X_{i}} is possible is if we have x = f ( x 1 , … , x m ) = f ′ ( y 1 , … , y n ) {\displaystyle x=f(x_{1},\ldots ,x_{m})=f'(y_{1},\ldots ,y_{n})} for some ( x 1 , … , x m ) ∈ X i m − X i − 1 m , ( y 1 , … , y n ) ∈ X i n − X i − 1 n {\displaystyle (x_{1},\ldots ,x_{m})\in X_{i}^{m}-X_{i-1}^{m},(y_{1},\ldots ,y_{n})\in X_{i}^{n}-X_{i-1}^{n}} and for some generators f {\displaystyle f} and f ′ {\displaystyle {f'}} in F {\displaystyle F} . Since f ( X + m ) {\displaystyle f(X_{+}^{m})} and f ′ ( X + n ) {\displaystyle {f'}(X_{+}^{n})} are disjoint when f ≠ f ′ , f ( x 1 , … , x m ) = f ′ ( y 1 , … , Y n ) {\displaystyle f\neq {f'},f(x_{1},\ldots ,x_{m})=f'(y_{1},\ldots ,Y_{n})} this implies f = f ′ {\displaystyle f=f'} and m = n {\displaystyle m=n} . 
Since, by freeness, every f ∈ F {\displaystyle f\in F} is injective on X + n {\displaystyle X_{+}^{n}} , we must have x j = y j , ∀ j , 1 ≤ j ≤ n {\displaystyle x_{j}=y_{j},\forall j,1\leq j\leq n} . Then we have y = z = g ( x 1 , … , x n ) {\displaystyle y=z=g(x_{1},\ldots ,x_{n})} with g = d ( f ) {\displaystyle g=d(f)} , establishing functionality. Before moving further we need a lemma about partial functions, which may be stated as follows: (3) Let ( f n ) n ≥ 0 {\displaystyle (f_{n})_{n\geq 0}} be a sequence of partial functions f n : A → B {\displaystyle f_{n}:A\to B} such that f n ⊆ f n + 1 , ∀ n ≥ 0 {\displaystyle f_{n}\subseteq f_{n+1},\forall n\geq 0} . Then, g = ( A , ⋃ g r a p h ( f n ) , B ) {\displaystyle g=(A,\bigcup graph(f_{n}),B)} is a partial function. Using (3), h ^ = ⋃ i ≥ 0 h i {\displaystyle {\hat {h}}=\bigcup _{i\geq 0}h_{i}} is a partial function. Since d o m ( h ^ ) = ⋃ d o m ( h i ) = ⋃ X i = X + {\displaystyle dom({\hat {h}})=\bigcup dom(h_{i})=\bigcup X_{i}=X_{+}} , h ^ {\displaystyle {\hat {h}}} is total on X + {\displaystyle X_{+}} . Furthermore, it is clear from the definition of h i {\displaystyle h_{i}} that h ^ {\displaystyle {\hat {h}}} satisfies (1) and (2). To prove the uniqueness of h ^ {\displaystyle {\hat {h}}} , it is enough to show by a simple induction that h ^ {\displaystyle {\hat {h}}} and any other function h ′ {\displaystyle {h'}} satisfying (1) and (2) agree on X i , ∀ i ≥ 0 {\displaystyle X_{i},\forall i\geq 0} , which proves the unique homomorphic extension theorem. == Example of a particular case == We can use the unique homomorphic extension theorem to evaluate numeric expressions over the integers.
First, we must define the following: A = Σ∗, where Σ = Variables ∪ {0, 1, 2, …, 9} ∪ {+, −, ∗} ∪ {(, )}, and take X = Variables ∪ {0, …, 9}. Let F = {f−, f+, f∗}, where f− : Σ∗ → Σ∗ maps w ↦ −w, f+ : Σ∗ × Σ∗ → Σ∗ maps (w1, w2) ↦ w1 + w2, and f∗ : Σ∗ × Σ∗ → Σ∗ maps (w1, w2) ↦ w1 ∗ w2. Let EXPR be the inductive closure of X under F, and let B = ℤ with G = {Add(−,−), Mult(−,−), Neg(−)}. Define d : F → G by d(f−) = Neg, d(f+) = Add, d(f∗) = Mult. For any assignment h : X → ℤ of integer values to the variables and digits, the theorem then yields a unique homomorphic extension ĥ : EXPR → ℤ, which evaluates each expression recursively. Analogously, in propositional logic, ĥ : X+ → {0, 1} is the function that recursively computes the truth value of a proposition, extending the function h : X → {0, 1} that assigns a truth value to each atomic proposition, such that: (1) ĥ(φ) = h(φ) for every atomic proposition φ; (2) ĥ((¬φ)) = NOT(ĥ(φ)) (negation), ĥ((ρ ∧ θ)) = AND(ĥ(ρ), ĥ(θ)) (conjunction), ĥ((ρ ∨ θ)) = OR(ĥ(ρ), ĥ(θ)) (disjunction), ĥ((ρ → θ)) = IF-THEN(ĥ(ρ), ĥ(θ)) (implication). == References == Gallier, Jean (2003), Logic For Computer Science: Foundations of Automatic Theorem Proving (PDF), Philadelphia, archived from the original (PDF) on 2017-07-12, retrieved 2017-10-25
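The arithmetic instance of the theorem can be sketched in code: terms are freely generated from integer and variable atoms by constructors, and the homomorphic extension evaluates them recursively. The tuple encoding and constructor names below are illustrative, not from the source.

```python
# Unique homomorphic extension for arithmetic terms: atoms are extended
# by the constructors neg, add, mult; hhat evaluates a term by applying
# g = d(f) to the recursively evaluated subterms.

def hhat(term, h):
    """Unique homomorphic extension of h : atoms -> Z to all terms."""
    if not isinstance(term, tuple):          # base case: x in X
        return h(term)
    op, *args = term                         # inductive case: f(x1,...,xn)
    g = {"neg": lambda a: -a,
         "add": lambda a, b: a + b,
         "mult": lambda a, b: a * b}[op]     # g = d(f)
    return g(*(hhat(a, h) for a in args))

# Evaluate -((x + 3) * y) with h(x) = 2, h(y) = 5, digits mapped to themselves
expr = ("neg", ("mult", ("add", "x", 3), "y"))
env = {"x": 2, "y": 5}
print(hhat(expr, lambda a: env.get(a, a)))   # -25
```

Uniqueness is visible in the sketch: the value of every term is forced by the values of the atoms and by d, with no freedom left in the recursion.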
Wikipedia:Unit vector#0
In mathematics, a unit vector in a normed vector space is a vector (often a spatial vector) of length 1. A unit vector is often denoted by a lowercase letter with a circumflex, or "hat", as in v ^ {\displaystyle {\hat {\mathbf {v} }}} (pronounced "v-hat"). The term normalized vector is sometimes used as a synonym for unit vector. The normalized vector û of a non-zero vector u is the unit vector in the direction of u, i.e., u ^ = u ‖ u ‖ = ( u 1 ‖ u ‖ , u 2 ‖ u ‖ , . . . , u n ‖ u ‖ ) {\displaystyle \mathbf {\hat {u}} ={\frac {\mathbf {u} }{\|\mathbf {u} \|}}=({\frac {u_{1}}{\|\mathbf {u} \|}},{\frac {u_{2}}{\|\mathbf {u} \|}},...,{\frac {u_{n}}{\|\mathbf {u} \|}})} where ‖u‖ is the norm (or length) of u = ( u 1 , u 2 , . . . , u n ) {\textstyle \mathbf {u} =(u_{1},u_{2},...,u_{n})} , given by ‖ u ‖ = u 1 2 + . . . + u n 2 {\textstyle \|\mathbf {u} \|={\sqrt {u_{1}^{2}+...+u_{n}^{2}}}} . The proof is the following: ‖ u ^ ‖ = u 1 u 1 2 + . . . + u n 2 2 + . . . + u n u 1 2 + . . . + u n 2 2 = u 1 2 + . . . + u n 2 u 1 2 + . . . + u n 2 = 1 = 1 {\textstyle \|\mathbf {\hat {u}} \|={\sqrt {{\frac {u_{1}}{\sqrt {u_{1}^{2}+...+u_{n}^{2}}}}^{2}+...+{\frac {u_{n}}{\sqrt {u_{1}^{2}+...+u_{n}^{2}}}}^{2}}}={\sqrt {\frac {u_{1}^{2}+...+u_{n}^{2}}{u_{1}^{2}+...+u_{n}^{2}}}}={\sqrt {1}}=1} A unit vector is often used to represent directions, such as normal directions. Unit vectors are often chosen to form the basis of a vector space, and every vector in the space may be written as a linear combination of unit vectors. == Orthogonal coordinates == === Cartesian coordinates === Unit vectors may be used to represent the axes of a Cartesian coordinate system.
For instance, the standard unit vectors in the direction of the x, y, and z axes of a three dimensional Cartesian coordinate system are x ^ = [ 1 0 0 ] , y ^ = [ 0 1 0 ] , z ^ = [ 0 0 1 ] {\displaystyle \mathbf {\hat {x}} ={\begin{bmatrix}1\\0\\0\end{bmatrix}},\,\,\mathbf {\hat {y}} ={\begin{bmatrix}0\\1\\0\end{bmatrix}},\,\,\mathbf {\hat {z}} ={\begin{bmatrix}0\\0\\1\end{bmatrix}}} They form a set of mutually orthogonal unit vectors, typically referred to as a standard basis in linear algebra. They are often denoted using common vector notation (e.g., x or x → {\displaystyle {\vec {x}}} ) rather than standard unit vector notation (e.g., x̂). In most contexts it can be assumed that x, y, and z, (or x → , {\displaystyle {\vec {x}},} y → , {\displaystyle {\vec {y}},} and z → {\displaystyle {\vec {z}}} ) are versors of a 3-D Cartesian coordinate system. The notations (î, ĵ, k̂), (x̂1, x̂2, x̂3), (êx, êy, êz), or (ê1, ê2, ê3), with or without hat, are also used, particularly in contexts where i, j, k might lead to confusion with another quantity (for instance with index symbols such as i, j, k, which are used to identify an element of a set or array or sequence of variables). When a unit vector in space is expressed in Cartesian notation as a linear combination of x, y, z, its three scalar components can be referred to as direction cosines. The value of each component is equal to the cosine of the angle formed by the unit vector with the respective basis vector. This is one of the methods used to describe the orientation (angular position) of a straight line, segment of straight line, oriented axis, or segment of oriented axis (vector). 
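The normalization u/‖u‖ from the introduction can be sketched in a few lines (the helper name is illustrative):

```python
import math

def normalize(u):
    """Return the unit vector u / ||u|| in the Euclidean norm."""
    norm = math.sqrt(sum(c * c for c in u))
    if norm == 0:
        raise ValueError("the zero vector has no direction")
    return [c / norm for c in u]

v = normalize([3.0, 4.0])
print(v)  # [0.6, 0.8]
print(round(math.hypot(*v), 12))  # 1.0, as the proof above guarantees
```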
=== Cylindrical coordinates === The three orthogonal unit vectors appropriate to cylindrical symmetry are: ρ ^ {\displaystyle {\boldsymbol {\hat {\rho }}}} (also designated e ^ {\displaystyle \mathbf {\hat {e}} } or s ^ {\displaystyle {\boldsymbol {\hat {s}}}} ), representing the direction along which the distance of the point from the axis of symmetry is measured; φ ^ {\displaystyle {\boldsymbol {\hat {\varphi }}}} , representing the direction of the motion that would be observed if the point were rotating counterclockwise about the symmetry axis; z ^ {\displaystyle \mathbf {\hat {z}} } , representing the direction of the symmetry axis; They are related to the Cartesian basis x ^ {\displaystyle {\hat {x}}} , y ^ {\displaystyle {\hat {y}}} , z ^ {\displaystyle {\hat {z}}} by: ρ ^ = cos ⁡ ( φ ) x ^ + sin ⁡ ( φ ) y ^ {\displaystyle {\boldsymbol {\hat {\rho }}}=\cos(\varphi )\mathbf {\hat {x}} +\sin(\varphi )\mathbf {\hat {y}} } φ ^ = − sin ⁡ ( φ ) x ^ + cos ⁡ ( φ ) y ^ {\displaystyle {\boldsymbol {\hat {\varphi }}}=-\sin(\varphi )\mathbf {\hat {x}} +\cos(\varphi )\mathbf {\hat {y}} } z ^ = z ^ . {\displaystyle \mathbf {\hat {z}} =\mathbf {\hat {z}} .} The vectors ρ ^ {\displaystyle {\boldsymbol {\hat {\rho }}}} and φ ^ {\displaystyle {\boldsymbol {\hat {\varphi }}}} are functions of φ , {\displaystyle \varphi ,} and are not constant in direction. When differentiating or integrating in cylindrical coordinates, these unit vectors themselves must also be operated on. 
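The relations above can be checked numerically. A short sketch expresses ρ̂ and φ̂ in the fixed Cartesian basis and verifies orthonormality at an arbitrary azimuth (the value 0.7 is arbitrary):

```python
import math

def cyl_basis(phi):
    """Cylindrical unit vectors rho-hat, phi-hat, z-hat at azimuth phi,
    written in the fixed Cartesian basis."""
    rho = (math.cos(phi), math.sin(phi), 0.0)
    phi_hat = (-math.sin(phi), math.cos(phi), 0.0)
    z = (0.0, 0.0, 1.0)
    return rho, phi_hat, z

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

rho, phi_hat, z = cyl_basis(0.7)
print(round(dot(rho, rho), 12), round(dot(rho, phi_hat), 12))  # 1.0 0.0
```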
The derivatives with respect to φ {\displaystyle \varphi } are: ∂ ρ ^ ∂ φ = − sin ⁡ φ x ^ + cos ⁡ φ y ^ = φ ^ {\displaystyle {\frac {\partial {\boldsymbol {\hat {\rho }}}}{\partial \varphi }}=-\sin \varphi \mathbf {\hat {x}} +\cos \varphi \mathbf {\hat {y}} ={\boldsymbol {\hat {\varphi }}}} ∂ φ ^ ∂ φ = − cos ⁡ φ x ^ − sin ⁡ φ y ^ = − ρ ^ {\displaystyle {\frac {\partial {\boldsymbol {\hat {\varphi }}}}{\partial \varphi }}=-\cos \varphi \mathbf {\hat {x}} -\sin \varphi \mathbf {\hat {y}} =-{\boldsymbol {\hat {\rho }}}} ∂ z ^ ∂ φ = 0 . {\displaystyle {\frac {\partial \mathbf {\hat {z}} }{\partial \varphi }}=\mathbf {0} .} === Spherical coordinates === The unit vectors appropriate to spherical symmetry are: r ^ {\displaystyle \mathbf {\hat {r}} } , the direction in which the radial distance from the origin increases; φ ^ {\displaystyle {\boldsymbol {\hat {\varphi }}}} , the direction in which the angle in the x-y plane counterclockwise from the positive x-axis is increasing; and θ ^ {\displaystyle {\boldsymbol {\hat {\theta }}}} , the direction in which the angle from the positive z axis is increasing. To minimize redundancy of representations, the polar angle θ {\displaystyle \theta } is usually taken to lie between zero and 180 degrees. It is especially important to note the context of any ordered triplet written in spherical coordinates, as the roles of φ ^ {\displaystyle {\boldsymbol {\hat {\varphi }}}} and θ ^ {\displaystyle {\boldsymbol {\hat {\theta }}}} are often reversed. Here, the American "physics" convention is used. This leaves the azimuthal angle φ {\displaystyle \varphi } defined the same as in cylindrical coordinates. 
The Cartesian relations are: r ^ = sin ⁡ θ cos ⁡ φ x ^ + sin ⁡ θ sin ⁡ φ y ^ + cos ⁡ θ z ^ {\displaystyle \mathbf {\hat {r}} =\sin \theta \cos \varphi \mathbf {\hat {x}} +\sin \theta \sin \varphi \mathbf {\hat {y}} +\cos \theta \mathbf {\hat {z}} } θ ^ = cos ⁡ θ cos ⁡ φ x ^ + cos ⁡ θ sin ⁡ φ y ^ − sin ⁡ θ z ^ {\displaystyle {\boldsymbol {\hat {\theta }}}=\cos \theta \cos \varphi \mathbf {\hat {x}} +\cos \theta \sin \varphi \mathbf {\hat {y}} -\sin \theta \mathbf {\hat {z}} } φ ^ = − sin ⁡ φ x ^ + cos ⁡ φ y ^ {\displaystyle {\boldsymbol {\hat {\varphi }}}=-\sin \varphi \mathbf {\hat {x}} +\cos \varphi \mathbf {\hat {y}} } The spherical unit vectors depend on both φ {\displaystyle \varphi } and θ {\displaystyle \theta } , and hence there are 5 possible non-zero derivatives. For a more complete description, see Jacobian matrix and determinant. The non-zero derivatives are: ∂ r ^ ∂ φ = − sin ⁡ θ sin ⁡ φ x ^ + sin ⁡ θ cos ⁡ φ y ^ = sin ⁡ θ φ ^ {\displaystyle {\frac {\partial \mathbf {\hat {r}} }{\partial \varphi }}=-\sin \theta \sin \varphi \mathbf {\hat {x}} +\sin \theta \cos \varphi \mathbf {\hat {y}} =\sin \theta {\boldsymbol {\hat {\varphi }}}} ∂ r ^ ∂ θ = cos ⁡ θ cos ⁡ φ x ^ + cos ⁡ θ sin ⁡ φ y ^ − sin ⁡ θ z ^ = θ ^ {\displaystyle {\frac {\partial \mathbf {\hat {r}} }{\partial \theta }}=\cos \theta \cos \varphi \mathbf {\hat {x}} +\cos \theta \sin \varphi \mathbf {\hat {y}} -\sin \theta \mathbf {\hat {z}} ={\boldsymbol {\hat {\theta }}}} ∂ θ ^ ∂ φ = − cos ⁡ θ sin ⁡ φ x ^ + cos ⁡ θ cos ⁡ φ y ^ = cos ⁡ θ φ ^ {\displaystyle {\frac {\partial {\boldsymbol {\hat {\theta }}}}{\partial \varphi }}=-\cos \theta \sin \varphi \mathbf {\hat {x}} +\cos \theta \cos \varphi \mathbf {\hat {y}} =\cos \theta {\boldsymbol {\hat {\varphi }}}} ∂ θ ^ ∂ θ = − sin ⁡ θ cos ⁡ φ x ^ − sin ⁡ θ sin ⁡ φ y ^ − cos ⁡ θ z ^ = − r ^ {\displaystyle {\frac {\partial {\boldsymbol {\hat {\theta }}}}{\partial \theta }}=-\sin \theta \cos \varphi \mathbf {\hat {x}} -\sin \theta \sin \varphi \mathbf {\hat 
{y}} -\cos \theta \mathbf {\hat {z}} =-\mathbf {\hat {r}} } ∂ φ ^ ∂ φ = − cos ⁡ φ x ^ − sin ⁡ φ y ^ = − sin ⁡ θ r ^ − cos ⁡ θ θ ^ {\displaystyle {\frac {\partial {\boldsymbol {\hat {\varphi }}}}{\partial \varphi }}=-\cos \varphi \mathbf {\hat {x}} -\sin \varphi \mathbf {\hat {y}} =-\sin \theta \mathbf {\hat {r}} -\cos \theta {\boldsymbol {\hat {\theta }}}} === General unit vectors === Common themes of unit vectors occur throughout physics and geometry: == Curvilinear coordinates == In general, a coordinate system may be uniquely specified using a number of linearly independent unit vectors e ^ n {\displaystyle \mathbf {\hat {e}} _{n}} (the actual number being equal to the degrees of freedom of the space). For ordinary 3-space, these vectors may be denoted e ^ 1 , e ^ 2 , e ^ 3 {\displaystyle \mathbf {\hat {e}} _{1},\mathbf {\hat {e}} _{2},\mathbf {\hat {e}} _{3}} . It is nearly always convenient to define the system to be orthonormal and right-handed: e ^ i ⋅ e ^ j = δ i j {\displaystyle \mathbf {\hat {e}} _{i}\cdot \mathbf {\hat {e}} _{j}=\delta _{ij}} e ^ i ⋅ ( e ^ j × e ^ k ) = ε i j k {\displaystyle \mathbf {\hat {e}} _{i}\cdot (\mathbf {\hat {e}} _{j}\times \mathbf {\hat {e}} _{k})=\varepsilon _{ijk}} where δ i j {\displaystyle \delta _{ij}} is the Kronecker delta (which is 1 for i = j, and 0 otherwise) and ε i j k {\displaystyle \varepsilon _{ijk}} is the Levi-Civita symbol (which is 1 for permutations ordered as ijk, and −1 for permutations ordered as kji). == Right versor == A unit vector in R 3 {\displaystyle \mathbb {R} ^{3}} was called a right versor by W. R. Hamilton, as he developed his quaternions H ⊂ R 4 {\displaystyle \mathbb {H} \subset \mathbb {R} ^{4}} . In fact, he was the originator of the term vector, as every quaternion q = s + v {\displaystyle q=s+v} has a scalar part s and a vector part v. If v is a unit vector in R 3 {\displaystyle \mathbb {R} ^{3}} , then the square of v in quaternions is −1. 
Thus by Euler's formula, exp ⁡ ( θ v ) = cos ⁡ θ + v sin ⁡ θ {\displaystyle \exp(\theta v)=\cos \theta +v\sin \theta } is a versor in the 3-sphere. When θ is a right angle, the versor is a right versor: its scalar part is zero and its vector part v is a unit vector in R 3 {\displaystyle \mathbb {R} ^{3}} . Thus the right versors extend the notion of imaginary units found in the complex plane, where the right versors now range over the 2-sphere S 2 ⊂ R 3 ⊂ H {\displaystyle \mathbb {S} ^{2}\subset \mathbb {R} ^{3}\subset \mathbb {H} } rather than the pair {i, −i} in the complex plane. By extension, a right quaternion is a real multiple of a right versor. == See also == Cartesian coordinate system Coordinate system Curvilinear coordinates Four-velocity Jacobian matrix and determinant Normal vector Polar coordinate system Standard basis Unit interval Unit square, cube, circle, sphere, and hyperbola Vector notation Vector of ones Unit matrix == Notes == == References == G. B. Arfken & H. J. Weber (2000). Mathematical Methods for Physicists (5th ed.). Academic Press. ISBN 0-12-059825-6. Spiegel, Murray R. (1998). Schaum's Outlines: Mathematical Handbook of Formulas and Tables (2nd ed.). McGraw-Hill. ISBN 0-07-038203-4. Griffiths, David J. (1998). Introduction to Electrodynamics (3rd ed.). Prentice Hall. ISBN 0-13-805326-X.
Wikipedia:Unitary element#0
In mathematics, an element of a *-algebra is called unitary if it is invertible and its inverse element is the same as its adjoint element. == Definition == Let A {\displaystyle {\mathcal {A}}} be a *-algebra with unit e {\displaystyle e} . An element a ∈ A {\displaystyle a\in {\mathcal {A}}} is called unitary if a a ∗ = a ∗ a = e {\displaystyle aa^{*}=a^{*}a=e} . In other words, if a {\displaystyle a} is invertible and a − 1 = a ∗ {\displaystyle a^{-1}=a^{*}} holds, then a {\displaystyle a} is unitary. The set of unitary elements is denoted by A U {\displaystyle {\mathcal {A}}_{U}} or U ( A ) {\displaystyle U({\mathcal {A}})} . A special case of particular importance is the case where A {\displaystyle {\mathcal {A}}} is a complete normed *-algebra satisfying the C*-identity ( ‖ a ∗ a ‖ = ‖ a ‖ 2 ∀ a ∈ A {\displaystyle \left\|a^{*}a\right\|=\left\|a\right\|^{2}\ \forall a\in {\mathcal {A}}} ); such an algebra is called a C*-algebra. == Criteria == Let A {\displaystyle {\mathcal {A}}} be a unital C*-algebra and a ∈ A N {\displaystyle a\in {\mathcal {A}}_{N}} a normal element. Then, a {\displaystyle a} is unitary if and only if the spectrum σ ( a ) {\displaystyle \sigma (a)} consists only of elements of the circle group T {\displaystyle \mathbb {T} } , i.e. σ ( a ) ⊆ T = { λ ∈ C ∣ | λ | = 1 } {\displaystyle \sigma (a)\subseteq \mathbb {T} =\{\lambda \in \mathbb {C} \mid |\lambda |=1\}} . == Examples == The unit e {\displaystyle e} is unitary. Let A {\displaystyle {\mathcal {A}}} be a unital C*-algebra, then: Every symmetry, i.e. every element a ∈ A {\displaystyle a\in {\mathcal {A}}} with a = a ∗ {\displaystyle a=a^{*}} and a 2 = e {\displaystyle a^{2}=e} , is unitary, since then a a ∗ = a ∗ a = a 2 = e {\displaystyle aa^{*}=a^{*}a=a^{2}=e} . Equivalently, the spectrum of a symmetry consists of at most − 1 {\displaystyle -1} and 1 {\displaystyle 1} , as follows from the continuous functional calculus.
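As a concrete illustration (not from the source), in the C*-algebra of 2 × 2 complex matrices with the conjugate-transpose involution, one can check the defining identity aa* = a*a = e for a diagonal unitary and observe that its spectrum lies on the unit circle:

```python
import cmath

def adjoint(m):
    """Conjugate transpose: the involution on 2x2 complex matrices."""
    return [[m[j][i].conjugate() for j in range(2)] for i in range(2)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t = 0.3  # arbitrary angle
a = [[cmath.exp(1j * t), 0], [0, cmath.exp(-1j * t)]]  # diagonal unitary

prod = matmul(a, adjoint(a))
print(all(abs(prod[i][j] - (1 if i == j else 0)) < 1e-12
          for i in range(2) for j in range(2)))              # True: aa* = e

# spectrum = {e^{it}, e^{-it}}: both eigenvalues have modulus 1
print(all(abs(abs(a[i][i]) - 1) < 1e-12 for i in range(2)))  # True
```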
If a ∈ A N {\displaystyle a\in {\mathcal {A}}_{N}} is a normal element of a C*-algebra A {\displaystyle {\mathcal {A}}} , then for every continuous function f {\displaystyle f} on the spectrum σ ( a ) {\displaystyle \sigma (a)} the continuous functional calculus defines a unitary element f ( a ) {\displaystyle f(a)} , if f ( σ ( a ) ) ⊆ T {\displaystyle f(\sigma (a))\subseteq \mathbb {T} } . == Properties == Let A {\displaystyle {\mathcal {A}}} be a unital *-algebra and a , b ∈ A U {\displaystyle a,b\in {\mathcal {A}}_{U}} . Then: The element a b {\displaystyle ab} is unitary, since ( ( a b ) ∗ ) − 1 = ( b ∗ a ∗ ) − 1 = ( a ∗ ) − 1 ( b ∗ ) − 1 = a b {\textstyle ((ab)^{*})^{-1}=(b^{*}a^{*})^{-1}=(a^{*})^{-1}(b^{*})^{-1}=ab} . In particular, A U {\displaystyle {\mathcal {A}}_{U}} forms a multiplicative group. The element a {\displaystyle a} is normal. The adjoint element a ∗ {\displaystyle a^{*}} is also unitary, since a = ( a ∗ ) ∗ {\displaystyle a=(a^{*})^{*}} holds for the involution *. If A {\displaystyle {\mathcal {A}}} is a C*-algebra, then a {\displaystyle a} has norm 1, i.e. ‖ a ‖ = 1 {\displaystyle \left\|a\right\|=1} . == See also == Unitary matrix Unitary operator == Notes == == References == Blackadar, Bruce (2006). Operator Algebras. Theory of C*-Algebras and von Neumann Algebras. Berlin/Heidelberg: Springer. pp. 57, 63. ISBN 3-540-28486-9. Dixmier, Jacques (1977). C*-algebras. Translated by Jellett, Francis. Amsterdam/New York/Oxford: North-Holland. ISBN 0-7204-0762-1. English translation of Les C*-algèbres et leurs représentations (in French). Gauthier-Villars. 1969. Kadison, Richard V.; Ringrose, John R. (1983). Fundamentals of the Theory of Operator Algebras. Volume 1 Elementary Theory. New York/London: Academic Press. ISBN 0-12-393301-3.
Wikipedia:Unitary method#0
In elementary algebra, the unitary method is a problem-solving technique taught to students as a method for solving word problems involving proportionality and units of measurement. It consists of first finding the value or proportional amount of a single unit, from the information given in the problem, and then multiplying the result by the number of units of the same kind, given in the problem, to obtain the result. As a simple example, to solve the problem: "A man walks 7 miles in 2 hours. How far does he walk in 7 hours?", one could first calculate how far the man walks in a single hour, as the ratio of the first two givens. 7 miles divided by 2 hours is 3 ⁠1/2⁠ miles per hour. Then, multiplying by the third given, 7 hours, gives the answer as 24 ⁠1/2⁠ miles. The same method can also be used as a step in more complicated problems, such as those involving the division of a good into different proportions. When used in this way, the value of a single unit, found in the unitary method, may depend on previously calculated values rather than being a simple ratio of givens. == See also == Cross-multiplication == References ==
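The two steps of the method, applied to the walking example above, can be sketched as:

```python
# Unitary method for the walking example above:
# step 1: find the value of a single unit (miles walked in one hour),
# step 2: multiply by the required number of units (7 hours).
miles, hours = 7, 2           # "A man walks 7 miles in 2 hours."
per_hour = miles / hours      # value of a single unit: 3.5 miles per hour
answer = per_hour * 7         # distance walked in 7 hours: 24.5 miles
```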
Wikipedia:Unitary transformation#0
In mathematics, a unitary transformation is a linear isomorphism that preserves the inner product: the inner product of two vectors before the transformation is equal to their inner product after the transformation. == Formal definition == More precisely, a unitary transformation is an isometric isomorphism between two inner product spaces (such as Hilbert spaces). In other words, a unitary transformation is a bijective function U : H 1 → H 2 {\displaystyle U:H_{1}\to H_{2}} between two inner product spaces, H 1 {\displaystyle H_{1}} and H 2 , {\displaystyle H_{2},} such that ⟨ U x , U y ⟩ H 2 = ⟨ x , y ⟩ H 1 for all x , y ∈ H 1 . {\displaystyle \langle Ux,Uy\rangle _{H_{2}}=\langle x,y\rangle _{H_{1}}\quad {\text{ for all }}x,y\in H_{1}.} It is a linear isometry, as one can see by setting x = y . {\displaystyle x=y.} == Unitary operator == In the case when H 1 {\displaystyle H_{1}} and H 2 {\displaystyle H_{2}} are the same space, a unitary transformation is an automorphism of that Hilbert space, and then it is also called a unitary operator. == Antiunitary transformation == A closely related notion is that of antiunitary transformation, which is a bijective function U : H 1 → H 2 {\displaystyle U:H_{1}\to H_{2}\,} between two complex Hilbert spaces such that ⟨ U x , U y ⟩ = ⟨ x , y ⟩ ¯ = ⟨ y , x ⟩ {\displaystyle \langle Ux,Uy\rangle ={\overline {\langle x,y\rangle }}=\langle y,x\rangle } for all x {\displaystyle x} and y {\displaystyle y} in H 1 {\displaystyle H_{1}} , where the horizontal bar represents the complex conjugate. == See also == Antiunitary Orthogonal transformation Time reversal Unitary group Unitary operator Unitary matrix Wigner's theorem Unitary transformations in quantum mechanics
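As a numerical illustration (a sketch, not from the article): the Q factor of a QR factorization of a square complex matrix is unitary, and applying it to two vectors leaves their standard inner product on C^n unchanged:

```python
import numpy as np

# A random unitary matrix U, obtained from the QR factorization of a random
# complex matrix, preserves the standard inner product <x, y> = x* y on C^n.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(A)           # for square A, Q is unitary: U* U = I

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

before = np.vdot(x, y)           # <x, y>; vdot conjugates its first argument
after = np.vdot(U @ x, U @ y)    # <Ux, Uy>
assert np.allclose(before, after)
```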
Wikipedia:United Kingdom Mathematics Trust#0
The United Kingdom Mathematics Trust (UKMT) is a charity founded in 1996 to help with the education of children in mathematics within the UK. == History == The national mathematics competitions had existed prior to the formation of the trust, but the foundation of the UKMT in the summer of 1996 enabled them to be run collectively. The Senior Mathematical Challenge was formerly called the National Mathematics Contest. Founded in 1961, it was run by the Mathematical Association from 1975 until its adoption by the UKMT in 1996. The Junior and Intermediate Mathematical Challenges were the initiative of Tony Gardiner in 1987, and were run by him under the name of the United Kingdom Mathematics Foundation until 1996. In 1995, Gardiner advertised for the formation of a committee and for a host institution that would lead to the establishment of the UKMT, enabling the challenges to be run effectively together under one organization. == Mathematical Challenges == The UKMT runs a series of mathematics challenges to encourage children's interest in mathematics and to develop their skills. The three main challenges are: Junior Mathematical Challenge (UK year 8/S2 and below) Intermediate Mathematical Challenge (UK year 11/S4 and below) Senior Mathematical Challenge (UK year 13/S6 and below) == Certificates == In the Junior and Intermediate Challenges the top-scoring 50% of the entrants receive bronze, silver or gold certificates based on their mark in the paper. In the Senior Mathematical Challenge these certificates are awarded to the top-scoring 66% of the entries. In each case bronze, silver and gold certificates are awarded in the ratio 3 : 2 : 1. So in the Junior and Intermediate Challenges: the Gold award is achieved by the top 8–9% of the entrants, the Silver award by the next 16–17% of the entrants, and the Bronze award by the next 25% of the entrants. 
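The arithmetic behind these percentages is just a 3 : 2 : 1 split of the certificated group, i.e. sixths of the top 50% (or 66% for the Senior Challenge). A quick check, using an illustrative helper rather than any official UKMT code:

```python
# Split the certificated top slice of entrants in the ratio
# bronze : silver : gold = 3 : 2 : 1 (i.e. 3/6, 2/6, 1/6 of the slice).
def certificate_shares(top_percent):
    return tuple(top_percent * r / 6 for r in (3, 2, 1))  # (bronze, silver, gold)

bronze, silver, gold = certificate_shares(50)
# bronze = 25.0, silver ~ 16.7, gold ~ 8.3 — matching the figures quoted above
```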
In the past, only the top 40% of participants received a certificate in the Junior and Intermediate Challenges, and only the top 60% of participants received a certificate in the Senior Challenge. The ratio of bronze, silver, and gold has not changed, remaining 3 : 2 : 1. == Junior Mathematical Challenge == The Junior Mathematical Challenge (JMC) is an introductory challenge for pupils in Year 8 or below (aged 13 or under), taking place in spring each year. It takes the form of twenty-five multiple-choice questions sat under exam conditions, to be completed within one hour. The first fifteen questions are designed to be easier, and a pupil gains 5 marks for each correct answer in this section. Questions 16-20 are more difficult and are worth 6 marks. The last five questions are intended to be the most challenging and are also worth 6 marks. Questions to which no answer is entered gain (and lose) 0 marks; in recent years there has been no negative marking, so incorrect answers also score 0 marks. The top 40% of students (50% since the 2022 JMC) receive a certificate of varying levels (Gold, Silver or Bronze) based on their score. === Junior Kangaroo === Over 10,000 participants from the JMC are invited to participate in the Junior Kangaroo. Most of the Junior Kangaroo participants are those who performed well in the JMC; however, the Junior Kangaroo is also open to discretionary entries for a fee. Like the JMC, the Junior Kangaroo is a 60-minute challenge consisting of 25 multiple-choice problems. Correct answers to Questions 1-15 earn 5 marks, and to Questions 16-25 earn 6 marks. Blank or incorrect answers are marked 0; there is no penalty for wrong answers. The top 25% of participants in the Junior Kangaroo receive a Certificate of Merit. === Junior Mathematical Olympiad === The highest 1200 scorers are also invited to take part in the Junior Mathematical Olympiad (JMO). Like the JMC, the JMO is sat in schools. 
Students are given 120 minutes to complete the JMO, which is divided into two sections. Part A is composed of 10 questions in which the candidate gives just the answer (not multiple choice), worth 10 marks in total (1 mark each). Part B consists of 6 questions and encourages students to write out full solutions. Each question in section B is worth 10 marks, and students are encouraged to write complete answers to 2-4 questions rather than hurry through incomplete answers to all 6. If a solution is judged to be incomplete, it is marked on a 0+ basis, with a maximum of 3 marks; if it has an evident logical strategy, it is marked on a 10- basis, with marks deducted from 10. The total mark for the whole paper is 70. Everyone who participates in this challenge gains a certificate (Participation 75%, Distinction 25%); the top 200 or so gain medals (Gold, Silver, Bronze), with the top fifty winning a book prize. From 2025, this has changed: Part A has been omitted, while Section B has stayed the same, though it is no longer called Section B (it is now the only section). This changes the total number of questions to 10 and the total marks to 60. However, the time given for the JMO has stayed at 120 minutes. == Intermediate Mathematical Challenge == The Intermediate Mathematical Challenge (IMC) is aimed at school years equivalent to English Years 9-11, taking place in winter each year. Following the same structure as the JMC, this paper presents the student with twenty-five multiple-choice questions to be done under exam conditions in one hour. The first fifteen questions are designed to be easier, and a pupil gains 5 marks for each correct answer in this section. Questions 16-20 are more difficult and are worth 6 marks, with a penalty of 1 mark for a wrong answer, intended to deter guessing. The last five questions are intended to be the most challenging and so are also worth 6 marks, but with a 2-mark penalty for an incorrectly answered question. 
Questions to which no answer is entered gain (and lose) 0 marks. Again, the top 40% of students taking this challenge get a certificate. There are two follow-on rounds to this competition: the European Kangaroo and the Intermediate Mathematical Olympiad. Additionally, top performers can be selected for the National Mathematics Summer Schools. === Intermediate Mathematical Olympiad === To avoid confusion with the International Mathematical Olympiad, this is often abbreviated to the IMOK Olympiad (IMOK = Intermediate Mathematical Olympiad and Kangaroo). The IMOK is sat by the top 500 scorers from each school year in the Intermediate Maths Challenge and consists of three papers, 'Cayley', 'Hamilton' and 'Maclaurin', named after famous mathematicians. The paper the student undertakes depends on the year group that student is in (Cayley for those in year 9 and below, Hamilton for year 10 and Maclaurin for year 11). Each paper contains six questions. Each solution is marked out of 10 on a 0+ and 10- scale; that is to say, if an answer is judged incomplete or unfinished, it is awarded a few marks for progress and relevant observations, whereas if it is presented as complete and correct, marks are deducted for faults, poor reasoning, or unproven assumptions. As a result, it is quite uncommon for an answer to score a middling mark (e.g. 4–6). This makes the maximum mark 60. For a student to get two questions fully correct is considered "very good". All people taking part in this challenge get a certificate (participation for the bottom 50%, merit for the next 25% and distinction for the top 25%). The mark boundaries for these certificates change every year, but normally around 30 marks will gain a Distinction. Those scoring highly (the top 50) gain a book prize; again, this changes every year, with 44 marks required in the Maclaurin paper in 2006. 
Also, the top 100 candidates receive a medal; bronze for Cayley, silver for Hamilton and gold for Maclaurin. === European Kangaroo === The European Kangaroo is a competition which follows the same structure as the AMC (Australian Mathematics Competition). There are twenty-five multiple-choice questions and no penalty marking. This paper is taken throughout Europe by over 3 million pupils from more than 37 countries. Two different Kangaroo papers follow on from the Intermediate Maths Challenge, and the next 5500 highest scorers below the Olympiad threshold are invited to take part (both papers are by invitation only). The Grey Kangaroo is sat by students in year 9 and below and the Pink Kangaroo is sat by those in years 10 and 11. The top 25% of scorers in each paper receive a certificate of merit and the rest receive a certificate of participation. All those who sit either Kangaroo also receive a keyfob containing a different mathematical puzzle each year. === National Mathematics Summer Schools === Selected by lottery, 48 of the top 1.5% of scorers in the IMC are invited to participate in one of three week-long National Mathematics Summer Schools in July. The 24 boys and 24 girls, each from a different school across the UK, are offered a range of activities, including daily lectures, designed to go beyond the GCSE syllabus and explore wider and more challenging areas of mathematics. The UKMT aims to "promote mathematical thinking" and "provide an opportunity for participants to meet other students and adults who enjoy mathematics". They were delivered virtually during the COVID-19 pandemic but had reverted to in-person events by 2022. == Senior Mathematical Challenge == The Senior Mathematical Challenge (SMC) takes place in late autumn each year, and is open to students who are aged 19 or below and are not registered to attend a university. 
The SMC consists of twenty-five multiple-choice questions to be answered in 90 minutes. All candidates start with 25 marks; each correct answer is awarded 4 marks and 1 mark is deducted for each incorrect answer. This gives a score between 0 and 125 marks. Unlike the JMC and IMC, the top 66% get one of the three certificates. Further, the 1000 highest scorers who are eligible to represent the UK at the International Mathematical Olympiad, together with any discretionary and international candidates, are invited to compete in the British Mathematical Olympiad, and the next around 6000 highest scorers are invited to sit the Senior Kangaroo. Discretionary candidates are those students who are entered by their mathematics teachers, on payment of a fee, who did not score quite well enough in the SMC but who might cope well in the next round. === British Mathematical Olympiad === Round 1 of the Olympiad is a three-and-a-half-hour examination comprising six more difficult, long-answer questions, which serve to test entrants' problem-solving skills. In 2005, a more accessible first question was added to the paper; before this, it consisted of only 5 questions. Approximately the 100 highest-scoring candidates from BMO1 are invited to sit BMO2, the follow-up round, which has the same time limit as BMO1 but in which 4 harder questions are posed. The top 24 scoring students from the second round are subsequently invited to a training camp at Trinity College, Cambridge or Oundle School for the first stage of the International Mathematical Olympiad UK team selection. === Senior Kangaroo === The Senior Kangaroo is a one-hour examination to which the next around 6000 highest scorers below the Olympiad threshold are invited. The paper consists of twenty questions, each of which requires a three-digit answer (leading zeros are used if the answer is less than 100, since the paper is marked by machine). 
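The SMC scoring rule described earlier (start with 25 marks, +4 per correct answer, −1 per incorrect answer, 0 for blanks) can be sketched as a small helper; this is an illustration, not official UKMT code. The 25-mark starting credit is what makes the range run from 0 to 125:

```python
# Senior Mathematical Challenge score: 25 starting marks, +4 per correct
# answer, -1 per incorrect answer; unanswered questions score 0.
def smc_score(correct: int, incorrect: int) -> int:
    assert 0 <= correct + incorrect <= 25    # at most 25 questions attempted
    return 25 + 4 * correct - incorrect

assert smc_score(25, 0) == 125   # all correct: the maximum
assert smc_score(0, 25) == 0     # all wrong: the minimum
assert smc_score(0, 0) == 25     # leaving everything blank keeps the 25 marks
```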
The top 25% of candidates receive a certificate of merit and the rest receive a certificate of participation. == Team Challenge == The UKMT Team Maths Challenge is an annual event. One team from each participating school, comprising four pupils selected from years 8 and 9 (ages 12–14), competes in a regional round. No more than 2 pupils on a team may be from Year 9. There are over 60 regional competitions in the UK, held between February and May. The winning team in each regional round, as well as a few high-scoring runners-up from throughout the country, are then invited to the National Final in London, usually in late June. There are 4 rounds: Group Questions, Cross-Numbers, Shuttle (NB: the previous Head-to-Head round has been replaced with another round, similar to the Mini-Relay used in the 2007 and 2008 National Finals) and Relay. In the National Final, however, an additional 'Poster Round' is added at the beginning. The poster round is a separate competition; since 2018, however, it has been worth up to six marks towards the main event. Four schools have won the Junior Maths Team competition at least twice: Queen Mary's Grammar School in Walsall, City of London School, St Olave's Grammar School, and Westminster Under School. == Senior Team Challenge == A pilot event for a competition similar to the Team Challenge, aimed at 16- to 18-year-olds, was launched in the autumn of 2007, and the competition has been running ever since. The format is much the same, with a limit of two year 13 (Upper Sixth-Form) pupils per team. Regional finals take place between October and December, with the National Final in early February the following year. Previous winners are below: == British Mathematical Olympiad Subtrust == For more information see British Mathematical Olympiad Subtrust. 
The British Mathematical Olympiad Subtrust, run by the UKMT, organises the British Mathematical Olympiad as well as the UK Mathematical Olympiad for Girls, several training camps throughout the year (such as a winter camp in Hungary and an Easter camp at Trinity College, Cambridge), and other training and the selection of the IMO team. == See also == European Kangaroo British Mathematical Olympiad International Mathematical Olympiad International Mathematics Competition for University Students == References == == External links == United Kingdom Mathematics Trust website British Mathematical Olympiad Committee site International Mathematics Competition for University Students (IMC) site Junior Mathematical Challenge Sample Paper Intermediate Mathematical Challenge Sample Paper Senior Mathematical Challenge Sample Paper
Wikipedia:Universal approximation theorem#0
In the mathematical theory of artificial neural networks, universal approximation theorems are theorems of the following form: Given a family of neural networks, for each function f {\displaystyle f} from a certain function space, there exists a sequence of neural networks ϕ 1 , ϕ 2 , … {\displaystyle \phi _{1},\phi _{2},\dots } from the family, such that ϕ n → f {\displaystyle \phi _{n}\to f} according to some criterion. That is, the family of neural networks is dense in the function space. The most popular version states that feedforward networks with non-polynomial activation functions are dense in the space of continuous functions between two Euclidean spaces, with respect to the compact convergence topology. Universal approximation theorems are existence theorems: They simply state that there exists such a sequence ϕ 1 , ϕ 2 , ⋯ → f {\displaystyle \phi _{1},\phi _{2},\dots \to f} , and do not provide any way to actually find such a sequence. Nor do they guarantee that any particular method, such as backpropagation, will actually find such a sequence: any method for searching the space of neural networks, including backpropagation, might or might not find a converging sequence (e.g. backpropagation might get stuck in a local optimum). Universal approximation theorems are limit theorems: They simply state that for any f {\displaystyle f} and any criterion of closeness ϵ > 0 {\displaystyle \epsilon >0} , there exists a neural network with sufficiently many neurons that approximates f {\displaystyle f} to within ϵ {\displaystyle \epsilon } . There is no guarantee that any given finite size, say, 10000 neurons, is enough. == Setup == Artificial neural networks are combinations of multiple simple mathematical functions that implement more complicated functions from (typically) real-valued vectors to real-valued vectors. 
The spaces of multivariate functions that can be implemented by a network are determined by the structure of the network, the set of simple functions, and its multiplicative parameters. A great deal of theoretical work has gone into characterizing these function spaces. Most universal approximation theorems are in one of two classes. The first quantifies the approximation capabilities of neural networks with an arbitrary number of artificial neurons ("arbitrary width" case) and the second focuses on the case with an arbitrary number of hidden layers, each containing a limited number of artificial neurons ("arbitrary depth" case). In addition to these two classes, there are also universal approximation theorems for neural networks with a bounded number of hidden layers and a limited number of neurons in each layer ("bounded depth and bounded width" case). == History == === Arbitrary width === The first examples were the arbitrary width case. George Cybenko in 1989 proved it for sigmoid activation functions. Kurt Hornik, Maxwell Stinchcombe, and Halbert White showed in 1989 that multilayer feed-forward networks with as few as one hidden layer are universal approximators. Hornik also showed in 1991 that it is not the specific choice of the activation function but rather the multilayer feed-forward architecture itself that gives neural networks the potential of being universal approximators. Moshe Leshno et al. in 1993 and later Allan Pinkus in 1999 showed that the universal approximation property is equivalent to having a nonpolynomial activation function. === Arbitrary depth === The arbitrary depth case was also studied by a number of authors, such as Gustaf Gripenberg in 2003, Dmitry Yarotsky, Zhou Lu et al. in 2017, and Boris Hanin and Mark Sellke in 2018, who focused on neural networks with the ReLU activation function. In 2020, Patrick Kidger and Terry Lyons extended those results to neural networks with general activation functions such as tanh or GeLU. 
One special case of arbitrary depth is that each composition component comes from a finite set of mappings. In 2024, Cai constructed a finite set of mappings, named a vocabulary, such that any continuous function can be approximated by composing a sequence from the vocabulary. This is similar to the concept of compositionality in linguistics, which is the idea that a finite vocabulary of basic elements can be combined via grammar to express an infinite range of meanings. === Bounded depth and bounded width === The bounded depth and bounded width case was first studied by Maiorov and Pinkus in 1999. They showed that there exists an analytic sigmoidal activation function such that two hidden layer neural networks with a bounded number of units in hidden layers are universal approximators. In 2018, Guliyev and Ismailov constructed a smooth sigmoidal activation function providing the universal approximation property for two-hidden-layer feedforward neural networks with fewer units in hidden layers. In 2018, they also constructed single hidden layer networks with bounded width that are still universal approximators for univariate functions. However, this does not apply to multivariable functions. In 2022, Shen et al. obtained precise quantitative information on the depth and width required to approximate a target function by deep and wide ReLU neural networks. === Quantitative bounds === The question of minimal possible width for universality was first studied in 2021, when Park et al. obtained the minimum width required for the universal approximation of Lp functions using feed-forward neural networks with ReLU as activation functions. Similar results that can be directly applied to residual neural networks were also obtained in the same year by Paulo Tabuada and Bahman Gharesifard using control-theoretic arguments. In 2023, Cai obtained the optimal minimum width bound for the universal approximation. 
For the arbitrary depth case, Leonie Papon and Anastasis Kratsios derived explicit depth estimates depending on the regularity of the target function and of the activation function. === Kolmogorov network === The Kolmogorov–Arnold representation theorem is similar in spirit. Indeed, certain neural network families can directly apply the Kolmogorov–Arnold theorem to yield a universal approximation theorem. Robert Hecht-Nielsen showed that a three-layer neural network can approximate any continuous multivariate function. This was extended to the discontinuous case by Vugar Ismailov. In 2024, Ziming Liu and co-authors showed a practical application. === Reservoir computing and quantum reservoir computing === In reservoir computing, a sparse recurrent neural network with fixed weights, equipped with fading memory and the echo state property, is followed by a trainable output layer. Its universality has been demonstrated separately for networks of rate neurons and of spiking neurons, respectively. In 2024, the framework has been generalized and extended to quantum reservoirs where the reservoir is based on qubits defined over Hilbert spaces. === Variants === Variants address discontinuous activation functions, noncompact domains, certifiable networks, random neural networks, and alternative network architectures and topologies. The universal approximation property of width-bounded networks has been studied as a dual of classical universal approximation results on depth-bounded networks. For input dimension dx and output dimension dy the minimum width required for the universal approximation of the Lp functions is exactly max{dx + 1, dy} (for a ReLU network). More generally this also holds if both ReLU and a threshold activation function are used. Universal function approximation on graphs (or rather on graph isomorphism classes) by popular graph convolutional neural networks (GCNs or GNNs) can be made as discriminative as the Weisfeiler–Leman graph isomorphism test. 
In 2020, a universal approximation theorem result was established by Brüel-Gabrielsson, showing that graph representation with certain injective properties is sufficient for universal function approximation on bounded graphs and restricted universal function approximation on unbounded graphs, with an accompanying O ( | V | ⋅ | E | ) {\displaystyle {\mathcal {O}}(\left|V\right|\cdot \left|E\right|)} -runtime method that performed at the state of the art on a collection of benchmarks (where V {\displaystyle V} and E {\displaystyle E} are the sets of nodes and edges of the graph respectively). There are also a variety of results between non-Euclidean spaces and other commonly used architectures and, more generally, algorithmically generated sets of functions, such as the convolutional neural network (CNN) architecture, radial basis functions, or neural networks with specific properties. == Arbitrary-width case == A spate of papers in the 1980s and 1990s, by George Cybenko, Kurt Hornik, and others, established several universal approximation theorems for arbitrary width and bounded depth. See for reviews. The following is the most often quoted: Also, certain non-continuous activation functions can be used to approximate a sigmoid function, which then allows the above theorem to apply to those functions. For example, the step function works. In particular, this shows that a perceptron network with a single infinitely wide hidden layer can approximate arbitrary functions. Such an f {\displaystyle f} can also be approximated by a network of greater depth by using the same construction for the first layer and approximating the identity function with later layers. The above proof has not specified how one might use a ramp function to approximate arbitrary functions in C 0 ( R n , R ) {\displaystyle C_{0}(\mathbb {R} ^{n},\mathbb {R} )} . 
A sketch of the proof is that one can first construct flat bump functions, intersect them to obtain spherical bump functions that approximate the Dirac delta function, then use those to approximate arbitrary functions in C 0 ( R n , R ) {\displaystyle C_{0}(\mathbb {R} ^{n},\mathbb {R} )} . The original proofs, such as the one by Cybenko, use methods from functional analysis, including the Hahn–Banach and Riesz–Markov–Kakutani representation theorems. Cybenko first published the theorem in a technical report in 1988, then as a paper in 1989. Notice also that the neural network is only required to approximate within a compact set K {\displaystyle K} . The proof does not describe how the function would be extrapolated outside of the region. The problem with polynomials may be removed by allowing the outputs of the hidden layers to be multiplied together (the "pi-sigma networks"), yielding the generalization: == Arbitrary-depth case == The "dual" versions of the theorem consider networks of bounded width and arbitrary depth. A variant of the universal approximation theorem was proved for the arbitrary depth case by Zhou Lu et al. in 2017. They showed that networks of width n + 4 with ReLU activation functions can approximate any Lebesgue-integrable function on n-dimensional input space with respect to L 1 {\displaystyle L^{1}} distance if network depth is allowed to grow. It was also shown that if the width was less than or equal to n, this general expressive power to approximate any Lebesgue integrable function was lost. In the same paper it was shown that ReLU networks with width n + 1 were sufficient to approximate any continuous function of n-dimensional input variables. The following refinement specifies the optimal minimum width for which such an approximation is possible and is due to. Together, the central result of yields the following universal approximation theorem for networks with bounded width (see also for the first result of this kind). 
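As a constructive one-dimensional illustration (my own sketch, not one of the cited proofs): a single hidden layer of ReLU ("ramp") units with hand-chosen weights implements the piecewise-linear interpolant of a target function at N knots, so the uniform error on a compact interval shrinks as N grows:

```python
import numpy as np

# Illustrative sketch: in one dimension, a hidden layer of N ReLU units with
# hand-set weights implements the piecewise-linear interpolant of f at N knots,
# so the uniform error on [a, b] tends to 0 as N grows.
def relu_interpolant(f, a, b, n_knots):
    knots = np.linspace(a, b, n_knots)
    vals = f(knots)
    slopes = np.diff(vals) / np.diff(knots)     # slope of each linear piece
    coeffs = np.diff(slopes, prepend=0.0)       # slope change at each knot

    def net(x):
        # output layer: vals[0] + sum_i coeffs[i] * ReLU(x - knots[i])
        return vals[0] + np.maximum(0.0, np.subtract.outer(x, knots[:-1])) @ coeffs

    return net

net = relu_interpolant(np.sin, 0.0, np.pi, 200)
xs = np.linspace(0.0, np.pi, 10_001)
err = np.max(np.abs(net(xs) - np.sin(xs)))      # sup-error of the width-200 net
assert err < 1e-3
```

The standard piecewise-linear interpolation bound gives an error of order (knot spacing)² here, which is why 200 units already achieve well under 10⁻³.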
Certain necessary conditions for the bounded width, arbitrary depth case have been established, but there is still a gap between the known sufficient and necessary conditions. == Bounded depth and bounded width case == The first result on approximation capabilities of neural networks with a bounded number of layers, each containing a limited number of artificial neurons, was obtained by Maiorov and Pinkus. Their remarkable result revealed that such networks can be universal approximators and that for achieving this property two hidden layers are enough. This is an existence result. It says that activation functions providing the universal approximation property for bounded-depth, bounded-width networks exist. Using certain algorithmic and computer programming techniques, Guliyev and Ismailov efficiently constructed such activation functions depending on a numerical parameter. The developed algorithm allows one to compute the activation functions at any point of the real axis instantly. For the algorithm and the corresponding computer code see. The theoretical result can be formulated as follows. Here “ σ : R → R {\displaystyle \sigma \colon \mathbb {R} \to \mathbb {R} } is λ {\displaystyle \lambda } -strictly increasing on some set X {\displaystyle X} ” means that there exists a strictly increasing function u : X → R {\displaystyle u\colon X\to \mathbb {R} } such that | σ ( x ) − u ( x ) | ≤ λ {\displaystyle |\sigma (x)-u(x)|\leq \lambda } for all x ∈ X {\displaystyle x\in X} . Clearly, a λ {\displaystyle \lambda } -increasing function behaves like a usual increasing function as λ {\displaystyle \lambda } gets small. 
In the "depth-width" terminology, the above theorem says that for certain activation functions depth- 2 {\displaystyle 2} width- 2 {\displaystyle 2} networks are universal approximators for univariate functions and depth- 3 {\displaystyle 3} width- ( 2 d + 2 ) {\displaystyle (2d+2)} networks are universal approximators for d {\displaystyle d} -variable functions ( d > 1 {\displaystyle d>1} ). == See also == Kolmogorov–Arnold representation theorem Representer theorem No free lunch theorem Stone–Weierstrass theorem Fourier series == References ==
Wikipedia:Universal chord theorem#0
In mathematical analysis, the universal chord theorem states that if a function f is continuous on [a,b] and satisfies f ( a ) = f ( b ) {\displaystyle f(a)=f(b)} , then for every natural number n {\displaystyle n} , there exists some x ∈ [ a , b ] {\displaystyle x\in [a,b]} such that f ( x ) = f ( x + b − a n ) {\displaystyle f(x)=f\left(x+{\frac {b-a}{n}}\right)} . == History == The theorem was published by Paul Lévy in 1934 as a generalization of Rolle's theorem. == Statement of the theorem == Let H ( f ) = { h ∈ [ 0 , + ∞ ) : f ( x ) = f ( x + h ) for some x } {\displaystyle H(f)=\{h\in [0,+\infty ):f(x)=f(x+h){\text{ for some }}x\}} denote the chord set of the function f. If f is a continuous function and h ∈ H ( f ) {\displaystyle h\in H(f)} , then h n ∈ H ( f ) {\displaystyle {\frac {h}{n}}\in H(f)} for all natural numbers n. == Case of n = 2 == The case when n = 2 can be considered an application of the Borsuk–Ulam theorem to the real line. It says that if f ( x ) {\displaystyle f(x)} is continuous on some interval I = [ a , b ] {\displaystyle I=[a,b]} with the condition that f ( a ) = f ( b ) {\displaystyle f(a)=f(b)} , then there exists some x ∈ [ a , b ] {\displaystyle x\in [a,b]} such that f ( x ) = f ( x + b − a 2 ) {\displaystyle f(x)=f\left(x+{\frac {b-a}{2}}\right)} . In less generality, if f : [ 0 , 1 ] → R {\displaystyle f:[0,1]\rightarrow \mathbb {R} } is continuous and f ( 0 ) = f ( 1 ) {\displaystyle f(0)=f(1)} , then there exists x ∈ [ 0 , 1 2 ] {\displaystyle x\in \left[0,{\frac {1}{2}}\right]} that satisfies f ( x ) = f ( x + 1 / 2 ) {\displaystyle f(x)=f(x+1/2)} . == Proof of n = 2 == Consider the function g : [ a , b + a 2 ] → R {\displaystyle g:\left[a,{\dfrac {b+a}{2}}\right]\to \mathbb {R} } defined by g ( x ) = f ( x + b − a 2 ) − f ( x ) {\displaystyle g(x)=f\left(x+{\dfrac {b-a}{2}}\right)-f(x)} . 
Being the difference of two continuous functions, g is continuous, and g(a) + g((b + a)/2) = f(b) − f(a) = 0. It follows that g(a) · g((b + a)/2) ≤ 0, and by the intermediate value theorem there exists c ∈ [a, (b + a)/2] such that g(c) = 0, so that f(c) = f(c + (b − a)/2). This concludes the proof of the theorem for n = 2. == Proof of general case == The proof of the theorem in the general case is very similar to the proof for n = 2. Let n be a positive integer, and consider the function g : [a, b − (b − a)/n] → R defined by g(x) = f(x + (b − a)/n) − f(x). Being the difference of two continuous functions, g is continuous. Furthermore, the sum ∑_{k=0}^{n−1} g(a + k·(b − a)/n) telescopes to f(b) − f(a) = 0. It follows that there exist integers i, j such that g(a + i·(b − a)/n) ≤ 0 ≤ g(a + j·(b − a)/n). The intermediate value theorem then gives c such that g(c) = 0, and the theorem follows. == References ==
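The proof above is constructive enough to run numerically. A minimal sketch (my own illustration, with an arbitrarily chosen test function): sample g(x) = f(x + (b − a)/n) − f(x) at the n points a + k(b − a)/n, where its values sum to zero, pick points of opposite sign, and bisect.

```python
from math import sin, pi

def chord_point(f, a, b, n):
    """Find x with f(x) == f(x + (b-a)/n), assuming f(a) == f(b).

    Follows the proof: the values of g at a, a+h, ..., a+(n-1)h sum to
    zero (telescoping), so g takes a value <= 0 and a value >= 0 there;
    bisect between two such points."""
    h = (b - a) / n
    g = lambda x: f(x + h) - f(x)
    pts = [a + k * h for k in range(n)]
    lo = min(pts, key=g)  # g(lo) <= 0 since the g-values sum to 0
    hi = max(pts, key=g)  # g(hi) >= 0 for the same reason
    for _ in range(200):  # standard bisection on the sign change
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2
```

For example, with f(t) = sin(πt) on [0, 2] and n = 3, the routine converges to x = 1/6, where sin(πx) = sin(π(x + 2/3)) = 1/2.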
Wikipedia:University of Chicago School Mathematics Project#0
The University of Chicago School Mathematics Project (UCSMP) is a multi-faceted project of the University of Chicago in the United States, intended to improve competency in mathematics in the United States by elevating educational standards for children in elementary and secondary schools. == Overview == The UCSMP supports educators by supplying training materials to them and offering a comprehensive mathematics curriculum at all levels of primary and secondary education. It seeks to bring international strengths into the United States, translating non-English mathematics textbooks for English-speaking students and sponsoring international conferences on the subject of math education. Launched in 1983 with the aid of a six-year grant from Amoco, the UCSMP is used throughout the United States. UCSMP developed Everyday Mathematics, a pre-K and elementary school mathematics curriculum. == UCSMP publishers == Wright Group-McGraw-Hill (K-6 Materials) (Pre-K is by SRA McGraw-Hill Education ) (Pre-K Materials) Wright Group-McGraw-Hill (6-12 Materials) American Mathematical Society (Translations of Foreign Texts) == See also == Zalman Usiskin == References == == External links == Official Website Elementary Component Secondary Component
Wikipedia:University of Liverpool Mathematics School#0
University of Liverpool Mathematics School (abbreviated as University of Liverpool Maths School and ULMaS) is a coeducational maths school in Central, Liverpool, in the English county of Merseyside. It was opened by the University of Liverpool in September 2020 as the third specialist maths school in the country and the first in Northern England. It is located on the university's campus, in the Sir Alastair Pilkington Building, and offers a curriculum specialising in A-Level mathematics (including further mathematics), physics and computer science. == History == In July 2018 the Department for Education, with Lord Agnew and Liz Truss, announced plans to establish the University of Liverpool Mathematics College. It would be a maths school offering the subjects of A-Level mathematics, further mathematics, physics, and computer science, and would enrol 80 students per year. Offering music was also considered. The New Schools Network, established to support free schools (including maths schools), welcomed the announcement. The University of Liverpool promoted the college to Year Eleven pupils in multiple schools throughout April 2019. By June 2020 the college's name had been changed to University of Liverpool Mathematics School. A headteacher, Damian Haigh, was appointed. The first teaching staff were recruited by video call as a result of the COVID-19 pandemic. The Department for Education reached a funding agreement with the University of Liverpool Mathematics School Trust to enable it to open the school in September 2020. The college opened on 1 September 2020, but the official opening ceremony did not take place until 30 September 2021. Dr Steve Garnett, a business magnate who had previously spoken at the college, was the guest of honour at the official opening ceremony. Between January 2021 and the start of March 2021, due to the COVID-19 pandemic, distance education arrangements were in effect.
Face-to-face teaching resumed on 8 March under new preventative measures, such as compulsory face masks in areas where social distancing could not be enforced. Students were also offered on-campus COVID-19 tests and some testing equipment for home use. == External links == Official website == References ==
Wikipedia:Uri Zwick#0
Uri Zwick (Hebrew: אורי צוויק) is an Israeli computer scientist and mathematician known for his work on graph algorithms, in particular on distances in graphs and on the color-coding technique for subgraph isomorphism. With Howard Karloff, he is the namesake of the Karloff–Zwick algorithm for approximating the MAX-3SAT problem of Boolean satisfiability. He and his coauthors won the David P. Robbins Prize in 2011 for their work on the block-stacking problem. Zwick earned a bachelor's degree from the Technion – Israel Institute of Technology, and completed his doctorate at Tel Aviv University in 1989 under the supervision of Noga Alon. He is currently a professor of computer science at Tel Aviv University. == References == == External links == Home page Uri Zwick publications indexed by Google Scholar
Wikipedia:Uriel Frisch#0
Uriel Frisch (born in Agen, in France, on December 10, 1940) is a French mathematical physicist known for his work on fluid dynamics and turbulence. == Biography == From 1959 to 1963 Frisch was a student at the École Normale Supérieure. Early in his graduate studies, he became interested in turbulence, under the mentorship of Robert Kraichnan, a former assistant to Albert Einstein. Frisch earned a Ph.D. in 1967 from the University of Paris, and since then he has worked at the French National Centre for Scientific Research (CNRS). He retired in 2006, and became a director of research emeritus at CNRS. Frisch's wife Hélène is also a physicist, and the granddaughter of mathematician Paul Lévy. == Research == Frisch is the author of a 1995 book on turbulence and of over 200 research publications. One of his most cited works, published in 1986, concerns the lattice gas automaton method of simulating fluid dynamics using a cellular automaton. The method used until that time, the HPP model, simulated particles moving in axis-parallel directions in a square lattice, but this model was unsatisfactory because it obeyed unwanted and unphysical conservation laws (the conservation of momentum within each axis-parallel line). Frisch and his co-authors Brosl Hasslacher and Yves Pomeau introduced a model using instead the hexagonal lattice, which became known as the FHP model after the initials of its inventors and which much more accurately simulated the behavior of actual fluids. Frisch is also known for his work with Giorgio Parisi on the analysis of the fine structure of turbulent flows, for his early advocacy of multifractal systems in modeling physical processes, and for his research on using transportation theory to reconstruct the distribution of matter in the early universe.
== Awards and honors == Frisch won the Peccot Prize of the Collège de France for his doctoral thesis in 1967, the Bazin Prize of the French Academy of Sciences in 1985, and the Lewis Fry Richardson Medal of the European Geosciences Union "for his fundamental contributions to the understanding of turbulence" in 2003. He has been a member of the French Academy of Sciences since 2008. He is an Officier of the Ordre national du Mérite and the recipient of the 2010 Modesto Panetti e Carlo Ferrari prize. In 2020 he was awarded the EUROMECH Prize by the European Mechanics Society. == Selected publications == Frisch, U.; Hasslacher, B.; Pomeau, Y. (1986), "Lattice-gas automata for the Navier-Stokes equation", Phys. Rev. Lett., 56 (14): 1505–1508, Bibcode:1986PhRvL..56.1505F, doi:10.1103/PhysRevLett.56.1505, PMID 10032689. Frisch, Uriel (1995). Turbulence. The legacy of A. N. Kolmogorov. Cambridge: Cambridge University Press. ISBN 0-521-45103-5. MR 1428905. Frisch, U.; Matarrese, S.; Mohayaee, R; Sobolevski, A. (2002). "A reconstruction of the initial conditions of the universe by optimal mass transportation". Nature. 417 (6886): 260–262. arXiv:astro-ph/0109483. Bibcode:2002Natur.417..260F. doi:10.1038/417260a. PMID 12015595. S2CID 4379455. == References == == Further reading == Frisch, Uriel (2009), Notice sur les travaux de Uriel Frisch (PDF) (in French), French Academy of Sciences == External links == Uriel Frisch at the Mathematics Genealogy Project Nice Uriel-fest, December 2010 (Photographs from a symposium in honor of Frisch)
Wikipedia:Uriel Rothblum#0
Uriel George "Uri" Rothblum (Tel Aviv, March 16, 1947 – Haifa, March 26, 2012) was an Israeli mathematician and operations researcher. From 1984 until 2012 he held the Alexander Goldberg Chair in Management Science at the Technion – Israel Institute of Technology in Haifa, Israel. Rothblum was born in Tel Aviv to a family of Jewish immigrants from Austria. He went to Tel Aviv University, where Robert Aumann became his mentor; he earned a bachelor's degree there in 1969 and a master's in 1971. He completed his doctorate in 1974 from Stanford University, in operations research, under the supervision of Arthur F. Veinott. After postdoctoral research at New York University, he joined the Yale University faculty in 1975, and moved to the Technion in 1984. Rothblum became president of the Israeli Operational Research Society (ORSIS) for 2006–2008, and editor-in-chief of Mathematics of Operations Research from 2010 until his death. He was elected to the 2003 class of Fellows of the Institute for Operations Research and the Management Sciences. == References ==
Wikipedia:Ursula van Rienen#0
Ursula van Rienen (born 1957) is a German applied mathematician and physicist whose research involves computational electrodynamics, the computational simulation of interactions between electromagnetic fields and biological tissue, and its applications in electrical brain stimulation. She is a university professor in the Institut für Allgemeine Elektrotechnik at the University of Rostock, where she holds the Chair of Electromagnetic Field Theory. == Education and career == Van Rienen studied mathematics and physics at the University of Bonn, earning a Vordiplom (the equivalent of a bachelor's degree) in 1979 and a Diplom (the equivalent of a master's degree) in 1983, with a minor in operations research. She worked as a researcher at DESY, the German Electron Synchrotron research center, from 1983 to 1989. In 1989 she defended a doctoral thesis at the Technische Universität Darmstadt, titled Zur numerischen Berechnung zeitharmonischer elektromagnetischer Felder in offenen, zylindersymmetrischen Strukturen unter Verwendung von Mehrgitterverfahren [On the numerical calculation of time-harmonic electromagnetic fields in open, cylindrically symmetric structures using multigrid methods], supervised by Willi Törnig. Beginning in 1990 she worked as a research assistant at the Technische Universität Darmstadt, and from 1995 as a lecturer. In 1997 she completed a habilitation there, and in the same year took her current position as a professor at the University of Rostock. At Rostock, she was dean of the Faculty of Information Technology and Electrical Engineering from 2004 to 2006, and vice rector for research and research training from 2009 to 2013. == Books == Van Rienen published her habilitation thesis as the book Numerical Methods in Computational Electrodynamics: Linear Systems in Practical Applications (Springer, 2001). == References == == External links == Ursula van Rienen publications indexed by Google Scholar
Wikipedia:Uwe Storch#0
Uwe Storch (12 July 1940, Leopoldshall – 17 September 2017, Lanzarote) was a German mathematician. His field of research was commutative algebra and analytic and algebraic geometry, in particular derivations, divisor class groups, and resultants. Storch studied mathematics, physics and mathematical logic in Münster and in Heidelberg. He received his PhD in 1966 under the supervision of Heinrich Behnke with a thesis on almost (or Q-) factorial rings. He completed his habilitation in Bochum in 1972, became a professor in Osnabrück in 1974, and from 1981 was professor for algebra and geometry in Bochum, retiring in 2005. Uwe Storch was married and had four sons. == Theorem of Eisenbud–Evans–Storch == The theorem of Eisenbud–Evans–Storch states that every algebraic variety in n-dimensional affine space is given geometrically (i.e. up to radical) by n polynomials. == Selected publications == Günther Scheja and Uwe Storch, Lehrbuch der Algebra, 2 volumes, Stuttgart 1980 (1st edition was in 3 volumes), 1988. Uwe Storch and Hartmut Wiebe, Lehrbuch der Mathematik, 4 volumes. == References == == External links == Uwe Storch at the Mathematics Genealogy Project
Wikipedia:V-ring (ring theory)#0
In algebra, a unit or invertible element of a ring is an invertible element for the multiplication of the ring. That is, an element u of a ring R is a unit if there exists v in R such that vu = uv = 1, where 1 is the multiplicative identity; the element v is unique for this property and is called the multiplicative inverse of u. The set of units of R forms a group R× under multiplication, called the group of units or unit group of R. Other notations for the unit group are R∗, U(R), and E(R) (from the German term Einheit). Less commonly, the term unit is sometimes used to refer to the element 1 of the ring, in expressions like ring with a unit or unit ring, and also unit matrix. Because of this ambiguity, 1 is more commonly called the "unity" or the "identity" of the ring, and the phrases "ring with unity" or "ring with identity" may be used to emphasize that one is considering a ring instead of a rng. == Examples == The multiplicative identity 1 and its additive inverse −1 are always units. More generally, any root of unity in a ring R is a unit: if r^n = 1, then r^(n−1) is a multiplicative inverse of r. In a nonzero ring, the element 0 is not a unit, so R× is not closed under addition. A nonzero ring R in which every nonzero element is a unit (that is, R× = R ∖ {0}) is called a division ring (or a skew-field). A commutative division ring is called a field. For example, the unit group of the field of real numbers R is R ∖ {0}. === Integer ring === In the ring of integers Z, the only units are 1 and −1. In the ring Z/nZ of integers modulo n, the units are the congruence classes (mod n) represented by integers coprime to n. They constitute the multiplicative group of integers modulo n. === Ring of integers of a number field === In the ring Z[√3] obtained by adjoining the quadratic integer √3 to Z, one has (2 + √3)(2 − √3) = 1, so 2 + √3 is a unit, and so are its powers, so Z[√3] has infinitely many units.
More generally, for the ring of integers R in a number field F, Dirichlet's unit theorem states that R× is isomorphic to the group Z^n × μ_R, where μ_R is the (finite, cyclic) group of roots of unity in R and n, the rank of the unit group, is n = r₁ + r₂ − 1, where r₁ and r₂ are the number of real embeddings and the number of pairs of complex embeddings of F, respectively. This recovers the Z[√3] example: the unit group of (the ring of integers of) a real quadratic field is infinite of rank 1, since r₁ = 2, r₂ = 0. === Polynomials and power series === For a commutative ring R, the units of the polynomial ring R[x] are the polynomials p(x) = a₀ + a₁x + ⋯ + aₙx^n such that a₀ is a unit in R and the remaining coefficients a₁, …, aₙ are nilpotent, i.e., satisfy aᵢ^N = 0 for some N. In particular, if R is a domain (or more generally reduced), then the units of R[x] are the units of R. The units of the power series ring R[[x]] are the power series p(x) = ∑_{i=0}^∞ aᵢ x^i such that a₀ is a unit in R. === Matrix rings === The unit group of the ring Mn(R) of n × n matrices over a ring R is the group GLn(R) of invertible matrices. For a commutative ring R, an element A of Mn(R) is invertible if and only if the determinant of A is invertible in R. In that case, A⁻¹ can be given explicitly in terms of the adjugate matrix.
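The two integer-ring examples above can be made concrete with a short sketch (my own illustration, not from the article): the unit group of Z/nZ consists of the residues coprime to n, and in Z[√3] the norm a² − 3b² of a + b√3 is multiplicative, so every power of the unit 2 + √3 has norm 1 and is again a unit.

```python
from math import gcd

def units_mod(n):
    """Unit group of Z/nZ: the residues coprime to n."""
    return [a for a in range(1, n) if gcd(a, n) == 1]

# Z[sqrt(3)]: represent a + b*sqrt(3) as the pair (a, b).
def mul(x, y):
    """Multiply two elements of Z[sqrt(3)]."""
    (a, b), (c, d) = x, y
    return (a * c + 3 * b * d, a * d + b * c)

def norm(x):
    """Field norm a^2 - 3*b^2; it is multiplicative, and an element
    is a unit exactly when its norm is +1 or -1."""
    a, b = x
    return a * a - 3 * b * b
```

For instance, units_mod(8) gives [1, 3, 5, 7], and repeated multiplication by (2, 1) produces elements of norm 1, i.e. further units, matching the claim that Z[√3] has infinitely many units.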
=== In general === For elements x and y in a ring R, if 1 − xy is invertible, then 1 − yx is invertible with inverse 1 + y(1 − xy)⁻¹x; this formula can be guessed, but not proved, by the following calculation in a ring of noncommutative power series: (1 − yx)⁻¹ = ∑_{n≥0} (yx)^n = 1 + y(∑_{n≥0} (xy)^n)x = 1 + y(1 − xy)⁻¹x. See Hua's identity for similar results. == Group of units == A commutative ring is a local ring if R ∖ R× is a maximal ideal. As it turns out, if R ∖ R× is an ideal, then it is necessarily a maximal ideal and R is local, since a maximal ideal is disjoint from R×. If R is a finite field, then R× is a cyclic group of order |R| − 1. Every ring homomorphism f : R → S induces a group homomorphism R× → S×, since f maps units to units. In fact, the formation of the unit group defines a functor from the category of rings to the category of groups. This functor has a left adjoint, which is the integral group ring construction. The group scheme GL₁ is isomorphic to the multiplicative group scheme G_m over any base, so for any commutative ring R, the groups GL₁(R) and G_m(R) are canonically isomorphic to U(R). Note that the functor G_m (that is, R ↦ U(R)) is representable in the sense: G_m(R) ≃ Hom(Z[t, t⁻¹], R) for commutative rings R (this for instance follows from the aforementioned adjoint relation with the group ring construction).
Explicitly this means that there is a natural bijection between the set of ring homomorphisms Z[t, t⁻¹] → R and the set of unit elements of R (in contrast, Z[t] represents the additive group G_a, the forgetful functor from the category of commutative rings to the category of abelian groups). == Associatedness == Suppose that R is commutative. Elements r and s of R are called associate if there exists a unit u in R such that r = us; then write r ~ s. In any ring, pairs of additive inverse elements x and −x are associate, since any ring includes the unit −1. For example, 6 and −6 are associate in Z. In general, ~ is an equivalence relation on R. Associatedness can also be described in terms of the action of R× on R via multiplication: two elements of R are associate if they are in the same R×-orbit. In an integral domain, the set of associates of a given nonzero element has the same cardinality as R×. The equivalence relation ~ can be viewed as any one of Green's semigroup relations specialized to the multiplicative semigroup of a commutative ring R. == See also == S-units Localization of a ring and a module == Notes == == Citations == == Sources ==
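The inverse formula for 1 − yx stated in the "In general" subsection can be checked numerically. A minimal sketch with numpy (random matrices chosen arbitrarily; 1 − xy is invertible for generic choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
x = rng.standard_normal((n, n))
y = rng.standard_normal((n, n))
I = np.eye(n)

# If 1 - xy is invertible, then 1 - yx is invertible with inverse
# 1 + y (1 - xy)^{-1} x, as in the noncommutative power series calculation.
inv_candidate = I + y @ np.linalg.inv(I - x @ y) @ x
check = (I - y @ x) @ inv_candidate  # should be the identity matrix
```

Multiplying out (1 − yx)(1 + y(1 − xy)⁻¹x) gives 1 − yx + y[(1 − xy)⁻¹ − xy(1 − xy)⁻¹]x = 1 − yx + yx = 1, which is exactly what the numerical check confirms.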
Wikipedia:V. J. Havel#0
Václav Jaromír Havel is a Czech mathematician known for characterizing the degree sequences of undirected graphs, the result underlying the Havel–Hakimi algorithm and an important contribution to graph theory. == Selected publications == Havel, Václav (1955), "A remark on the existence of finite graphs", Časopis pro pěstování matematiky (in Czech), 80 (4): 477–480, doi:10.21136/CPM.1955.108220 == References ==
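The Havel–Hakimi algorithm mentioned here admits a short implementation (a minimal sketch, my own, not from Havel's paper): repeatedly remove the largest remaining degree d and subtract 1 from the next d largest degrees; the sequence is graphical exactly when this process terminates with all zeros.

```python
def is_graphical(degrees):
    """Havel-Hakimi test: can `degrees` be realized as the degree
    sequence of a simple undirected graph?"""
    seq = sorted(degrees, reverse=True)
    while seq and seq[0] > 0:
        d = seq.pop(0)        # take the vertex of largest remaining degree
        if d > len(seq):
            return False      # not enough other vertices to connect to
        for i in range(d):
            seq[i] -= 1       # connect it to the d next-largest vertices
            if seq[i] < 0:
                return False
        seq.sort(reverse=True)
    return True
```

For example, [3, 3, 2, 2, 1, 1] is graphical, while [4, 3, 2, 1] is not (one vertex would need more neighbors than exist).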
Wikipedia:V. Kumar Murty#0
Vijaya Kumar Murty (born 20 May 1956) is an Indo-Canadian mathematician working in number theory. == Biography == Murty obtained his BSc in 1977 from Carleton University and his PhD in mathematics in 1982 from Harvard University under John Tate. He and his brother, M. Ram Murty, have written more than 20 joint papers. In a book edited by Alex Michalos, there is a description of how the Murty brothers learned mathematics in their teens. == Awards == Murty received the Coxeter–James Prize in 1991 from the Canadian Mathematical Society. He was elected to the Royal Society of Canada in 1995. In 1996, he, along with his brother, M. Ram Murty, received the Ferran Sunyer i Balaguer Prize for the book "Non-vanishing of L-functions and their applications." In 2018, the Canadian Mathematical Society listed him in their inaugural class of fellows. He was named a Member of the Order of Canada in 2024. == References == == External links == Vijaya Kumar Murty: Home Page—University of Toronto, Department of Mathematics.
Wikipedia:Vaclav Zizler#0
Vaclav Zizler (born 8 March 1943) is a Czech mathematics professor specializing in Banach space theory and non-linear spaces. As of 2006, Dr. Zizler holds the position of Professor Emeritus at the University of Alberta in Edmonton, Alberta, Canada. Formerly he was at the Mathematical Institute of the Czech Academy of Sciences, where he was Head of Research. In 2001 the Czech Minister of Education named his Functional Analysis and Infinite Dimensional Geometry the university textbook of the year. In 2008 he was awarded a laureate medal by the Czech Mathematical Society for his excellent lifelong work in mathematical analysis and selfless activities in favour of Czech mathematics. == Selected publications == Books Fabian, Marián; Habala, Petr; Hájek, Petr; Montesinos Santalucía, Vicente; Pelant, Jan; Zizler, Václav (2001), Functional Analysis and Infinite-dimensional Geometry, CMS Books in Mathematics, vol. 8, New York: Springer-Verlag, doi:10.1007/978-1-4757-3480-5, ISBN 0-387-95219-5. Deville, Robert; Godefroy, Gilles; Zizler, Václav (1993), Smoothness and Renormings in Banach Spaces, Pitman Monographs and Surveys in Pure and Applied Mathematics, vol. 64, New York: John Wiley & Sons, p. xii+376, ISBN 0-582-07250-6. Research articles Deville, Robert; Godefroy, Gilles; Zizler, Václav (1993), "A smooth variational principle with applications to Hamilton-Jacobi equations in infinite dimensions", Journal of Functional Analysis, 111 (1): 197–212, doi:10.1006/jfan.1993.1009, MR 1200641. Zizler, Václav (1973), "On some extremal problems in Banach spaces", Mathematica Scandinavica, 32: 214–224 (1974), doi:10.7146/math.scand.a-11456, MR 0346492. == References == == External links == Zizler's homepage at the University of Alberta. Vaclav Zizler at the Mathematics Genealogy Project
Wikipedia:Vadim G. Vizing#0
Vadim Georgievich Vizing (Russian: Вади́м Гео́ргиевич Визинг, Ukrainian: Вадим Георгійович Візінг; 25 March 1937 – 23 August 2017) was a Soviet and Ukrainian mathematician known for his contributions to graph theory, and especially for Vizing's theorem stating that the edges of any simple graph with maximum degree Δ can be colored with at most Δ + 1 colors. == Biography == Vizing was born in Kiev on March 25, 1937. His mother was half-German, and because of this the Soviet authorities forced his family to move to Siberia in 1947. After completing his undergraduate studies in mathematics in Tomsk State University in 1959, he began his Ph.D. studies at the Steklov Institute of Mathematics in Moscow, on the subject of function approximation, but he left in 1962 without completing his degree. Instead, he returned to Novosibirsk, working from 1962 to 1968 at the Russian Academy of Sciences there and earning a Ph.D. in 1966. In Novosibirsk, he was a regular participant in A. A. Zykov's seminar in graph theory. After holding various additional positions, he moved to Odessa in 1974, where he taught mathematics for many years at the Academy for Food Technology (originally known as Одесский технологический институт пищевой промышленности им. М. В. Ломоносова, "Odessa Technological Institute of Food Industry named after Mikhail Lomonosov"). == Research results == The result now known as Vizing's theorem, published in 1964, when Vizing was working in Novosibirsk, states that the edges of any simple graph with maximum degree Δ can be colored using at most Δ + 1 colors.[V64] It is a continuation of the work of Claude Shannon, who showed that any multigraph can have its edges colored with at most (3/2)Δ colors (a tight bound, as a triangle with Δ/2 edges per side requires this many colors).
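Vizing's bound can be verified by brute force on small graphs. A hedged sketch (my own illustration; exponential-time backtracking, not Vizing's constructive proof): compute the minimum number of colors in a proper edge coloring and compare it with Δ + 1.

```python
from itertools import combinations

def min_edge_colors(edges):
    """Smallest k such that the edges of a simple graph admit a proper
    edge coloring (edges sharing a vertex get distinct colors)."""
    def ok(coloring, e, c):
        # color c is legal for edge e if no already-colored edge
        # sharing a vertex with e has color c
        return all(c != coloring[f] for f in coloring if set(e) & set(f))

    def solve(i, k, coloring):
        if i == len(edges):
            return True
        for c in range(k):
            if ok(coloring, edges[i], c):
                coloring[edges[i]] = c
                if solve(i + 1, k, coloring):
                    return True
                del coloring[edges[i]]
        return False

    k = 1
    while not solve(0, k, {}):
        k += 1
    return k

# Triangle: maximum degree 2, yet 3 colors are needed (class two),
# matching the Vizing bound of Delta + 1.
triangle = [(0, 1), (1, 2), (0, 2)]
```

By contrast, the complete graph K4 has maximum degree 3 and needs only 3 colors (class one), so both cases stay within Δ + 1.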
Although Vizing's theorem is now standard material in many graph theory textbooks, Vizing had trouble publishing the result initially, and his paper on it appears in an obscure journal, Diskret. Analiz. Vizing also made other contributions to graph theory and graph coloring, including the introduction of list coloring,[V76] the formulation of the total coloring conjecture (still unsolved) stating that the edges and vertices of any graph can together be colored with at most Δ + 2 colors,[V68] Vizing's conjecture (also unsolved) concerning the domination number of cartesian products of graphs,[V68] and the 1974 definition of the modular product of graphs as a way of reducing subgraph isomorphism problems to finding maximum cliques in graphs.[V74] He also proved a stronger version of Brooks' theorem that applies to list coloring. From 1976, Vizing stopped working on graph theory and studied problems of scheduling instead, only returning to graph theory again in 1995. == Awards == Great Silver Medal of the Institute of Mathematics of the Siberian Department of the Russian Academy of Sciences == Selected publications == == Notes == == References == == External links == List of recent publications of Vadim Vizing and mathnet.ru
Wikipedia:Vadim Kaloshin#0
Vadim Kaloshin is a Soviet-born mathematician, known for his contributions to dynamical systems. He was a student of John N. Mather at Princeton University, obtaining a Ph.D. in 2001. He was subsequently a C. L. E. Moore instructor at the Massachusetts Institute of Technology, and a faculty member at the California Institute of Technology and Pennsylvania State University. Until 2020 he held the Michael Brin Chair at the University of Maryland, College Park, as a professor of mathematics in the College of Computer, Mathematical, and Natural Sciences. He is now a chair professor at the Institute of Science and Technology Austria. After receiving his Ph.D. from Princeton University in 2001, he was awarded the American Institute of Mathematics five-year fellowship. He is a recipient of the Sloan fellowship (2004) and of the Simons fellowship (2016). He was awarded a Moscow Mathematical Society Prize (2001) and a Barcelona Prize in Dynamical Systems (2019). In 2020 he received a gold medal from the International Consortium of Chinese Mathematics (ICCM). He was an invited speaker at the 2006 International Congress of Mathematicians in Madrid, a plenary speaker at the 2015 International Congress on Mathematical Physics in Santiago, Chile, and an invited speaker at the conference Dynamics, Equations and Applications in Kraków in 2019. In 2021 he was awarded a European Research Council (ERC) Advanced Grant. From 2006 to 2018 he was an editor of Inventiones mathematicae. He is a member of the editorial boards of Advances in Mathematics, Analysis & PDE, Revista Matemática Iberoamericana, and Ergodic Theory and Dynamical Systems. In 2020 he was elected to Academia Europaea (the Academy of Europe). In 2023 he was elected to the European Academy of Sciences and Arts. He recently received the Frontier of Science Award. == References ==
Wikipedia:Vadym Slyusar#0
Vadym Slyusar (born 15 October 1964, vil. Kolotii, Reshetylivka Raion, Poltava region, Ukraine) is a Soviet and Ukrainian scientist, Professor, Doctor of Technical Sciences, Honored Scientist and Technician of Ukraine, founder of the tensor-matrix theory of digital antenna arrays (DAAs), N-OFDM and other theories in the fields of radar systems, smart antennas for wireless communications and digital beamforming. == Scientific results == === N-OFDM theory === In 1992 Vadym Slyusar patented the first optimal demodulation method for N-OFDM signals after the fast Fourier transform (FFT). This patent marked the beginning of the theory of N-OFDM signals. In this regard, W. Kozek and A. F. Molisch wrote in 1998, about N-OFDM signals with sub-carrier spacing α < 1, that "it is not possible to recover the information from the received signal, even in the case of an ideal channel." But in 2001 Vadym Slyusar proposed such non-orthogonal frequency digital modulation (N-OFDM) as an alternative to OFDM for communications systems. Slyusar's next publication on this method, in July 2002, has priority over the conference paper of I. Darwazeh and M. R. D. Rodrigues (September 2003) regarding SEFDM. The description of the method of optimal processing for N-OFDM signals without an FFT of the ADC samples was submitted for publication by Slyusar in October 2003. Slyusar's N-OFDM theory inspired numerous investigations in this area by other scientists. === Tensor-matrix theory of digital antenna arrays === In 1996 Slyusar proposed the column-wise Khatri–Rao product to estimate four coordinates of signal sources at a digital antenna array. The alternative concept of the matrix product, which uses row-wise splitting of matrices with a given quantity of rows (the face-splitting product), was proposed by Slyusar in 1996 as well.
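The two matrix products named here can be sketched in a few lines of numpy (my own illustration of the standard definitions, not code from Slyusar's papers): the column-wise Khatri–Rao product takes the Kronecker product of matching columns, and the face-splitting product takes the Kronecker product of matching rows.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product: Kronecker product of the
    j-th column of A with the j-th column of B, for each j."""
    assert A.shape[1] == B.shape[1], "need the same number of columns"
    return np.vstack([np.kron(A[:, j], B[:, j])
                      for j in range(A.shape[1])]).T

def face_split(A, B):
    """Face-splitting (row-wise Kronecker) product: Kronecker product
    of the i-th row of A with the i-th row of B, for each i."""
    assert A.shape[0] == B.shape[0], "need the same number of rows"
    return np.vstack([np.kron(A[i, :], B[i, :])
                      for i in range(A.shape[0])])
```

For A of shape (m, n) and B of shape (p, n), khatri_rao(A, B) has shape (m·p, n); for row counts matching instead, face_split returns one long row per input row.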
After these results, the tensor-matrix theory of digital antenna arrays and new matrix operations were developed (such as the block face-splitting product, the generalized face-splitting product, the matrix derivative of the face-splitting product, etc.), which are also used in artificial intelligence and machine learning systems to minimize convolution and tensor sketch operations, in popular natural language processing models, and in hypergraph models of similarity. The face-splitting product and its properties are also used for multidimensional smoothing with P-splines and for the generalized linear array model in statistics, in two- and multidimensional approximations of data. === Theory of odd-order I/Q demodulators === The theory of odd-order I/Q demodulators, which Slyusar proposed in 2014, grew out of his investigations of the tandem scheme of two-stage signal processing for the design of an I/Q demodulator and of the multistage I/Q demodulator concept in 2012. As a result, Slyusar "presents a new class of I/Q demodulators with odd order derived from the even order I/Q demodulator which is characterized by linear phase-frequency relation for wideband signals". === Results in other fields of research === Slyusar has provided numerous theoretical works realized in several experimental radar stations with DAAs, which were successfully tested. He investigated electrically small antennas and new constructions of such antennas, developed the theory of metamaterials, and proposed new ideas for applying augmented reality and artificial intelligence to combat vehicles. Slyusar holds 68 patents and has 850 publications in the areas of digital antenna arrays for radars and wireless communications. == Life data == 1981–1985 – student at the Orenburg Air Defense Higher Military School. During this time Slyusar's scientific career began; he published his first scientific report in 1985. June 1992 – defended the dissertation for a candidate degree (Techn. Sci.)
at the Council of the Military Academy of Air Defense of the Land Forces (Kyiv). A significant stage in the recognition of Vadym Slyusar's scientific results was the defense of his dissertation for the doctoral degree (Techn. Sci.) in 2000. Professor – since 2005; Honored Scientist and Technician of Ukraine – 2008. Since 1996 – has worked at the Central Scientific Research Institute of Armament and Military Equipment of the Armed Forces of Ukraine (Kyiv). Military rank – Colonel. Since 2003 – has participated in Ukraine–NATO cooperation as head of national delegations, point of contact, and national representative within expert groups of the NATO Conference of National Armaments Directors, and as a technical member of Research Task Groups (RTG) of the NATO Science and Technology Organization (STO). Since 2009 – member of the editorial board of Izvestiya Vysshikh Uchebnykh Zavedenii. Radioelektronika. == Selected awards == Honored Scientist and Technician of Ukraine (2008) Soviet and Ukrainian military medals == Gallery == == See also == Digital antenna array N-OFDM Face-splitting product of matrix Tensor random projections == References == == External links == Personal Website Vadym Slyusar publications indexed by Google Scholar Vadym Slyusar's publications indexed by the Scopus bibliographic database. (subscription required) Selected inventions of Vadym Slyusar – Ukrainian Patents Data Base. «Науковці України – еліта держави». Том VI, 2020. – С. 216 Who's Who in the World 2013. – P. 2233 Who's Who in the World 2014
Wikipedia:Valentin Afraimovich#0
Valentin Afraimovich (Russian: Валентин Сендерович Афраймович, 2 April 1945, Kirov, Kirov Oblast, USSR – 21 February 2018, Nizhny Novgorod, Russia) was a Soviet, Russian and Mexican mathematician. He made contributions to dynamical systems theory, the qualitative theory of ordinary differential equations, bifurcation theory, the concept of attractor, strange attractors, space-time chaos, mathematical models of non-equilibrium media and biological systems, travelling waves in lattices, and the complexity of orbits and dimension-like characteristics in dynamical systems. == Biography == He received his PhD (Kandidat) degree in 1974 at Nizhny Novgorod State University under the supervision of L. P. Shil'nikov. In 1990 he received his Doctor of Science degree in Mathematics and Physics at Saratov State University in Russia. He then held several academic positions, including: 1992–1995 Visiting Principal Research Scientist, Georgia Institute of Technology, Atlanta. 1995–1996 Visiting Professor, Northwestern University, Evanston, IL. 1996–1998 Visiting Professor, National Tsing Hua University, Hsinchu, Taiwan. 1998–present Professor–researcher, IICO, Universidad Autónoma de San Luis Potosí, S.L.P., México. Afraimovich's students include Mark Shereshevsky, Nizhny Novgorod, 1990; Todd Ray Young, Atlanta, Georgia, 1995; Antonio Morante, San Luis Potosí (SLP), México, 2002; Salomé Murgia, SLP, México, 2003; Alberto Cordonet, SLP, México, 2002; Francisco Ordaz, SLP, México, 2004; Leticia Ramirez, SLP, México, 2005; Irma Tristan-Lopez, SLP, México, 2010; Rosendo Vazquez-Bañuelos, 2013. == Selected scientific papers == VS Afraimovich, G Moses, TR Young. Two dimensional heteroclinic attractor in the generalized Lotka–Volterra system. Nonlinearity 29 (2016), 1645–1667. doi:10.1088/0951-7715/29/5/1645. V. Afraimovich, X. Gong, M. Rabinovich. Sequential memory: Binding dynamics. Chaos, 5(10):103118, 2015. V. Afraimovich, M. Courbage, L. Glebsky.
Directional Complexity and Entropy for Lift Mappings. Discrete and Continuous Dynamical Systems, Series B: Mathematical Modelling, Analysis and Computations, Vol. 20, No. 10, December 2015. Valentin S. Afraimovich, Todd R. Young, Mikhail I. Rabinovich. Hierarchical Heteroclinics in Dynamical Model of Cognitive Processes: Chunking. International Journal of Bifurcation and Chaos, Vol. 24, No. 10, 1450132 (2014). V. S. Afraimovich, L. P. Shilnikov. Symbolic Dynamics in Multidimensional Annulus and Chimera States. International Journal of Bifurcation and Chaos, Vol. 24, No. 8 (August 2014), 1440002. DOI: 10.1142/S0218127414400021. V. S. Afraimovich, T. Young, M.K. Muezzinglu, M. Rabinovich. Nonlinear Dynamics of Emotion-Cognition Interaction: When Emotion Does Not Destroy Cognition? Bull. Math. Biol. (2011) 73:266–284. DOI 10.1007/s11538-010-9572-x. V. S. Afraimovich, L.A. Bunimovich, S.V. Moreno, Dynamical Networks: Continuous Time and General Discrete Time Models, Regular and Chaotic Dynamics, Vol. 15, 129–147, 2010. V. Afraimovich, L. Glebsky, Measures Related to (ε,n)-Complexity Functions, Discrete and Continuous Dynamical Systems, Vol. 22, No. 12, 2008. V. S. Afraimovich, M. Rabinovich, R. Huerta, P. Varona, Transient Cognitive Dynamics, Metastability, and Decision Making, PLOS Computational Biology 04, 05: 1–9, 2008. V. Afraimovich, Some topological properties of lattice dynamical systems, in Dynamics of Coupled Map Lattices and of Related Spatially Extended Systems, eds. J.-R. Chazottes and B. Fernandez, Lecture Notes in Physics, Springer 2005, pp. 153–180. V. Afraimovich, V. Zhigulin and M. Rabinovich, On the origin of reproducible sequential activity in neural circuits, Chaos 14 (2004), 1123–1129. V. Afraimovich, L. Bunimovich and J. Hale, Sistemi dinamici, Storia della Scienza IX, Enciclopedia Italiana, 841–850 (2003). V. Afraimovich, G.M. Zaslavsky, Space time complexity in Hamiltonian dynamics, Chaos, 13, 2 (2003), pp. 519–532. V. Afraimovich, J. R. Chazottes and A.
Cordonet, Synchronization in directionally coupled systems, Discrete Contin. Dyn. Syst., Ser. B, vol. 1 (2001), 421–442. V. Afraimovich, J.-R. Chazottes and B. Saussol, Local dimensions for Poincaré recurrences, Electron. Res. Announc. Amer. Math. Soc., vol. 6 (2000), 64–74. V. Afraimovich and T. Young, Relative density of irrational rotation numbers in families of circle diffeomorphisms, Ergodic Theory and Dynamical Systems, 18 (1998), 1–16. V. Afraimovich and S.-N. Chow, Topological spatial chaos and homoclinic points of Z^d actions in lattice dynamical systems, Japan J. Indust. Appl. Math., 12 (1995), 1–17. V. Afraimovich, S.-N. Chow and W. Liu, Lorenz type attractors from codimension-one bifurcation, Journal of Dynamics and Differential Equations, 7 (2), 1995, 375–407. V. Afraimovich and V.I. Nekorkin, Chaos of traveling waves in a discrete chain of diffusively coupled maps, International Journal of Bifurcation and Chaos, 4 (3) (1994). V. Afraimovich and Ya. Pesin, Hyperbolicity of infinite-dimensional drift systems, Nonlinearity, 3 (1990), 1–19. V. Afraimovich, N.N. Verichev and M.I. Rabinovich, Stochastic synchronization of oscillations in dissipative systems, Radiofizika, 29 (9), 1050–1060 (1986) (in Russian). V. Afraimovich, V.V. Bykov and L.P. Shil'nikov, On attracting nonstructurally stable limiting sets of the type of Lorenz attractor, Trans. of Moscow Math. Soc., 44 (1982). V. Afraimovich and L.P. Shil'nikov, On critical sets of Morse–Smale systems, Trans. Moscow Math. Soc., 28 (1973). == Selected bibliography == Afraimovich, V.S.; V.I. Arnold; et al. (1999). Bifurcation Theory And Catastrophe Theory. Springer. ISBN 3-540-65379-1. Afraimovich, V.S.; I. S. Aranson; M. I. Rabinovich (1989). Multidimensional Strange Attractors and Turbulence. Harwood Academic. ISBN 3-7186-4868-7. Afraimovich, V.S.; Sze-Bi Hsu (2003). Lectures on Chaotic Dynamical Systems. Ams/Ip Studies in Advanced Mathematics. ISBN 0-8218-3168-2. Afrajmovich, V.S.; V.I. Arnold; Yu S. Il'yashenko; L.
P. Shil'nikov (6 June 1994). Dynamical Systems V. Springer. ISBN 3-540-18173-3. Afraimovich, V.S.; V. I. Nekorkin; G. V. Osipov; V. D. Shalfeev. Stability, structures and chaos in nonlinear synchronization networks. ISBN 978-981-279-871-8. Afraimovich, V.S.; E. Ugalde; J. Urías (2006). Fractal Dimensions for Poincaré Recurrences (Monograph Series on Nonlinear Sciences and Complexity Volume 2). Elsevier. ISBN 0-444-52189-5. Афраймович, В.С.; Э. Угальде; Х. Уриас (2011). Фрактальные Размерности для Времен Возвращения Пуанкаре. R&C Dynamics, Russia. ISBN 978-5-93972-903-1. Luo, A.; Afraimovich V.S., eds. (2010). Hamiltonian Chaos Beyond the KAM Theory. Springer. ISBN 978-3-642-12717-5. Luo, A.; Afraimovich V.S., eds. (2010). Long-range Interactions, Stochasticity and Fractional Dynamics. Springer. ISBN 978-3-642-12342-9. Luo, A.; Afraimovich V.S., eds. (2012). Continuous Dynamical Systems. Higher Education Press Limited Company and L&H Scientific Publishing. ISBN 978-1-62155-000-6. Luo, A.; Afraimovich V.S., eds. (2012). Discrete and Switching Dynamical Systems. Higher Education Press Limited Company and L&H Scientific Publishing. ISBN 978-1-62155-002-0. Afraimovich, V.; Luo A.; Fu X. (2014). Nonlinear Dynamics and Complexity (Nonlinear Systems and Complexity). Springer-Verlag Gmbh. ISBN 978-3319023526. Afraimovich, V.; Machado J.A.T.; Zhang J. (2016). Complex Motions and Chaos in Nonlinear Systems (Nonlinear Systems and Complexity). Springer-Verlag Gmbh. ISBN 978-3-319-28764-5. == Afraimovich award == Afraimovich Award has been granted to outstanding young scholars in nonlinear physical science by NSC since 2020. == See also == Dynamical systems Homoclinic orbit Topology Chaos theory Attractor Bifurcation theory Catastrophe theory Torus == References == == External links == Personal web page Conference celebrating Afraimovich's 65th anniversary Valentin S. 
Afraimovich at DBLP Bibliography Server American Institute of Mathematical Sciences A super short curriculum vitae Valentin S. Afraimovich at the Mathematics Genealogy Project Torus breakdown article at Scholarpedia Lagrange Award 2012 Book dedicated to V. Afraimovich preface
Wikipedia:Valentina Harizanov#0
Valentina Harizanov is a Serbian-American mathematician and professor of mathematics at The George Washington University. Her main research contributions are in computable structure theory (roughly at the intersection of computability theory and model theory), where she introduced the notion of degree spectra of relations on computable structures and obtained the first significant results concerning uncountable, countable, and finite Turing degree spectra. Her recent interests include algorithmic learning theory and spaces of orders on groups. == Education == She obtained her Bachelor of Science in mathematics in 1978 at the University of Belgrade and her Ph.D. in mathematics in 1987 at the University of Wisconsin–Madison under the direction of Terry Millar. == Career == At The George Washington University, Harizanov was an assistant professor of mathematics from 1987 to 1993, an associate professor of mathematics from 1994 to 2002, and a professor of mathematics from 2003 to the present. She has held two visiting professor positions, one in 1994 at the University of Maryland, College Park and one in 2014 at the Kurt Gödel Research Center at the University of Vienna. Harizanov has co-directed the Center for Quantum Computing, Information, Logic, and Topology at The George Washington University since 2011. == Research == In 2009, Harizanov received a grant from the National Science Foundation to research how algebraic, topological, and algorithmic properties of mathematical structures relate. == Awards and honors == Harizanov won the Oscar and Shoshana Trachtenberg Prize for Faculty Scholarship from The George Washington University (GWU) in 2016. This award is presented each year to a tenured GWU faculty member to recognize outstanding research accomplishments. She was named MSRI Eisenbud Professor for Fall 2020. == Publications == Harizanov has over 40 publications in peer-reviewed journals, including V.S. 
Harizanov, "Some effects of Ash–Nerode and other decidability conditions on degree spectra," Annals of Pure and Applied Logic 55 (1), pp. 51–65 (1991), cited 21 times according to Web of Science. In addition, she has published the following book-length survey paper and co-edited, co-authored book: V.S. Harizanov, "Pure computable model theory," in the volume: Handbook of Recursive Mathematics, vol. 1, Yu.L. Ershov, S.S. Goncharov, A. Nerode, and J.B. Remmel, editors (North-Holland, Amsterdam, 1998), pp. 3–114. M. Friend, N.B. Goethe, and V.S. Harizanov, Induction, Algorithmic Learning Theory, and Philosophy, Series: Logic, Epistemology, and the Unity of Science, vol. 9, Springer, Dordrecht, 304 pp., 2007. Degree spectra of relations were introduced and first studied in Harizanov's dissertation, Degree Spectrum of a Recursive Relation on a Recursive Structure (1987). == References == == External links == Valentina Harizanov's home page
Wikipedia:Valentine Joseph#0
Joseph A. Valentine (July 24, 1900, New York City, as Giuseppe Valentino – May 18, 1949, Cheviot Hills, California) was an Italian-American cinematographer, a five-time nominee for the Academy Award for Best Cinematography, and a co-winner once, in 1949. == Biography == Trained in photography, Valentine moved to working in films in the 1920s, and from 1924 he was a chief cinematographer. Honing his craft on several B-films, he spent his final years on the cinematography for three Alfred Hitchcock films. Valentine was nominated for the Academy Award in 1937 for Wings Over Honolulu, in 1938 for Mad About Music, in 1939 for First Love, and in 1940 for Spring Parade. In 1949, on his fifth nomination, he won for Joan of Arc. == Partial filmography == == References == == External links == Joseph Valentine at IMDb
Wikipedia:Valeria Simoncini#0
Valeria Simoncini (born 1966) is an Italian researcher in numerical analysis who works as a professor in the mathematics department at the University of Bologna. Her research involves the computational solution of equations involving large matrices, and their applications in scientific computing. She is the chair of the SIAM Activity Group on Linear Algebra. == Education and career == Simoncini earned a degree from the University of Bologna in 1989, became a visiting scholar at the University of Illinois at Urbana–Champaign from 1991 to 1993, and completed her PhD at the University of Padua in 1994. After working at CNR from 1995 to 2000, she returned to Bologna as an associate professor in 2000, and was promoted to full professor in 2010. == Book == With Antonio Navarra, she is the author of the book A Guide to Empirical Orthogonal Functions for Climate Data Analysis (Springer, 2010). == Recognition == Simoncini was a second-place winner of the Leslie Fox Prize for Numerical Analysis in 1997. In 2014 she was elected as a fellow of the Society for Industrial and Applied Mathematics "for contributions to numerical linear algebra". She was named to the 2021 class of fellows of the American Mathematical Society "for contributions to computational mathematics, in particular to numerical linear algebra". In 2023, she was elected to serve on the SIAM Council. == References == == External links == Home page Valeria Simoncini publications indexed by Google Scholar
Wikipedia:Valeria de Paiva#0
Valeria Correa Vaz de Paiva is a Brazilian mathematician, logician, and computer scientist. Her work includes research on logical approaches to computation, especially using category theory, knowledge representation and natural language semantics, and functional programming with a focus on foundations and type theories. == Education == De Paiva earned a bachelor's degree in mathematics in 1982, a master's degree in 1984 (on pure algebra), and completed a doctorate at the University of Cambridge in 1988, under the supervision of Martin Hyland. Her thesis introduced Dialectica spaces, a categorical way of constructing models of linear logic, based on Kurt Gödel's Dialectica interpretation. == Career and research == She worked for nine years at PARC in Palo Alto, California, and also worked at Rearden Commerce and Cuil before joining Nuance. She is an honorary research fellow in computer science at the University of Birmingham. She is currently on the Council of the Division for Logic, Methodology and Philosophy of Science and Technology of the International Union of History and Philosophy of Science and Technology (2020–2023). === Selected publications === Applied Category Theory in Chemistry, Computing, and Social Networks. (with Baez, Cho, Ciccala and Otter). Notices of the American Mathematical Society, vol. 69, number 2, February 2022. Term Assignment for Intuitionistic Linear Logic. (with Benton, Bierman and Hyland). Technical Report 262, University of Cambridge Computer Laboratory. August 1992. Lineales. (with J.M.E. Hyland) In "O que nos faz pensar", special number in Logic of "Cadernos do Dept. de Filosofia da PUC", Pontifical Catholic University of Rio de Janeiro, April 1991. A Dialectica-like Model of Linear Logic. In Proceedings of Category Theory and Computer Science, Manchester, UK, September 1989. Springer-Verlag LNCS 389 (eds. D. Pitt, D. Rydeheard, P. Dybjer, A. Pitts and A. Poigne). The Dialectica Categories.
In Proc of Categories in Computer Science and Logic, Boulder, CO, 1987. Contemporary Mathematics, vol 92, American Mathematical Society, 1989 (eds. J. Gray and A. Scedrov) == References ==
Wikipedia:Valeriy Oseledets#0
Valeriy Iustinovich Oseledets (Russian: Валерий Иустинович Оселедец; 25 May 1940 – 13 March 2025) was a Soviet and Russian mathematician. == Biography == Oseledets was born on 25 May 1940 in the Soviet Union. He completed his undergraduate program in 1962 at Lomonosov Moscow State University, where he studied probability theory. In 1965 he proved the multiplicative ergodic theorem, now known as the Oseledets theorem. He received his Ph.D. under advisor Yakov Sinai in 1967. He was a faculty member and professor in the Department of Mechanics and Mathematics at Lomonosov Moscow State University. His work primarily focused on statistical mechanics, stochastic analysis, dynamical systems, and probability. Oseledets died on 13 March 2025, at the age of 84. == Awards == Kolmogorov Prize, received in 2009. == References ==
Wikipedia:Valery Glivenko#0
In the theory of probability, the Glivenko–Cantelli theorem (sometimes referred to as the fundamental theorem of statistics), named after Valery Ivanovich Glivenko and Francesco Paolo Cantelli, describes the asymptotic behaviour of the empirical distribution function as the number of independent and identically distributed observations grows. Specifically, the empirical distribution function converges uniformly to the true distribution function almost surely. The uniform convergence of more general empirical measures becomes an important property of the Glivenko–Cantelli classes of functions or sets. The Glivenko–Cantelli classes arise in Vapnik–Chervonenkis theory, with applications to machine learning. Applications can be found in econometrics making use of M-estimators. == Statement == Assume that $X_1, X_2, \dots$ are independent and identically distributed random variables in $\mathbb{R}$ with common cumulative distribution function $F(x)$. The empirical distribution function for $X_1, \dots, X_n$ is defined by $$F_n(x) = \frac{1}{n} \sum_{i=1}^{n} I_{(-\infty,x]}(X_i) = \frac{1}{n} \bigl|\{\, i \mid X_i \le x,\ 1 \le i \le n \,\}\bigr|,$$ where $I_C$ is the indicator function of the set $C$. For every (fixed) $x$, $F_n(x)$ is a sequence of random variables which converges to $F(x)$ almost surely by the strong law of large numbers. Glivenko and Cantelli strengthened this result by proving uniform convergence of $F_n$ to $F$.
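As a quick illustration of the definition above, the empirical distribution function is computable in a few lines. A minimal Python/NumPy sketch (the sample values and the helper name `ecdf` are illustrative):

```python
import numpy as np

def ecdf(sample, x):
    """Empirical distribution function F_n(x) = (1/n) * #{i : X_i <= x}."""
    sample = np.asarray(sample)
    return np.mean(sample <= x)

data = [2.0, 1.0, 3.0, 2.0]
print(ecdf(data, 2.0))  # 3 of the 4 observations are <= 2.0, so F_4(2.0) = 0.75
```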
=== Theorem === $$\|F_n - F\|_\infty = \sup_{x \in \mathbb{R}} \bigl|F_n(x) - F(x)\bigr| \longrightarrow 0 \quad \text{almost surely.}$$ This theorem originates with Valery Glivenko and Francesco Cantelli, in 1933. === Remarks === If $X_n$ is a stationary ergodic process, then $F_n(x)$ converges almost surely to $F(x) = \operatorname{E}\bigl[1_{X_1 \le x}\bigr]$. The Glivenko–Cantelli theorem gives a stronger mode of convergence than this in the iid case. An even stronger uniform convergence result for the empirical distribution function is available in the form of an extended type of law of the iterated logarithm. See asymptotic properties of the empirical distribution function for this and related results. == Proof == For simplicity, consider a case of continuous random variable $X$. Fix $-\infty = x_0 < x_1 < \cdots < x_{m-1} < x_m = \infty$ such that $F(x_j) - F(x_{j-1}) = \frac{1}{m}$ for $j = 1, \dots, m$. Now for all $x \in \mathbb{R}$ there exists $j \in \{1, \dots, m\}$ such that $x \in [x_{j-1}, x_j]$. $$\begin{aligned} F_n(x) - F(x) &\le F_n(x_j) - F(x_{j-1}) = F_n(x_j) - F(x_j) + \frac{1}{m},\\ F_n(x) - F(x) &\ge F_n(x_{j-1}) - F(x_j) = F_n(x_{j-1}) - F(x_{j-1}) - \frac{1}{m}. \end{aligned}$$ Therefore, $$\|F_n - F\|_\infty = \sup_{x \in \mathbb{R}} |F_n(x) - F(x)| \le \max_{j \in \{1,\dots,m\}} |F_n(x_j) - F(x_j)| + \frac{1}{m}.$$
Since $\max_{j \in \{1,\dots,m\}} |F_n(x_j) - F(x_j)| \to 0$ almost surely by the strong law of large numbers, we can guarantee that for any positive $\varepsilon$ and any integer $m$ such that $1/m < \varepsilon$, we can find $N$ such that for all $n \ge N$ we have $\max_{j \in \{1,\dots,m\}} |F_n(x_j) - F(x_j)| \le \varepsilon - 1/m$ almost surely. Combined with the above result, this further implies that $\|F_n - F\|_\infty \le \varepsilon$ almost surely, which is the definition of almost sure convergence. == Empirical measures == One can generalize the empirical distribution function by replacing the set $(-\infty, x]$ by an arbitrary set $C$ from a class of sets $\mathcal{C}$ to obtain an empirical measure indexed by sets $C \in \mathcal{C}$: $$P_n(C) = \frac{1}{n} \sum_{i=1}^{n} I_C(X_i), \quad C \in \mathcal{C},$$ where $I_C(x)$ is the indicator function of the set $C$. A further generalization is the map induced by $P_n$ on measurable real-valued functions $f$, which is given by $$f \mapsto P_n f = \int_S f \, dP_n = \frac{1}{n} \sum_{i=1}^{n} f(X_i), \quad f \in \mathcal{F}.$$ Then it becomes an important property of these classes whether the strong law of large numbers holds uniformly on $\mathcal{F}$ or $\mathcal{C}$.
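The uniform convergence established by the proof above can be observed numerically. The sketch below (Python/NumPy, Uniform(0,1) samples so that $F(x) = x$) computes the supremum deviation $\sup_x |F_n(x) - F(x)|$, which is attained at the order statistics; the helper name `sup_deviation` is ours, not standard.

```python
import numpy as np

def sup_deviation(sample):
    """sup_x |F_n(x) - F(x)| for the Uniform(0,1) cdf F(x) = x.
    F_n jumps from (i-1)/n to i/n at the i-th order statistic, so the
    supremum is attained just before or at an order statistic."""
    x = np.sort(sample)
    n = len(x)
    upper = np.arange(1, n + 1) / n          # i/n
    lower = upper - 1.0 / n                  # (i-1)/n
    return max(np.max(np.abs(upper - x)), np.max(np.abs(lower - x)))

rng = np.random.default_rng(0)
for n in (100, 1000, 10000):
    d = sup_deviation(rng.uniform(size=n))
    print(n, round(d, 4))  # the deviation shrinks, roughly like n**-0.5
```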
== Glivenko–Cantelli class == Consider a set $\mathcal{S}$ with a sigma algebra of Borel subsets $A$ and a probability measure $\mathbb{P}$. For a class of subsets $$\mathcal{C} \subset \{\, C : C \text{ is a measurable subset of } \mathcal{S} \,\}$$ and a class of functions $$\mathcal{F} \subset \{\, f : \mathcal{S} \to \mathbb{R},\ f \text{ is measurable} \,\}$$ define random variables $$\|\mathbb{P}_n - \mathbb{P}\|_{\mathcal{C}} = \sup_{C \in \mathcal{C}} \bigl|\mathbb{P}_n(C) - \mathbb{P}(C)\bigr|, \qquad \|\mathbb{P}_n - \mathbb{P}\|_{\mathcal{F}} = \sup_{f \in \mathcal{F}} \bigl|\mathbb{P}_n f - \mathbb{P} f\bigr|,$$ where $\mathbb{P}_n(C)$ is the empirical measure, $\mathbb{P}_n f$ is the corresponding map, and $\mathbb{P} f = \int_{\mathcal{S}} f \, d\mathbb{P}$, assuming that it exists. Definitions A class $\mathcal{C}$ is called a Glivenko–Cantelli class (or GC class, or sometimes strong GC class) with respect to a probability measure $\mathbb{P}$ if $\|\mathbb{P}_n - \mathbb{P}\|_{\mathcal{C}} \to 0$ almost surely as $n \to \infty$. A class $\mathcal{C}$ is a weak Glivenko–Cantelli class with respect to $\mathbb{P}$ if it instead satisfies the weaker condition $\|\mathbb{P}_n - \mathbb{P}\|_{\mathcal{C}} \to 0$ in probability as $n \to \infty$.
A class is called a universal Glivenko–Cantelli class if it is a GC class with respect to any probability measure $\mathbb{P}$ on $(\mathcal{S}, A)$. A class is a weak uniform Glivenko–Cantelli class if the convergence occurs uniformly over all probability measures $\mathbb{P}$ on $(\mathcal{S}, A)$: for every $\varepsilon > 0$, $$\sup_{\mathbb{P} \in \mathbb{P}(\mathcal{S},A)} \Pr\bigl(\|\mathbb{P}_n - \mathbb{P}\|_{\mathcal{C}} > \varepsilon\bigr) \to 0 \quad \text{as } n \to \infty.$$ A class is a (strong) uniform Glivenko–Cantelli class if it satisfies the stronger condition that for every $\varepsilon > 0$, $$\sup_{\mathbb{P} \in \mathbb{P}(\mathcal{S},A)} \Pr\Bigl(\sup_{m \ge n} \|\mathbb{P}_m - \mathbb{P}\|_{\mathcal{C}} > \varepsilon\Bigr) \to 0 \quad \text{as } n \to \infty.$$ Glivenko–Cantelli classes of functions (as well as their uniform and universal forms) are defined similarly, replacing all instances of $\mathcal{C}$ with $\mathcal{F}$. The weak and strong versions of the various Glivenko–Cantelli properties often coincide under certain regularity conditions. The following definition commonly appears in such regularity conditions: A class of functions $\mathcal{F}$ is image-admissible Suslin if there exists a Suslin space $\Omega$ and a surjection $T : \Omega \to \mathcal{F}$ such that the map $(x, y) \mapsto [T(y)](x)$ is measurable on $\mathcal{X} \times \Omega$.
A class of measurable sets $\mathcal{C}$ is image-admissible Suslin if the class of functions $\{\mathbf{1}_C \mid C \in \mathcal{C}\}$ is image-admissible Suslin, where $\mathbf{1}_C$ denotes the indicator function of the set $C$. Theorems The following two theorems give sufficient conditions for the weak and strong versions of the Glivenko–Cantelli property to be equivalent. Theorem (Talagrand, 1987). Let $\mathcal{F}$ be a class of functions that is $\mathbb{P}$-integrable, and define $\mathcal{F}_0 = \{f - \mathbb{P}f \mid f \in \mathcal{F}\}$. Then the following are equivalent: $\mathcal{F}$ is a weak Glivenko–Cantelli class and $\mathcal{F}_0$ is dominated by an integrable function; $\mathcal{F}$ is a Glivenko–Cantelli class. Theorem (Dudley, Giné, and Zinn, 1991). Suppose that a function class $\mathcal{F}$ is bounded. Also suppose that the set $\mathcal{F}_0 = \{f - \inf f \mid f \in \mathcal{F}\}$ is image-admissible Suslin. Then $\mathcal{F}$ is a weak uniform Glivenko–Cantelli class if and only if it is a strong uniform Glivenko–Cantelli class. The following theorem is central to statistical learning of binary classification tasks. Theorem (Vapnik and Chervonenkis, 1968). Under certain consistency conditions, a universally measurable class of sets $\mathcal{C}$ is a uniform Glivenko–Cantelli class if and only if it is a Vapnik–Chervonenkis class. There exist a variety of consistency conditions for the equivalence of uniform Glivenko–Cantelli and Vapnik–Chervonenkis classes.
In particular, either of the following conditions for a class $\mathcal{C}$ suffices: $\mathcal{C}$ is image-admissible Suslin. $\mathcal{C}$ is universally separable: there exists a countable subset $\mathcal{C}_0$ of $\mathcal{C}$ such that each set $C \in \mathcal{C}$ can be written as the pointwise limit of sets in $\mathcal{C}_0$. == Examples == Let $S = \mathbb{R}$ and $\mathcal{C} = \{(-\infty, t] : t \in \mathbb{R}\}$. The classical Glivenko–Cantelli theorem implies that this class is a universal GC class. Furthermore, by Kolmogorov's theorem, $\sup_{P \in \mathcal{P}(S,A)} \|P_n - P\|_{\mathcal{C}} \sim n^{-1/2}$; that is, $\mathcal{C}$ is a uniform Glivenko–Cantelli class. Let $P$ be a nonatomic probability measure on $S$ and let $\mathcal{C}$ be the class of all finite subsets of $S$. Because $A_n = \{X_1, \ldots, X_n\} \in \mathcal{C}$, $P(A_n) = 0$, and $P_n(A_n) = 1$, we have $\|P_n - P\|_{\mathcal{C}} = 1$, and so $\mathcal{C}$ is not a GC class with respect to $P$. == See also == Donsker's theorem Dvoretzky–Kiefer–Wolfowitz inequality – strengthens the Glivenko–Cantelli theorem by quantifying the rate of convergence. == References == == Further reading ==
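The second example can be made concrete: for a nonatomic measure such as Uniform(0,1), taking the finite set A_n = {X_1, …, X_n} itself, the empirical measure assigns it mass 1 while the true measure assigns it mass 0, so the supremum deviation over the class of finite sets stays at 1 for every n. A small Python/NumPy sketch (illustrative, not a standard routine):

```python
import numpy as np

rng = np.random.default_rng(1)
for n in (10, 100, 1000):
    sample = rng.uniform(size=n)               # iid draws from a nonatomic P
    A_n = set(sample.tolist())                 # the finite set {X_1, ..., X_n}
    P_n = np.mean([x in A_n for x in sample])  # empirical mass of A_n: always 1
    P = 0.0                                    # Lebesgue measure of any finite set
    print(n, P_n - P)                          # deviation stays at 1 for every n
```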
Wikipedia:Valery Goppa#0
Valery Denisovich Goppa (Russian: Вале́рий Дени́сович Го́ппа; born 1939) is a Soviet and Russian mathematician. He discovered a relation between algebraic geometry and codes, utilizing the Riemann-Roch theorem. Today these codes are called algebraic geometry codes. In 1981 he presented his discovery at the algebra seminar of the Moscow State University. He also constructed other classes of codes in his career, and in 1972 he won the best paper award of the IEEE Information Theory Society for his paper "A new class of linear correcting codes". It is this class of codes that bear the name of “Goppa code”. == Selected publications == V. D. Goppa (1988). Geometry and Codes (Mathematics and its Applications). Berlin: Springer. ISBN 90-277-2776-7. E. N. Gozodnichev; V. D. Goppa (1995). Algebraic Information Theory (Series on Soviet and East European Mathematics, Vol 11). World Scientific Pub Co Inc. ISBN 981-02-0943-6.{{cite book}}: CS1 maint: multiple names: authors list (link) VD Goppa (1970). "A New Class of Linear Error Correcting Codes". Problemy Peredachi Informatsii. VD Goppa (1971). "Rational Representation of Codes and (L,g)-Codes". Problemy Peredachi Informatsii. VD Goppa (1972). "Codes Constructed on the Base of $(L,g)$-Codes". Probl. Peredachi Inf. 8 (2): 107–109. VD Goppa (1974). "Binary Symmetric Channel Capacity Is Attained with Irreducible Codes". Probl. Peredachi Inf. 10 (1): 111–112. VD Goppa (1974). "Correction of Arbitrary Noise by Irreducible Codes". Probl. Peredachi Inf. 10 (3): 118–119. VD Goppa (1977). "Codes Associated with Divisors". Probl. Peredachi Inf. 13 (1): 33–39. VD Goppa (1983). "Algebraico-Geometric Codes". Math. USSR Izv. 21 (1): 75–91. Bibcode:1983IzMat..21...75G. doi:10.1070/IM1983v021n01ABEH001641. VD Goppa (1984). "Codes and information". Russ. Math. Surv. 39 (1): 87–141. Bibcode:1984RuMaS..39...87G. doi:10.1070/RM1984v039n01ABEH003062. S2CID 250898540. VD Goppa (1995). "Group representations and algebraic information theory". Izv. 
Math. 59 (6): 1123–1147. Bibcode:1995IzMat..59.1123G. doi:10.1070/IM1995v059n06ABEH000051. S2CID 250882696. == References == David Joyner (23 August 2002). "A brief guide to Goppa codes".
Wikipedia:Valéria Neves Domingos Cavalcanti#0
Valéria Neves Domingos Cavalcanti (born 1965) is a Brazilian mathematician whose research has concerned the control and stabilization of partial differential equations, and especially damping in viscoelastic systems. She is a professor in the department of mathematics at the State University of Maringá. Domingos Cavalcanti was born in Rio de Janeiro on 19 February 1965, the daughter of Portuguese immigrants to Brazil; she grew up in Vila da Penha. A good all-around student, she chose to study mathematics when she took the entrance examination for the Federal University of Rio de Janeiro, where she received a bachelor's degree in 1986, a master's degree in 1988, and a doctorate in 1995. Her doctoral dissertation, Comportamento Assintótico do Sistema de Elasticidade, was supervised by Manuel A. Milla Miranda. She joined the State University of Maringá as an associate professor in 1989, and became a full professor in 2017. She has headed the Paraná Mathematical Society twice, and is a member of the board of directors of the Brazilian Mathematical Society. == References == == External links == Valéria Neves Domingos Cavalcanti publications indexed by Google Scholar
Wikipedia:Vandermonde polynomial#0
In algebra, the Vandermonde polynomial of an ordered set of n variables X_1, …, X_n, named after Alexandre-Théophile Vandermonde, is the polynomial: V_n = ∏_{1 ≤ i < j ≤ n} (X_j − X_i). (Some sources use the opposite order (X_i − X_j), which changes the sign (n choose 2) times: thus in some dimensions the two formulas agree in sign, while in others they have opposite signs.) It is also called the Vandermonde determinant, as it is the determinant of the Vandermonde matrix. The value depends on the order of the terms: it is an alternating polynomial, not a symmetric polynomial. == Alternating == The defining property of the Vandermonde polynomial is that it is alternating in the entries, meaning that permuting the X_i by an odd permutation changes the sign, while permuting them by an even permutation does not change the value of the polynomial; in fact, it is the basic alternating polynomial, as will be made precise below. It thus depends on the order, and is zero if two entries are equal. This also follows from the formula, but is likewise a consequence of being alternating: if two variables are equal, switching them leaves the value unchanged, yet an odd permutation must invert the sign, yielding V_n = −V_n and thus V_n = 0 (assuming the characteristic is not 2; otherwise being alternating is equivalent to being symmetric). Among all alternating polynomials, the Vandermonde polynomial is the monic polynomial of lowest degree. Conversely, the Vandermonde polynomial is a factor of every alternating polynomial: as shown above, an alternating polynomial vanishes if any two variables are equal, and thus must have (X_i − X_j) as a factor for all i ≠ j. 
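The product formula, the sign change under a transposition, and the vanishing for repeated entries are easy to check numerically. A minimal sketch in Python (the helper `vandermonde` is our illustrative name, not a standard-library function):

```python
from itertools import combinations

def vandermonde(xs):
    """Product of (x_j - x_i) over all index pairs i < j."""
    v = 1
    for i, j in combinations(range(len(xs)), 2):
        v *= xs[j] - xs[i]
    return v

# V_3 at (1, 2, 4): (2-1)(4-1)(4-2) = 6
print(vandermonde([1, 2, 4]))   # 6

# Swapping two entries (an odd permutation) flips the sign:
print(vandermonde([2, 1, 4]))   # -6

# A repeated entry forces the value to zero:
print(vandermonde([1, 2, 2]))   # 0

# The square is the discriminant: for x^2 - 3x + 2 with roots 1 and 2,
# V^2 = 1 agrees with b^2 - 4c = 9 - 8 = 1.
assert vandermonde([1, 2]) ** 2 == 3 ** 2 - 4 * 2
```

The last assertion illustrates the discriminant relation Δ = V_n² discussed below for the simplest monic quadratic case.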
=== Alternating polynomials === Thus, the Vandermonde polynomial (together with the symmetric polynomials) generates the alternating polynomials. == Derivatives == The first partial derivative is ∂_i V_n = V_n ∑_{1 ≤ j ≤ n, j ≠ i} 1/(X_i − X_j). Since V_n is the monic alternating polynomial of lowest degree, and ∑_i ∂_i² V_n is also alternating but of lower degree, this implies ∑_i ∂_i² V_n = 0, i.e. the Vandermonde polynomial is a harmonic function. == Discriminant == Its square is widely called the discriminant, though some sources call the Vandermonde polynomial itself the discriminant. The discriminant (the square of the Vandermonde polynomial, Δ = V_n²) does not depend on the order of the terms, as (−1)² = 1, and is thus an invariant of the unordered set of points. If one adjoins the Vandermonde polynomial to the ring of symmetric polynomials in n variables Λ_n, one obtains the quadratic extension Λ_n[V_n]/⟨V_n² − Δ⟩, which is the ring of alternating polynomials. == Vandermonde polynomial of a polynomial == Given a polynomial, the Vandermonde polynomial of its roots is defined over the splitting field; for a non-monic polynomial with leading coefficient a, one may define the Vandermonde polynomial as V_n = a^(n−1) ∏_{1 ≤ i < j ≤ n} (X_j − X_i) (multiplying by a power of the leading coefficient) to accord with the discriminant. == Generalizations == Over arbitrary rings, one instead uses a different polynomial to generate the alternating polynomials – see (Romagny, 2005). 
The Vandermonde determinant is a very special case of the Weyl denominator formula applied to the trivial representation of the special unitary group SU(n). == See also == Capelli polynomial == References == The fundamental theorem of alternating functions, by Matthieu Romagny, September 15, 2005
Wikipedia:Vanessa Robins#0
Vanessa Robins is an Australian applied mathematician whose research interests include computational topology, image processing, and the structure of granular materials. She is a fellow in the departments of applied mathematics and theoretical physics at Australian National University, where she was ARC Future Fellow from 2014 to 2019. == Education == Robins earned a bachelor's degree in mathematics at Australian National University in 1994. She completed a PhD at the University of Colorado Boulder in 2000. Her dissertation, Computational Topology at Multiple Resolutions: Foundations and Applications to Fractals and Dynamics, was jointly supervised by James D. Meiss and Elizabeth Bradley. == Contributions == One of Robins's publications, from 1999, is one of the three works that independently introduced persistent homology in topological data analysis. As well as working on mathematical research, she has collaborated with artist Julie Brooke, of the Australian National University School of Art & Design, on the mathematical visualization of topological surfaces. == References == == External links == Vanessa Robins publications indexed by Google Scholar
Wikipedia:Vanish at infinity#0
In mathematics, a function is said to vanish at infinity if its values approach 0 as the input grows without bounds. There are two different ways to define this, one definition applying to functions defined on normed vector spaces and the other applying to functions defined on locally compact spaces. Aside from this difference, both of these notions correspond to the intuitive notion of adding a point at infinity and requiring the values of the function to get arbitrarily close to zero as one approaches it. This definition can be formalized in many cases by adding an (actual) point at infinity. == Definitions == A function f on a normed vector space is said to vanish at infinity if f(x) → 0 as ‖x‖ → ∞; in the specific case of functions on the real line, this means lim_{x→−∞} f(x) = lim_{x→+∞} f(x) = 0. For example, the function f(x) = 1/(x² + 1) defined on the real line vanishes at infinity. Alternatively, a function f on a locally compact space Ω vanishes at infinity if, given any positive number ε > 0, there exists a compact subset K ⊆ Ω such that |f(x)| < ε whenever the point x lies outside of K. In other words, for each positive number ε > 0, the set {x ∈ Ω : |f(x)| ≥ ε} has compact closure. 
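For f(x) = 1/(x² + 1) on the real line, the superlevel sets in the compact-set definition can be written down explicitly: for 0 < ε ≤ 1, {x : f(x) ≥ ε} is the closed interval [−r, r] with r = √(1/ε − 1), which is bounded and hence has compact closure. A quick numerical illustration (our own check, not from the source):

```python
import math

def f(x):
    """The example function 1 / (x^2 + 1), which vanishes at infinity."""
    return 1.0 / (x * x + 1.0)

# For each epsilon, the superlevel set {x : f(x) >= eps} is the closed
# interval [-r, r] with r = sqrt(1/eps - 1): bounded, hence compact in R.
for eps in (0.5, 0.1, 0.01):
    r = math.sqrt(1.0 / eps - 1.0)
    assert abs(f(r) - eps) < 1e-12   # f equals eps exactly on the boundary
    assert f(r + 1.0) < eps          # and drops below eps just outside
```

The same computation shows directly that f(x) → 0 as |x| → ∞, so the two definitions agree here, as they must on the (finite-dimensional, locally compact) real line.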
For a given locally compact space Ω, the set of such functions f : Ω → 𝕂 valued in 𝕂 (which is either ℝ or ℂ) forms a 𝕂-vector space with respect to pointwise scalar multiplication and addition, which is often denoted C₀(Ω). As an example, the function h(x, y) = 1/(x + y), defined for real numbers x, y ≥ 1 (i.e. on [1, ∞)²), vanishes at infinity. A normed space is locally compact if and only if it is finite-dimensional, so in this particular case there are two different definitions of a function "vanishing at infinity", and they can be inconsistent with each other: if f(x) = ‖x‖⁻¹ on an infinite-dimensional Banach space, then f vanishes at infinity by the f(x) → 0 definition, but not by the compact-set definition (the superlevel sets are closed balls, which are not compact in infinite dimensions). == Rapidly decreasing == Refining the concept, one can look more closely at the rate of vanishing of functions at infinity. One of the basic intuitions of mathematical analysis is that the Fourier transform interchanges smoothness conditions with rate conditions on vanishing at infinity. Using big O notation, the rapidly decreasing test functions of tempered distribution theory are smooth functions that are O(|x|^(−N)) for all N, as |x| → ∞, and such that all their partial derivatives satisfy the same condition too. 
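The Gaussian e^(−x²) is a standard example of a rapidly decreasing function: it is O(|x|^(−N)) for every N, since the exponential beats every power of x. By contrast, 1/(x² + 1) vanishes at infinity but is not rapidly decreasing. A small numerical illustration of both claims (our own example, not from the source):

```python
import math

def gaussian(x):
    """e^{-x^2}: a rapidly decreasing (Schwartz-class) function."""
    return math.exp(-x * x)

# For any fixed N, |x|^N * e^{-x^2} is tiny once |x| is moderately large:
x = 20.0
for N in (1, 5, 10, 20):
    assert (x ** N) * gaussian(x) < 1e-100

# By contrast, f(x) = 1/(x^2 + 1) vanishes at infinity but is NOT rapidly
# decreasing: already x^3 * f(x) grows without bound.
big = 2000.0
assert big ** 3 / (big * big + 1.0) > 1000.0
```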
This condition is set up so as to be self-dual under the Fourier transform, so that the corresponding distribution theory of tempered distributions will have the same property. == See also == Infinity – Mathematical concept Projectively extended real line – Real numbers with an added point at infinity Zero of a function – Point where function's value is zero == Citations == == References == Hewitt, E. and Stromberg, K. (1963). Real and abstract analysis. Springer-Verlag.
Wikipedia:Varadhan's lemma#0
In mathematics, Varadhan's lemma is a result from large deviations theory named after S. R. Srinivasa Varadhan. The result gives information on the asymptotic distribution of a statistic φ(Zε) of a family of random variables Zε as ε becomes small, in terms of a rate function for the variables. == Statement of the lemma == Let X be a regular topological space; let (Zε)ε>0 be a family of random variables taking values in X; let με be the law (probability measure) of Zε. Suppose that (με)ε>0 satisfies the large deviation principle with good rate function I : X → [0, +∞]. Let φ : X → R be any continuous function. Suppose that at least one of the following two conditions holds true: either the tail condition lim_{M→∞} limsup_{ε→0} ε log E[exp(φ(Zε)/ε) 1(φ(Zε) ≥ M)] = −∞, where 1(E) denotes the indicator function of the event E; or, for some γ > 1, the moment condition limsup_{ε→0} ε log E[exp(γφ(Zε)/ε)] < ∞. Then lim_{ε→0} ε log E[exp(φ(Zε)/ε)] = sup_{x∈X} (φ(x) − I(x)). == See also == Laplace principle (large deviations theory) == References == Dembo, Amir; Zeitouni, Ofer (1998). Large deviations techniques and applications. Applications of Mathematics (New York) 38 (Second ed.). New York: Springer-Verlag. pp. xvi+396. ISBN 0-387-98406-2. MR 1619036. (See theorem 4.3.1)
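As an illustration (our choice of example, not from the source): take Zε ~ N(0, ε) on X = R, which satisfies the large deviation principle with good rate function I(x) = x²/2, and φ(x) = x. The moment condition holds for every γ > 1, and the lemma predicts ε log E[exp(Zε/ε)] → sup_x (x − x²/2) = 1/2, attained at x = 1. A numerical check by direct trapezoidal integration of the Gaussian expectation:

```python
import math

def lhs(eps, lo=-3.0, hi=5.0, n=20001):
    """eps * log E[exp(Z/eps)] for Z ~ N(0, eps), by trapezoidal integration.

    The integrand exp(x/eps) * density peaks sharply near x = 1, so the
    window [lo, hi] comfortably captures all of the mass for small eps.
    """
    h = (hi - lo) / (n - 1)
    total = 0.0
    for k in range(n):
        x = lo + k * h
        w = 0.5 if k in (0, n - 1) else 1.0   # trapezoid endpoint weights
        density = math.exp(-x * x / (2.0 * eps)) / math.sqrt(2.0 * math.pi * eps)
        total += w * math.exp(x / eps) * density
    return eps * math.log(total * h)

# Varadhan's lemma predicts the limit sup_x (x - x^2/2) = 1/2.
print(lhs(0.05))   # close to 0.5
```

Here the expectation is even available in closed form, E[exp(Zε/ε)] = exp(1/(2ε)), so ε log E[exp(Zε/ε)] = 1/2 exactly for every ε; the integration merely confirms this without using the closed form.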
Wikipedia:Vararuchi#0
Vararuci (also transliterated as Vararuchi) (Devanagari: वररुचि) is a name associated with several literary and scientific texts in Sanskrit and also with various legends in several parts of India. This Vararuci is often identified with Kātyāyana. Kātyāyana is the author of Vārtikās which is an elaboration of certain sūtrās (rules or aphorisms) in Pāṇini's much revered treatise on Sanskrit grammar titled Aṣṭādhyāyī. Kātyāyana is believed to have flourished in the 3rd century BCE. However, this identification of Vararuci with Kātyāyana has not been fully accepted by scholars. Vararuci is believed to be the author of Prākrita Prakāśa, the oldest treatise on the grammar of Prākrit language. Vararuci's name appears in a verse listing the 'nine gems' (navaratnas) in the court of one Vikramaditya. Vararuci appears as a prominent character in Kathasaritsagara ("ocean of the streams of stories"), a famous 11th century collection of Indian legends, fairy tales and folk tales as retold by Somadeva. The Aithihyamala of Kottarathil Shankunni states that Vararuchi was the son of Govinda Swami i.e. Govinda Bhagavatpada. It also states that King Vikramaditya, Bhatti (the minister of King Vikramaditya) and Bharthari were his brothers. Vararuci is the father figure in a legend in Kerala popularly referred to as the legend of the twelve clans born of a pariah woman (Parayi petta panthirukulam). Vararuci of Kerala legend was also an astute astronomer believed to be the author of Chandravākyas (moon sentences), a set of numbers specifying the longitudes of the Moon at different intervals of time. These numbers are coded in the katapayādi system of numeration and it is believed that Vararuci himself was the inventor of this system of numeration. The eldest son of Vararuci of Kerala legend is known as Mezhathol Agnihothri and he is supposed to have lived between 343 and 378 CE. 
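In the katapayādi scheme, each consonant denotes a digit (the consonants ka, ṭa, pa and ya all denote 1, which gives the system its name), vowels standing alone denote 0, and the resulting digits are read from right to left. A minimal sketch in Python; the romanized consonant table (dots marking retroflex letters) and the `decode` helper are our illustrative reconstruction, not from the source:

```python
# Romanized katapayadi table: each consonant group cycles through 1..9, 0.
KATAPAYADI = {}
for group in (["k", "kh", "g", "gh", "ng", "c", "ch", "j", "jh", "ny"],
              ["t.", "t.h", "d.", "d.h", "n.", "t", "th", "d", "dh", "n"],
              ["p", "ph", "b", "bh", "m"],
              ["y", "r", "l", "v", "sh", "s.", "s", "h"]):
    for pos, consonant in enumerate(group):
        KATAPAYADI[consonant] = (pos + 1) % 10

def decode(consonants):
    """Map each consonant to its digit, then read the digits right to left."""
    digits = [KATAPAYADI[c] for c in consonants]
    return int("".join(str(d) for d in reversed(digits)))

# ka, t.a, pa and ya all stand for 1:
print(decode(["k", "t.", "p", "y"]))   # 1111
```

Encoding numbers as pronounceable syllables in this way is what allowed the Chandravākyās to be memorized as "sentences" while still carrying precise numerical data.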
The name Vararuchi is associated with more than a dozen works in Sanskrit, and the name Katyayana is associated with about sixteen works. There are around ten works connected with astronomy and mathematics associated with the name of Vararuci. == Vararuci, the astronomer == Possibly there are at least three persons named Vararuci in the astronomical tradition of South India. === Vararuci (Kerala, fourth century CE) === This Vararuchi is the father figure in the astronomical tradition of Kerala. He is also the father figure in the legend of the twelve clans born of the Pariah woman. The eldest son of this Vararuci, the founder of the first of the twelve clans, was one Mezhathol Agnihotri, and he is supposed to have lived between 343 and 378 CE. Based on this, Vararuci is supposed to have lived in the first half of the 4th century CE. The manuscript tradition of Kerala ascribes to Vararuci the authorship of Chandravākyās (moon sentences), a set of 248 numbers for calculating the position of the sun and moon. This work is also known by the name Vararuci-Vākyās. Vararuci is also believed to be the originator of the Katapayadi notation of depicting numbers, which has been used in the formulation of Chandravākyās. === Vararuci (Kerala, 13th century CE) === This astronomer is the author of the well-known Vākyakaraṇa, which is the source book of the Vākya Panchānga popular in South India. This Vararuchi belonged to the Kerala region, as is clear from the introductory verses of the work. It has been shown that this treatise was originally produced around 1282 CE. The treatise is also known as Vākyapañcādhyāyī and is based on the works of Haridatta (c. 650 CE) of Kerala. Sundararaja, an astronomer from Tamil Nadu contemporaneous with Nilakantha Somayaji, composed a commentary on Vākyakarana, and the commentary contains several references to Vararuci. 
In five chapters, Vākyakaraṇa deals with all aspects of astronomy required for the preparation of the Hindu almanac. Chapter I is concerned with the computation of the sun, the moon and the moon's nodes, Chapter II with that of the planets. Chapter III is devoted to problems involving time, position and direction and other preliminaries like the precession of the equinoxes. Chapter IV deals with the computation of the lunar and solar eclipses. Chapter V is devoted to computation of the conjunction of the planets and of the planets and stars. === Vararuci (several persons) === Many texts have been ascribed to Vararuci, such as Kerala-Vararuchi-Vakya, Kerala-Vararuchi-proktha, Kerala-dvādaśa-bhāva-vākyāni, Vararuchi-Kerala, Bhārgava-pañcāṅga etc. The Vararuchi who is the author of the above works on astrology might be identical to Vararuchi of Kerala, but it is not possible to assert that he is the same as the author of the Chandra-Vākyās. == Vararuci, the grammarian == === The author of Vartikas === In ancient India, grammar was the first and most important of all sciences: only after one had studied grammar could one take up any other science. This historical mindset explains the great respect and prestige attributed to the ancient grammarians of India like Pāṇini and Patanjali. Pāṇini was an ancient Indian Sanskrit grammarian from Pushkalavati, Gandhara (fl. 4th century BCE). He is known for his Sanskrit grammar text known as Aṣṭādhyāyī (meaning 'eight chapters'). The Ashtadhyayi is one of the earliest known grammars of Sanskrit. After Pāṇini, the Mahābhāṣya ('great commentary') of Patañjali on the Ashtadhyayi is one of the three most famous works in Sanskrit grammar. It was with Patañjali that Indian linguistic science reached its definitive form. Kātyāyana (c. 3rd century BCE) was a Sanskrit grammarian, mathematician and Vedic priest who lived in ancient India. He is known as the author of the Varttika, an elaboration on Pāṇini's grammar. 
Along with the Mahābhāsya of Patañjali, this text became a core part of the vyākarana (grammar) canon. (A vārttika is defined as a single remark or a whole work attempting to present a detailed commentary.) In many accounts Katyayana has been referred to as Vararuci. Kātyāyana's Vārtikās correct, supplement, eliminate as unnecessary, or justify the rules of Pāṇini. In his Vajasaneyi Pratisakhya, he subjected about 1500 sutras of Panini to critical observations. === A Prākṛt grammarian === The term Prākṛt or Prakrit denotes a multitude of languages, all regarded as derived from Sanskrit and developed from it through deviation and corruption. There is no complete agreement on which languages are to be included in this group. Prakrit is also closely connected with the development of Buddhist and Jaina philosophical thought. Vararuci is the name of the author of the oldest extant grammar of Prakrit, titled Prākṛt Prakāśa. In this work Vararuci has considered four different dialects: Maharashtri, the older form of Marathi; Sauraseni, which evolved into the Braj language; Magadhi, the former form of Bihari; and Paisaci, a language no longer extant. The book is divided into twelve chapters. The first nine chapters, containing a total of 424 rules, are devoted to Maharashtri; of the remaining three chapters, one each is devoted to Paisaci with 14 rules, Māgadhi with 17 rules, and Sauraseni with 32 rules. The author of Prakrita Prakasa was also known by the name Katyayana, perhaps the gotra name of Vararuci. This gotra name was given to him by the unknown author of a commentary on Prakrita Prakasa named Prakritamanjari. In Somadeva's Kathasaritsagara and Kshemendra's Brihatkathamanjari one can see that Katyayana was called Vararuci. The oldest commentator on Prakrita Prakasa was Bhamaha, an inhabitant of Kashmir who was also a rhetorician as well as a poet. 
== Vararuci Śulbasūtras == The Śulbasūtras are appendices to the Vedas which give rules for constructing altars. They are the only sources of knowledge of Indian mathematics from the Vedic period. There are several Śulbasūtras. The most important of these are the Baudhayana Śulbasūtra, written about 800 BCE; the Apastamba Śulbasūtra, written about 600 BCE; the Manava Śulbasūtra, written about 750 BCE; and the Katyayana Śulbasūtra, written about 200 BCE. Since Katyayana has been identified with one Vararuci, possibly the author of the Vartikas, the Katyayana Śulbasūtra is also referred to as the Vararuci Śulbasūtra. == Vararuci, the littérateur == Vararuci was also a legendary figure in the Indian literary tradition. === Author of Ubhayabhisarika === Though the littérateur Vararuci is recorded to have composed several Kavyas, only one complete work is currently extant. This is a satirical monologue titled Ubhayabhisarika. The work titled Ubhayabhisarika (The mutual elopement) appears in a collection of four monologues titled Chaturbhani, the other monologues in the collection being Padma-prabhritaka (The lotus gift) by Shudraka, Dhurta-vita-samvada (Rogue and pimp confer) by Isvaradatta and Padataditaka (The kick) by Shyamalika. The collection, along with an English translation, has been published in the Clay Sanskrit Library under the title The Quartet of Causeries. Ubhayabhisarika is set in Pataliputra and is dated to somewhere between the 1st century BCE and 2nd century CE. It might be the earliest extant Indian play. Some scholars are of the opinion that the work was composed in the 5th century CE. === Other works === He is also said to have written two kavyas by the names Kanthabharana (The necklace) and Charumati. There are several verses ascribed to Vararuci appearing in different literary works. 
Other works attributed to Vararuci are: Nitiratna, a book with didactic contents; Niruktasamuccaya, a commentary on the Nirukta of Yaska; Pushpasutra, a Pratishakhya of the Samaveda; a lexicon; and an alamkara work. == Vararuci, a 'gem' in the court of Samrat Vikramaditya == Vararuci's name appears in a Sanskrit verse specifying the names of the 'nine gems' (navaratnas) in the court of the legendary Samrat Vikramaditya, who is said to have founded the Vikrama era in 57 BCE. This verse appears in Jyotirvidabharana, which is supposed to be a work of the great Kalidasa but is in fact a late forgery. This verse appears in the last chapter (Sloka 20: Chapter XXII) of Jyotirvidabharana. That the great Kalidasa is the author of Jyotirvidabharana is difficult to believe because Varahamihira, one of the nine gems listed in the verse, in his Pancasiddhantika refers to Aryabhata, who was born in 476 CE and wrote his Aryabhatiya in 499 CE or a little later. Jyotirvidabharana is a later work of about the 12th century CE. There might have been a very respected Vararuci in the court of one King Vikrama, but the identities of the particular Vararuci and the King Vikrama are uncertain. 
The names of the nine gems are found in the following Sanskrit verse: The names of the nine gems and their traditional claims to fame are the following: Dhanvantari, a medical practitioner; Kshapanaka, probably Siddhasena, a Jain monk, author of Dvatrishatikas; Amarasimha, author of Amarakosha, a thesaurus of Sanskrit; Sanku (little known); Vetalabhatta, a Maga Brahmin known as the author of the sixteen-stanza Niti-pradeepa (The Lamp of Conduct) in tribute to Vikramaditya; Ghatakarpara, author of Ghatakarpara-kavya, in which a wife sends a message (the reverse of Meghaduta); Kalidasa, a renowned classical Sanskrit writer, widely regarded as the greatest poet and dramatist in the Sanskrit language; Varahamihira, astrologer and astronomer; Vararuchi, poet and grammarian. == Vararuci of Kerala legends == There are several versions of these legends. One of these versions is given in Castes and Tribes of Southern India by Edgar Thurston. This seven-volume work is a systematic and detailed account of more than 300 castes and tribes in the erstwhile Madras Presidency and the states of Travancore and Cochin. It was originally published in 1909. The Vararuci legend is given in Volume 1 (pp. 120–125) in the discussion on the Paraiyan caste. Thurston has recorded that the discussion is based on a note prepared by L. K. Anantha Krishna Aiyar. A slight variant of the legend can be seen in Aithihyamala by Kottarathil Sankunny (1855–1937). This work, originally written in Malayalam and published as a series of pamphlets during the years 1909–1934, is a definitive source of myths and legends of Kerala. (An English-language translation of the work has recently been published under the title Lore and Legends of Kerala.) The story of Vararuci is given in the narration of the legend of Parayi petta panthirukulam. 
=== Legend as per Castes and tribes of Southern India === In these legends, Vararuchi, a son of a Brahmin named Chandragupta and his Brahmin wife, who was an astute astrologer, became king of Avanti and ruled until Vikramāditya, son of Chandragupta by his Kṣatriya wife, came of age, whereupon he abdicated in his favor. Once when he was resting under an aśvastha (Ficus religiosa) tree, he happened to overhear a conversation between two Gandharvās on the tree to the effect that he would marry a certain, just then born, paraiya girl. This he tried to prevent by arranging, with the help of the king, to have the girl enclosed in a box and floated down a river with a nail stuck into her head. After floating down the river for a long distance, the box came into the hands of a Brāhman who was bathing in the river. Finding a beautiful and charming little girl inside the box, and accepting her as a divine gift, he adopted her as his own daughter and brought her up accordingly. Vararuci in his travels happened to pass by the house of this Brāhman, and the Brāhman invited him to lunch with him. Vararuci accepted the invitation on condition that the Brāhman prepare eighteen curries and give him what remained after feeding a hundred other Brāhmans. The host was puzzled, but his adopted daughter was unfazed. She placed a long plantain leaf in front of Vararuci and served a preparation using ginger (symbolically corresponding to eighteen curries) and some rice which had been used as an offering at the Vaisvadeva ceremony (symbolically equivalent to feeding a hundred Brāhmans). Knowing this to be the work of the host's daughter, and fully convinced of her superior intellect, Vararuci expressed his desire to marry her. The Brāhman acceded to the desire. Days passed. One day, while conversing with his wife about their past lives, he accidentally saw the nail stuck in her head and immediately knew her to be the girl whom he had caused to be floated down the river. 
He realised the impossibility of altering one's fate and resolved to go on a pilgrimage with his wife, bathing in rivers and worshiping at temples. At the end of these pilgrimages they reached Kerala, and while in Kerala the woman bore him twelve children. All these children, except one, were abandoned on the wayside, were picked up by members of different castes, and were brought up according to the customs and traditions of those castes. They were all remarkable for their wisdom, gifted with the power of performing miracles, and were all believed to be incarnations of Viṣṇu. These children are known by the names: Mezhathol Agnihotri (a Brāhman who had performed Yajñam or Yāgam ninety-nine times), Pākkanār (a Paraiya, bamboo basket maker by profession), Perumtaccan (a master carpenter and an expert in Vāsthu), Rajakan (a sage and a learned man raised by a washerman), Vallon (a Paraiya, sometimes identified with the Tamil saint Thiruvalluvar who composed the Thirukkural), Vaduthala Nair (a Kshathriya, an expert in martial arts), Uppukootan (a Muslim, a trader in salt and cotton), Akavūr Cāttan (a Vysya, a manager of Akavur Mana), Kārakkal Amma (a Kṣatriya woman, the only girl among the twelve children), Pānanār (a Pānan, a singer in Thiruvarangu), Nārānat Bhrāntan (a Brahman, the madman of Nārāṇatt), and Vāyillākunnilappan (not adopted by anybody, deified as the god of silence sanctified on the top of a hill). There are several legends about these children of Vararuci. In one such legend, Pākkanār tries to dissuade a group of Brāhmans who had resolved to go to Benares from doing so, by telling them that the journey to the sacred city would not be productive of salvation. To prove the fruitlessness of the journey, he plucked a lotus flower from a stagnant pool, gave it to the Brāhmans, and instructed them to give it to a hand which would rise from the Ganges and to say that it was a present for the Goddess Ganga from Pākkanār. 
They did as directed and returned with news of the miracle. Pākkanār then led them to a stagnant pool and said: "Please return the lotus flower, Oh! Ganga!" According to the legend, the same lotus flower instantly appeared in his hand. === Legend as per Aithihyamala === There is another legend regarding the circumstances leading to Vararuci's arrival in Kerala. In this legend, Vararuci appears as a very learned scholar in the court of Vikramaditya. Once King Vikramaditya asked Vararuci to tell him the most important verse in the whole of Valmiki's Ramayana. Since Vararuci could not give an immediate answer, the King granted him 40 days to find the answer and report back. If he were unable to find the correct answer, he would be required to leave the court. Vararuci left the court in search of an answer, and during his wanderings, on the last night of the stipulated period, he happened to rest under a tree. While half awake and half asleep, Vararuci overheard a conversation of the Vanadevatas resting on the tree regarding the fate of a newly born Paraiah infant girl: they were telling each other that she would be married by the poor Brāhman who did not know that the verse beginning with "māṃ viddhi.." is the most important verse in the Ramayana. Vararuci, most pleased with his discovery, returned to the court and told the king the surprising answer. The king was very pleased, and Vararuci prevailed upon Vikramaditya to destroy all pariah infant girls recently born in a certain locality. The girl was not killed; instead she was floated down a river with a nail stuck through her head. The rest of the legend is as described in the first version of the legend. == Vararuci of Kathasaritsagara == Kathasaritsagara ('ocean of the streams of stories') is a famous 11th century collection of Indian legends, fairy tales and folk tales as retold by a Saivite Brahmin named Somadeva. 
Nothing is known about the author other than that his father's name was Ramadevabatta. The work was compiled for the entertainment of the queen Suryamati, wife of king Anantadeva of Kashmir (r. 1063–81). It consists of 18 books of 124 chapters and more than 21,000 verses in addition to prose sections. Vararuchi's story is told in great detail in the first four chapters of this great collection of stories. The following is a very brief account of some of the main events in the life of Vararuchi as told in this classic. It emphasises the divine ancestry and magical powers of Vararuchi. Once Pārvati pleaded with Shiva to tell her a story nobody had heard before. After much persuasion Shiva agreed and narrated the story of the Vidyadharas. To ensure that nobody else would hear the story, Parvati had ordered that nobody be allowed to enter the place where they were, and Nandi (the vehicle of Lord Shiva) kept guard at the door. While Shiva was thus speaking to his consort in private, Pushpadanta, one of Shiva's trusted attendants, a member of his gana, appeared at the door. Having been denied entry and overcome by curiosity, Pushpadanta summoned his special powers to move about unseen, entered the chamber of Shiva, and eavesdropped on the entire story as told by Shiva. Pushpadanta then narrated the entire story to his wife Jaya, and Jaya retold the same to Parvati! Parvati became enraged and told Shiva: "Thou didst tell me an extraordinary tale, for Jaya knows it also." Shiva, through his meditational powers, immediately knew the truth and told Parvati of the role of Pushpadanta in leaking the story to Jaya. Having heard this, Parvati became exceedingly enraged and cursed Pushpadanta to be mortal. Then he, together with Jaya, fell at Parvati's feet and entreated her to say when the curse would end. "A Yaksha named Supratîka, who has been made a Pisacha by the curse of Kuvera, is residing in the Vindhya forest under the name of Kāṇabhūti. 
When thou shalt see him, and calling to mind thy origin, tell him this tale; then thou shalt be released from this curse." Pushpadanta was born as a mortal under the name of Vararuchi in the city called Kausāṃbi. Somadatta, a Brāhman, was his father, and Vasudatta his mother. Vararuchi was also known as Kātyāyana. At the time of his birth there was a heavenly pronouncement that he would be known as Vararuchi because of his interest (ruchi) in the best (vara) things. It was also pronounced that he would be a world-renowned authority on grammar. Vararuchi was divinely blessed with a special gift: he could get anything by heart on hearing it only once. In course of time Vararuchi became a student of Varsha along with Indradatta and Vyādi. Though Vararuchi was defeated by Pāṇini in a test of scholarship, by hard work he excelled Pāṇini in grammar. Later Vararuchi became a minister to King Yogananda of Pāṭaliputra. Once he went on a visit to the shrine of Durgā. Goddess Durga, being pleased with his austerities, ordered him in a dream to go to the forests of the Vindhya to behold Kāṇābhūti. Proceeding to the Vindhya, he saw, surrounded by hundreds of Piśāchas, that Paiśācha Kāṇābhūti, in stature like a śāla tree. When Kāṇābhūti had seen him and respectfully clasped his feet, Kātyāyana, sitting down, immediately spoke to him thus: "Thou art an observer of the good custom; how hast thou come into this state?" When Kāṇābhūti finished his story, Vararuchi remembered his origin and exclaimed like one aroused from sleep: "I am that very Pushpadanta; hear that tale from me." And Vararuchi told all his history from his birth at full length. Vararuchi then went to the tranquil site of the hermitage of Badarî. There he, desirous of putting off his mortal condition, resorted with intense devotion to meditation on that goddess, and she, manifesting her real form to him, told him the secret of that meditation which arises from fire, to help him to put off the body.
Then Vararuchi, having consumed his body by that form of meditation, reached his own heavenly home. == Vararuci in Pancatantra == The characters in one of the several stories in the Pancatantra are King Nanda and Vararuci. This story appears as the fifth story, titled A Three in One Story, in Strategy Four: Loss of Gains. Once upon a time, there was a much respected and popular king called Nanda. He had a minister called Vararuchi, a very learned man well versed in philosophy and statecraft. Vararuchi's wife was one day annoyed with her husband and kept away from him. Extremely fond of his wife, the minister tried every possible tactic he could think of to please her. Every method failed. Finally he pleaded with her: "Tell me what I can do to make you happy." The wife said sarcastically: "Shave your head cleanly and prostrate yourself before me; then I will be happy." The minister meekly complied with her wish and succeeded in winning back her company and love. King Nanda's queen also enacted the same drama of shunning his company. Nanda tried every trick he knew to win her affection. The king also failed in his efforts. Then the king fell at her feet and prayed: "My darling, I cannot live without you even for a while. Tell me what I should do to win back your love." The queen said: "I will be happy if you pretend to be a horse, agree to be bridled and let me ride you. While racing you must neigh like a horse. Is this acceptable to you?" "Yes," said the king, and he did as his wife demanded. The next day, the king saw Vararuchi with a shaven head and asked him: "Vararuchi, why have you shaved your head all of a sudden?" Vararuchi replied: "O king, is there anything that a woman does not demand and a man does not readily concede? He would do anything, shave his head or neigh like a horse." Raktamukha, the monkey, then told Karalamukha, the crocodile: "You wicked crocodile, you are a slave of your wife like Nanda and Vararuchi.
You tried to kill me but your chatter gave away your plans." That's why the learned have said: == See also == Chandravakyas List of astronomers and mathematicians of the Kerala school Vākyakaraṇa == References ==
Wikipedia:Varga K. Kalantarov#0
Varga K. Kalantarov (born 1950) is an Azerbaijani mathematician, scientist and professor of mathematics. He is a member of the Koç University Mathematics Department in Istanbul, Turkey. == Education == Varga Kalantarov was born in 1950. He graduated from Baku State University in 1971. He received his PhD in Differential Equations and Mathematical Physics at the Baku Institute of Mathematics and Mechanics, Azerbaijan National Academy of Sciences, in 1974. He received his Doctor of Sciences degree in 1988 under the supervision of Olga Ladyzhenskaya at the Steklov Institute of Mathematics, Saint Petersburg, Russia. == Academic career == After receiving his PhD he held a scientific researcher position at the Baku Institute of Mathematics and Mechanics. Meanwhile, between 1975 and 1981 he was a visiting researcher at the Steklov Institute of Mathematics. From 1989 to 1993 he was the head of the Department of Partial Differential Equations at the Baku Institute of Mathematics and Mechanics. After the perestroika era he moved to Turkey with his family in 1993. Between 1993 and 2001 he was a full-time professor in the Mathematics Department of Hacettepe University, Ankara. In 2001 he became a full-time professor at Koç University. He has been an active researcher, having published more than 70 scientific manuscripts with more than 2000 citations. He has supervised 17 PhD students. == Research areas == His research interests include PDEs and dynamical systems. === Representative scientific publications === Kalantarov, V. K.; Ladyženskaja, O. A. Formation of collapses in quasilinear equations of parabolic and hyperbolic types. (Russian) Boundary value problems of mathematical physics and related questions in the theory of functions, 10. Zap. Naučn. Sem. LOMI 69 (1977), 77–102, 274. Kalantarov, Varga K.; Titi, Edriss S. Global attractors and determining modes for the 3D Navier-Stokes-Voight equations. Chin. Ann. Math. Ser. B 30 (2009), no. 6, 697–714.
Kalantarov, Varga; Zelik, Sergey Finite-dimensional attractors for the quasi-linear strongly-damped wave equation. J. Differential Equations 247 (2009), no. 4, 1120–1155. == References == == External links == Varga Kalantarov's professional home page Varga K. Kalantarov publications indexed by Google Scholar
Wikipedia:Varghese Mathai#0
Mathai Varghese is a mathematician at the University of Adelaide. His most influential contribution is the Mathai–Quillen formalism, which he formulated together with Daniel Quillen, and which has since found applications in index theory and topological quantum field theory. He was appointed a full professor in 2006. He was appointed Director of the Institute for Geometry and its Applications in 2009. In 2011, he was elected a Fellow of the Australian Academy of Science. In 2013, he was appointed the (Sir Thomas) Elder Professor of Mathematics at the University of Adelaide, and was elected a Fellow of the Royal Society of South Australia. In 2017, he was awarded an ARC Australian Laureate Fellowship. In 2021, he was awarded the Hannan Medal and Lecture of the Australian Academy of Science, recognizing an outstanding career in mathematics. In the same year, he was also awarded the George Szekeres Medal, the Australian Mathematical Society's most prestigious medal, recognising research achievement and an outstanding record of promoting and supporting the discipline. == Biography == Mathai studied at Bishop Cotton Boys' School, Bangalore. Mathai received a BA at the Illinois Institute of Technology. He then proceeded to the Massachusetts Institute of Technology, where he was awarded a doctorate under the supervision of Daniel Quillen, a Fields Medallist. Mathai's work is in the area of geometric analysis. His research interests are in L 2 {\displaystyle L^{2}} analysis, index theory, and noncommutative geometry. He currently works on mathematical problems that have their roots in physics, for example topological field theories, the fractional quantum Hall effect, and D-branes in the presence of B-fields. The main focus of his research is on the application of noncommutative geometry and index theory to mathematical physics, with particular emphasis on string theory.
His current work on index theory is ongoing joint work with Richard Melrose and Isadore Singer, on the fractional analytic index and on the index theorem for projective families of elliptic operators. His current work on string theory is ongoing joint work with Peter Bouwknegt, Jarah Evslin, Keith Hannabuss and Jonathan Rosenberg, on T-duality in the presence of background flux. The Mathai–Quillen formalism appeared in Topology shortly after Mathai completed his Ph.D. Using the superconnection formalism of Quillen, they obtained a refinement of the Riemann–Roch formula, which links together the Thom classes in K-theory and cohomology, as an equality on the level of differential forms. This has an interpretation in physics as the computation of the classical and quantum (super) partition functions for the fermionic analogue of a harmonic oscillator with a source term. In particular, they obtained a Gaussian-shaped representative of the Thom class in cohomology, which has a peak along the zero section. Its universal representative is obtained using the machinery of equivariant differential forms. Mathai was awarded the Australian Mathematical Society Medal in 2000. From August 2000 to August 2001, he was also a Clay Mathematics Institute Research Fellow and visiting scientist at the Massachusetts Institute of Technology. From March to June 2006, he was a senior research fellow at the Erwin Schrödinger Institute in Vienna. == Selected publications == Mathai, Varghese; Quillen, Daniel (1986). "Superconnections, Thom classes and equivariant differential forms". Topology. 25 (1): 85–110. doi:10.1016/0040-9383(86)90007-8. Bouwknegt, Peter; Evslin, Jarah; Mathai, Varghese (2004). "T-duality: Topology Change from H-flux". Communications in Mathematical Physics. 249 (2): 383–415. Mathai, Varghese; Melrose, Richard B.; Singer, Isadore M. (2006). "Fractional Analytic Index". Journal of Differential Geometry. 74 (2): 265–292. arXiv:math/0402329. doi:10.4310/jdg/1175266205.
== Notes == == References == Blau, Matthias "The Mathai-Quillen Formalism and Topological Field Theory", Infinite-dimensional geometry in physics (Karpacz, 1992). J. Geom. Phys. 11 (1993), no. 1-4, 95–127 Wu, Siye "Mathai-Quillen Formalism", J. Geom. Phys. 17 (1995), no. 4, 299–309 == External links == Mathai Varghese's research page at the University of Adelaide. Varghese Mathai at the Mathematics Genealogy Project
Wikipedia:Variable (mathematics)#0
In mathematics, a variable (from Latin variabilis 'changeable') is a symbol, typically a letter, that refers to an unspecified mathematical object. One says colloquially that the variable represents or denotes the object, and that any valid candidate for the object is the value of the variable. The values a variable can take are usually of the same kind, often numbers. More specifically, the values involved may form a set, such as the set of real numbers. The object may not always exist, or it might be uncertain whether any valid candidate exists or not. For example, one could represent two integers by the variables p and q and require that the value of the square of p is twice the square of q, which in algebraic notation can be written p2 = 2 q2. A definitive proof that this relationship is impossible to satisfy when p and q are restricted to integers is not obvious, but it has been known since ancient times and has had a great influence on mathematics ever since. Originally, the term variable was used primarily for the argument of a function, in which case its value could be thought of as varying within the domain of the function. This is the motivation for the choice of the term. Also, variables are used for denoting values of functions, such as the symbol y in the equation y = f(x), where x is the argument and f denotes the function itself. A variable may represent an unspecified number that remains fixed during the resolution of a problem; in which case, it is often called a parameter. A variable may denote an unknown number that has to be determined; in which case, it is called an unknown; for example, in the quadratic equation ax2 + bx + c = 0, the variables a, b, c are parameters, and x is the unknown. Sometimes the same symbol can be used to denote both a variable and a constant, that is a well defined mathematical object. For example, the Greek letter π generally represents the number π, but has also been used to denote a projection.
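The impossibility of p2 = 2q2 in integers can be spot-checked computationally. A minimal brute-force sketch (illustrative only; the search bound is arbitrary and a finite search proves nothing on its own):

```python
# Search for integer solutions of p**2 == 2 * q**2 with 1 <= q <= 1000.
# Any solution would need p == q * sqrt(2) < 2 * q, so p is scanned up to 2*q.
solutions = [
    (p, q)
    for q in range(1, 1001)
    for p in range(1, 2 * q + 1)
    if p * p == 2 * q * q
]
print(solutions)  # -> [] : no solutions, consistent with sqrt(2) being irrational
```

The classical argument, by contrast, is a proof by infinite descent on the parity of p and q.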
Similarly, the letter e often denotes Euler's number, but has been used to denote an unassigned coefficient in quartic and higher-degree polynomials. Even the symbol 1 has been used to denote an identity element of an arbitrary field. These two notions are used almost identically, so one usually must be told whether a given symbol denotes a variable or a constant. Variables are often used for representing matrices, functions, their arguments, sets and their elements, vectors, spaces, etc. In mathematical logic, a variable is a symbol that either represents an unspecified constant of the theory, or is being quantified over. == History == === Early history === The earliest uses of an "unknown quantity" date back to at least the Ancient Egyptians with the Moscow Mathematical Papyrus (c. 1500 BC), which described problems with unknowns rhetorically, called the "Aha problems". The "Aha problems" involve finding unknown quantities (referred to as aha, "stack") if the sum of the quantity and part(s) of it are given (the Rhind Mathematical Papyrus also contains four problems of this type). For example, problem 19 asks one to calculate a quantity taken 1+1⁄2 times and added to 4 to make 10. In modern mathematical notation: (3/2)x + 4 = 10. Around the same time in Mesopotamia, the mathematics of the Old Babylonian period (c. 2000 BC – 1500 BC) was more advanced, also studying quadratic and cubic equations. In works of ancient Greece such as Euclid's Elements (c. 300 BC), mathematics was described geometrically. For example, in proposition 1 of Book II of The Elements, Euclid includes the proposition: "If there be two straight lines, and one of them be cut into any number of segments whatever, the rectangle contained by the two straight lines is equal to the rectangles contained by the uncut straight line and each of the segments." This corresponds to the algebraic identity a(b + c) = ab + ac (distributivity), but is described entirely geometrically.
Euclid and other Greek geometers also used single letters to refer to geometric points and shapes. This kind of algebra is now sometimes called Greek geometric algebra. Diophantus of Alexandria pioneered a form of syncopated algebra in his Arithmetica (c. 200 AD), which introduced symbolic manipulation of expressions with unknowns and powers, but without modern symbols for relations (such as equality or inequality) or exponents. An unknown number was called ζ {\displaystyle \zeta } . The square of ζ {\displaystyle \zeta } was Δ v {\displaystyle \Delta ^{v}} ; the cube was K v {\displaystyle K^{v}} ; the fourth power was Δ v Δ {\displaystyle \Delta ^{v}\Delta } ; and the fifth power was Δ K v {\displaystyle \Delta K^{v}} . So, for example, what would be written in modern notation as: x 3 − 2 x 2 + 10 x − 1 , {\displaystyle x^{3}-2x^{2}+10x-1,} would be written in Diophantus's syncopated notation as: K υ α ¯ ζ ι ¯ ⋔ Δ υ β ¯ M α ¯ {\displaystyle \mathrm {K} ^{\upsilon }{\overline {\alpha }}\;\zeta {\overline {\iota }}\;\,\pitchfork \;\,\Delta ^{\upsilon }{\overline {\beta }}\;\mathrm {M} {\overline {\alpha }}\,\;} In the 7th century AD, Brahmagupta used different colours to represent the unknowns in algebraic equations in the Brāhmasphuṭasiddhānta. One section of this book is called "Equations of Several Colours". Greek and other ancient mathematical advances were often followed by long periods of stagnation, so there were few revolutions in notation; this began to change in the early modern period. === Early modern period === At the end of the 16th century, François Viète introduced the idea of representing known and unknown numbers by letters, nowadays called variables, and the idea of computing with them as if they were numbers—in order to obtain the result by a simple replacement. Viète's convention was to use consonants for known values and vowels for unknowns.
In 1637, René Descartes "invented the convention of representing unknowns in equations by x, y, and z, and knowns by a, b, and c". Contrary to Viète's convention, Descartes's is still commonly in use. The history of the letter x in math was discussed in an 1887 Scientific American article. Starting in the 1660s, Isaac Newton and Gottfried Wilhelm Leibniz independently developed the infinitesimal calculus, which essentially consists of studying how an infinitesimal variation of a time-varying quantity, called a fluent, induces a corresponding variation of another quantity which is a function of the first variable. Almost a century later, Leonhard Euler fixed the terminology of infinitesimal calculus, and introduced the notation y = f(x) for a function f, its variable x and its value y. Until the end of the 19th century, the word variable referred almost exclusively to the arguments and the values of functions. In the second half of the 19th century, it appeared that the foundation of infinitesimal calculus was not formalized enough to deal with apparent paradoxes such as a nowhere-differentiable continuous function. To solve this problem, Karl Weierstrass introduced a new formalism consisting of replacing the intuitive notion of limit by a formal definition. The older notion of limit was "when the variable x varies and tends toward a, then f(x) tends toward L", without any accurate definition of "tends". Weierstrass replaced this sentence by the formula ( ∀ ϵ > 0 ) ( ∃ η > 0 ) ( ∀ x ) | x − a | < η ⇒ | L − f ( x ) | < ϵ , {\displaystyle (\forall \epsilon >0)(\exists \eta >0)(\forall x)\;|x-a|<\eta \Rightarrow |L-f(x)|<\epsilon ,} in which none of the five variables is considered as varying. This static formulation led to the modern notion of variable, which is simply a symbol representing a mathematical object that either is unknown, or may be replaced by any element of a given set (e.g., the set of real numbers).
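Weierstrass's static condition can be illustrated numerically. The sketch below (the helper name and sample scheme are hypothetical; a finite spot check, not a proof) tests whether a given η works for a given ε at sampled points near a:

```python
def check_limit(f, a, L, eps, eta, samples=10_000):
    """Check |L - f(x)| < eps for sampled x with 0 < |x - a| < eta.

    This is a finite spot check of Weierstrass's condition, not a proof.
    """
    for k in range(1, samples + 1):
        step = eta * k / (samples + 1)  # points approaching a from both sides
        if abs(L - f(a + step)) >= eps or abs(L - f(a - step)) >= eps:
            return False
    return True

# f(x) = x**2 tends to L = 4 as x tends to a = 2:
# for eps = 0.1, the choice eta = 0.02 passes the sampled check.
print(check_limit(lambda x: x * x, a=2.0, L=4.0, eps=0.1, eta=0.02))  # True
# A careless eta = 1.0 fails (e.g. x near 3 gives |4 - 9| >= 0.1).
print(check_limit(lambda x: x * x, a=2.0, L=4.0, eps=0.1, eta=1.0))   # False
```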
== Notation == Variables are generally denoted by a single letter, most often from the Latin alphabet and less often from the Greek, which may be lowercase or capitalized. The letter may be followed by a subscript: a number (as in x2), another variable (xi), a word or abbreviation of a word as a label (xtotal) or a mathematical expression (x2i+1). Under the influence of computer science, some variable names in pure mathematics consist of several letters and digits. Following René Descartes (1596–1650), letters at the beginning of the alphabet such as a, b, c are commonly used for known values and parameters, and letters at the end of the alphabet such as x, y, z are commonly used for unknowns and variables of functions. In printed mathematics, the norm is to set variables and constants in an italic typeface. For example, a general quadratic function is conventionally written as ax2 + bx + c, where a, b and c are parameters (also called constants, because they are constant functions), while x is the variable of the function. A more explicit way to denote this function is x ↦ ax2 + bx + c, which clarifies the function-argument status of x and the constant status of a, b and c. Since c occurs in a term that is a constant function of x, it is called the constant term. Specific branches and applications of mathematics have specific naming conventions for variables. Variables with similar roles or meanings are often assigned consecutive letters or the same letter with different subscripts. For example, the three axes in 3D coordinate space are conventionally called x, y, and z. In physics, the names of variables are largely determined by the physical quantity they describe, but various naming conventions exist. A convention often followed in probability and statistics is to use X, Y, Z for the names of random variables, keeping x, y, z for variables representing corresponding better-defined values. 
=== Conventional variable names === a, b, c, d (sometimes extended to e, f) for parameters or coefficients a0, a1, a2, ... for situations where distinct letters are inconvenient ai or ui for the ith term of a sequence or the ith coefficient of a series f, g, h for functions (as in f(x)) i, j, k (sometimes l or h) for varying integers or indices in an indexed family, or unit vectors l and w for the length and width of a figure l also for a line, or in number theory for a prime number not equal to p n (with m as a second choice) for a fixed integer, such as a count of objects or the degree of a polynomial p for a prime number or a probability q for a prime power or a quotient r for a radius, a remainder or a correlation coefficient t for time x, y, z for the three Cartesian coordinates of a point in Euclidean geometry or the corresponding axes z for a complex number, or in statistics a normal random variable α, β, γ, θ, φ for angle measures ε (with δ as a second choice) for an arbitrarily small positive number λ for an eigenvalue Σ (capital sigma) for a sum, or σ (lowercase sigma) in statistics for the standard deviation μ for a mean == Specific kinds of variables == It is common for variables to play different roles in the same mathematical formula, and names or qualifiers have been introduced to distinguish them. For example, the general cubic equation a x 3 + b x 2 + c x + d = 0 , {\displaystyle ax^{3}+bx^{2}+cx+d=0,} is interpreted as having five variables: four, a, b, c, d, which are taken to be given numbers and the fifth variable, x, is understood to be an unknown number. To distinguish them, the variable x is called an unknown, and the other variables are called parameters or coefficients, or sometimes constants, although this last terminology is incorrect for an equation, and should be reserved for the function defined by the left-hand side of this equation. In the context of functions, the term variable refers commonly to the arguments of the functions. 
This is typically the case in sentences like "function of a real variable", "x is the variable of the function f : x ↦ f(x)", "f is a function of the variable x" (meaning that the argument of the function is referred to by the variable x). In the same context, variables that are independent of x define constant functions and are therefore called constant. For example, a constant of integration is an arbitrary constant function that is added to a particular antiderivative to obtain the other antiderivatives. Because of the strong relationship between polynomials and polynomial functions, the term "constant" is often used to denote the coefficients of a polynomial, which are constant functions of the indeterminates. Other specific names for variables are: An unknown is a variable in an equation which has to be solved for. An indeterminate is a symbol, commonly called variable, that appears in a polynomial or a formal power series. Formally speaking, an indeterminate is not a variable, but a constant in the polynomial ring or the ring of formal power series. However, because of the strong relationship between polynomials or power series and the functions that they define, many authors consider indeterminates as a special kind of variables. A parameter is a quantity (usually a number) which is a part of the input of a problem, and remains constant during the whole solution of this problem. For example, in mechanics the mass and the size of a solid body are parameters for the study of its movement. In computer science, parameter has a different meaning and denotes an argument of a function. Free variables and bound variables A random variable is a kind of variable that is used in probability theory and its applications. All these denominations of variables are of semantic nature, and the way of computing with them (syntax) is the same for all. 
=== Dependent and independent variables === In calculus and its application to physics and other sciences, it is rather common to consider a variable, say y, whose possible values depend on the value of another variable, say x. In mathematical terms, the dependent variable y represents the value of a function of x. To simplify formulas, it is often useful to use the same symbol for the dependent variable y and the function mapping x onto y. For example, the state of a physical system depends on measurable quantities such as the pressure, the temperature, the spatial position, ..., and all these quantities vary when the system evolves; that is, they are functions of time. In the formulas describing the system, these quantities are represented by variables which are dependent on the time, and thus considered implicitly as functions of time. Therefore, in a formula, a dependent variable is a variable that is implicitly a function of another (or several other) variables. An independent variable is a variable that is not dependent. Whether a variable is dependent or independent often depends on the point of view and is not intrinsic. For example, in the notation f(x, y, z), the three variables may be all independent and the notation represents a function of three variables. On the other hand, if y and z depend on x (are dependent variables) then the notation represents a function of the single independent variable x. === Examples === If one defines a function f from the real numbers to the real numbers by f ( x ) = x 2 + sin ⁡ ( x + 4 ) {\displaystyle f(x)=x^{2}+\sin(x+4)} then x is a variable standing for the argument of the function being defined, which can be any real number.
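The point-of-view dependence of f(x, y, z) described above can be made concrete in code; a minimal sketch (all names and the sample expression are hypothetical):

```python
# The same expression viewed two ways.
def f(x, y, z):
    return x + y * z

# View 1: x, y and z are all independent -- a function of three variables.
print(f(1.0, 2.0, 3.0))  # 7.0

# View 2: y and z depend on x, so f effectively becomes
# a function g of the single independent variable x.
def y_of(x):
    return 2.0 * x

def z_of(x):
    return x + 1.0

def g(x):
    return f(x, y_of(x), z_of(x))

print(g(1.0))  # 1 + (2*1)*(1+1) = 5.0
```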
In the identity ∑ i = 1 n i = n 2 + n 2 {\displaystyle \sum _{i=1}^{n}i={\frac {n^{2}+n}{2}}} the variable i is a summation variable which designates in turn each of the integers 1, 2, ..., n (it is also called index because its variation is over a discrete set of values) while n is a parameter (it does not vary within the formula). In the theory of polynomials, a polynomial of degree 2 is generally denoted as ax2 + bx + c, where a, b and c are called coefficients (they are assumed to be fixed, i.e., parameters of the problem considered) while x is called a variable. When studying this polynomial for its polynomial function this x stands for the function argument. When studying the polynomial as an object in itself, x is taken to be an indeterminate, and would often be written with a capital letter instead to indicate this status. ==== Example: the ideal gas law ==== Consider the equation describing the ideal gas law, P V = N k B T . {\displaystyle PV=Nk_{\text{B}}T.} This equation would generally be interpreted to have four variables, and one constant. The constant is kB, the Boltzmann constant. One of the variables, N, the number of particles, is a positive integer (and therefore a discrete variable), while the other three, P, V and T, for pressure, volume and temperature, are continuous variables. One could rearrange this equation to obtain P as a function of the other variables, P ( V , N , T ) = N k B T V . {\displaystyle P(V,N,T)={\frac {Nk_{\text{B}}T}{V}}.} Then P, as a function of the other variables, is the dependent variable, while its arguments, V, N and T, are independent variables. One could approach this function more formally and think about its domain and range: in function notation, here P is a function P : R > 0 × N × R > 0 → R {\displaystyle P:\mathbb {R} _{>0}\times \mathbb {N} \times \mathbb {R} _{>0}\rightarrow \mathbb {R} } . 
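The ideal gas law viewed as the function P(V, N, T), with kB held as a constant, can be sketched as follows (SI units; the numeric example values are illustrative):

```python
K_B = 1.380649e-23  # Boltzmann constant in J/K (exact in SI since 2019)

def pressure(V, N, T):
    """P(V, N, T) = N * k_B * T / V, in pascals.

    V: volume in m^3; N: particle count; T: temperature in K.
    """
    return N * K_B * T / V

# About one mole of particles at 273.15 K in 22.4 L gives
# roughly atmospheric pressure.
p = pressure(V=0.0224, N=6.02214076e23, T=273.15)
print(f"{p:.0f} Pa")  # approximately 1.01e5 Pa (about 1 atm)
```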
However, in an experiment, in order to determine the dependence of pressure on a single one of the independent variables, it is necessary to fix all but one of the variables, say T. This gives a function P ( T ) = N k B T V , {\displaystyle P(T)={\frac {Nk_{\text{B}}T}{V}},} where now N and V are also regarded as constants. Mathematically, this constitutes a partial application of the earlier function P. This illustrates how independent variables and constants largely depend on the point of view taken. One could even regard kB as a variable to obtain a function P ( V , N , T , k B ) = N k B T V . {\displaystyle P(V,N,T,k_{\text{B}})={\frac {Nk_{\text{B}}T}{V}}.} == Moduli spaces == Considering constants and variables can lead to the concept of moduli spaces. For illustration, consider the equation for a parabola, y = a x 2 + b x + c , {\displaystyle y=ax^{2}+bx+c,} where a, b, c, x and y are all considered to be real. The set of points (x, y) in the 2D plane satisfying this equation traces out the graph of a parabola. Here, a, b and c are regarded as constants, which specify the parabola, while x and y are variables. If we instead regard a, b and c as variables, we observe that each 3-tuple (a, b, c) corresponds to a different parabola. That is, they specify coordinates on the 'space of parabolas': this is known as a moduli space of parabolas. == See also == Lambda calculus Observable variable Physical constant Propositional variable == References == == Bibliography ==
Wikipedia:Variable and attribute (research)#0
In science and research, an attribute is a quality of an object (person, thing, etc.). Attributes are closely related to variables. A variable is a logical set of attributes. Variables can "vary" – for example, be high or low. How high, or how low, is determined by the value of the attribute (and in fact, an attribute could be just the word "low" or "high"). (For example, see: Binary option.) While an attribute is often intuitive, the variable is the operationalized way in which the attribute is represented for further data processing. In data processing, data are often represented by a combination of items (objects organized in rows) and multiple variables (organized in columns). Values of each variable statistically "vary" (or are distributed) across the variable's domain. A domain is the set of all possible values that a variable is allowed to have. The values are ordered in a logical way and must be defined for each variable. Domains can be bigger or smaller. The smallest possible domains have those variables that can only have two values, also called binary (or dichotomous) variables. Bigger domains have non-dichotomous variables and the ones with a higher level of measurement. (See also domain of discourse.) Semantically, greater precision can be obtained when considering an object's characteristics by distinguishing 'attributes' (characteristics that are attributed to an object) from 'traits' (characteristics that are inherent to the object). == Examples == Age is an attribute that can be operationalized in many ways. It can be dichotomized so that only two values – "old" and "young" – are allowed for further data processing. In this case the attribute "age" is operationalized as a binary variable. If more than two values are possible and they can be ordered, the attribute is represented by an ordinal variable, such as "young", "middle age", and "old". It can also take ratio values, such as 1, 2, 3, ..., 99.
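The different operationalizations of the "age" attribute can be sketched in code (the cutoff values are illustrative assumptions, not from the source):

```python
ages = [15, 34, 52, 71]  # raw values of the "age" attribute

# Binary (dichotomous) variable: domain {"young", "old"}, cutoff at 50.
binary = ["old" if a >= 50 else "young" for a in ages]

# Ordinal variable: domain {"young", "middle age", "old"}, ordered categories.
def ordinal(a):
    if a < 35:
        return "young"
    if a < 60:
        return "middle age"
    return "old"

print(binary)                      # ['young', 'young', 'old', 'old']
print([ordinal(a) for a in ages])  # ['young', 'young', 'middle age', 'old']
print(ages)                        # the raw numeric values themselves
```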
The "social class" attribute can be operationalized in similar ways to age, including "lower", "middle" and "upper class", and each class could be differentiated between upper and lower, thus changing the three attributes into six (see the model proposed by William Lloyd Warner); or it could use different terminology (such as the working class, as in the model by Gilbert and Kahl). == See also == Qualitative data Quantitative data Control variable Dependent and independent variables == Notes ==
Wikipedia:Variational perturbation theory#0
In mathematics, variational perturbation theory (VPT) is a mathematical method to convert divergent power series in a small expansion parameter, say s = ∑ n = 0 ∞ a n g n {\displaystyle s=\sum _{n=0}^{\infty }a_{n}g^{n}} , into a convergent series in powers s = ∑ n = 0 ∞ b n / ( g ω ) n {\displaystyle s=\sum _{n=0}^{\infty }b_{n}/(g^{\omega })^{n}} , where ω {\displaystyle \omega } is a critical exponent (the so-called index of "approach to scaling" introduced by Franz Wegner). This is possible with the help of variational parameters, which are determined by optimization order by order in g {\displaystyle g} . The partial sums are converted to convergent partial sums by a method developed in 1992. Most perturbation expansions in quantum mechanics are divergent for any small coupling strength g {\displaystyle g} . They can be made convergent by VPT (for details see the first textbook cited below). The convergence is exponentially fast. After its success in quantum mechanics, VPT has been developed further to become an important mathematical tool in quantum field theory with its anomalous dimensions. Applications focus on the theory of critical phenomena. It has led to the most accurate predictions of critical exponents. More details can be read here. == References == == External links == Kleinert H., Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd edition, World Scientific (Singapore, 2004) (readable online here) (see Chapter 5) Kleinert H. and Verena Schulte-Frohlinde, Critical Properties of φ4-Theories, World Scientific (Singapore, 2001); Paperback ISBN 981-02-4658-7 (readable online here) (see Chapter 19) Feynman, R. P.; Kleinert, H. (1986). "Effective classical partition functions" (PDF). Physical Review A. 34 (6): 5080–5084. Bibcode:1986PhRvA..34.5080F. doi:10.1103/PhysRevA.34.5080. PMID 9897894.
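The variational step at the heart of VPT can be sketched in a formula. The following is a standard textbook presentation for the anharmonic oscillator, added here as an illustration rather than drawn from the article's text:

```latex
% Sketch of the variational step in VPT (standard presentation).
% Split the Hamiltonian using a trial frequency \Omega:
H \;=\; \underbrace{H_\Omega}_{\text{solvable}}
      \;+\; \underbrace{\bigl(H - H_\Omega\bigr)}_{\text{treated perturbatively}},
\qquad
H_\Omega = \frac{p^2}{2} + \frac{\Omega^2 x^2}{2}.
% Truncating the perturbation series at order N yields an
% \Omega-dependent approximation E^{(N)}(g,\Omega); the artificial
% parameter is then fixed order by order via the principle of
% minimal sensitivity:
\left.\frac{\partial E^{(N)}(g,\Omega)}{\partial \Omega}\right|_{\Omega=\Omega_N} = 0 .
```

Since the exact result cannot depend on the artificial parameter Ω, the optimal choice is the one where the truncated approximation is least sensitive to it; this is what produces the exponentially fast convergence mentioned above.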
Wikipedia:Vasile M. Popov#0
Vasile Mihai Popov (born July 7, 1928, Galaţi, Romania) is a leading systems theorist and control engineering specialist. He is well known for having developed a method to analyze stability of nonlinear dynamical systems, now known as Popov criterion. == Biography == He was born in Galaţi, Romania on July 7, 1928. He received the engineering degree in electronics from the Bucharest Polytechnic Institute in 1950. He worked for a few years as Assistant Professor at the Bucharest Polytechnic Institute in the Faculty of Electronics. His main research interests during this period were in frequency modulation and parametric oscillations. In the mid 1950s, he joined the Institute for Energy of Romanian Academy of Science in Bucharest. In the 1960s, Popov headed the Control group at the Institute of Energy of the Romanian Academy. In 1968 Popov left Romania. He was a visiting professor at the Electrical Engineering departments of University of California, Berkeley, and Stanford University, and then Professor in the department of electrical engineering at the University of Maryland College Park. In 1975 he joined the mathematics department of University of Florida Gainesville. He retired in 1993 and currently resides in Gainesville, Florida, USA. == Work == === Qualitative theory of differential equations === Motivated by stability issues in nuclear reactors and by his participation in a seminar series on qualitative theory of differential equations run by A. Halanay, Popov started working in stability of nonlinear feedback systems, in particular on the Lur'e-Postnikov problem. In 1958/59 he obtained, through a very original approach, the first frequency stability criterion for a class of nonlinear feedback control systems. 
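For reference, the frequency-domain condition now known as the Popov criterion is commonly stated as follows. This is the standard textbook form (with additional technical assumptions, e.g. a stable linear part), added here as an illustration; G(jω) denotes the frequency response of the linear part and the memoryless nonlinearity φ is assumed to lie in the sector [0, k]:

```latex
% Popov criterion (standard textbook form): the feedback system is
% absolutely stable if there exists q \ge 0 such that
\exists\, q \geq 0 :\quad
\operatorname{Re}\!\bigl[(1 + jq\omega)\,G(j\omega)\bigr] + \frac{1}{k} > 0
\qquad \forall\, \omega \geq 0 .
```

Graphically, this amounts to the "Popov plot" test: the curve (Re G(jω), ω Im G(jω)) must lie strictly to the right of a line of slope 1/q through the point −1/k on the real axis.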
He continued this work and obtained the equivalence between the state space (Lyapunov function based) approach and the frequency domain approach for stability, and obtained a very perceptive characterization of passive systems, nowadays known as the celebrated Kalman–Yakubovich–Popov lemma. === Hyperstability === In the early 1960s, Popov also conceived the notion of hyperstability, a concept that he viewed as a generalization of absolute stability. This introduced a new and very fruitful point of view for the analysis and synthesis of nonlinear feedback systems. This research work was published in the first half of the sixties and led to the book Hyperstability of Dynamic Systems, first published in Romania in 1966, and subsequently translated into French and English (Springer-Verlag, 1973). Popov was also the first to discover the geometric invariants of linear systems with respect to certain "transformation groups", and he introduced a "canonical" form for uniquely describing multivariable systems. == References == Anderson, B.D.O.; P. Kokotovic; I.D. Landau; J.C. Willems (2002). "Dissipativity of dynamical systems: applications in control -- dedicated to Vasile Mihai Popov". European Journal of Control. 8 (Special issue). == See also == Popov-Belevitch-Hautus test
Wikipedia:Vasily Denisov#0
Vasily Denisov (Russian: Васи́лий Никола́евич Дени́сов) (born 1951) is a Russian mathematician, Dr.Sc., Professor, a professor at the Faculty of Computational Mathematics and Cybernetics of Moscow State University. He graduated from the MSU Faculty of Computational Mathematics and Cybernetics (CMC) in 1976. He defended the thesis "On the behavior for large values of the time of solutions of parabolic equations" for the degree of Doctor of Physical and Mathematical Sciences (2011). He is the author of four books and more than 90 scientific articles. His areas of scientific interest are: stabilization of solutions of the Cauchy problem and boundary value problems for parabolic equations; the qualitative theory of partial differential equations. == References == == Bibliography == Faculty of Computational Mathematics and Cybernetics: History and Modernity: A Biographical Directory (print run of 1,500 copies). Moscow: Publishing house of Moscow University. 2010. pp. 174–175. ISBN 978-5-211-05838-5 – via Author-compiler Evgeny Grigoriev. == External links == Денисов Василий Николаевич. MSU Faculty of Computational Mathematics and Cybernetics (in Russian). Retrieved 2018-05-25. "Денисов Василий Николаевич - пользователь, сотрудник; ИСТИНА – Интеллектуальная Система Тематического Исследования НАукометрических данных". istina.msu.ru. Retrieved 2018-05-25. "Denisov Vasilii Nikolaevich — Scientific works". mathnet.ru. Retrieved 2018-05-25.
Wikipedia:Vasily Nemchinov#0
Nemchinov (masculine), Nemchinova (feminine) is a Russian-language patronymic surname, derived from the nickname nemchin, borrowed from Polish niemczyn for "German person". Notable people with the surname include: Natalia Nemchinova Oleh Nemchinov Sergei Nemchinov Vasily Nemchinov Vera Nemtchinova
Wikipedia:Vasily Vladimirov#0
Vasily Sergeyevich Vladimirov (Russian: Васи́лий Серге́евич Влади́миров; 9 January 1923 – 3 November 2012) was a Soviet and Russian mathematician working in the fields of number theory, mathematical physics, quantum field theory, numerical analysis, generalized functions, several complex variables, p-adic analysis, and multidimensional Tauberian theorems. == Life == Vladimirov was born in Petrograd in 1923, into a peasant family with five children. Under the impact of food shortages and poverty, he began schooling in 1930. He then went to a 7-year school in 1934, but transferred to the Leningrad Technical School of Hydrology and Meteorology in 1937. In 1939, at the age of sixteen, he enrolled in a night preparatory school for workers, and finally successfully progressed to Leningrad University to study physics. During the Second World War, Vladimirov took part in the defence of Leningrad against the German invasion, building defences, working as a tractor driver and, after training, as a meteorologist in the Air Force. He served in several different units, mainly as part of the air-defence system of Leningrad. He was given the rank of sergeant major in the reserves after the war and permitted to return to his studies. When he returned to university, Vladimirov shifted his focus of interest from physics to number theory. Under the advice of Boris Alekseevich Venkov (1900-1962), an expert on quadratic forms, he started undertaking research in number theory and attained a master's degree in 1948. In the first thesis of his master's study in Leningrad, he confirmed the existence of a non-extreme perfect quadratic form in six variables, related to Georgy Fedoseevich Voronoy's conjecture. In his second thesis, he approached packing problems for convex bodies initiated by Hermann Minkowski. Upon graduation, he was appointed as a junior researcher in the Leningrad Branch of the Steklov Mathematical Institute of the USSR Academy of Sciences.
As the Soviet atomic bomb programme got under way, Vladimirov was assigned to assist with the development of the bomb, working jointly with many top scientists and industrialists. He worked with Leonid Vitalyevich Kantorovich calculating critical parameters of certain simple nuclear systems. In 1950, when he was sent to Arzamas-16, he worked under the direction of Nikolai Nikolaevich Bogolyubov, who later became a long-term collaborator of Vladimirov. In Arzamas-16, Vladimirov worked on finding mathematical solutions for problems raised by physicists. He developed new techniques for the numerical solution of boundary value problems, especially for solving the kinetic equation of neutron transfer in nuclear reactors in 1952, a technique now known as the Vladimirov method. After the success of the bomb project, Vladimirov was awarded the Stalin Prize in 1953 for his contribution. He continued working on mathematics for the atomic bomb at the Central Scientific Research Institute for Artillery Armaments, where he served as a Senior Researcher in 1955. Vladimirov moved to the Steklov Mathematical Institute, Moscow, in 1956, under the supervision of Nikolay Nikolaevich Bogolyubov. There he started working on new mathematical branches for solving problems in quantum field theory. He defended his doctoral thesis in 1958, which contains the renowned 'Vladimirov variational principle'.
== Honours and awards == Hero of Socialist Labour Two Orders of Lenin Order of the Patriotic War 2nd class Two Orders of the Red Banner of Labour Medal of Zhukov Medal "For the Defence of Leningrad" Medal "For the Victory over Germany in the Great Patriotic War 1941–1945" Jubilee Medal "Twenty Years of Victory in the Great Patriotic War 1941–1945" Jubilee Medal "Thirty Years of Victory in the Great Patriotic War 1941–1945" Jubilee Medal "Forty Years of Victory in the Great Patriotic War 1941–1945" Jubilee Medal "50 Years of Victory in the Great Patriotic War 1941–1945" Jubilee Medal "60 Years of Victory in the Great Patriotic War 1941–1945" Jubilee Medal "50 Years of the Armed Forces of the USSR" Jubilee Medal "60 Years of the Armed Forces of the USSR" Jubilee Medal "70 Years of the Armed Forces of the USSR" Medal "In Commemoration of the 250th Anniversary of Leningrad" Medal "Veteran of Labour" Medal "In Commemoration of the 850th Anniversary of Moscow" Medal "In Commemoration of the 300th Anniversary of Saint Petersburg" Stalin Prize USSR State Prize == Selected publications == Vladimirov, V. S. (1966), Ehrenpreis, L. (ed.), Methods of the theory of functions of several complex variables. With a foreword of N.N. Bogolyubov, Cambridge-London: The M.I.T. Press, pp. XII+353, MR 0201669, Zbl 0125.31904 (Zentralblatt review of the original Russian edition). One of the first modern monographs on the theory of several complex variables, being different from other ones of the same period due to the extensive use of generalized functions. Vladimirov, V. S. (1979), Generalized functions in mathematical physics, Moscow: Mir Publishers, p. 362, ISBN 978-0-8285-0001-2, MR 0564116, Zbl 0515.46034. A textbook on the theory of generalized functions and their applications to mathematical physics and several complex variables. Vladimirov, V.S. (1983), Equations of mathematical physics (2nd ed.), Moscow: Mir Publishers, p. 
464, MR 0764399, Zbl 0207.09101 (Zentralblatt review of the first English edition). Vladimirov, V.S.; Drozzinov, Yu.N.; Zavialov, B.I. (1988), Tauberian theorems for generalized functions, Mathematics and Its Applications (Soviet Series), vol. 10, Dordrecht-Boston-London: Kluwer Academic Publishers, pp. XV+293, ISBN 978-90-277-2383-3, MR 0947960, Zbl 0636.40003. Vladimirov, V.S. (2002), Methods of the theory of generalized functions, Analytical Methods and Special Functions, vol. 6, London-New York City: Taylor & Francis, pp. XII+353, ISBN 978-0-415-27356-5, MR 2012831, Zbl 1078.46029. A monograph on the theory of generalized functions written with an eye towards their applications to several complex variables and mathematical physics, as is customary for the Author: it is a substantial revision of the textbook (Vladimirov 1979). == See also == Nikolay Bogolyubov Generalized function Edge-of-the-wedge theorem Riemann–Hilbert problem == References == === Biographical and general references === Bolibrukh, Andrey Andreevich; Volovich, Igor Vasil'evich; Faddeev, Lyudvig Dmitrievich; Gonchar, Andrei Aleksandrovich; Kadyshevskii, Vladimir Georgievich; Logunov, Anatoly Alekseevich; Marchuk, Guri Ivanovich; Mishchenko, Evgenii Frolovich; Nikol'skii, Sergei Mikhailovich; Novikov, Sergei Petrovich (2003), "Vasilii Sergeevich Vladimirov (on his 80th birthday)", Uspekhi Matematicheskikh Nauk (in Russian), 58 (1(349)): 199–207, Bibcode:2003RuMaS..58..199B, doi:10.1070/RM2003v058n01ABEH000608, MR 1992146, S2CID 250833289, Zbl 1050.01516. Bogolyubov, Nikolai Nikolaevich; Logunov, Anatoly Alekseevich; Marchuk, Guri Ivanovich (1983), "Vasilii Sergeevich Vladimirov (on his sixtieth birthday)", Uspekhi Matematicheskikh Nauk (in Russian), 38 (1(229)): 207–216, Bibcode:1983RuMaS..38..231B, doi:10.1070/RM1983v038n01ABEH003420, MR 0693751, S2CID 250881492, Zbl 0512.01021. 
Gonchar, Andrei Aleksandrovich; Marchuk, Guri Ivanovich; Novikov, Sergei Petrovich (1993), "Vasilii Sergeevich Vladimirov (on his seventieth birthday)", Uspekhi Matematicheskikh Nauk (in Russian), 48 (1(289)): 195–204, Bibcode:1993RuMaS..48..201G, doi:10.1070/RM1993v048n01ABEH001007, MR 1227969, S2CID 250909442, Zbl 0797.01012. == External links == Vladimirov's academic web page at the Russian Academy of Science. Vasily Vladimirov author page at Math-Net.Ru. Chuyanov, V.A. (2001) [1994], "Vladimirov method", Encyclopedia of Mathematics, EMS Press Drozhzhinov, Yu.N. (2001) [1994], "Vladimirov variational principle", Encyclopedia of Mathematics, EMS Press Vasily Vladimirov's obituary (in Russian)
Wikipedia:Vaughan's identity#0
In mathematics and analytic number theory, Vaughan's identity is an identity found by R. C. Vaughan (1977) that can be used to simplify Vinogradov's work on trigonometric sums. It can be used to estimate summatory functions of the form ∑ n ≤ N f ( n ) Λ ( n ) {\displaystyle \sum _{n\leq N}f(n)\Lambda (n)} where f is some arithmetic function of the natural numbers n, whose values in applications are often roots of unity, and Λ is the von Mangoldt function. == Procedure for applying the method == The motivation for Vaughan's construction of his identity is briefly discussed at the beginning of Chapter 24 in Davenport. For now, we will skip over most of the technical details motivating the identity and its usage in applications, and instead focus on the setup of its construction by parts. Following the reference, we construct four distinct sums based on the expansion of the logarithmic derivative of the Riemann zeta function in terms of functions which are partial Dirichlet series truncated at the upper bounds U {\displaystyle U} and V {\displaystyle V} , respectively. More precisely, we define F ( s ) = ∑ m ≤ U Λ ( m ) m − s {\displaystyle F(s)=\sum _{m\leq U}\Lambda (m)m^{-s}} and G ( s ) = ∑ d ≤ V μ ( d ) d − s {\displaystyle G(s)=\sum _{d\leq V}\mu (d)d^{-s}} , which leads us to the exact identity that − ζ ′ ( s ) ζ ( s ) = F ( s ) − ζ ( s ) F ( s ) G ( s ) − ζ ′ ( s ) G ( s ) + ( − ζ ′ ( s ) ζ ( s ) − F ( s ) ) ( 1 − ζ ( s ) G ( s ) ) .
{\displaystyle -{\frac {\zeta ^{\prime }(s)}{\zeta (s)}}=F(s)-\zeta (s)F(s)G(s)-\zeta ^{\prime }(s)G(s)+\left(-{\frac {\zeta ^{\prime }(s)}{\zeta (s)}}-F(s)\right)(1-\zeta (s)G(s)).} This last expansion implies that we can write Λ ( n ) = a 1 ( n ) + a 2 ( n ) + a 3 ( n ) + a 4 ( n ) , {\displaystyle \Lambda (n)=a_{1}(n)+a_{2}(n)+a_{3}(n)+a_{4}(n),} where the component functions are defined to be a 1 ( n ) := { Λ ( n ) , if n ≤ U ; 0 , if n > U a 2 ( n ) := − ∑ d ≤ V m ≤ U m d r = n Λ ( m ) μ ( d ) a 3 ( n ) := ∑ d ≤ V h d = n μ ( d ) log ⁡ ( h ) a 4 ( n ) := − ∑ k > 1 m > U m k = n Λ ( m ) ( ∑ d ≤ V d | k μ ( d ) ) . {\displaystyle {\begin{aligned}a_{1}(n)&:={\Biggl \{}{\begin{matrix}\Lambda (n),&{\text{ if }}n\leq U;\\0,&{\text{ if }}n>U\end{matrix}}\\a_{2}(n)&:=-\sum _{\stackrel {mdr=n}{\stackrel {m\leq U}{d\leq V}}}\Lambda (m)\mu (d)\\a_{3}(n)&:=\sum _{\stackrel {hd=n}{d\leq V}}\mu (d)\log(h)\\a_{4}(n)&:=-\sum _{\stackrel {mk=n}{\stackrel {m>U}{k>1}}}\Lambda (m)\left(\sum _{\stackrel {d|k}{d\leq V}}\mu (d)\right).\end{aligned}}} We then define the corresponding summatory functions for 1 ≤ i ≤ 4 {\displaystyle 1\leq i\leq 4} to be S i ( N ) := ∑ n ≤ N f ( n ) a i ( n ) , {\displaystyle S_{i}(N):=\sum _{n\leq N}f(n)a_{i}(n),} so that we can write ∑ n ≤ N f ( n ) Λ ( n ) = S 1 ( N ) + S 2 ( N ) + S 3 ( N ) + S 4 ( N ) . {\displaystyle \sum _{n\leq N}f(n)\Lambda (n)=S_{1}(N)+S_{2}(N)+S_{3}(N)+S_{4}(N).} Finally, at the conclusion of a multi-page argument of technical and at times delicate estimations of these sums, we obtain the following form of Vaughan's identity when we assume that | f ( n ) | ≤ 1 , ∀ n {\displaystyle |f(n)|\leq 1,\ \forall n} , U , V ≥ 2 {\displaystyle U,V\geq 2} , and U V ≤ N {\displaystyle UV\leq N} : ∑ n ≤ N f ( n ) Λ ( n ) ≪ U + ( log ⁡ N ) × ∑ t ≤ U V ( max w | ∑ w ≤ r ≤ N t f ( r t ) | ) + N ( log ⁡ N ) 3 × max U ≤ M ≤ N / V max V ≤ j ≤ N / M ( ∑ V < k ≤ N / M | ∑ m ≤ N / j m ≤ N / k M < m ≤ 2 M f ( m j ) f ( m k ) ¯ | ) 1 / 2 ( V 1 ) . 
{\displaystyle \sum _{n\leq N}f(n)\Lambda (n)\ll U+(\log N)\times \sum _{t\leq UV}\left(\max _{w}\left|\sum _{w\leq r\leq {\frac {N}{t}}}f(rt)\right|\right)+{\sqrt {N}}(\log N)^{3}\times \max _{U\leq M\leq N/V}\max _{V\leq j\leq N/M}\left(\sum _{V<k\leq N/M}\left|\sum _{\stackrel {M<m\leq 2M}{\stackrel {m\leq N/k}{m\leq N/j}}}f(mj){\bar {f(mk)}}\right|\right)^{1/2}\mathbf {(V1)} .} It is remarked that in some instances sharper estimates can be obtained from Vaughan's identity by treating the component sum S 2 {\displaystyle S_{2}} more carefully by expanding it in the form of S 2 = ∑ t ≤ U V ⟼ ∑ t ≤ U + ∑ U < t ≤ U V =: S 2 ′ + S 2 ′ ′ . {\displaystyle S_{2}=\sum _{t\leq UV}\longmapsto \sum _{t\leq U}+\sum _{U<t\leq UV}=:S_{2}^{\prime }+S_{2}^{\prime \prime }.} The optimality of the upper bound obtained by applying Vaughan's identity appears to be application-dependent with respect to the best functions U = f U ( N ) {\displaystyle U=f_{U}(N)} and V = f V ( N ) {\displaystyle V=f_{V}(N)} we can choose to input into equation (V1). See the applications cited in the next section for specific examples that arise in the different contexts respectively considered by multiple authors. == Applications == Vaughan's identity has been used to simplify the proof of the Bombieri–Vinogradov theorem and to study Kummer sums (see the references and external links below). In Chapter 25 of Davenport, one application of Vaughan's identity is to estimate an important prime-related exponential sum of Vinogradov defined by S ( α ) := ∑ n ≤ N Λ ( n ) e ( n α ) . 
{\displaystyle S(\alpha ):=\sum _{n\leq N}\Lambda (n)e\left(n\alpha \right).} In particular, we obtain an asymptotic upper bound for these sums (typically evaluated at irrational α ∈ R ∖ Q {\displaystyle \alpha \in \mathbb {R} \setminus \mathbb {Q} } ) whose rational approximations satisfy | α − a q | ≤ 1 q 2 , ( a , q ) = 1 , {\displaystyle \left|\alpha -{\frac {a}{q}}\right|\leq {\frac {1}{q^{2}}},(a,q)=1,} of the form S ( α ) ≪ ( N q + N 4 / 5 + N q ) ( log ⁡ N ) 4 . {\displaystyle S(\alpha )\ll \left({\frac {N}{\sqrt {q}}}+N^{4/5}+{\sqrt {Nq}}\right)(\log N)^{4}.} The estimate follows from Vaughan's identity by proving, via a somewhat intricate argument, that S ( α ) ≪ ( U V + q + N U + N V + N q + N q ) ( log ⁡ ( q N ) ) 4 , {\displaystyle S(\alpha )\ll \left(UV+q+{\frac {N}{\sqrt {U}}}+{\frac {N}{\sqrt {V}}}+{\frac {N}{\sqrt {q}}}+{\sqrt {Nq}}\right)(\log(qN))^{4},} and then deducing the first formula above in the non-trivial cases when q ≤ N {\displaystyle q\leq N} and with U = V = N 2 / 5 {\displaystyle U=V=N^{2/5}} . Another application of Vaughan's identity is found in Chapter 26 of Davenport where the method is employed to derive estimates for sums (exponential sums) of three primes. Further examples of Vaughan's identity in practice are given in the references cited below. == Generalizations == Vaughan's identity was generalized by Heath-Brown (1982). == Notes == == References == Davenport, Harold (31 October 2000). Multiplicative Number Theory (Third ed.). New York: Springer Graduate Texts in Mathematics. ISBN 0-387-95097-4. Graham, S.W. (2001) [1994], "Vaughan's identity", Encyclopedia of Mathematics, EMS Press Heath-Brown, D. R. (1982), "Prime numbers in short intervals and a generalized Vaughan identity", Can. J. Math., 34 (6): 1365–1377, doi:10.4153/CJM-1982-095-9, MR 0678676 Vaughan, R.C.
(1977), "Sommes trigonométriques sur les nombres premiers", Comptes Rendus de l'Académie des Sciences, Série A, 285: 981–983, MR 0498434 == External links == Proof Wiki on Vaughan's Identity Joni's Math Notes (very detailed exposition) Encyclopedia of Mathematics Terry Tao's blog on the large sieve and the Bombieri-Vinogradov theorem
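The exact decomposition Λ(n) = a₁(n) + a₂(n) + a₃(n) + a₄(n) underlying the identity can be checked numerically for small parameters; a minimal sketch (the choice U = V = 3 and the range N = 200 are arbitrary, picked only to keep the brute-force trial division fast):

```python
import math

def mangoldt(n):
    """von Mangoldt function: log p if n = p^k for a prime p, else 0."""
    if n < 2:
        return 0.0
    p = next(d for d in range(2, n + 1) if n % d == 0)  # smallest prime factor
    m = n
    while m % p == 0:
        m //= p
    return math.log(p) if m == 1 else 0.0

def mobius(n):
    """Moebius function via trial-division factorization."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0            # squared prime factor
            result = -result
        d += 1
    return -result if n > 1 else result

U = V = 3
N = 200
for n in range(2, N + 1):
    lam = mangoldt(n)
    a1 = lam if n <= U else 0.0
    a2 = -sum(mangoldt(m) * mobius(d)               # m*d*r = n, m <= U, d <= V
              for m in range(1, U + 1) if n % m == 0
              for d in range(1, V + 1) if (n // m) % d == 0)
    a3 = sum(mobius(d) * math.log(n // d)           # h*d = n, d <= V
             for d in range(1, V + 1) if n % d == 0)
    a4 = -sum(mangoldt(m) *                         # m*k = n, m > U, k > 1
              sum(mobius(d) for d in range(1, V + 1) if k % d == 0)
              for m in range(U + 1, n + 1) if n % m == 0
              for k in [n // m] if k > 1)
    assert abs(lam - (a1 + a2 + a3 + a4)) < 1e-9, n
print("identity verified for n <= %d" % N)
```

Each aᵢ here is a direct transcription of the component functions defined in the article; the assertion confirms, term by term, that the four Dirichlet-series pieces recombine to −ζ′/ζ.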
Wikipedia:Vaṭeśvara-siddhānta#0
Vaṭeśvara (Sanskrit: वटेश्वर Sanskrit pronunciation: [vəʈeːɕvərə]) (born c. 880), was a tenth-century Indian mathematician from Kashmir who presented several trigonometric identities. He was the author (at the age of 24) of the Vaṭeśvara-siddhānta, written in 904 AD, a treatise focusing on astronomy and applied mathematics. The work criticized Brahmagupta and defended Aryabhata I. An edition of the first three chapters was published in 1962 by R. S. Sharma and Mukund Mishra. Al-Biruni referred to the works of Vaṭeśvara, particularly the Karaṇasāra, noting that the author was the son of Mihdatta, who belonged to Nagarapura (also referred to as Anandapura, now named Vadnagar). The Karaṇasāra uses the Saka year 821 (899 AD) as a reference year. == References == == Other sources == K. V. Sarma (1997), "Vatesvara", Encyclopaedia of the History of Science, Technology, and Medicine in Non-Western Cultures edited by Helaine Selin, Springer, ISBN 978-0-7923-4066-9
Wikipedia:Vector algebra relations#0
The following are important identities in vector algebra. Identities that only involve the magnitude of a vector ‖ A ‖ {\displaystyle \|\mathbf {A} \|} and the dot product (scalar product) of two vectors A·B apply to vectors in any dimension, while identities that use the cross product (vector product) A×B only apply in three dimensions, since the cross product is only defined there. Most of these relations can be dated to the founder of vector calculus, Josiah Willard Gibbs, if not earlier. == Magnitudes == The magnitude of a vector A can be expressed using the dot product: ‖ A ‖ 2 = A ⋅ A {\displaystyle \|\mathbf {A} \|^{2}=\mathbf {A\cdot A} } In three-dimensional Euclidean space, the magnitude of a vector is determined from its three components using Pythagoras' theorem: ‖ A ‖ 2 = A 1 2 + A 2 2 + A 3 2 {\displaystyle \|\mathbf {A} \|^{2}=A_{1}^{2}+A_{2}^{2}+A_{3}^{2}} == Inequalities == The Cauchy–Schwarz inequality: A ⋅ B ≤ ‖ A ‖ ‖ B ‖ {\displaystyle \mathbf {A} \cdot \mathbf {B} \leq \left\|\mathbf {A} \right\|\left\|\mathbf {B} \right\|} The triangle inequality: ‖ A + B ‖ ≤ ‖ A ‖ + ‖ B ‖ {\displaystyle \|\mathbf {A+B} \|\leq \|\mathbf {A} \|+\|\mathbf {B} \|} The reverse triangle inequality: ‖ A − B ‖ ≥ | ‖ A ‖ − ‖ B ‖ | {\displaystyle \|\mathbf {A-B} \|\geq {\Bigl |}\|\mathbf {A} \|-\|\mathbf {B} \|{\Bigr |}} == Angles == The vector product and the scalar product of two vectors define the angle between them, say θ: sin ⁡ θ = ‖ A × B ‖ ‖ A ‖ ‖ B ‖ ( − π < θ ≤ π ) {\displaystyle \sin \theta ={\frac {\|\mathbf {A} \times \mathbf {B} \|}{\left\|\mathbf {A} \right\|\left\|\mathbf {B} \right\|}}\quad (-\pi <\theta \leq \pi )} To satisfy the right-hand rule, for positive θ, vector B is counter-clockwise from A, and for negative θ it is clockwise.
cos ⁡ θ = A ⋅ B ‖ A ‖ ‖ B ‖ ( − π < θ ≤ π ) {\displaystyle \cos \theta ={\frac {\mathbf {A} \cdot \mathbf {B} }{\left\|\mathbf {A} \right\|\left\|\mathbf {B} \right\|}}\quad (-\pi <\theta \leq \pi )} The Pythagorean trigonometric identity then provides: ‖ A × B ‖ 2 + ( A ⋅ B ) 2 = ‖ A ‖ 2 ‖ B ‖ 2 {\displaystyle \left\|\mathbf {A\times B} \right\|^{2}+(\mathbf {A} \cdot \mathbf {B} )^{2}=\left\|\mathbf {A} \right\|^{2}\left\|\mathbf {B} \right\|^{2}} If a vector A = (Ax, Ay, Az) makes angles α, β, γ with an orthogonal set of x-, y- and z-axes, then: cos ⁡ α = A x A x 2 + A y 2 + A z 2 = A x ‖ A ‖ , {\displaystyle \cos \alpha ={\frac {A_{x}}{\sqrt {A_{x}^{2}+A_{y}^{2}+A_{z}^{2}}}}={\frac {A_{x}}{\|\mathbf {A} \|}}\ ,} and analogously for angles β, γ. Consequently: A = ‖ A ‖ ( cos ⁡ α i ^ + cos ⁡ β j ^ + cos ⁡ γ k ^ ) , {\displaystyle \mathbf {A} =\left\|\mathbf {A} \right\|\left(\cos \alpha \ {\hat {\mathbf {i} }}+\cos \beta \ {\hat {\mathbf {j} }}+\cos \gamma \ {\hat {\mathbf {k} }}\right),} with i ^ , j ^ , k ^ {\displaystyle {\hat {\mathbf {i} }},\ {\hat {\mathbf {j} }},\ {\hat {\mathbf {k} }}} unit vectors along the axis directions. == Areas and volumes == The area Σ of a parallelogram with sides A and B containing the angle θ is: Σ = A B sin ⁡ θ , {\displaystyle \Sigma =AB\sin \theta ,} which will be recognized as the magnitude of the vector cross product of the vectors A and B lying along the sides of the parallelogram. That is: Σ = ‖ A × B ‖ = ‖ A ‖ 2 ‖ B ‖ 2 − ( A ⋅ B ) 2 . {\displaystyle \Sigma =\left\|\mathbf {A} \times \mathbf {B} \right\|={\sqrt {\left\|\mathbf {A} \right\|^{2}\left\|\mathbf {B} \right\|^{2}-\left(\mathbf {A} \cdot \mathbf {B} \right)^{2}}}\ .} (If A, B are two-dimensional vectors, this is equal to the determinant of the 2 × 2 matrix with rows A, B.) 
The square of this expression is: Σ 2 = ( A ⋅ A ) ( B ⋅ B ) − ( A ⋅ B ) ( B ⋅ A ) = Γ ( A , B ) , {\displaystyle \Sigma ^{2}=(\mathbf {A\cdot A} )(\mathbf {B\cdot B} )-(\mathbf {A\cdot B} )(\mathbf {B\cdot A} )=\Gamma (\mathbf {A} ,\ \mathbf {B} )\ ,} where Γ(A, B) is the Gram determinant of A and B defined by: Γ ( A , B ) = | A ⋅ A A ⋅ B B ⋅ A B ⋅ B | . {\displaystyle \Gamma (\mathbf {A} ,\ \mathbf {B} )={\begin{vmatrix}\mathbf {A\cdot A} &\mathbf {A\cdot B} \\\mathbf {B\cdot A} &\mathbf {B\cdot B} \end{vmatrix}}\ .} In a similar fashion, the squared volume V of a parallelepiped spanned by the three vectors A, B, C is given by the Gram determinant of the three vectors: V 2 = Γ ( A , B , C ) = | A ⋅ A A ⋅ B A ⋅ C B ⋅ A B ⋅ B B ⋅ C C ⋅ A C ⋅ B C ⋅ C | , {\displaystyle V^{2}=\Gamma (\mathbf {A} ,\ \mathbf {B} ,\ \mathbf {C} )={\begin{vmatrix}\mathbf {A\cdot A} &\mathbf {A\cdot B} &\mathbf {A\cdot C} \\\mathbf {B\cdot A} &\mathbf {B\cdot B} &\mathbf {B\cdot C} \\\mathbf {C\cdot A} &\mathbf {C\cdot B} &\mathbf {C\cdot C} \end{vmatrix}}\ ,} Since A, B, C are three-dimensional vectors, this is equal to the square of the scalar triple product det [ A , B , C ] = | A , B , C | {\displaystyle \det[\mathbf {A} ,\mathbf {B} ,\mathbf {C} ]=|\mathbf {A} ,\mathbf {B} ,\mathbf {C} |} below. This process can be extended to n-dimensions. == Addition and multiplication of vectors == Commutativity of addition: A + B = B + A {\displaystyle \mathbf {A} +\mathbf {B} =\mathbf {B} +\mathbf {A} } . Commutativity of scalar product: A ⋅ B = B ⋅ A {\displaystyle \mathbf {A} \cdot \mathbf {B} =\mathbf {B} \cdot \mathbf {A} } . Anticommutativity of cross product: A × B = − ( B × A ) {\displaystyle \mathbf {A} \times \mathbf {B} =\mathbf {-} (\mathbf {B} \times \mathbf {A} )} . Distributivity of multiplication by a scalar over addition: c ( A + B ) = c A + c B {\displaystyle c(\mathbf {A} +\mathbf {B} )=c\mathbf {A} +c\mathbf {B} } . 
Distributivity of scalar product over addition: ( A + B ) ⋅ C = A ⋅ C + B ⋅ C {\displaystyle \left(\mathbf {A} +\mathbf {B} \right)\cdot \mathbf {C} =\mathbf {A} \cdot \mathbf {C} +\mathbf {B} \cdot \mathbf {C} } . Distributivity of vector product over addition: ( A + B ) × C = A × C + B × C {\displaystyle (\mathbf {A} +\mathbf {B} )\times \mathbf {C} =\mathbf {A} \times \mathbf {C} +\mathbf {B} \times \mathbf {C} } . Scalar triple product: A ⋅ ( B × C ) = B ⋅ ( C × A ) = C ⋅ ( A × B ) = | A B C | = | A x B x C x A y B y C y A z B z C z | . {\displaystyle \mathbf {A} \cdot (\mathbf {B} \times \mathbf {C} )=\mathbf {B} \cdot (\mathbf {C} \times \mathbf {A} )=\mathbf {C} \cdot (\mathbf {A} \times \mathbf {B} )=|\mathbf {A} \,\mathbf {B} \,\mathbf {C} |={\begin{vmatrix}A_{x}&B_{x}&C_{x}\\A_{y}&B_{y}&C_{y}\\A_{z}&B_{z}&C_{z}\end{vmatrix}}.} Vector triple product: A × ( B × C ) = ( A ⋅ C ) B − ( A ⋅ B ) C {\displaystyle \mathbf {A} \times (\mathbf {B} \times \mathbf {C} )=(\mathbf {A} \cdot \mathbf {C} )\mathbf {B} -(\mathbf {A} \cdot \mathbf {B} )\mathbf {C} } . Jacobi identity: A × ( B × C ) + C × ( A × B ) + B × ( C × A ) = 0 . {\displaystyle \mathbf {A} \times (\mathbf {B} \times \mathbf {C} )+\mathbf {C} \times (\mathbf {A} \times \mathbf {B} )+\mathbf {B} \times (\mathbf {C} \times \mathbf {A} )=\mathbf {0} .} Lagrange's identity: | A × B | 2 = ( A ⋅ A ) ( B ⋅ B ) − ( A ⋅ B ) 2 {\displaystyle |\mathbf {A} \times \mathbf {B} |^{2}=(\mathbf {A} \cdot \mathbf {A} )(\mathbf {B} \cdot \mathbf {B} )-(\mathbf {A} \cdot \mathbf {B} )^{2}} . === Quadruple product === The name "quadruple product" is used for two different products, the scalar-valued scalar quadruple product and the vector-valued vector quadruple product or vector product of four vectors. 
==== Scalar quadruple product ==== The scalar quadruple product is defined as the dot product of two cross products: ( a × b ) ⋅ ( c × d ) , {\displaystyle (\mathbf {a\times b} )\cdot (\mathbf {c} \times \mathbf {d} )\ ,} where a, b, c, d are vectors in three-dimensional Euclidean space. It can be evaluated using the Binet-Cauchy identity: ( a × b ) ⋅ ( c × d ) = ( a ⋅ c ) ( b ⋅ d ) − ( a ⋅ d ) ( b ⋅ c ) . {\displaystyle (\mathbf {a\times b} )\cdot (\mathbf {c} \times \mathbf {d} )=(\mathbf {a\cdot c} )(\mathbf {b\cdot d} )-(\mathbf {a\cdot d} )(\mathbf {b\cdot c} )\ .} or using the determinant: ( a × b ) ⋅ ( c × d ) = | a ⋅ c a ⋅ d b ⋅ c b ⋅ d | . {\displaystyle (\mathbf {a\times b} )\cdot (\mathbf {c} \times \mathbf {d} )={\begin{vmatrix}\mathbf {a\cdot c} &\mathbf {a\cdot d} \\\mathbf {b\cdot c} &\mathbf {b\cdot d} \end{vmatrix}}\ .} ==== Vector quadruple product ==== The vector quadruple product is defined as the cross product of two cross products: ( a × b ) × ( c × d ) , {\displaystyle (\mathbf {a\times b} )\mathbf {\times } (\mathbf {c} \times \mathbf {d} )\ ,} where a, b, c, d are vectors in three-dimensional Euclidean space. It can be evaluated using the identity: ( a × b ) × ( c × d ) = ( a ⋅ ( b × d ) ) c − ( a ⋅ ( b × c ) ) d . {\displaystyle (\mathbf {a\times b} )\mathbf {\times } (\mathbf {c} \times \mathbf {d} )=(\mathbf {a} \cdot (\mathbf {b} \times \mathbf {d} ))\mathbf {c} -(\mathbf {a} \cdot (\mathbf {b} \times \mathbf {c} ))\mathbf {d} \ .} Equivalent forms can be obtained using the identity: ( b ⋅ ( c × d ) ) a − ( c ⋅ ( d × a ) ) b + ( d ⋅ ( a × b ) ) c − ( a ⋅ ( b × c ) ) d = 0 . 
{\displaystyle (\mathbf {b} \cdot (\mathbf {c} \times \mathbf {d} ))\mathbf {a} -(\mathbf {c} \cdot (\mathbf {d} \times \mathbf {a} ))\mathbf {b} +(\mathbf {d} \cdot (\mathbf {a} \times \mathbf {b} ))\mathbf {c} -(\mathbf {a} \cdot (\mathbf {b} \times \mathbf {c} ))\mathbf {d} =0\ .} This identity can also be written using tensor notation and the Einstein summation convention as follows: ( a × b ) × ( c × d ) = ε i j k a i c j d k b l − ε i j k b i c j d k a l = ε i j k a i b j d k c l − ε i j k a i b j c k d l {\displaystyle (\mathbf {a\times b} )\mathbf {\times } (\mathbf {c} \times \mathbf {d} )=\varepsilon _{ijk}a^{i}c^{j}d^{k}b^{l}-\varepsilon _{ijk}b^{i}c^{j}d^{k}a^{l}=\varepsilon _{ijk}a^{i}b^{j}d^{k}c^{l}-\varepsilon _{ijk}a^{i}b^{j}c^{k}d^{l}} where εijk is the Levi-Civita symbol. Related relationships: A consequence of the previous equation: | A B C | D = ( A ⋅ D ) ( B × C ) + ( B ⋅ D ) ( C × A ) + ( C ⋅ D ) ( A × B ) . {\displaystyle |\mathbf {A} \,\mathbf {B} \,\mathbf {C} |\,\mathbf {D} =(\mathbf {A} \cdot \mathbf {D} )\left(\mathbf {B} \times \mathbf {C} \right)+\left(\mathbf {B} \cdot \mathbf {D} \right)\left(\mathbf {C} \times \mathbf {A} \right)+\left(\mathbf {C} \cdot \mathbf {D} \right)\left(\mathbf {A} \times \mathbf {B} \right).} In 3 dimensions, a vector D can be expressed in terms of basis vectors {A,B,C} as: D = D ⋅ ( B × C ) | A B C | A + D ⋅ ( C × A ) | A B C | B + D ⋅ ( A × B ) | A B C | C . {\displaystyle \mathbf {D} \ =\ {\frac {\mathbf {D} \cdot (\mathbf {B} \times \mathbf {C} )}{|\mathbf {A} \,\mathbf {B} \,\mathbf {C} |}}\ \mathbf {A} +{\frac {\mathbf {D} \cdot (\mathbf {C} \times \mathbf {A} )}{|\mathbf {A} \,\mathbf {B} \,\mathbf {C} |}}\ \mathbf {B} +{\frac {\mathbf {D} \cdot (\mathbf {A} \times \mathbf {B} )}{|\mathbf {A} \,\mathbf {B} \,\mathbf {C} |}}\ \mathbf {C} .} == Applications == These relations are useful for deriving various formulas in spherical and Euclidean geometry. 
For example, if four points A, B, C, D are chosen on the unit sphere, and unit vectors a, b, c, d respectively are drawn from the center of the sphere to the four points, the identity: ( a × b ) ⋅ ( c × d ) = ( a ⋅ c ) ( b ⋅ d ) − ( a ⋅ d ) ( b ⋅ c ) , {\displaystyle (\mathbf {a\times b} )\mathbf {\cdot } (\mathbf {c\times d} )=(\mathbf {a\cdot c} )(\mathbf {b\cdot d} )-(\mathbf {a\cdot d} )(\mathbf {b\cdot c} )\ ,} in conjunction with the relation for the magnitude of the cross product: ‖ a × b ‖ = a b sin ⁡ θ a b , {\displaystyle \|\mathbf {a\times b} \|=ab\sin \theta _{ab}\ ,} and the dot product: a ⋅ b = a b cos ⁡ θ a b , {\displaystyle \mathbf {a\cdot b} =ab\cos \theta _{ab}\ ,} where a = b = 1 for the unit sphere, results in the identity among the angles attributed to Gauss: sin ⁡ θ a b sin ⁡ θ c d cos ⁡ x = cos ⁡ θ a c cos ⁡ θ b d − cos ⁡ θ a d cos ⁡ θ b c , {\displaystyle \sin \theta _{ab}\sin \theta _{cd}\cos x=\cos \theta _{ac}\cos \theta _{bd}-\cos \theta _{ad}\cos \theta _{bc}\ ,} where x is the angle between a × b and c × d, or equivalently, between the planes defined by these vectors. == See also == Vector calculus identities Vector space Geometric algebra == Further reading == Gibbs, Josiah Willard; Wilson, Edwin Bidwell (1901). Vector analysis: a text-book for the use of students of mathematics. Scribner.
Vector calculus identities
The following are important identities involving derivatives and integrals in vector calculus. == Operator notation == === Gradient === For a function f ( x , y , z ) {\displaystyle f(x,y,z)} in three-dimensional Cartesian coordinate variables, the gradient is the vector field: grad ⁡ ( f ) = ∇ f = ( ∂ ∂ x , ∂ ∂ y , ∂ ∂ z ) f = ∂ f ∂ x i + ∂ f ∂ y j + ∂ f ∂ z k {\displaystyle \operatorname {grad} (f)=\nabla f={\begin{pmatrix}\displaystyle {\frac {\partial }{\partial x}},\ {\frac {\partial }{\partial y}},\ {\frac {\partial }{\partial z}}\end{pmatrix}}f={\frac {\partial f}{\partial x}}\mathbf {i} +{\frac {\partial f}{\partial y}}\mathbf {j} +{\frac {\partial f}{\partial z}}\mathbf {k} } where i, j, k are the standard unit vectors for the x, y, z-axes. More generally, for a function of n variables ψ ( x 1 , … , x n ) {\displaystyle \psi (x_{1},\ldots ,x_{n})} , also called a scalar field, the gradient is the vector field: ∇ ψ = ( ∂ ∂ x 1 , … , ∂ ∂ x n ) ψ = ∂ ψ ∂ x 1 e 1 + ⋯ + ∂ ψ ∂ x n e n {\displaystyle \nabla \psi ={\begin{pmatrix}\displaystyle {\frac {\partial }{\partial x_{1}}},\ldots ,{\frac {\partial }{\partial x_{n}}}\end{pmatrix}}\psi ={\frac {\partial \psi }{\partial x_{1}}}\mathbf {e} _{1}+\dots +{\frac {\partial \psi }{\partial x_{n}}}\mathbf {e} _{n}} where e i ( i = 1 , 2 , . . . , n ) {\displaystyle \mathbf {e} _{i}\,(i=1,2,...,n)} are mutually orthogonal unit vectors. As the name implies, the gradient is proportional to, and points in the direction of, the function's most rapid (positive) change. For a vector field A = ( A 1 , … , A n ) {\displaystyle \mathbf {A} =\left(A_{1},\ldots ,A_{n}\right)} , also called a tensor field of order 1, the gradient or total derivative is the n × n Jacobian matrix: J A = d A = ( ∇ A ) T = ( ∂ A i ∂ x j ) i j . 
{\displaystyle \mathbf {J} _{\mathbf {A} }=d\mathbf {A} =(\nabla \!\mathbf {A} )^{\textsf {T}}=\left({\frac {\partial A_{i}}{\partial x_{j}}}\right)_{\!ij}.} For a tensor field T {\displaystyle \mathbf {T} } of any order k, the gradient grad ⁡ ( T ) = d T = ( ∇ T ) T {\displaystyle \operatorname {grad} (\mathbf {T} )=d\mathbf {T} =(\nabla \mathbf {T} )^{\textsf {T}}} is a tensor field of order k + 1. For a tensor field T {\displaystyle \mathbf {T} } of order k > 0, the tensor field ∇ T {\displaystyle \nabla \mathbf {T} } of order k + 1 is defined by the recursive relation ( ∇ T ) ⋅ C = ∇ ( T ⋅ C ) {\displaystyle (\nabla \mathbf {T} )\cdot \mathbf {C} =\nabla (\mathbf {T} \cdot \mathbf {C} )} where C {\displaystyle \mathbf {C} } is an arbitrary constant vector. === Divergence === In Cartesian coordinates, the divergence of a continuously differentiable vector field F = F x i + F y j + F z k {\displaystyle \mathbf {F} =F_{x}\mathbf {i} +F_{y}\mathbf {j} +F_{z}\mathbf {k} } is the scalar-valued function: div ⁡ F = ∇ ⋅ F = ( ∂ ∂ x , ∂ ∂ y , ∂ ∂ z ) ⋅ ( F x , F y , F z ) = ∂ F x ∂ x + ∂ F y ∂ y + ∂ F z ∂ z . {\displaystyle \operatorname {div} \mathbf {F} =\nabla \cdot \mathbf {F} ={\begin{pmatrix}\displaystyle {\frac {\partial }{\partial x}},\ {\frac {\partial }{\partial y}},\ {\frac {\partial }{\partial z}}\end{pmatrix}}\cdot {\begin{pmatrix}F_{x},\ F_{y},\ F_{z}\end{pmatrix}}={\frac {\partial F_{x}}{\partial x}}+{\frac {\partial F_{y}}{\partial y}}+{\frac {\partial F_{z}}{\partial z}}.} As the name implies, the divergence is a (local) measure of the degree to which vectors in the field diverge. The divergence of a tensor field T {\displaystyle \mathbf {T} } of non-zero order k is written as div ⁡ ( T ) = ∇ ⋅ T {\displaystyle \operatorname {div} (\mathbf {T} )=\nabla \cdot \mathbf {T} } , a contraction of a tensor field of order k − 1. Specifically, the divergence of a vector is a scalar. 
The divergence of a higher-order tensor field may be found by decomposing the tensor field into a sum of outer products and using the identity, ∇ ⋅ ( A ⊗ T ) = T ( ∇ ⋅ A ) + ( A ⋅ ∇ ) T {\displaystyle \nabla \cdot \left(\mathbf {A} \otimes \mathbf {T} \right)=\mathbf {T} (\nabla \cdot \mathbf {A} )+(\mathbf {A} \cdot \nabla )\mathbf {T} } where A ⋅ ∇ {\displaystyle \mathbf {A} \cdot \nabla } is the directional derivative in the direction of A {\displaystyle \mathbf {A} } multiplied by its magnitude. Specifically, for the outer product of two vectors, ∇ ⋅ ( A B T ) = B ( ∇ ⋅ A ) + ( A ⋅ ∇ ) B . {\displaystyle \nabla \cdot \left(\mathbf {A} \mathbf {B} ^{\textsf {T}}\right)=\mathbf {B} (\nabla \cdot \mathbf {A} )+(\mathbf {A} \cdot \nabla )\mathbf {B} .} For a tensor field T {\displaystyle \mathbf {T} } of order k > 1, the tensor field ∇ ⋅ T {\displaystyle \nabla \cdot \mathbf {T} } of order k − 1 is defined by the recursive relation ( ∇ ⋅ T ) ⋅ C = ∇ ⋅ ( T ⋅ C ) {\displaystyle (\nabla \cdot \mathbf {T} )\cdot \mathbf {C} =\nabla \cdot (\mathbf {T} \cdot \mathbf {C} )} where C {\displaystyle \mathbf {C} } is an arbitrary constant vector. 
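The coordinate definitions of the gradient and divergence above can be spot-checked numerically. Below is a minimal sketch in plain Python using central differences; the helper names (`grad`, `div`) and the chosen test fields are illustrative, not from any library, and the step size trades truncation against rounding error.

```python
# Central-difference checks of the Cartesian gradient and divergence
# against hand-computed partial derivatives (pure Python).
import math

h = 1e-6  # finite-difference step

def grad(f, p):
    # central-difference gradient of the scalar field f at point p
    g = []
    for i in range(3):
        q1 = list(p); q1[i] += h
        q0 = list(p); q0[i] -= h
        g.append((f(q1) - f(q0)) / (2*h))
    return g

def div(F, p):
    # central-difference divergence of the vector field F at point p
    s = 0.0
    for i in range(3):
        q1 = list(p); q1[i] += h
        q0 = list(p); q0[i] -= h
        s += (F(q1)[i] - F(q0)[i]) / (2*h)
    return s

f = lambda p: p[0]**2 * p[1] + math.sin(p[2])  # f(x,y,z) = x^2 y + sin z
F = lambda p: [p[0]**2, p[0]*p[1], p[1]*p[2]]  # F = (x^2, xy, yz)

p = [0.5, -0.3, 0.2]
gx, gy, gz = grad(f, p)
assert abs(gx - 2*p[0]*p[1]) < 1e-6        # ∂f/∂x = 2xy
assert abs(gy - p[0]**2) < 1e-6            # ∂f/∂y = x^2
assert abs(gz - math.cos(p[2])) < 1e-6     # ∂f/∂z = cos z
assert abs(div(F, p) - (3*p[0] + p[1])) < 1e-6  # ∇·F = 2x + x + y
```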
=== Curl === In Cartesian coordinates, for F = F x i + F y j + F z k {\displaystyle \mathbf {F} =F_{x}\mathbf {i} +F_{y}\mathbf {j} +F_{z}\mathbf {k} } the curl is the vector field: curl ⁡ F = ∇ × F = ( ∂ ∂ x , ∂ ∂ y , ∂ ∂ z ) × ( F x , F y , F z ) = | i j k ∂ ∂ x ∂ ∂ y ∂ ∂ z F x F y F z | = ( ∂ F z ∂ y − ∂ F y ∂ z ) i + ( ∂ F x ∂ z − ∂ F z ∂ x ) j + ( ∂ F y ∂ x − ∂ F x ∂ y ) k {\displaystyle {\begin{aligned}\operatorname {curl} \mathbf {F} &=\nabla \times \mathbf {F} ={\begin{pmatrix}\displaystyle {\frac {\partial }{\partial x}},\ {\frac {\partial }{\partial y}},\ {\frac {\partial }{\partial z}}\end{pmatrix}}\times {\begin{pmatrix}F_{x},\ F_{y},\ F_{z}\end{pmatrix}}={\begin{vmatrix}\mathbf {i} &\mathbf {j} &\mathbf {k} \\{\frac {\partial }{\partial x}}&{\frac {\partial }{\partial y}}&{\frac {\partial }{\partial z}}\\F_{x}&F_{y}&F_{z}\end{vmatrix}}\\[1em]&=\left({\frac {\partial F_{z}}{\partial y}}-{\frac {\partial F_{y}}{\partial z}}\right)\mathbf {i} +\left({\frac {\partial F_{x}}{\partial z}}-{\frac {\partial F_{z}}{\partial x}}\right)\mathbf {j} +\left({\frac {\partial F_{y}}{\partial x}}-{\frac {\partial F_{x}}{\partial y}}\right)\mathbf {k} \end{aligned}}} where i, j, and k are the unit vectors for the x-, y-, and z-axes, respectively. As the name implies, the curl is a measure of how much nearby vectors tend in a circular direction. In Einstein notation, the vector field F = ( F 1 , F 2 , F 3 ) {\displaystyle \mathbf {F} ={\begin{pmatrix}F_{1},\ F_{2},\ F_{3}\end{pmatrix}}} has curl given by: ∇ × F = ε i j k e i ∂ F k ∂ x j {\displaystyle \nabla \times \mathbf {F} =\varepsilon ^{ijk}\mathbf {e} _{i}{\frac {\partial F_{k}}{\partial x_{j}}}} where ε {\displaystyle \varepsilon } = ±1 or 0 is the Levi-Civita parity symbol. 
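The Einstein-notation formula above, (∇×F)ᵢ = εⁱʲᵏ ∂Fₖ/∂xⱼ, can be implemented literally, with central differences standing in for the partial derivatives. This is an illustrative sketch (the helper names are not from any library); the closed-form expression for the Levi-Civita symbol on indices 0–2 is a standard convenience.

```python
# Curl via the Levi-Civita symbol: (curl F)_i = sum_jk eps(i,j,k) dF_k/dx_j.
def levi_civita(i, j, k):
    # +1 for even permutations of (0,1,2), -1 for odd, 0 if any repeat
    return (i - j) * (j - k) * (k - i) // 2

h = 1e-6  # finite-difference step

def partial(F, k, j, p):
    # dF_k/dx_j at p by central difference
    q1 = list(p); q1[j] += h
    q0 = list(p); q0[j] -= h
    return (F(q1)[k] - F(q0)[k]) / (2*h)

def curl(F, p):
    return [sum(levi_civita(i, j, k) * partial(F, k, j, p)
                for j in range(3) for k in range(3))
            for i in range(3)]

# For F = (y, z, x), the curl is (-1, -1, -1) everywhere.
F = lambda p: [p[1], p[2], p[0]]
c = curl(F, [0.3, 0.7, -0.2])
assert all(abs(ci + 1.0) < 1e-6 for ci in c)
```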
For a tensor field T {\displaystyle \mathbf {T} } of order k > 1, the tensor field ∇ × T {\displaystyle \nabla \times \mathbf {T} } of order k is defined by the recursive relation ( ∇ × T ) ⋅ C = ∇ × ( T ⋅ C ) {\displaystyle (\nabla \times \mathbf {T} )\cdot \mathbf {C} =\nabla \times (\mathbf {T} \cdot \mathbf {C} )} where C {\displaystyle \mathbf {C} } is an arbitrary constant vector. A tensor field of order greater than one may be decomposed into a sum of outer products, and then the following identity may be used: ∇ × ( A ⊗ T ) = ( ∇ × A ) ⊗ T − A × ( ∇ T ) . {\displaystyle \nabla \times \left(\mathbf {A} \otimes \mathbf {T} \right)=(\nabla \times \mathbf {A} )\otimes \mathbf {T} -\mathbf {A} \times (\nabla \mathbf {T} ).} Specifically, for the outer product of two vectors, ∇ × ( A B T ) = ( ∇ × A ) B T − A × ( ∇ B ) . {\displaystyle \nabla \times \left(\mathbf {A} \mathbf {B} ^{\textsf {T}}\right)=(\nabla \times \mathbf {A} )\mathbf {B} ^{\textsf {T}}-\mathbf {A} \times (\nabla \mathbf {B} ).} === Laplacian === In Cartesian coordinates, the Laplacian of a function f ( x , y , z ) {\displaystyle f(x,y,z)} is Δ f = ∇ 2 f = ( ∇ ⋅ ∇ ) f = ∂ 2 f ∂ x 2 + ∂ 2 f ∂ y 2 + ∂ 2 f ∂ z 2 . {\displaystyle \Delta f=\nabla ^{2}\!f=(\nabla \cdot \nabla )f={\frac {\partial ^{2}\!f}{\partial x^{2}}}+{\frac {\partial ^{2}\!f}{\partial y^{2}}}+{\frac {\partial ^{2}\!f}{\partial z^{2}}}.} The Laplacian is a measure of how much a function is changing over a small sphere centered at the point. When the Laplacian is equal to 0, the function is called a harmonic function. That is, Δ f = 0. {\displaystyle \Delta f=0.} For a tensor field, T {\displaystyle \mathbf {T} } , the Laplacian is generally written as: Δ T = ∇ 2 T = ( ∇ ⋅ ∇ ) T {\displaystyle \Delta \mathbf {T} =\nabla ^{2}\mathbf {T} =(\nabla \cdot \nabla )\mathbf {T} } and is a tensor field of the same order. 
For a tensor field T {\displaystyle \mathbf {T} } of order k > 0, the tensor field ∇ 2 T {\displaystyle \nabla ^{2}\mathbf {T} } of order k is defined by the recursive relation ( ∇ 2 T ) ⋅ C = ∇ 2 ( T ⋅ C ) {\displaystyle \left(\nabla ^{2}\mathbf {T} \right)\cdot \mathbf {C} =\nabla ^{2}(\mathbf {T} \cdot \mathbf {C} )} where C {\displaystyle \mathbf {C} } is an arbitrary constant vector. === Special notations === In Feynman subscript notation, ∇ B ( A ⋅ B ) = A × ( ∇ × B ) + ( A ⋅ ∇ ) B {\displaystyle \nabla _{\mathbf {B} }\!\left(\mathbf {A{\cdot }B} \right)=\mathbf {A} {\times }\!\left(\nabla {\times }\mathbf {B} \right)+\left(\mathbf {A} {\cdot }\nabla \right)\mathbf {B} } where the notation ∇B means the subscripted gradient operates on only the factor B. More general but similar is the Hestenes overdot notation in geometric algebra. The above identity is then expressed as: ∇ ˙ ( A ⋅ B ˙ ) = A × ( ∇ × B ) + ( A ⋅ ∇ ) B {\displaystyle {\dot {\nabla }}\left(\mathbf {A} {\cdot }{\dot {\mathbf {B} }}\right)=\mathbf {A} {\times }\!\left(\nabla {\times }\mathbf {B} \right)+\left(\mathbf {A} {\cdot }\nabla \right)\mathbf {B} } where overdots define the scope of the vector derivative. The dotted vector, in this case B, is differentiated, while the (undotted) A is held constant. 
The utility of the Feynman subscript notation lies in its use in the derivation of vector and tensor derivative identities, as in the following example which uses the algebraic identity C⋅(A×B) = (C×A)⋅B: ∇ ⋅ ( A × B ) = ∇ A ⋅ ( A × B ) + ∇ B ⋅ ( A × B ) = ( ∇ A × A ) ⋅ B + ( ∇ B × A ) ⋅ B = ( ∇ A × A ) ⋅ B − ( A × ∇ B ) ⋅ B = ( ∇ A × A ) ⋅ B − A ⋅ ( ∇ B × B ) = ( ∇ × A ) ⋅ B − A ⋅ ( ∇ × B ) {\displaystyle {\begin{aligned}\nabla \cdot (\mathbf {A} \times \mathbf {B} )&=\nabla _{\mathbf {A} }\cdot (\mathbf {A} \times \mathbf {B} )+\nabla _{\mathbf {B} }\cdot (\mathbf {A} \times \mathbf {B} )\\[2pt]&=(\nabla _{\mathbf {A} }\times \mathbf {A} )\cdot \mathbf {B} +(\nabla _{\mathbf {B} }\times \mathbf {A} )\cdot \mathbf {B} \\[2pt]&=(\nabla _{\mathbf {A} }\times \mathbf {A} )\cdot \mathbf {B} -(\mathbf {A} \times \nabla _{\mathbf {B} })\cdot \mathbf {B} \\[2pt]&=(\nabla _{\mathbf {A} }\times \mathbf {A} )\cdot \mathbf {B} -\mathbf {A} \cdot (\nabla _{\mathbf {B} }\times \mathbf {B} )\\[2pt]&=(\nabla \times \mathbf {A} )\cdot \mathbf {B} -\mathbf {A} \cdot (\nabla \times \mathbf {B} )\end{aligned}}} An alternative method is to use the Cartesian components of the del operator as follows (with implicit summation over the index i): ∇ ⋅ ( A × B ) = e i ∂ i ⋅ ( A × B ) = e i ⋅ ∂ i ( A × B ) = e i ⋅ ( ∂ i A × B + A × ∂ i B ) = e i ⋅ ( ∂ i A × B ) + e i ⋅ ( A × ∂ i B ) = ( e i × ∂ i A ) ⋅ B + ( e i × A ) ⋅ ∂ i B = ( e i × ∂ i A ) ⋅ B − ( A × e i ) ⋅ ∂ i B = ( e i × ∂ i A ) ⋅ B − A ⋅ ( e i × ∂ i B ) = ( e i ∂ i × A ) ⋅ B − A ⋅ ( e i ∂ i × B ) = ( ∇ × A ) ⋅ B − A ⋅ ( ∇ × B ) {\displaystyle {\begin{aligned}\nabla \cdot (\mathbf {A} \times \mathbf {B} )&=\mathbf {e} _{i}\partial _{i}\cdot (\mathbf {A} \times \mathbf {B} )\\[2pt]&=\mathbf {e} _{i}\cdot \partial _{i}(\mathbf {A} \times \mathbf {B} )\\[2pt]&=\mathbf {e} _{i}\cdot (\partial _{i}\mathbf {A} \times \mathbf {B} +\mathbf {A} \times \partial _{i}\mathbf {B} )\\[2pt]&=\mathbf {e} _{i}\cdot (\partial _{i}\mathbf {A} \times 
\mathbf {B} )+\mathbf {e} _{i}\cdot (\mathbf {A} \times \partial _{i}\mathbf {B} )\\[2pt]&=(\mathbf {e} _{i}\times \partial _{i}\mathbf {A} )\cdot \mathbf {B} +(\mathbf {e} _{i}\times \mathbf {A} )\cdot \partial _{i}\mathbf {B} \\[2pt]&=(\mathbf {e} _{i}\times \partial _{i}\mathbf {A} )\cdot \mathbf {B} -(\mathbf {A} \times \mathbf {e} _{i})\cdot \partial _{i}\mathbf {B} \\[2pt]&=(\mathbf {e} _{i}\times \partial _{i}\mathbf {A} )\cdot \mathbf {B} -\mathbf {A} \cdot (\mathbf {e} _{i}\times \partial _{i}\mathbf {B} )\\[2pt]&=(\mathbf {e} _{i}\partial _{i}\times \mathbf {A} )\cdot \mathbf {B} -\mathbf {A} \cdot (\mathbf {e} _{i}\partial _{i}\times \mathbf {B} )\\[2pt]&=(\nabla \times \mathbf {A} )\cdot \mathbf {B} -\mathbf {A} \cdot (\nabla \times \mathbf {B} )\end{aligned}}} Another method of deriving vector and tensor derivative identities is to replace all occurrences of a vector in an algebraic identity by the del operator, provided that no variable occurs both inside and outside the scope of an operator or both inside the scope of one operator in a term and outside the scope of another operator in the same term (i.e., the operators must be nested). The validity of this rule follows from the validity of the Feynman method, for one may always substitute a subscripted del and then immediately drop the subscript under the condition of the rule. For example, from the identity A⋅(B×C) = (A×B)⋅C we may derive A⋅(∇×C) = (A×∇)⋅C but not ∇⋅(B×C) = (∇×B)⋅C, nor from A⋅(B×A) = 0 may we derive A⋅(∇×A) = 0. On the other hand, a subscripted del operates on all occurrences of the subscript in the term, so that A⋅(∇A×A) = ∇A⋅(A×A) = ∇⋅(A×A) = 0. Also, from A×(A×C) = A(A⋅C) − (A⋅A)C we may derive ∇×(∇×C) = ∇(∇⋅C) − ∇2C, but from (Aψ)⋅(Aφ) = (A⋅A)(ψφ) we may not derive (∇ψ)⋅(∇φ) = ∇2(ψφ). A subscript c on a quantity indicates that it is temporarily considered to be a constant. 
Since a constant is not a variable, when the substitution rule (see the preceding paragraph) is used it, unlike a variable, may be moved into or out of the scope of a del operator, as in the following example: ∇ ⋅ ( A × B ) = ∇ ⋅ ( A × B c ) + ∇ ⋅ ( A c × B ) = ∇ ⋅ ( A × B c ) − ∇ ⋅ ( B × A c ) = ( ∇ × A ) ⋅ B c − ( ∇ × B ) ⋅ A c = ( ∇ × A ) ⋅ B − ( ∇ × B ) ⋅ A {\displaystyle {\begin{aligned}\nabla \cdot (\mathbf {A} \times \mathbf {B} )&=\nabla \cdot (\mathbf {A} \times \mathbf {B} _{\mathrm {c} })+\nabla \cdot (\mathbf {A} _{\mathrm {c} }\times \mathbf {B} )\\[2pt]&=\nabla \cdot (\mathbf {A} \times \mathbf {B} _{\mathrm {c} })-\nabla \cdot (\mathbf {B} \times \mathbf {A} _{\mathrm {c} })\\[2pt]&=(\nabla \times \mathbf {A} )\cdot \mathbf {B} _{\mathrm {c} }-(\nabla \times \mathbf {B} )\cdot \mathbf {A} _{\mathrm {c} }\\[2pt]&=(\nabla \times \mathbf {A} )\cdot \mathbf {B} -(\nabla \times \mathbf {B} )\cdot \mathbf {A} \end{aligned}}} Another way to indicate that a quantity is a constant is to affix it as a subscript to the scope of a del operator, as follows: ∇ ( A ⋅ B ) A = A × ( ∇ × B ) + ( A ⋅ ∇ ) B {\displaystyle \nabla \left(\mathbf {A{\cdot }B} \right)_{\mathbf {A} }=\mathbf {A} {\times }\!\left(\nabla {\times }\mathbf {B} \right)+\left(\mathbf {A} {\cdot }\nabla \right)\mathbf {B} } For the remainder of this article, Feynman subscript notation will be used where appropriate. == First derivative identities == For scalar fields ψ {\displaystyle \psi } , ϕ {\displaystyle \phi } and vector fields A {\displaystyle \mathbf {A} } , B {\displaystyle \mathbf {B} } , we have the following derivative identities. 
=== Distributive properties === ∇ ( ψ + ϕ ) = ∇ ψ + ∇ ϕ ∇ ( A + B ) = ∇ A + ∇ B ∇ ⋅ ( A + B ) = ∇ ⋅ A + ∇ ⋅ B ∇ × ( A + B ) = ∇ × A + ∇ × B {\displaystyle {\begin{aligned}\nabla (\psi +\phi )&=\nabla \psi +\nabla \phi \\\nabla (\mathbf {A} +\mathbf {B} )&=\nabla \mathbf {A} +\nabla \mathbf {B} \\\nabla \cdot (\mathbf {A} +\mathbf {B} )&=\nabla \cdot \mathbf {A} +\nabla \cdot \mathbf {B} \\\nabla \times (\mathbf {A} +\mathbf {B} )&=\nabla \times \mathbf {A} +\nabla \times \mathbf {B} \end{aligned}}} === First derivative associative properties === ( A ⋅ ∇ ) ψ = A ⋅ ( ∇ ψ ) ( A ⋅ ∇ ) B = A ⋅ ( ∇ B ) ( A × ∇ ) ψ = A × ( ∇ ψ ) ( A × ∇ ) B = A × ( ∇ B ) {\displaystyle {\begin{aligned}(\mathbf {A} \cdot \nabla )\psi &=\mathbf {A} \cdot (\nabla \psi )\\(\mathbf {A} \cdot \nabla )\mathbf {B} &=\mathbf {A} \cdot (\nabla \mathbf {B} )\\(\mathbf {A} \times \nabla )\psi &=\mathbf {A} \times (\nabla \psi )\\(\mathbf {A} \times \nabla )\mathbf {B} &=\mathbf {A} \times (\nabla \mathbf {B} )\end{aligned}}} === Product rule for multiplication by a scalar === We have the following generalizations of the product rule in single-variable calculus. 
∇ ( ψ ϕ ) = ϕ ∇ ψ + ψ ∇ ϕ ∇ ( ψ A ) = ( ∇ ψ ) A T + ψ ∇ A = ∇ ψ ⊗ A + ψ ∇ A ∇ ⋅ ( ψ A ) = ψ ∇ ⋅ A + ( ∇ ψ ) ⋅ A ∇ × ( ψ A ) = ψ ∇ × A + ( ∇ ψ ) × A ∇ 2 ( ψ ϕ ) = ψ ∇ 2 ϕ + 2 ∇ ψ ⋅ ∇ ϕ + ϕ ∇ 2 ψ {\displaystyle {\begin{aligned}\nabla (\psi \phi )&=\phi \,\nabla \psi +\psi \,\nabla \phi \\\nabla (\psi \mathbf {A} )&=(\nabla \psi )\mathbf {A} ^{\textsf {T}}+\psi \nabla \mathbf {A} \ =\ \nabla \psi \otimes \mathbf {A} +\psi \,\nabla \mathbf {A} \\\nabla \cdot (\psi \mathbf {A} )&=\psi \,\nabla {\cdot }\mathbf {A} +(\nabla \psi )\,{\cdot }\mathbf {A} \\\nabla {\times }(\psi \mathbf {A} )&=\psi \,\nabla {\times }\mathbf {A} +(\nabla \psi ){\times }\mathbf {A} \\\nabla ^{2}(\psi \phi )&=\psi \,\nabla ^{2\!}\phi +2\,\nabla \!\psi \cdot \!\nabla \phi +\phi \,\nabla ^{2\!}\psi \end{aligned}}} === Quotient rule for division by a scalar === ∇ ( ψ ϕ ) = ϕ ∇ ψ − ψ ∇ ϕ ϕ 2 ∇ ( A ϕ ) = ϕ ∇ A − ∇ ϕ ⊗ A ϕ 2 ∇ ⋅ ( A ϕ ) = ϕ ∇ ⋅ A − ∇ ϕ ⋅ A ϕ 2 ∇ × ( A ϕ ) = ϕ ∇ × A − ∇ ϕ × A ϕ 2 ∇ 2 ( ψ ϕ ) = ϕ ∇ 2 ψ − 2 ϕ ∇ ( ψ ϕ ) ⋅ ∇ ϕ − ψ ∇ 2 ϕ ϕ 2 {\displaystyle {\begin{aligned}\nabla \left({\frac {\psi }{\phi }}\right)&={\frac {\phi \,\nabla \psi -\psi \,\nabla \phi }{\phi ^{2}}}\\[1em]\nabla \left({\frac {\mathbf {A} }{\phi }}\right)&={\frac {\phi \,\nabla \mathbf {A} -\nabla \phi \otimes \mathbf {A} }{\phi ^{2}}}\\[1em]\nabla \cdot \left({\frac {\mathbf {A} }{\phi }}\right)&={\frac {\phi \,\nabla {\cdot }\mathbf {A} -\nabla \!\phi \cdot \mathbf {A} }{\phi ^{2}}}\\[1em]\nabla \times \left({\frac {\mathbf {A} }{\phi }}\right)&={\frac {\phi \,\nabla {\times }\mathbf {A} -\nabla \!\phi \,{\times }\,\mathbf {A} }{\phi ^{2}}}\\[1em]\nabla ^{2}\left({\frac {\psi }{\phi }}\right)&={\frac {\phi \,\nabla ^{2\!}\psi -2\,\phi \,\nabla \!\left({\frac {\psi }{\phi }}\right)\cdot \!\nabla \phi -\psi \,\nabla ^{2\!}\phi }{\phi ^{2}}}\end{aligned}}} === Chain rule === Let f ( x ) {\displaystyle f(x)} be a one-variable function from scalars to scalars, r ( t ) = ( x 1 ( t ) , … , x n ( t ) ) {\displaystyle 
\mathbf {r} (t)=(x_{1}(t),\ldots ,x_{n}(t))} a parametrized curve, ϕ : R n → R {\displaystyle \phi \!:\mathbb {R} ^{n}\to \mathbb {R} } a function from vectors to scalars, and A : R n → R n {\displaystyle \mathbf {A} \!:\mathbb {R} ^{n}\to \mathbb {R} ^{n}} a vector field. We have the following special cases of the multi-variable chain rule. ∇ ( f ∘ ϕ ) = ( f ′ ∘ ϕ ) ∇ ϕ ( r ∘ f ) ′ = ( r ′ ∘ f ) f ′ ( ϕ ∘ r ) ′ = ( ∇ ϕ ∘ r ) ⋅ r ′ ( A ∘ r ) ′ = r ′ ⋅ ( ∇ A ∘ r ) ∇ ( ϕ ∘ A ) = ( ∇ A ) ⋅ ( ∇ ϕ ∘ A ) ∇ ⋅ ( r ∘ ϕ ) = ∇ ϕ ⋅ ( r ′ ∘ ϕ ) ∇ × ( r ∘ ϕ ) = ∇ ϕ × ( r ′ ∘ ϕ ) {\displaystyle {\begin{aligned}\nabla (f\circ \phi )&=\left(f'\circ \phi \right)\nabla \phi \\(\mathbf {r} \circ f)'&=(\mathbf {r} '\circ f)f'\\(\phi \circ \mathbf {r} )'&=(\nabla \phi \circ \mathbf {r} )\cdot \mathbf {r} '\\(\mathbf {A} \circ \mathbf {r} )'&=\mathbf {r} '\cdot (\nabla \mathbf {A} \circ \mathbf {r} )\\\nabla (\phi \circ \mathbf {A} )&=(\nabla \mathbf {A} )\cdot (\nabla \phi \circ \mathbf {A} )\\\nabla \cdot (\mathbf {r} \circ \phi )&=\nabla \phi \cdot (\mathbf {r} '\circ \phi )\\\nabla \times (\mathbf {r} \circ \phi )&=\nabla \phi \times (\mathbf {r} '\circ \phi )\end{aligned}}} For a vector transformation x : R n → R n {\displaystyle \mathbf {x} \!:\mathbb {R} ^{n}\to \mathbb {R} ^{n}} we have: ∇ ⋅ ( A ∘ x ) = t r ( ( ∇ x ) ⋅ ( ∇ A ∘ x ) ) {\displaystyle \nabla \cdot (\mathbf {A} \circ \mathbf {x} )=\mathrm {tr} \left((\nabla \mathbf {x} )\cdot (\nabla \mathbf {A} \circ \mathbf {x} )\right)} Here we take the trace of the dot product of two second-order tensors, which corresponds to the product of their matrices. 
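The first of the chain-rule identities above, ∇(f∘φ) = (f′∘φ)∇φ, is easy to verify numerically. A minimal sketch in plain Python with central differences follows; the function choices (φ = x²+y²+z², f = sin) and helper names are illustrative assumptions.

```python
# Numerical check of the chain rule for the gradient:
# grad(f∘φ) = (f'∘φ) grad(φ), tested at one point.
import math

h = 1e-6  # finite-difference step

def grad(g, p):
    out = []
    for i in range(3):
        q1 = list(p); q1[i] += h
        q0 = list(p); q0[i] -= h
        out.append((g(q1) - g(q0)) / (2*h))
    return out

phi = lambda p: p[0]**2 + p[1]**2 + p[2]**2  # φ = x^2 + y^2 + z^2
f = math.sin                                 # f(t) = sin t, f'(t) = cos t

p = [0.4, -0.1, 0.3]
lhs = grad(lambda q: f(phi(q)), p)           # ∇(f∘φ)
fp = math.cos(phi(p))                        # (f'∘φ)(p)
rhs = [fp * gi for gi in grad(phi, p)]       # (f'∘φ) ∇φ
assert all(abs(a - b) < 1e-6 for a, b in zip(lhs, rhs))
```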
=== Dot product rule === ∇ ( A ⋅ B ) = ( A ⋅ ∇ ) B + ( B ⋅ ∇ ) A + A × ( ∇ × B ) + B × ( ∇ × A ) = A ⋅ J B + B ⋅ J A = ( ∇ B ) ⋅ A + ( ∇ A ) ⋅ B {\displaystyle {\begin{aligned}\nabla (\mathbf {A} \cdot \mathbf {B} )&\ =\ (\mathbf {A} \cdot \nabla )\mathbf {B} \,+\,(\mathbf {B} \cdot \nabla )\mathbf {A} \,+\,\mathbf {A} {\times }(\nabla {\times }\mathbf {B} )\,+\,\mathbf {B} {\times }(\nabla {\times }\mathbf {A} )\\&\ =\ \mathbf {A} \cdot \mathbf {J} _{\mathbf {B} }+\mathbf {B} \cdot \mathbf {J} _{\mathbf {A} }\ =\ (\nabla \mathbf {B} )\cdot \mathbf {A} \,+\,(\nabla \mathbf {A} )\cdot \mathbf {B} \end{aligned}}} where J A = ( ∇ A ) T = ( ∂ A i / ∂ x j ) i j {\displaystyle \mathbf {J} _{\mathbf {A} }=(\nabla \!\mathbf {A} )^{\textsf {T}}=(\partial A_{i}/\partial x_{j})_{ij}} denotes the Jacobian matrix of the vector field A = ( A 1 , … , A n ) {\displaystyle \mathbf {A} =(A_{1},\ldots ,A_{n})} . Alternatively, using Feynman subscript notation, ∇ ( A ⋅ B ) = ∇ A ( A ⋅ B ) + ∇ B ( A ⋅ B ) . {\displaystyle \nabla (\mathbf {A} \cdot \mathbf {B} )=\nabla _{\mathbf {A} }(\mathbf {A} \cdot \mathbf {B} )+\nabla _{\mathbf {B} }(\mathbf {A} \cdot \mathbf {B} )\ .} As a special case, when A = B, 1 2 ∇ ( A ⋅ A ) = A ⋅ J A = ( ∇ A ) ⋅ A = ( A ⋅ ∇ ) A + A × ( ∇ × A ) = A ∇ A . {\displaystyle {\tfrac {1}{2}}\nabla \left(\mathbf {A} \cdot \mathbf {A} \right)\ =\ \mathbf {A} \cdot \mathbf {J} _{\mathbf {A} }\ =\ (\nabla \mathbf {A} )\cdot \mathbf {A} \ =\ (\mathbf {A} {\cdot }\nabla )\mathbf {A} \,+\,\mathbf {A} {\times }(\nabla {\times }\mathbf {A} )\ =\ A\nabla A.} Here the scalar A denotes the magnitude ‖A‖. The generalization of the dot product formula to Riemannian manifolds is a defining property of a Riemannian connection, which differentiates a vector field to give a vector-valued 1-form. 
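The four-term dot product rule above can be checked numerically term by term. The sketch below evaluates both sides with central differences at one point; the test fields A, B and all helper names are illustrative assumptions, and the directional derivative (A·∇)B is computed by stepping along the frozen vector A(p).

```python
# Numerical check of grad(A·B) = (A·∇)B + (B·∇)A + A×(∇×B) + B×(∇×A).
h = 1e-6  # finite-difference step

def grad(g, p):
    # central-difference gradient of a scalar field g
    return [(g([p[j] + h*(j == i) for j in range(3)])
           - g([p[j] - h*(j == i) for j in range(3)])) / (2*h)
            for i in range(3)]

def curl(F, p):
    d = lambda k, j: (F([p[m] + h*(m == j) for m in range(3)])[k]
                    - F([p[m] - h*(m == j) for m in range(3)])[k]) / (2*h)
    return [d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1)]

def dirderiv(a, F, p):
    # (a·∇)F at p: derivative of F along the constant vector a
    return [(F([p[m] + h*a[m] for m in range(3)])[k]
           - F([p[m] - h*a[m] for m in range(3)])[k]) / (2*h)
            for k in range(3)]

cross = lambda u, v: [u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0]]
dot = lambda u, v: sum(x*y for x, y in zip(u, v))

A = lambda p: [p[1]*p[2], p[0]**2, p[2]]
B = lambda p: [p[0], p[0]*p[1], p[1]*p[2]]
p = [0.3, 0.5, -0.2]

lhs = grad(lambda q: dot(A(q), B(q)), p)
a, b = A(p), B(p)
rhs = [w + x + y + z for w, x, y, z in zip(
    dirderiv(a, B, p), dirderiv(b, A, p),
    cross(a, curl(B, p)), cross(b, curl(A, p)))]
assert all(abs(x - y) < 1e-5 for x, y in zip(lhs, rhs))
```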
=== Cross product rule === ∇ ( A × B ) = ( ∇ A ) × B − ( ∇ B ) × A ∇ ⋅ ( A × B ) = ( ∇ × A ) ⋅ B − A ⋅ ( ∇ × B ) ∇ × ( A × B ) = A ( ∇ ⋅ B ) − B ( ∇ ⋅ A ) + ( B ⋅ ∇ ) A − ( A ⋅ ∇ ) B = A ( ∇ ⋅ B ) + ( B ⋅ ∇ ) A − ( B ( ∇ ⋅ A ) + ( A ⋅ ∇ ) B ) = ∇ ⋅ ( B A T ) − ∇ ⋅ ( A B T ) = ∇ ⋅ ( B A T − A B T ) A × ( ∇ × B ) = ∇ B ( A ⋅ B ) − ( A ⋅ ∇ ) B = A ⋅ J B − ( A ⋅ ∇ ) B = ( ∇ B ) ⋅ A − A ⋅ ( ∇ B ) = A ⋅ ( J B − J B T ) ( A × ∇ ) × B = ( ∇ B ) ⋅ A − A ( ∇ ⋅ B ) = A × ( ∇ × B ) + ( A ⋅ ∇ ) B − A ( ∇ ⋅ B ) ( A × ∇ ) ⋅ B = A ⋅ ( ∇ × B ) {\displaystyle {\begin{aligned}\nabla (\mathbf {A} \times \mathbf {B} )&\ =\ (\nabla \mathbf {A} )\times \mathbf {B} \,-\,(\nabla \mathbf {B} )\times \mathbf {A} \\[5pt]\nabla \cdot (\mathbf {A} \times \mathbf {B} )&\ =\ (\nabla {\times }\mathbf {A} )\cdot \mathbf {B} \,-\,\mathbf {A} \cdot (\nabla {\times }\mathbf {B} )\\[5pt]\nabla \times (\mathbf {A} \times \mathbf {B} )&\ =\ \mathbf {A} (\nabla {\cdot }\mathbf {B} )\,-\,\mathbf {B} (\nabla {\cdot }\mathbf {A} )\,+\,(\mathbf {B} {\cdot }\nabla )\mathbf {A} \,-\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} \\[2pt]&\ =\ \mathbf {A} (\nabla {\cdot }\mathbf {B} )\,+\,(\mathbf {B} {\cdot }\nabla )\mathbf {A} \,-\,(\mathbf {B} (\nabla {\cdot }\mathbf {A} )\,+\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} )\\[2pt]&\ =\ \nabla {\cdot }\left(\mathbf {B} \mathbf {A} ^{\textsf {T}}\right)\,-\,\nabla {\cdot }\left(\mathbf {A} \mathbf {B} ^{\textsf {T}}\right)\\[2pt]&\ =\ \nabla {\cdot }\left(\mathbf {B} \mathbf {A} ^{\textsf {T}}\,-\,\mathbf {A} \mathbf {B} ^{\textsf {T}}\right)\\[5pt]\mathbf {A} \times (\nabla \times \mathbf {B} )&\ =\ \nabla _{\mathbf {B} }(\mathbf {A} {\cdot }\mathbf {B} )\,-\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} \\[2pt]&\ =\ \mathbf {A} \cdot \mathbf {J} _{\mathbf {B} }\,-\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} \\[2pt]&\ =\ (\nabla \mathbf {B} )\cdot \mathbf {A} \,-\,\mathbf {A} \cdot (\nabla \mathbf {B} )\\[2pt]&\ =\ \mathbf {A} \cdot (\mathbf {J} _{\mathbf {B} }\,-\,\mathbf {J} 
_{\mathbf {B} }^{\textsf {T}})\\[5pt](\mathbf {A} \times \nabla )\times \mathbf {B} &\ =\ (\nabla \mathbf {B} )\cdot \mathbf {A} \,-\,\mathbf {A} (\nabla {\cdot }\mathbf {B} )\\[2pt]&\ =\ \mathbf {A} \times (\nabla \times \mathbf {B} )\,+\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} \,-\,\mathbf {A} (\nabla {\cdot }\mathbf {B} )\\[5pt](\mathbf {A} \times \nabla )\cdot \mathbf {B} &\ =\ \mathbf {A} \cdot (\nabla {\times }\mathbf {B} )\end{aligned}}} Note that the matrix J B − J B T {\displaystyle \mathbf {J} _{\mathbf {B} }\,-\,\mathbf {J} _{\mathbf {B} }^{\textsf {T}}} is antisymmetric. == Second derivative identities == === Divergence of curl is zero === The divergence of the curl of any continuously twice-differentiable vector field A is always zero: ∇ ⋅ ( ∇ × A ) = 0 {\displaystyle \nabla \cdot (\nabla \times \mathbf {A} )=0} This is a special case of the vanishing of the square of the exterior derivative in the De Rham chain complex. === Divergence of gradient is Laplacian === The Laplacian of a scalar field is the divergence of its gradient: Δ ψ = ∇ 2 ψ = ∇ ⋅ ( ∇ ψ ) {\displaystyle \Delta \psi =\nabla ^{2}\psi =\nabla \cdot (\nabla \psi )} The result is a scalar quantity. === Divergence of divergence is not defined === The divergence of a vector field A is a scalar, and the divergence of a scalar quantity is undefined. Therefore, ∇ ⋅ ( ∇ ⋅ A ) is undefined. {\displaystyle \nabla \cdot (\nabla \cdot \mathbf {A} ){\text{ is undefined.}}} === Curl of gradient is zero === The curl of the gradient of any continuously twice-differentiable scalar field φ {\displaystyle \varphi } (i.e., differentiability class C 2 {\displaystyle C^{2}} ) is always the zero vector: ∇ × ( ∇ φ ) = 0 . {\displaystyle \nabla \times (\nabla \varphi )=\mathbf {0} .} It can be easily proved by expressing ∇ × ( ∇ φ ) {\displaystyle \nabla \times (\nabla \varphi )} in a Cartesian coordinate system with Schwarz's theorem (also called Clairaut's theorem on equality of mixed partials). 
This result is a special case of the vanishing of the square of the exterior derivative in the De Rham chain complex. === Curl of curl === ∇ × ( ∇ × A ) = ∇ ( ∇ ⋅ A ) − ∇ 2 A {\displaystyle \nabla \times \left(\nabla \times \mathbf {A} \right)\ =\ \nabla (\nabla {\cdot }\mathbf {A} )\,-\,\nabla ^{2\!}\mathbf {A} } Here ∇2 is the vector Laplacian operating on the vector field A. === Curl of divergence is not defined === The divergence of a vector field A is a scalar, and the curl of a scalar quantity is undefined. Therefore, ∇ × ( ∇ ⋅ A ) is undefined. {\displaystyle \nabla \times (\nabla \cdot \mathbf {A} ){\text{ is undefined.}}} === Second derivative associative properties === ( ∇ ⋅ ∇ ) ψ = ∇ ⋅ ( ∇ ψ ) = ∇ 2 ψ ( ∇ ⋅ ∇ ) A = ∇ ⋅ ( ∇ A ) = ∇ 2 A ( ∇ × ∇ ) ψ = ∇ × ( ∇ ψ ) = 0 ( ∇ × ∇ ) A = ∇ × ( ∇ A ) = 0 {\displaystyle {\begin{aligned}(\nabla \cdot \nabla )\psi &=\nabla \cdot (\nabla \psi )=\nabla ^{2}\psi \\(\nabla \cdot \nabla )\mathbf {A} &=\nabla \cdot (\nabla \mathbf {A} )=\nabla ^{2}\mathbf {A} \\(\nabla \times \nabla )\psi &=\nabla \times (\nabla \psi )=\mathbf {0} \\(\nabla \times \nabla )\mathbf {A} &=\nabla \times (\nabla \mathbf {A} )=\mathbf {0} \end{aligned}}} === A mnemonic === A mnemonic diagram (not reproduced here) summarizes some of these identities. The abbreviations used are: D: divergence, C: curl, G: gradient, L: Laplacian, CC: curl of curl. Each arrow in the diagram is labeled with the result of an identity, specifically, the result of applying the operator at the arrow's tail to the operator at its head. The blue circle in the middle means curl of curl exists, whereas the other two red circles (dashed) mean that DD and GG do not exist. 
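The two vanishing second-derivative identities, ∇·(∇×A) = 0 and ∇×(∇ψ) = 0, can be spot-checked with nested central differences. The sketch below is illustrative (arbitrary smooth test fields, hand-rolled helpers); the step size is coarser than usual because nested differencing amplifies rounding error.

```python
# Numerical spot-check: divergence of curl and curl of gradient vanish.
import math

def grad(g, p, h):
    return [(g([p[j] + h*(j == i) for j in range(3)])
           - g([p[j] - h*(j == i) for j in range(3)])) / (2*h)
            for i in range(3)]

def curl(F, p, h):
    d = lambda k, j: (F([p[m] + h*(m == j) for m in range(3)])[k]
                    - F([p[m] - h*(m == j) for m in range(3)])[k]) / (2*h)
    return [d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1)]

def div(F, p, h):
    return sum((F([p[m] + h*(m == i) for m in range(3)])[i]
              - F([p[m] - h*(m == i) for m in range(3)])[i]) / (2*h)
               for i in range(3))

A = lambda p: [math.sin(p[1]*p[2]), p[0]**3, math.exp(p[0]*p[1])]  # smooth test field
psi = lambda p: p[0]*p[1]**2 + math.cos(p[2])                      # smooth scalar field
p = [0.4, 0.2, -0.5]
h = 1e-4  # coarse step: nested differences lose precision

assert abs(div(lambda q: curl(A, q, h), p, h)) < 1e-4              # ∇·(∇×A) = 0
assert all(abs(c) < 1e-4 for c in curl(lambda q: grad(psi, q, h), p, h))  # ∇×(∇ψ) = 0
```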
== Summary of important identities == === Differentiation === ==== Gradient ==== ∇ ( ψ + ϕ ) = ∇ ψ + ∇ ϕ {\displaystyle \nabla (\psi +\phi )=\nabla \psi +\nabla \phi } ∇ ( ψ ϕ ) = ϕ ∇ ψ + ψ ∇ ϕ {\displaystyle \nabla (\psi \phi )=\phi \nabla \psi +\psi \nabla \phi } ∇ ( ψ A ) = ∇ ψ ⊗ A + ψ ∇ A {\displaystyle \nabla (\psi \mathbf {A} )=\nabla \psi \otimes \mathbf {A} +\psi \nabla \mathbf {A} } ∇ ( A ⋅ B ) = ( A ⋅ ∇ ) B + ( B ⋅ ∇ ) A + A × ( ∇ × B ) + B × ( ∇ × A ) {\displaystyle \nabla (\mathbf {A} \cdot \mathbf {B} )=(\mathbf {A} \cdot \nabla )\mathbf {B} +(\mathbf {B} \cdot \nabla )\mathbf {A} +\mathbf {A} \times (\nabla \times \mathbf {B} )+\mathbf {B} \times (\nabla \times \mathbf {A} )} ==== Divergence ==== ∇ ⋅ ( A + B ) = ∇ ⋅ A + ∇ ⋅ B {\displaystyle \nabla \cdot (\mathbf {A} +\mathbf {B} )=\nabla \cdot \mathbf {A} +\nabla \cdot \mathbf {B} } ∇ ⋅ ( ψ A ) = ψ ∇ ⋅ A + A ⋅ ∇ ψ {\displaystyle \nabla \cdot \left(\psi \mathbf {A} \right)=\psi \nabla \cdot \mathbf {A} +\mathbf {A} \cdot \nabla \psi } ∇ ⋅ ( A × B ) = ( ∇ × A ) ⋅ B − ( ∇ × B ) ⋅ A {\displaystyle \nabla \cdot \left(\mathbf {A} \times \mathbf {B} \right)=(\nabla \times \mathbf {A} )\cdot \mathbf {B} -(\nabla \times \mathbf {B} )\cdot \mathbf {A} } ==== Curl ==== ∇ × ( A + B ) = ∇ × A + ∇ × B {\displaystyle \nabla \times (\mathbf {A} +\mathbf {B} )=\nabla \times \mathbf {A} +\nabla \times \mathbf {B} } ∇ × ( ψ A ) = ψ ( ∇ × A ) − ( A × ∇ ) ψ = ψ ( ∇ × A ) + ( ∇ ψ ) × A {\displaystyle \nabla \times \left(\psi \mathbf {A} \right)=\psi \,(\nabla \times \mathbf {A} )-(\mathbf {A} \times \nabla )\psi =\psi \,(\nabla \times \mathbf {A} )+(\nabla \psi )\times \mathbf {A} } ∇ × ( ψ ∇ ϕ ) = ∇ ψ × ∇ ϕ {\displaystyle \nabla \times \left(\psi \nabla \phi \right)=\nabla \psi \times \nabla \phi } ∇ × ( A × B ) = A ( ∇ ⋅ B ) − B ( ∇ ⋅ A ) + ( B ⋅ ∇ ) A − ( A ⋅ ∇ ) B {\displaystyle \nabla \times \left(\mathbf {A} \times \mathbf {B} \right)=\mathbf {A} \left(\nabla \cdot \mathbf {B} \right)-\mathbf {B} \left(\nabla \cdot 
\mathbf {A} \right)+\left(\mathbf {B} \cdot \nabla \right)\mathbf {A} -\left(\mathbf {A} \cdot \nabla \right)\mathbf {B} } ==== Vector-dot-Del Operator ==== ( A ⋅ ∇ ) B = 1 2 [ ∇ ( A ⋅ B ) − ∇ × ( A × B ) − B × ( ∇ × A ) − A × ( ∇ × B ) − B ( ∇ ⋅ A ) + A ( ∇ ⋅ B ) ] {\displaystyle (\mathbf {A} \cdot \nabla )\mathbf {B} ={\frac {1}{2}}{\bigg [}\nabla (\mathbf {A} \cdot \mathbf {B} )-\nabla \times (\mathbf {A} \times \mathbf {B} )-\mathbf {B} \times (\nabla \times \mathbf {A} )-\mathbf {A} \times (\nabla \times \mathbf {B} )-\mathbf {B} (\nabla \cdot \mathbf {A} )+\mathbf {A} (\nabla \cdot \mathbf {B} ){\bigg ]}} ( A ⋅ ∇ ) A = 1 2 ∇ | A | 2 − A × ( ∇ × A ) = 1 2 ∇ | A | 2 + ( ∇ × A ) × A {\displaystyle (\mathbf {A} \cdot \nabla )\mathbf {A} ={\frac {1}{2}}\nabla |\mathbf {A} |^{2}-\mathbf {A} \times (\nabla \times \mathbf {A} )={\frac {1}{2}}\nabla |\mathbf {A} |^{2}+(\nabla \times \mathbf {A} )\times \mathbf {A} } A ⋅ ∇ ( B ⋅ B ) = 2 B ⋅ ( A ⋅ ∇ ) B {\displaystyle \mathbf {A} \cdot \nabla (\mathbf {B} \cdot \mathbf {B} )=2\mathbf {B} \cdot (\mathbf {A} \cdot \nabla )\mathbf {B} } ==== Second derivatives ==== ∇ ⋅ ( ∇ × A ) = 0 {\displaystyle \nabla \cdot (\nabla \times \mathbf {A} )=0} ∇ × ( ∇ ψ ) = 0 {\displaystyle \nabla \times (\nabla \psi )=\mathbf {0} } ∇ ⋅ ( ∇ ψ ) = ∇ 2 ψ {\displaystyle \nabla \cdot (\nabla \psi )=\nabla ^{2}\psi } (scalar Laplacian) ∇ ( ∇ ⋅ A ) − ∇ × ( ∇ × A ) = ∇ 2 A {\displaystyle \nabla \left(\nabla \cdot \mathbf {A} \right)-\nabla \times \left(\nabla \times \mathbf {A} \right)=\nabla ^{2}\mathbf {A} } (vector Laplacian) ∇ ⋅ [ ∇ A + ( ∇ A ) T ] = ∇ 2 A + ∇ ( ∇ ⋅ A ) {\displaystyle \nabla \cdot {\big [}\nabla \mathbf {A} +(\nabla \mathbf {A} )^{\textsf {T}}{\big ]}=\nabla ^{2}\mathbf {A} +\nabla (\nabla \cdot \mathbf {A} )} ∇ ⋅ ( ϕ ∇ ψ ) = ϕ ∇ 2 ψ + ∇ ϕ ⋅ ∇ ψ {\displaystyle \nabla \cdot (\phi \nabla \psi )=\phi \nabla ^{2}\psi +\nabla \phi \cdot \nabla \psi } ψ ∇ 2 ϕ − ϕ ∇ 2 ψ = ∇ ⋅ ( ψ ∇ ϕ − ϕ ∇ ψ ) {\displaystyle \psi \nabla ^{2}\phi -\phi 
\nabla ^{2}\psi =\nabla \cdot \left(\psi \nabla \phi -\phi \nabla \psi \right)} ∇ 2 ( ϕ ψ ) = ϕ ∇ 2 ψ + 2 ( ∇ ϕ ) ⋅ ( ∇ ψ ) + ( ∇ 2 ϕ ) ψ {\displaystyle \nabla ^{2}(\phi \psi )=\phi \nabla ^{2}\psi +2(\nabla \phi )\cdot (\nabla \psi )+\left(\nabla ^{2}\phi \right)\psi } ∇ 2 ( ψ A ) = A ∇ 2 ψ + 2 ( ∇ ψ ⋅ ∇ ) A + ψ ∇ 2 A {\displaystyle \nabla ^{2}(\psi \mathbf {A} )=\mathbf {A} \nabla ^{2}\psi +2(\nabla \psi \cdot \nabla )\mathbf {A} +\psi \nabla ^{2}\mathbf {A} } ∇ 2 ( A ⋅ B ) = A ⋅ ∇ 2 B − B ⋅ ∇ 2 A + 2 ∇ ⋅ ( ( B ⋅ ∇ ) A + B × ( ∇ × A ) ) {\displaystyle \nabla ^{2}(\mathbf {A} \cdot \mathbf {B} )=\mathbf {A} \cdot \nabla ^{2}\mathbf {B} -\mathbf {B} \cdot \nabla ^{2}\!\mathbf {A} +2\nabla \cdot ((\mathbf {B} \cdot \nabla )\mathbf {A} +\mathbf {B} \times (\nabla \times \mathbf {A} ))} (Green's vector identity) ==== Third derivatives ==== ∇ 2 ( ∇ ψ ) = ∇ ( ∇ ⋅ ( ∇ ψ ) ) = ∇ ( ∇ 2 ψ ) {\displaystyle \nabla ^{2}(\nabla \psi )=\nabla (\nabla \cdot (\nabla \psi ))=\nabla \left(\nabla ^{2}\psi \right)} ∇ 2 ( ∇ ⋅ A ) = ∇ ⋅ ( ∇ ( ∇ ⋅ A ) ) = ∇ ⋅ ( ∇ 2 A ) {\displaystyle \nabla ^{2}(\nabla \cdot \mathbf {A} )=\nabla \cdot (\nabla (\nabla \cdot \mathbf {A} ))=\nabla \cdot \left(\nabla ^{2}\mathbf {A} \right)} ∇ 2 ( ∇ × A ) = − ∇ × ( ∇ × ( ∇ × A ) ) = ∇ × ( ∇ 2 A ) {\displaystyle \nabla ^{2}(\nabla \times \mathbf {A} )=-\nabla \times (\nabla \times (\nabla \times \mathbf {A} ))=\nabla \times \left(\nabla ^{2}\mathbf {A} \right)} === Integration === Below, the curly symbol ∂ means "boundary of" a surface or solid. 
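Several of these differential identities can be checked symbolically for concrete fields. The sketch below (assuming SymPy is available; the example fields ψ and A are arbitrary choices, not from the article) verifies ∇·(∇×A) = 0, ∇×(∇ψ) = 0, the vector-Laplacian identity ∇(∇·A) − ∇×(∇×A) = ∇²A, and the advection identity (A·∇)A = ½∇|A|² − A×(∇×A):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

def div(F):
    return sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

def laplacian(f):
    return sum(sp.diff(f, v, 2) for v in (x, y, z))

psi = x**2 * sp.sin(y) + z            # an arbitrary smooth scalar field
A = sp.Matrix([x*y, y*z, z*x])        # an arbitrary smooth vector field

# Second-derivative identities: div curl = 0 and curl grad = 0.
assert sp.simplify(div(curl(A))) == 0
assert sp.simplify(curl(grad(psi))) == sp.zeros(3, 1)

# Vector Laplacian: grad(div A) - curl(curl A) equals the componentwise Laplacian.
vec_lap = sp.Matrix([laplacian(A[i]) for i in range(3)])
assert sp.simplify(grad(div(A)) - curl(curl(A)) - vec_lap) == sp.zeros(3, 1)

# Advection identity: (A·∇)A = ½∇|A|² − A×(∇×A).
adv = sp.Matrix([A.dot(grad(A[i])) for i in range(3)])   # (A·∇)A, componentwise
assert sp.simplify(adv - grad(A.dot(A)) / 2 + A.cross(curl(A))) == sp.zeros(3, 1)
```

Because the checks are symbolic, they hold identically in the variables, not just at sample points.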
==== Surface–volume integrals ==== In the following surface–volume integral theorems, V denotes a three-dimensional volume with a corresponding two-dimensional boundary S = ∂V (a closed surface): ∂ V {\displaystyle \scriptstyle \partial V} ψ d S = ∭ V ∇ ψ d V {\displaystyle \psi \,d\mathbf {S} \ =\ \iiint _{V}\nabla \psi \,dV} ∂ V {\displaystyle \scriptstyle \partial V} A ⋅ d S = ∭ V ∇ ⋅ A d V {\displaystyle \mathbf {A} \cdot d\mathbf {S} \ =\ \iiint _{V}\nabla \cdot \mathbf {A} \,dV} (divergence theorem) ∂ V {\displaystyle \scriptstyle \partial V} A × d S = − ∭ V ∇ × A d V {\displaystyle \mathbf {A} \times d\mathbf {S} \ =\ -\iiint _{V}\nabla \times \mathbf {A} \,dV} ∂ V {\displaystyle \scriptstyle \partial V} ψ ∇ φ ⋅ d S = ∭ V ( ψ ∇ 2 φ + ∇ φ ⋅ ∇ ψ ) d V {\displaystyle \psi \nabla \!\varphi \cdot d\mathbf {S} \ =\ \iiint _{V}\left(\psi \nabla ^{2}\!\varphi +\nabla \!\varphi \cdot \nabla \!\psi \right)\,dV} (Green's first identity) ∂ V {\displaystyle \scriptstyle \partial V} ( ψ ∇ φ − φ ∇ ψ ) ⋅ d S = {\displaystyle \left(\psi \nabla \!\varphi -\varphi \nabla \!\psi \right)\cdot d\mathbf {S} \ =\ } ∂ V {\displaystyle \scriptstyle \partial V} ( ψ ∂ φ ∂ n − φ ∂ ψ ∂ n ) d S {\displaystyle \left(\psi {\frac {\partial \varphi }{\partial n}}-\varphi {\frac {\partial \psi }{\partial n}}\right)dS} = ∭ V ( ψ ∇ 2 φ − φ ∇ 2 ψ ) d V {\displaystyle \displaystyle \ =\ \iiint _{V}\left(\psi \nabla ^{2}\!\varphi -\varphi \nabla ^{2}\!\psi \right)\,dV} (Green's second identity) ∭ V A ⋅ ∇ ψ d V = {\displaystyle \iiint _{V}\mathbf {A} \cdot \nabla \psi \,dV\ =\ } ∂ V {\displaystyle \scriptstyle \partial V} ψ A ⋅ d S − ∭ V ψ ∇ ⋅ A d V {\displaystyle \psi \mathbf {A} \cdot d\mathbf {S} -\iiint _{V}\psi \nabla \cdot \mathbf {A} \,dV} (integration by parts) ∭ V ψ ∇ ⋅ A d V = {\displaystyle \iiint _{V}\psi \nabla \cdot \mathbf {A} \,dV\ =\ } ∂ V {\displaystyle \scriptstyle \partial V} ψ A ⋅ d S − ∭ V A ⋅ ∇ ψ d V {\displaystyle \psi \mathbf {A} \cdot d\mathbf {S} -\iiint _{V}\mathbf {A} 
\cdot \nabla \psi \,dV} (integration by parts) ∭ V A ⋅ ( ∇ × B ) d V = − {\displaystyle \iiint _{V}\mathbf {A} \cdot \left(\nabla \times \mathbf {B} \right)\,dV\ =\ -} ∂ V {\displaystyle \scriptstyle \partial V} ( A × B ) ⋅ d S + ∭ V ( ∇ × A ) ⋅ B d V {\displaystyle \left(\mathbf {A} \times \mathbf {B} \right)\cdot d\mathbf {S} +\iiint _{V}\left(\nabla \times \mathbf {A} \right)\cdot \mathbf {B} \,dV} (integration by parts) ∂ V {\displaystyle \scriptstyle \partial V} A × ( d S ⋅ ( B C T ) ) = ∭ V A × ( ∇ ⋅ ( B C T ) ) d V + ∭ V B ⋅ ( ∇ A ) × C d V {\displaystyle \mathbf {A} \times \left(d\mathbf {S} \cdot \left(\mathbf {B} \mathbf {C} ^{\textsf {T}}\right)\right)\ =\ \iiint _{V}\mathbf {A} \times \left(\nabla \cdot \left(\mathbf {B} \mathbf {C} ^{\textsf {T}}\right)\right)\,dV+\iiint _{V}\mathbf {B} \cdot (\nabla \mathbf {A} )\times \mathbf {C} \,dV} ∭ V ( ∇ ⋅ B + B ⋅ ∇ ) A d V = {\displaystyle \iiint _{V}\left(\nabla \cdot \mathbf {B} +\mathbf {B} \cdot \nabla \right)\mathbf {A} \,dV\ =\ } ∂ V {\displaystyle \scriptstyle \partial V} ( B ⋅ d S ) A {\displaystyle \left(\mathbf {B} \cdot d\mathbf {S} \right)\mathbf {A} } ==== Curve–surface integrals ==== In the following curve–surface integral theorems, S denotes a 2d open surface with a corresponding 1d boundary C = ∂S (a closed curve): ∮ ∂ S A ⋅ d ℓ = ∬ S ( ∇ × A ) ⋅ d S {\displaystyle \oint _{\partial S}\mathbf {A} \cdot d{\boldsymbol {\ell }}\ =\ \iint _{S}\left(\nabla \times \mathbf {A} \right)\cdot d\mathbf {S} } (Stokes' theorem) ∮ ∂ S ψ d ℓ = − ∬ S ∇ ψ × d S {\displaystyle \oint _{\partial S}\psi \,d{\boldsymbol {\ell }}\ =\ -\iint _{S}\nabla \psi \times d\mathbf {S} } ∮ ∂ S A × d ℓ = − ∬ S ( ∇ A − ( ∇ ⋅ A ) 1 ) ⋅ d S = − ∬ S ( d S × ∇ ) × A {\displaystyle \oint _{\partial S}\mathbf {A} \times d{\boldsymbol {\ell }}\ =\ -\iint _{S}\left(\nabla \mathbf {A} -(\nabla \cdot \mathbf {A} )\mathbf {1} \right)\cdot d\mathbf {S} \ =\ -\iint _{S}\left(d\mathbf {S} \times \nabla \right)\times \mathbf {A} } ∮ ∂ S A × ( B 
× d ℓ ) = ∬ S ( ∇ × ( A B T ) ) ⋅ d S + ∬ S ( ∇ ⋅ ( B A T ) ) × d S {\displaystyle \oint _{\partial S}\mathbf {A} \times (\mathbf {B} \times d{\boldsymbol {\ell }})\ =\ \iint _{S}\left(\nabla \times \left(\mathbf {A} \mathbf {B} ^{\textsf {T}}\right)\right)\cdot d\mathbf {S} +\iint _{S}\left(\nabla \cdot \left(\mathbf {B} \mathbf {A} ^{\textsf {T}}\right)\right)\times d\mathbf {S} } ∮ ∂ S ( B ⋅ d ℓ ) A = ∬ S ( d S ⋅ [ ∇ × B − B × ∇ ] ) A {\displaystyle \oint _{\partial S}(\mathbf {B} \cdot d{\boldsymbol {\ell }})\mathbf {A} =\iint _{S}(d\mathbf {S} \cdot \left[\nabla \times \mathbf {B} -\mathbf {B} \times \nabla \right])\mathbf {A} } Integration around a closed curve in the clockwise sense is the negative of the same line integral in the counterclockwise sense (analogous to interchanging the limits in a definite integral): ==== Endpoint-curve integrals ==== In the following endpoint–curve integral theorems, P denotes a 1d open path with signed 0d boundary points q − p = ∂ P {\displaystyle \mathbf {q} -\mathbf {p} =\partial P} and integration along P is from p {\displaystyle \mathbf {p} } to q {\displaystyle \mathbf {q} } : ψ | ∂ P = ψ ( q ) − ψ ( p ) = ∫ P ∇ ψ ⋅ d ℓ {\displaystyle \psi |_{\partial P}=\psi (\mathbf {q} )-\psi (\mathbf {p} )=\int _{P}\nabla \psi \cdot d{\boldsymbol {\ell }}} (gradient theorem) A | ∂ P = A ( q ) − A ( p ) = ∫ P ( d ℓ ⋅ ∇ ) A {\displaystyle \mathbf {A} |_{\partial P}=\mathbf {A} (\mathbf {q} )-\mathbf {A} (\mathbf {p} )=\int _{P}\left(d{\boldsymbol {\ell }}\cdot \nabla \right)\mathbf {A} } A | ∂ P = A ( q ) − A ( p ) = ∫ P ( ∇ A ) ⋅ d ℓ + ∫ P ( ∇ × A ) × d ℓ {\displaystyle \mathbf {A} |_{\partial P}=\mathbf {A} (\mathbf {q} )-\mathbf {A} (\mathbf {p} )=\int _{P}\left(\nabla \mathbf {A} \right)\cdot d{\boldsymbol {\ell }}+\int _{P}\left(\nabla \times \mathbf {A} \right)\times d{\boldsymbol {\ell }}} ==== Tensor integrals ==== A tensor form of a vector integral theorem may be obtained by replacing the vector (or one of them) by a tensor, 
provided that the vector is first made to appear only as the right-most vector of each integrand. For example, Stokes' theorem becomes ∮ ∂ S d ℓ ⋅ T = ∬ S d S ⋅ ( ∇ × T ) {\displaystyle \oint _{\partial S}d{\boldsymbol {\ell }}\cdot \mathbf {T} \ =\ \iint _{S}d\mathbf {S} \cdot \left(\nabla \times \mathbf {T} \right)} . A scalar field may also be treated as a vector and replaced by a vector or tensor. For example, Green's first identity becomes ∂ V {\displaystyle \scriptstyle \partial V} ψ d S ⋅ ∇ A = ∭ V ( ψ ∇ 2 A + ∇ ψ ⋅ ∇ A ) d V {\displaystyle \psi \,d\mathbf {S} \cdot \nabla \!\mathbf {A} \ =\ \iiint _{V}\left(\psi \nabla ^{2}\!\mathbf {A} +\nabla \!\psi \cdot \nabla \!\mathbf {A} \right)\,dV} . Similar rules apply to algebraic and differentiation formulas. For algebraic formulas one may alternatively use the left-most vector position. == See also == Comparison of vector algebra and geometric algebra Del in cylindrical and spherical coordinates – Mathematical gradient operator in certain coordinate systems Differentiation rules – Rules for computing derivatives of functions Exterior calculus identities Exterior derivative – Operation on differential forms List of limits Table of derivatives – Rules for computing derivatives of functionsPages displaying short descriptions of redirect targets Vector algebra relations – Formulas about vectors in three-dimensional Euclidean space == References == == Further reading ==
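As a numerical illustration of the integral theorems above (a sketch with arbitrarily chosen fields, not from the article): for A = (−y, x, 0) one has ∇×A = (0, 0, 2), so Stokes' theorem over the unit disk predicts ∮ A·dℓ = 2π; and for F = (x, y, z) one has ∇·F = 3, so the divergence theorem over a ball of radius R predicts a flux of 4πR³:

```python
import math

# Stokes: A = (-y, x, 0) on the unit circle x = cos t, y = sin t.
# A · dl = (sin²t + cos²t) dt = dt, so the line integral should be 2π,
# matching ∬ (∇×A)·dS = 2 × (area of the unit disk) = 2π.
N = 100_000
dt = 2 * math.pi / N
line = 0.0
for k in range(N):
    t = 2 * math.pi * (k + 0.5) / N          # midpoint rule
    Ax, Ay = -math.sin(t), math.cos(t)       # A at the point (cos t, sin t)
    dx, dy = -math.sin(t) * dt, math.cos(t) * dt
    line += Ax * dx + Ay * dy
assert math.isclose(line, 2 * math.pi, rel_tol=1e-9)

# Divergence theorem: F = (x, y, z), div F = 3.
# Volume side: 3 × (4/3)πR³.  Surface side: F·n = R on the sphere, flux = R × 4πR².
R = 2.0
assert math.isclose(3 * (4 / 3) * math.pi * R**3, R * 4 * math.pi * R**2)
```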
Wikipedia:Vector projection#0
The vector projection (also known as the vector component or vector resolution) of a vector a on (or onto) a nonzero vector b is the orthogonal projection of a onto a straight line parallel to b. The projection of a onto b is often written as proj b ⁡ a {\displaystyle \operatorname {proj} _{\mathbf {b} }\mathbf {a} } or a∥b. The vector component or vector resolute of a perpendicular to b, sometimes also called the vector rejection of a from b (denoted oproj b ⁡ a {\displaystyle \operatorname {oproj} _{\mathbf {b} }\mathbf {a} } or a⊥b), is the orthogonal projection of a onto the plane (or, in general, hyperplane) that is orthogonal to b. Since both proj b ⁡ a {\displaystyle \operatorname {proj} _{\mathbf {b} }\mathbf {a} } and oproj b ⁡ a {\displaystyle \operatorname {oproj} _{\mathbf {b} }\mathbf {a} } are vectors, and their sum is equal to a, the rejection of a from b is given by: oproj b ⁡ a = a − proj b ⁡ a . {\displaystyle \operatorname {oproj} _{\mathbf {b} }\mathbf {a} =\mathbf {a} -\operatorname {proj} _{\mathbf {b} }\mathbf {a} .} To simplify notation, this article defines a 1 := proj b ⁡ a {\displaystyle \mathbf {a} _{1}:=\operatorname {proj} _{\mathbf {b} }\mathbf {a} } and a 2 := oproj b ⁡ a . {\displaystyle \mathbf {a} _{2}:=\operatorname {oproj} _{\mathbf {b} }\mathbf {a} .} Thus, the vector a 1 {\displaystyle \mathbf {a} _{1}} is parallel to b , {\displaystyle \mathbf {b} ,} the vector a 2 {\displaystyle \mathbf {a} _{2}} is orthogonal to b , {\displaystyle \mathbf {b} ,} and a = a 1 + a 2 . {\displaystyle \mathbf {a} =\mathbf {a} _{1}+\mathbf {a} _{2}.} The projection of a onto b can be decomposed into a direction and a scalar magnitude by writing it as a 1 = a 1 b ^ {\displaystyle \mathbf {a} _{1}=a_{1}\mathbf {\hat {b}} } where a 1 {\displaystyle a_{1}} is a scalar, called the scalar projection of a onto b, and b̂ is the unit vector in the direction of b. 
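These definitions translate directly into code. A minimal NumPy sketch (the function names `proj` and `oproj` mirror the notation above; the sample vectors are illustrative):

```python
import numpy as np

def proj(a, b):
    """Orthogonal projection of a onto the line spanned by b (b nonzero)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return (np.dot(a, b) / np.dot(b, b)) * b

def oproj(a, b):
    """Rejection of a from b: the component of a orthogonal to b."""
    return np.asarray(a, dtype=float) - proj(a, b)

a = np.array([3.0, 4.0, 0.0])
b = np.array([1.0, 0.0, 0.0])

a1, a2 = proj(a, b), oproj(a, b)
assert np.allclose(a1, [3, 0, 0])        # the part parallel to b
assert np.allclose(a2, [0, 4, 0])        # the part orthogonal to b
assert np.allclose(a1 + a2, a)           # projection + rejection recovers a
assert np.isclose(np.dot(a2, b), 0.0)    # the rejection is orthogonal to b
```

Note that `proj` uses the a·b / b·b form, which avoids computing a square root.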
The scalar projection is defined as a 1 = ‖ a ‖ cos θ = a ⋅ b ^ {\displaystyle a_{1}=\left\|\mathbf {a} \right\|\cos \theta =\mathbf {a} \cdot \mathbf {\hat {b}} } where the operator ⋅ denotes a dot product, ‖a‖ is the length of a, and θ is the angle between a and b. The scalar projection is equal in absolute value to the length of the vector projection, with a minus sign if the direction of the projection is opposite to the direction of b, that is, if the angle between the vectors is more than 90 degrees. The vector projection can be calculated using the dot product of a {\displaystyle \mathbf {a} } and b {\displaystyle \mathbf {b} } as: proj b a = ( a ⋅ b ^ ) b ^ = a ⋅ b ‖ b ‖ b ‖ b ‖ = a ⋅ b ‖ b ‖ 2 b = a ⋅ b b ⋅ b b . {\displaystyle \operatorname {proj} _{\mathbf {b} }\mathbf {a} =\left(\mathbf {a} \cdot \mathbf {\hat {b}} \right)\mathbf {\hat {b}} ={\frac {\mathbf {a} \cdot \mathbf {b} }{\left\|\mathbf {b} \right\|}}{\frac {\mathbf {b} }{\left\|\mathbf {b} \right\|}}={\frac {\mathbf {a} \cdot \mathbf {b} }{\left\|\mathbf {b} \right\|^{2}}}{\mathbf {b} }={\frac {\mathbf {a} \cdot \mathbf {b} }{\mathbf {b} \cdot \mathbf {b} }}{\mathbf {b} }~.} == Notation == This article uses the convention that vectors are denoted in a bold font (e.g. a1), and scalars are written in normal font (e.g. a1). The dot product of vectors a and b is written as a ⋅ b {\displaystyle \mathbf {a} \cdot \mathbf {b} } , the norm of a is written ‖a‖, and the angle between a and b is denoted θ. == Definitions based on angle θ == === Scalar projection === The scalar projection of a on b is a scalar equal to a 1 = ‖ a ‖ cos θ , {\displaystyle a_{1}=\left\|\mathbf {a} \right\|\cos \theta ,} where θ is the angle between a and b. A scalar projection can be used as a scale factor to compute the corresponding vector projection. === Vector projection === The vector projection of a on b is a vector whose magnitude is the scalar projection of a on b with the same direction as b.
Namely, it is defined as a 1 = a 1 b ^ = ( ‖ a ‖ cos θ ) b ^ {\displaystyle \mathbf {a} _{1}=a_{1}\mathbf {\hat {b}} =(\left\|\mathbf {a} \right\|\cos \theta )\mathbf {\hat {b}} } where a 1 {\displaystyle a_{1}} is the corresponding scalar projection, as defined above, and b ^ {\displaystyle \mathbf {\hat {b}} } is the unit vector with the same direction as b: b ^ = b ‖ b ‖ {\displaystyle \mathbf {\hat {b}} ={\frac {\mathbf {b} }{\left\|\mathbf {b} \right\|}}} === Vector rejection === By definition, the vector rejection of a on b is: a 2 = a − a 1 {\displaystyle \mathbf {a} _{2}=\mathbf {a} -\mathbf {a} _{1}} Hence, a 2 = a − ( ‖ a ‖ cos θ ) b ^ {\displaystyle \mathbf {a} _{2}=\mathbf {a} -\left(\left\|\mathbf {a} \right\|\cos \theta \right)\mathbf {\hat {b}} } == Definitions in terms of a and b == When θ is not known, the cosine of θ can be computed in terms of a and b, by the following property of the dot product a ⋅ b = ‖ a ‖ ‖ b ‖ cos θ {\displaystyle \mathbf {a} \cdot \mathbf {b} =\left\|\mathbf {a} \right\|\left\|\mathbf {b} \right\|\cos \theta } === Scalar projection === By the above-mentioned property of the dot product, the definition of the scalar projection becomes: a 1 = a ⋅ b ‖ b ‖ . {\displaystyle a_{1}={\frac {\mathbf {a} \cdot \mathbf {b} }{\left\|\mathbf {b} \right\|}}.} In two dimensions, this becomes a 1 = a x b x + a y b y ‖ b ‖ . {\displaystyle a_{1}={\frac {\mathbf {a} _{x}\mathbf {b} _{x}+\mathbf {a} _{y}\mathbf {b} _{y}}{\left\|\mathbf {b} \right\|}}.} === Vector projection === Similarly, the definition of the vector projection of a onto b becomes: a 1 = a 1 b ^ = a ⋅ b ‖ b ‖ b ‖ b ‖ , {\displaystyle \mathbf {a} _{1}=a_{1}\mathbf {\hat {b}} ={\frac {\mathbf {a} \cdot \mathbf {b} }{\left\|\mathbf {b} \right\|}}{\frac {\mathbf {b} }{\left\|\mathbf {b} \right\|}},} which is equivalent to either a 1 = ( a ⋅ b ^ ) b ^ , {\displaystyle \mathbf {a} _{1}=\left(\mathbf {a} \cdot \mathbf {\hat {b}} \right)\mathbf {\hat {b}} ,} or a 1 = a ⋅ b ‖ b ‖ 2 b = a ⋅ b b ⋅ b b .
{\displaystyle \mathbf {a} _{1}={\frac {\mathbf {a} \cdot \mathbf {b} }{\left\|\mathbf {b} \right\|^{2}}}{\mathbf {b} }={\frac {\mathbf {a} \cdot \mathbf {b} }{\mathbf {b} \cdot \mathbf {b} }}{\mathbf {b} }~.} === Scalar rejection === In two dimensions, the scalar rejection is equivalent to the projection of a onto b ⊥ = ( − b y b x ) {\displaystyle \mathbf {b} ^{\perp }={\begin{pmatrix}-\mathbf {b} _{y}&\mathbf {b} _{x}\end{pmatrix}}} , which is b = ( b x b y ) {\displaystyle \mathbf {b} ={\begin{pmatrix}\mathbf {b} _{x}&\mathbf {b} _{y}\end{pmatrix}}} rotated 90° to the left. Hence, a 2 = ‖ a ‖ sin θ = a ⋅ b ⊥ ‖ b ‖ = a y b x − a x b y ‖ b ‖ . {\displaystyle a_{2}=\left\|\mathbf {a} \right\|\sin \theta ={\frac {\mathbf {a} \cdot \mathbf {b} ^{\perp }}{\left\|\mathbf {b} \right\|}}={\frac {\mathbf {a} _{y}\mathbf {b} _{x}-\mathbf {a} _{x}\mathbf {b} _{y}}{\left\|\mathbf {b} \right\|}}.} Such a dot product is called the "perp dot product." === Vector rejection === By definition, a 2 = a − a 1 {\displaystyle \mathbf {a} _{2}=\mathbf {a} -\mathbf {a} _{1}} Hence, a 2 = a − a ⋅ b b ⋅ b b . {\displaystyle \mathbf {a} _{2}=\mathbf {a} -{\frac {\mathbf {a} \cdot \mathbf {b} }{\mathbf {b} \cdot \mathbf {b} }}{\mathbf {b} }.} Using the expression for the scalar rejection via the perp dot product, this gives a 2 = a ⋅ b ⊥ b ⋅ b b ⊥ {\displaystyle \mathbf {a} _{2}={\frac {\mathbf {a} \cdot \mathbf {b} ^{\perp }}{\mathbf {b} \cdot \mathbf {b} }}\mathbf {b} ^{\perp }} == Properties == === Scalar projection === The scalar projection of a on b is a scalar which has a negative sign if 90 degrees < θ ≤ 180 degrees. It coincides with the length ‖a1‖ of the vector projection if the angle is smaller than 90°. More exactly: a1 = ‖a1‖ if 0° ≤ θ ≤ 90°, a1 = −‖a1‖ if 90° < θ ≤ 180°. === Vector projection === The vector projection of a on b is a vector a1 which is either null or parallel to b.
More exactly: a1 = 0 if θ = 90°, a1 and b have the same direction if 0° ≤ θ < 90°, a1 and b have opposite directions if 90° < θ ≤ 180°. === Vector rejection === The vector rejection of a on b is a vector a2 which is either null or orthogonal to b. More exactly: a2 = 0 if θ = 0° or θ = 180°, a2 is orthogonal to b if 0° < θ < 180°. == Matrix representation == The orthogonal projection can be represented by a projection matrix. To project a vector onto the unit vector a = (ax, ay, az), it would need to be multiplied with this projection matrix: P = a a T = [ a x 2 a x a y a x a z a x a y a y 2 a y a z a x a z a y a z a z 2 ] {\displaystyle P=\mathbf {a} \mathbf {a} ^{\textsf {T}}={\begin{bmatrix}a_{x}^{2}&a_{x}a_{y}&a_{x}a_{z}\\a_{x}a_{y}&a_{y}^{2}&a_{y}a_{z}\\a_{x}a_{z}&a_{y}a_{z}&a_{z}^{2}\end{bmatrix}}} == Uses == The vector projection is an important operation in the Gram–Schmidt orthonormalization of vector space bases. It is also used in the separating axis theorem to detect whether two convex shapes intersect. == Generalizations == Since the notions of vector length and angle between vectors can be generalized to any n-dimensional inner product space, this is also true for the notions of orthogonal projection of a vector, projection of a vector onto another, and rejection of a vector from another. === Vector projection on a plane === In some cases, the inner product coincides with the dot product. Whenever they don't coincide, the inner product is used instead of the dot product in the formal definitions of projection and rejection. For a three-dimensional inner product space, the notions of projection of a vector onto another and rejection of a vector from another can be generalized to the notions of projection of a vector onto a plane, and rejection of a vector from a plane. The projection of a vector on a plane is its orthogonal projection on that plane. The rejection of a vector from a plane is its orthogonal projection on a straight line which is orthogonal to that plane. Both are vectors. The first is parallel to the plane, the second is orthogonal. For a given vector and plane, the sum of projection and rejection is equal to the original vector.
Similarly, for inner product spaces with more than three dimensions, the notions of projection onto a vector and rejection from a vector can be generalized to the notions of projection onto a hyperplane, and rejection from a hyperplane. In geometric algebra, they can be further generalized to the notions of projection and rejection of a general multivector onto/from any invertible k-blade. == See also == Scalar projection Vector notation == References == == External links == Projection of a vector onto a plane
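For a plane through the origin with unit normal n, the projection onto the plane reduces to subtracting the component along n (equivalently, the rejection of the vector from n). A brief NumPy sketch with illustrative values:

```python
import numpy as np

def project_onto_plane(v, n):
    """Orthogonal projection of v onto the plane through the origin with normal n."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)                # normalize the normal vector
    return np.asarray(v, dtype=float) - np.dot(v, n) * n

v = np.array([1.0, 2.0, 3.0])
n = np.array([0.0, 0.0, 1.0])                # normal of the xy-plane

p = project_onto_plane(v, n)
assert np.allclose(p, [1.0, 2.0, 0.0])       # the component along n is removed
assert np.isclose(np.dot(p, n), 0.0)         # the result lies in the plane
assert np.allclose(p + np.dot(v, n) * n, v)  # projection + rejection recovers v
```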
Wikipedia:Vector-valued function#0
A vector-valued function, also referred to as a vector function, is a mathematical function of one or more variables whose range is a set of multidimensional vectors or infinite-dimensional vectors. The input of a vector-valued function could be a scalar or a vector (that is, the dimension of the domain could be 1 or greater than 1); the dimension of the function's domain has no relation to the dimension of its range. == Example: Helix == A common example of a vector-valued function is one that depends on a single real parameter t, often representing time, producing a vector v(t) as the result. In terms of the standard unit vectors i, j, k of Cartesian 3-space, these specific types of vector-valued functions are given by expressions such as r ( t ) = f ( t ) i + g ( t ) j + h ( t ) k {\displaystyle \mathbf {r} (t)=f(t)\mathbf {i} +g(t)\mathbf {j} +h(t)\mathbf {k} } where f(t), g(t) and h(t) are the coordinate functions of the parameter t, and the domain of this vector-valued function is the intersection of the domains of the functions f, g, and h. It can also be referred to in a different notation: r ( t ) = ⟨ f ( t ) , g ( t ) , h ( t ) ⟩ {\displaystyle \mathbf {r} (t)=\langle f(t),g(t),h(t)\rangle } The vector r(t) has its tail at the origin and its head at the coordinates evaluated by the function. The vector shown in the graph to the right is the evaluation of the function ⟨ 2 cos ⁡ t , 4 sin ⁡ t , t ⟩ {\displaystyle \langle 2\cos t,\,4\sin t,\,t\rangle } near t = 19.5 (between 6π and 6.5π; i.e., somewhat more than 3 rotations). The helix is the path traced by the tip of the vector as t increases from zero through 8π. 
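A quick way to get a feel for this example is to evaluate r(t) = ⟨2 cos t, 4 sin t, t⟩ numerically. The sketch below confirms that the tip of the vector always lies on the elliptic cylinder (x/2)² + (y/4)² = 1 while z grows linearly with t:

```python
import math

def r(t):
    """The helix <2 cos t, 4 sin t, t> from the example."""
    return (2 * math.cos(t), 4 * math.sin(t), t)

# For every t the point satisfies (x/2)^2 + (y/4)^2 = 1 and z = t,
# so the curve winds around an elliptic cylinder as it climbs.
for t in (0.0, 19.5, 8 * math.pi):
    x, y, z = r(t)
    assert math.isclose((x / 2) ** 2 + (y / 4) ** 2, 1.0)
    assert z == t
```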
In 2D, we can analogously speak about vector-valued functions as: r ( t ) = f ( t ) i + g ( t ) j {\displaystyle \mathbf {r} (t)=f(t)\mathbf {i} +g(t)\mathbf {j} } or r ( t ) = ⟨ f ( t ) , g ( t ) ⟩ {\displaystyle \mathbf {r} (t)=\langle f(t),g(t)\rangle } == Linear case == In the linear case the function can be expressed in terms of matrices: y = A x , {\displaystyle \mathbf {y} =A\mathbf {x} ,} where y is an n × 1 output vector, x is a k × 1 vector of inputs, and A is an n × k matrix of parameters. Closely related is the affine case (linear up to a translation) where the function takes the form y = A x + b , {\displaystyle \mathbf {y} =A\mathbf {x} +\mathbf {b} ,} where in addition b is an n × 1 vector of parameters. The linear case arises often, for example in multiple regression, where for instance the n × 1 vector y ^ {\displaystyle {\hat {y}}} of predicted values of a dependent variable is expressed linearly in terms of a k × 1 vector β ^ {\displaystyle {\hat {\boldsymbol {\beta }}}} (k < n) of estimated values of model parameters: y ^ = X β ^ , {\displaystyle {\hat {\mathbf {y} }}=X{\hat {\boldsymbol {\beta }}},} in which X (playing the role of A in the previous generic form) is an n × k matrix of fixed (empirically based) numbers. == Parametric representation of a surface == A surface is a 2-dimensional set of points embedded in (most commonly) 3-dimensional space. One way to represent a surface is with parametric equations, in which two parameters s and t determine the three Cartesian coordinates of any point on the surface: ( x , y , z ) = ( f ( s , t ) , g ( s , t ) , h ( s , t ) ) ≡ F ( s , t ) . {\displaystyle (x,y,z)=(f(s,t),g(s,t),h(s,t))\equiv \mathbf {F} (s,t).} Here F is a vector-valued function. For a surface embedded in n-dimensional space, one similarly has the representation ( x 1 , x 2 , … , x n ) = ( f 1 ( s , t ) , f 2 ( s , t ) , … , f n ( s , t ) ) ≡ F ( s , t ) .
{\displaystyle (x_{1},x_{2},\dots ,x_{n})=(f_{1}(s,t),f_{2}(s,t),\dots ,f_{n}(s,t))\equiv \mathbf {F} (s,t).} == Derivative of a three-dimensional vector function == Many vector-valued functions, like scalar-valued functions, can be differentiated by simply differentiating the components in the Cartesian coordinate system. Thus, if r ( t ) = f ( t ) i + g ( t ) j + h ( t ) k {\displaystyle \mathbf {r} (t)=f(t)\mathbf {i} +g(t)\mathbf {j} +h(t)\mathbf {k} } is a vector-valued function, then d r d t = f ′ ( t ) i + g ′ ( t ) j + h ′ ( t ) k . {\displaystyle {\frac {d\mathbf {r} }{dt}}=f'(t)\mathbf {i} +g'(t)\mathbf {j} +h'(t)\mathbf {k} .} The vector derivative admits the following physical interpretation: if r(t) represents the position of a particle, then the derivative is the velocity of the particle v ( t ) = d r d t . {\displaystyle \mathbf {v} (t)={\frac {d\mathbf {r} }{dt}}.} Likewise, the derivative of the velocity is the acceleration d v d t = a ( t ) . {\displaystyle {\frac {d\mathbf {v} }{dt}}=\mathbf {a} (t).} === Partial derivative === The partial derivative of a vector function a with respect to a scalar variable q is defined as ∂ a ∂ q = ∑ i = 1 n ∂ a i ∂ q e i {\displaystyle {\frac {\partial \mathbf {a} }{\partial q}}=\sum _{i=1}^{n}{\frac {\partial a_{i}}{\partial q}}\mathbf {e} _{i}} where ai is the scalar component of a in the direction of ei. It is also called the direction cosine of a and ei or their dot product. The vectors e1, e2, e3 form an orthonormal basis fixed in the reference frame in which the derivative is being taken. === Ordinary derivative === If a is regarded as a vector function of a single scalar variable, such as time t, then the equation above reduces to the first ordinary time derivative of a with respect to t, d a d t = ∑ i = 1 n d a i d t e i . 
{\displaystyle {\frac {d\mathbf {a} }{dt}}=\sum _{i=1}^{n}{\frac {da_{i}}{dt}}\mathbf {e} _{i}.} === Total derivative === If the vector a is a function of a number n of scalar variables qr (r = 1, ..., n), and each qr is only a function of time t, then the ordinary derivative of a with respect to t can be expressed, in a form known as the total derivative, as d a d t = ∑ r = 1 n ∂ a ∂ q r d q r d t + ∂ a ∂ t . {\displaystyle {\frac {d\mathbf {a} }{dt}}=\sum _{r=1}^{n}{\frac {\partial \mathbf {a} }{\partial q_{r}}}{\frac {dq_{r}}{dt}}+{\frac {\partial \mathbf {a} }{\partial t}}.} Some authors prefer to use capital D to indicate the total derivative operator, as in D/Dt. The total derivative differs from the partial time derivative in that the total derivative accounts for changes in a due to the time variance of the variables qr. === Reference frames === Whereas for scalar-valued functions there is only a single possible reference frame, to take the derivative of a vector-valued function requires the choice of a reference frame (at least when a fixed Cartesian coordinate system is not implied as such). Once a reference frame has been chosen, the derivative of a vector-valued function can be computed using techniques similar to those for computing derivatives of scalar-valued functions. A different choice of reference frame will, in general, produce a different derivative function. The derivative functions in different reference frames have a specific kinematical relationship. === Derivative of a vector function with nonfixed bases === The above formulas for the derivative of a vector function rely on the assumption that the basis vectors e1, e2, e3 are constant, that is, fixed in the reference frame in which the derivative of a is being taken, and therefore the e1, e2, e3 each has a derivative of identically zero. This often holds true for problems dealing with vector fields in a fixed coordinate system, or for simple problems in physics. 
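Componentwise differentiation in a fixed frame is straightforward to demonstrate symbolically. The sketch below (assuming SymPy is available) differentiates the helix example from earlier to obtain its velocity and acceleration:

```python
import sympy as sp

t = sp.symbols('t')
r = sp.Matrix([2 * sp.cos(t), 4 * sp.sin(t), t])   # position: the helix example

v = r.diff(t)   # velocity: differentiate each component
a = v.diff(t)   # acceleration: differentiate again

assert v == sp.Matrix([-2 * sp.sin(t), 4 * sp.cos(t), 1])
assert a == sp.Matrix([-2 * sp.cos(t), -4 * sp.sin(t), 0])
```

This works because the Cartesian basis vectors are constant; the moving-frame case discussed next requires the extra ω × a term.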
However, many complex problems involve the derivative of a vector function in multiple moving reference frames, which means that the basis vectors will not necessarily be constant. In such a case where the basis vectors e1, e2, e3 are fixed in reference frame E, but not in reference frame N, the more general formula for the ordinary time derivative of a vector in reference frame N is N d a d t = ∑ i = 1 3 d a i d t e i + ∑ i = 1 3 a i N d e i d t {\displaystyle {\frac {{}^{\mathrm {N} }d\mathbf {a} }{dt}}=\sum _{i=1}^{3}{\frac {da_{i}}{dt}}\mathbf {e} _{i}+\sum _{i=1}^{3}a_{i}{\frac {{}^{\mathrm {N} }d\mathbf {e} _{i}}{dt}}} where the superscript N to the left of the derivative operator indicates the reference frame in which the derivative is taken. As shown previously, the first term on the right hand side is equal to the derivative of a in the reference frame where e1, e2, e3 are constant, reference frame E. It also can be shown that the second term on the right hand side is equal to the relative angular velocity of the two reference frames cross multiplied with the vector a itself. Thus, after substitution, the formula relating the derivative of a vector function in two reference frames is N d a d t = E d a d t + N ω E × a {\displaystyle {\frac {{}^{\mathrm {N} }d\mathbf {a} }{dt}}={\frac {{}^{\mathrm {E} }d\mathbf {a} }{dt}}+{}^{\mathrm {N} }\mathbf {\omega } ^{\mathrm {E} }\times \mathbf {a} } where NωE is the angular velocity of the reference frame E relative to the reference frame N. One common example where this formula is used is to find the velocity of a space-borne object, such as a rocket, in the inertial reference frame using measurements of the rocket's velocity relative to the ground. The velocity NvR in inertial reference frame N of a rocket R located at position rR can be found using the formula N d d t ( r R ) = E d d t ( r R ) + N ω E × r R . 
{\displaystyle {\frac {{}^{\mathrm {N} }d}{dt}}(\mathbf {r} ^{\mathrm {R} })={\frac {{}^{\mathrm {E} }d}{dt}}(\mathbf {r} ^{\mathrm {R} })+{}^{\mathrm {N} }\mathbf {\omega } ^{\mathrm {E} }\times \mathbf {r} ^{\mathrm {R} }.} where NωE is the angular velocity of the Earth relative to the inertial frame N. Since velocity is the derivative of position, NvR and EvR are the derivatives of rR in reference frames N and E, respectively. By substitution, N v R = E v R + N ω E × r R {\displaystyle {}^{\mathrm {N} }\mathbf {v} ^{\mathrm {R} }={}^{\mathrm {E} }\mathbf {v} ^{\mathrm {R} }+{}^{\mathrm {N} }\mathbf {\omega } ^{\mathrm {E} }\times \mathbf {r} ^{\mathrm {R} }} where EvR is the velocity vector of the rocket as measured from a reference frame E that is fixed to the Earth. === Derivative and vector multiplication === The derivative of a product of vector functions behaves similarly to the derivative of a product of scalar functions. Specifically, in the case of scalar multiplication of a vector, if p is a scalar variable function of q, ∂ ∂ q ( p a ) = ∂ p ∂ q a + p ∂ a ∂ q . {\displaystyle {\frac {\partial }{\partial q}}(p\mathbf {a} )={\frac {\partial p}{\partial q}}\mathbf {a} +p{\frac {\partial \mathbf {a} }{\partial q}}.} In the case of dot multiplication, for two vectors a and b that are both functions of q, ∂ ∂ q ( a ⋅ b ) = ∂ a ∂ q ⋅ b + a ⋅ ∂ b ∂ q . {\displaystyle {\frac {\partial }{\partial q}}(\mathbf {a} \cdot \mathbf {b} )={\frac {\partial \mathbf {a} }{\partial q}}\cdot \mathbf {b} +\mathbf {a} \cdot {\frac {\partial \mathbf {b} }{\partial q}}.} Similarly, the derivative of the cross product of two vector functions is ∂ ∂ q ( a × b ) = ∂ a ∂ q × b + a × ∂ b ∂ q . 
{\displaystyle {\frac {\partial }{\partial q}}(\mathbf {a} \times \mathbf {b} )={\frac {\partial \mathbf {a} }{\partial q}}\times \mathbf {b} +\mathbf {a} \times {\frac {\partial \mathbf {b} }{\partial q}}.} === Derivative of an n-dimensional vector function === A function f of a real number t with values in the space R n {\displaystyle \mathbb {R} ^{n}} can be written as f ( t ) = ( f 1 ( t ) , f 2 ( t ) , … , f n ( t ) ) {\displaystyle \mathbf {f} (t)=(f_{1}(t),f_{2}(t),\ldots ,f_{n}(t))} . Its derivative equals f ′ ( t ) = ( f 1 ′ ( t ) , f 2 ′ ( t ) , … , f n ′ ( t ) ) . {\displaystyle \mathbf {f} '(t)=(f_{1}'(t),f_{2}'(t),\ldots ,f_{n}'(t)).} If f is a function of several variables, say of t ∈ R m {\displaystyle t\in \mathbb {R} ^{m}} , then the partial derivatives of the components of f form a n × m {\displaystyle n\times m} matrix called the Jacobian matrix of f. == Infinite-dimensional vector functions == If the values of a function f lie in an infinite-dimensional vector space X, such as a Hilbert space, then f may be called an infinite-dimensional vector function. === Functions with values in a Hilbert space === If the argument of f is a real number and X is a Hilbert space, then the derivative of f at a point t can be defined as in the finite-dimensional case: f ′ ( t ) = lim h → 0 f ( t + h ) − f ( t ) h . {\displaystyle \mathbf {f} '(t)=\lim _{h\to 0}{\frac {\mathbf {f} (t+h)-\mathbf {f} (t)}{h}}.} Most results of the finite-dimensional case also hold in the infinite-dimensional case too, mutatis mutandis. Differentiation can also be defined to functions of several variables (e.g., t ∈ R n {\displaystyle t\in \mathbb {R} ^{n}} or even t ∈ Y {\displaystyle t\in Y} , where Y is an infinite-dimensional vector space). N.B. 
If X is a Hilbert space, then one can easily show that any derivative (and any other limit) can be computed componentwise: if f = ( f 1 , f 2 , f 3 , … ) {\displaystyle \mathbf {f} =(f_{1},f_{2},f_{3},\ldots )} (i.e., f = f 1 e 1 + f 2 e 2 + f 3 e 3 + ⋯ {\displaystyle \mathbf {f} =f_{1}\mathbf {e} _{1}+f_{2}\mathbf {e} _{2}+f_{3}\mathbf {e} _{3}+\cdots } , where e 1 , e 2 , e 3 , … {\displaystyle \mathbf {e} _{1},\mathbf {e} _{2},\mathbf {e} _{3},\ldots } is an orthonormal basis of the space X ), and f ′ ( t ) {\displaystyle f'(t)} exists, then f ′ ( t ) = ( f 1 ′ ( t ) , f 2 ′ ( t ) , f 3 ′ ( t ) , … ) . {\displaystyle \mathbf {f} '(t)=(f_{1}'(t),f_{2}'(t),f_{3}'(t),\ldots ).} However, the existence of a componentwise derivative does not guarantee the existence of a derivative, as componentwise convergence in a Hilbert space does not guarantee convergence with respect to the actual topology of the Hilbert space. === Other infinite-dimensional vector spaces === Most of the above hold for other topological vector spaces X too. However, not as many classical results hold in the Banach space setting, e.g., an absolutely continuous function with values in a suitable Banach space need not have a derivative anywhere. Moreover, in most Banach spaces setting there are no orthonormal bases. == Vector field == == See also == Coordinate vector Curve Multivalued function Parametric surface Position vector Parametrization == Notes == == References == == External links == Vector-valued functions and their properties (from Lake Tahoe Community College) Weisstein, Eric W. "Vector Function". MathWorld. Everything2 article 3 Dimensional vector-valued functions (from East Tennessee State University) "Position Vector Valued Functions" Khan Academy module
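Returning to the two-frame velocity formula NvR = EvR + NωE × rR from the rocket example: a minimal numeric sketch (the rotation rate and positions below are illustrative values, not from the article) recovers the familiar ≈465 m/s equatorial rotation speed as the ω × r contribution:

```python
import numpy as np

# Earth's sidereal rotation rate about the z-axis (rad/s) -- illustrative value.
omega = np.array([0.0, 0.0, 7.292e-5])

r_R = np.array([6.371e6, 0.0, 0.0])    # a rocket on the equator (metres)
v_E = np.array([0.0, 0.0, 100.0])      # its velocity seen from the Earth-fixed frame

# N v R = E v R + omega x r R
v_N = v_E + np.cross(omega, r_R)

# The cross-product term points east with magnitude |omega| * |r_R|.
assert np.allclose(np.cross(omega, r_R), [0.0, 7.292e-5 * 6.371e6, 0.0])
assert np.isclose(v_N[1], 464.57, atol=0.5)   # ~ equatorial rotation speed (m/s)
```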
Wikipedia:Vectorial Mechanics#0
Vectorial Mechanics (1948) is a book on vector manipulation (i.e., vector methods) by Edward Arthur Milne, a highly decorated (e.g., James Scott Prize Lectureship) British astrophysicist and mathematician. Milne states that the text was due to conversations (circa 1924) with his then-colleague and erstwhile teacher Sydney Chapman, who viewed vectors not merely as a pretty toy but as a powerful weapon of applied mathematics. Milne states that he did not at first believe Chapman, holding on to the idea that "vectors were like a pocket-rule, which needs to be unfolded before it can be applied and used." In time, however, Milne convinced himself that Chapman was right. == Summary == Vectorial Mechanics has 18 chapters grouped into 3 parts. Part I is on vector algebra including chapters on a definition of a vector, products of vectors, elementary tensor analysis, and integral theorems. Part II is on systems of line vectors including chapters on line co-ordinates, systems of line vectors, statics of rigid bodies, the displacement of a rigid body, and the work of a system of line vectors. Part III is on dynamics including kinematics, particle dynamics, types of particle motion, dynamics of systems of particles, rigid bodies in motion, dynamics of rigid bodies, motion of a rigid body about its center of mass, gyrostatic problems, and impulsive motion. == Summary of reviews == There were significant reviews given near the time of original publication. G. J. Whitrow: "Although many books have been published in recent years in which vector and tensor methods are used for solving problems in geometry and mathematical physics, there has been a lack of first-class treatises which explain the methods in full detail and are nevertheless suitable for the undergraduate student. In applied mathematics no book has appeared till now which is comparable with Hardy's Pure Mathematics. ...
Just as in Hardy's classic, a new note is struck at the very start: a precise definition is given of the concept "free vector", analogous to the Frege-Russell definition of "cardinal number." According to Milne, a free vector is the class of all its representations, a typical representation being defined in the customary manner. From a pedagogic point of view, however, the reviewer wonders whether it might have been better to draw attention at this early stage to a concrete instance of a free vector. The student familiar with physical concepts which have magnitude and position, but not direction, should be made to realise from the very beginning that the free vector is not merely "fundamental in discussing systems of position vectors and systems of line-vectors", but occurs naturally in its own right, as there are physical concepts which have magnitude and direction but not position, e.g. the couple in statics, and the angular velocity of a rigid body. Although the necessary existence theorems must be established at a later stage, and Milne's rigorous proofs are particularly welcome, there is no reason why some instances of free vectors should not be mentioned at this point." Daniel C. Lewis: The reviewer has long felt that the role of vector analysis in mechanics has been much overemphasized. It is true that the fundamental equations of motion in their various forms, especially in the case of rigid bodies, can be derived with greatest economy of thought by use of vectors (assuming that the requisite technique has already been developed); but once the equations have been set up, the usual procedure is to drop vector methods in their solution. If this position can be successfully refuted, this has been done in the present work, the most novel feature of which is to solve the vector differential equations by vector methods without ever writing down the corresponding scalar differential equations obtained by taking components. 
The author has certainly been successful in showing that this can be done in fairly simple, though nontrivial, cases. To give an example of a definitely nontrivial problem solved in this way, one might mention the nonholonomic problem afforded by the motion of a sphere rolling on a rough inclined plane or on a rough spherical surface. The author's methods are interesting and aesthetically satisfying and therefore deserve the widest publication even if they partake of the nature of a tour de force. == References == E. A. Milne Vectorial Mechanics (New York: Interscience Publishers Inc., 1948). pp. xiii, 382 ASIN: B0000EGLGX G. J. Whitrow Review of Vectorial Mechanics The Mathematical Gazette Vol. 33, No. 304. (May, 1949), pp. 136–139. D. C. Lewis Review of Vectorial Mechanics, Mathematical Reviews Volume 10, abstract index 420w, p. 488, 1949. == Notes ==
Wikipedia:Vectorization (mathematics)#0
In mathematics, especially in linear algebra and matrix theory, the vectorization of a matrix is a linear transformation which converts the matrix into a vector. Specifically, the vectorization of an m × n matrix A, denoted vec(A), is the mn × 1 column vector obtained by stacking the columns of the matrix A on top of one another: vec ⁡ ( A ) = [ a 1 , 1 , … , a m , 1 , a 1 , 2 , … , a m , 2 , … , a 1 , n , … , a m , n ] T {\displaystyle \operatorname {vec} (A)=[a_{1,1},\ldots ,a_{m,1},a_{1,2},\ldots ,a_{m,2},\ldots ,a_{1,n},\ldots ,a_{m,n}]^{\mathrm {T} }} Here, a i , j {\displaystyle a_{i,j}} represents the element in the i-th row and j-th column of A, and the superscript T {\displaystyle {}^{\mathrm {T} }} denotes the transpose. Vectorization expresses, through coordinates, the isomorphism R m × n := R m ⊗ R n ≅ R m n {\displaystyle \mathbf {R} ^{m\times n}:=\mathbf {R} ^{m}\otimes \mathbf {R} ^{n}\cong \mathbf {R} ^{mn}} between these (i.e., of matrices and vectors) as vector spaces. For example, for the 2×2 matrix A = [ a b c d ] {\displaystyle A={\begin{bmatrix}a&b\\c&d\end{bmatrix}}} , the vectorization is vec ⁡ ( A ) = [ a c b d ] {\displaystyle \operatorname {vec} (A)={\begin{bmatrix}a\\c\\b\\d\end{bmatrix}}} . The connection between the vectorization of A and the vectorization of its transpose is given by the commutation matrix. == Compatibility with Kronecker products == The vectorization is frequently used together with the Kronecker product to express matrix multiplication as a linear transformation on matrices. In particular, vec ⁡ ( A B C ) = ( C T ⊗ A ) vec ⁡ ( B ) {\displaystyle \operatorname {vec} (ABC)=(C^{\mathrm {T} }\otimes A)\operatorname {vec} (B)} for matrices A, B, and C of dimensions k×l, l×m, and m×n.
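As a concrete check of the definition (a NumPy sketch, not part of the article; the helper name vec is ours), column-stacking a 2×2 matrix reproduces the ordering shown above:

```python
import numpy as np

def vec(M):
    # Stack the columns of M on top of one another. NumPy stores
    # arrays row-major, so flattening the transpose (equivalently,
    # M.flatten(order='F')) yields the column-major order.
    return M.T.flatten()

A = np.array([[1, 2],
              [3, 4]])   # plays the role of [[a, b], [c, d]]

# vec(A) lists column (1, 3) first, then column (2, 4)
assert list(vec(A)) == [1, 3, 2, 4]
```

The same one-liner works for any m × n matrix, since transposing before a row-major flatten is exactly "stack the columns".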
For example, if ad A ⁡ ( X ) = A X − X A {\displaystyle \operatorname {ad} _{A}(X)=AX-XA} (the adjoint endomorphism of the Lie algebra gl(n, C) of all n×n matrices with complex entries), then vec ⁡ ( ad A ⁡ ( X ) ) = ( A ⊗ I n − I n ⊗ A T ) vec ( X ) {\displaystyle \operatorname {vec} (\operatorname {ad} _{A}(X))=(A\otimes I_{n}-I_{n}\otimes A^{\mathrm {T} }){\text{vec}}(X)} , where I n {\displaystyle I_{n}} is the n×n identity matrix. There are two other useful formulations: vec ⁡ ( A B C ) = ( I n ⊗ A B ) vec ⁡ ( C ) = ( C T B T ⊗ I k ) vec ⁡ ( A ) vec ⁡ ( A B ) = ( I m ⊗ A ) vec ⁡ ( B ) = ( B T ⊗ I k ) vec ⁡ ( A ) {\displaystyle {\begin{aligned}\operatorname {vec} (ABC)&=(I_{n}\otimes AB)\operatorname {vec} (C)=(C^{\mathrm {T} }B^{\mathrm {T} }\otimes I_{k})\operatorname {vec} (A)\\\operatorname {vec} (AB)&=(I_{m}\otimes A)\operatorname {vec} (B)=(B^{\mathrm {T} }\otimes I_{k})\operatorname {vec} (A)\end{aligned}}} More generally, it has been shown that vectorization is a self-adjunction in the monoidal closed structure of any category of matrices. == Compatibility with Hadamard products == Vectorization is an algebra homomorphism from the space of n × n matrices with the Hadamard (entrywise) product to Cn2 with its Hadamard product: vec ⁡ ( A ∘ B ) = vec ⁡ ( A ) ∘ vec ⁡ ( B ) . {\displaystyle \operatorname {vec} (A\circ B)=\operatorname {vec} (A)\circ \operatorname {vec} (B).} == Compatibility with inner products == Vectorization is a unitary transformation from the space of n×n matrices with the Frobenius (or Hilbert–Schmidt) inner product to Cn2: tr ⁡ ( A † B ) = vec ⁡ ( A ) † vec ⁡ ( B ) , {\displaystyle \operatorname {tr} (A^{\dagger }B)=\operatorname {vec} (A)^{\dagger }\operatorname {vec} (B),} where the superscript † denotes the conjugate transpose. == Vectorization as a linear sum == The matrix vectorization operation can be written in terms of a linear sum. 
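The identities above are easy to verify numerically. A hedged sketch (NumPy; the random shapes are our choice) checking vec(ABC) = (Cᵀ ⊗ A) vec(B) together with the two additional formulations:

```python
import numpy as np

def vec(M):
    # Column-stacking, as in the definition
    return M.T.flatten()

rng = np.random.default_rng(0)
k, l, m, n = 2, 3, 4, 5
A = rng.standard_normal((k, l))
B = rng.standard_normal((l, m))
C = rng.standard_normal((m, n))

# vec(ABC) = (C^T kron A) vec(B)
assert np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B))

# vec(ABC) = (I_n kron AB) vec(C) = (C^T B^T kron I_k) vec(A)
assert np.allclose(vec(A @ B @ C), np.kron(np.eye(n), A @ B) @ vec(C))
assert np.allclose(vec(A @ B @ C), np.kron(C.T @ B.T, np.eye(k)) @ vec(A))
```

All three assertions hold for any conforming k×l, l×m, m×n triples, which is why the identities let matrix equations in B (or A, or C) be rewritten as ordinary linear systems.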
Let X be an m × n matrix that we want to vectorize, and let ei be the i-th canonical basis vector for the n-dimensional space, that is, e i = [ 0 , … , 0 , 1 , 0 , … , 0 ] T {\textstyle \mathbf {e} _{i}=\left[0,\dots ,0,1,0,\dots ,0\right]^{\mathrm {T} }} . Let Bi be an (mn) × m block matrix defined as follows: B i = [ 0 ⋮ 0 I m 0 ⋮ 0 ] = e i ⊗ I m {\displaystyle \mathbf {B} _{i}={\begin{bmatrix}\mathbf {0} \\\vdots \\\mathbf {0} \\\mathbf {I} _{m}\\\mathbf {0} \\\vdots \\\mathbf {0} \end{bmatrix}}=\mathbf {e} _{i}\otimes \mathbf {I} _{m}} Bi consists of n block matrices of size m × m, stacked column-wise, all of which are zero except for the i-th one, which is an m × m identity matrix Im. Then the vectorized version of X can be expressed as follows: vec ⁡ ( X ) = ∑ i = 1 n B i X e i {\displaystyle \operatorname {vec} (\mathbf {X} )=\sum _{i=1}^{n}\mathbf {B} _{i}\mathbf {X} \mathbf {e} _{i}} Multiplication of X by ei extracts the i-th column, while multiplication by Bi puts it into the desired position in the final vector. Alternatively, the linear sum can be expressed using the Kronecker product: vec ⁡ ( X ) = ∑ i = 1 n e i ⊗ X e i {\displaystyle \operatorname {vec} (\mathbf {X} )=\sum _{i=1}^{n}\mathbf {e} _{i}\otimes \mathbf {X} \mathbf {e} _{i}} == Half-vectorization == For a symmetric matrix A, the vector vec(A) contains more information than is strictly necessary, since the matrix is completely determined by the symmetry together with the lower triangular portion, that is, the n(n + 1)/2 entries on and below the main diagonal. For such matrices, the half-vectorization is sometimes more useful than the vectorization. The half-vectorization, vech(A), of a symmetric n × n matrix A is the n(n + 1)/2 × 1 column vector obtained by vectorizing only the lower triangular part of A: vech ⁡ ( A ) = [ A 1 , 1 , … , A n , 1 , A 2 , 2 , … , A n , 2 , … , A n − 1 , n − 1 , A n , n − 1 , A n , n ] T .
{\displaystyle \operatorname {vech} (A)=[A_{1,1},\ldots ,A_{n,1},A_{2,2},\ldots ,A_{n,2},\ldots ,A_{n-1,n-1},A_{n,n-1},A_{n,n}]^{\mathrm {T} }.} For example, for the 2×2 matrix A = [ a b b d ] {\displaystyle A={\begin{bmatrix}a&b\\b&d\end{bmatrix}}} , the half-vectorization is vech ⁡ ( A ) = [ a b d ] {\displaystyle \operatorname {vech} (A)={\begin{bmatrix}a\\b\\d\end{bmatrix}}} . There exist unique matrices transforming the half-vectorization of a matrix to its vectorization and vice versa, called, respectively, the duplication matrix and the elimination matrix. == Programming languages == Programming languages that implement matrices may have easy means for vectorization. In MATLAB/GNU Octave a matrix A can be vectorized by A(:). GNU Octave also allows vectorization and half-vectorization with vec(A) and vech(A) respectively. Julia has the vec(A) function as well. In Python NumPy arrays implement the flatten method, while in R the desired effect can be achieved via the c() or as.vector() functions or, more efficiently, by removing the dimensions attribute of a matrix A with dim(A) <- NULL. In R, function vec() of package 'ks' allows vectorization and function vech() implemented in both packages 'ks' and 'sn' allows half-vectorization. == Applications == Vectorization is used in matrix calculus and its applications in establishing, e.g., moments of random vectors and matrices, asymptotics, as well as Jacobian and Hessian matrices. It is also used in local sensitivity and statistical diagnostics. == Notes == == See also == Duplication and elimination matrices Voigt notation Packed storage matrix Column-major order Matricization == References ==
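A short sketch of half-vectorization (assumptions: NumPy, and the helper name vech is ours). The lower triangle of A read column by column equals the upper triangle of Aᵀ read row by row, which is exactly the order np.triu_indices produces:

```python
import numpy as np

def vech(M):
    # Half-vectorization: entries on and below the main diagonal,
    # stacked column by column.
    r, c = np.triu_indices(M.shape[0])
    return M.T[r, c]

A = np.array([[1, 2],
              [2, 4]])          # symmetric [[a, b], [b, d]]
assert list(vech(A)) == [1, 2, 4]   # [a, b, d], matching the example

S = np.array([[1, 2, 3],
              [2, 4, 5],
              [3, 5, 6]])
assert list(vech(S)) == [1, 2, 3, 4, 5, 6]  # n(n+1)/2 = 6 entries
```

For a symmetric matrix this keeps exactly the non-redundant entries, which is the point of preferring vech over vec in that case.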
Wikipedia:Vedic square#0
In Indian mathematics, a Vedic square is a variation on a typical 9 × 9 multiplication table where the entry in each cell is the digital root of the product of the column and row headings, i.e., the remainder when the product of the row and column headings is divided by 9 (with remainder 0 represented by 9). Numerous geometric patterns and symmetries can be observed in a Vedic square, some of which can be found in traditional Islamic art. == Algebraic properties == The Vedic Square can be viewed as the multiplication table of the monoid ( ( Z / 9 Z ) × , { 1 , ∘ } ) {\displaystyle ((\mathbb {Z} /9\mathbb {Z} )^{\times },\{1,\circ \})} where Z / 9 Z {\displaystyle \mathbb {Z} /9\mathbb {Z} } is the set of positive integers partitioned by the residue classes modulo nine. (The operator ∘ {\displaystyle \circ } refers to the abstract "multiplication" between the elements of this monoid.) If a , b {\displaystyle a,b} are elements of ( ( Z / 9 Z ) × , { 1 , ∘ } ) {\displaystyle ((\mathbb {Z} /9\mathbb {Z} )^{\times },\{1,\circ \})} then a ∘ b {\displaystyle a\circ b} can be defined as ( a × b ) mod 9 {\displaystyle (a\times b)\mod {9}} , where the element 9 is representative of the residue class of 0 rather than the traditional choice of 0. This does not form a group because not every non-zero element has a corresponding inverse element; for example 6 ∘ 3 = 9 {\displaystyle 6\circ 3=9} but there is no a ∈ { 1 , ⋯ , 9 } {\displaystyle a\in \{1,\cdots ,9\}} such that 9 ∘ a = 6. {\displaystyle 9\circ a=6.} === Properties of subsets === The subset { 1 , 2 , 4 , 5 , 7 , 8 } {\displaystyle \{1,2,4,5,7,8\}} forms a cyclic group with 2 as one choice of generator – this is the group of multiplicative units in the ring Z / 9 Z {\displaystyle \mathbb {Z} /9\mathbb {Z} } . Every column and row includes all six numbers – so this subset forms a Latin square.
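A minimal Python sketch of the construction (the function names are our own). The digital root of a positive integer n is (n − 1) mod 9 + 1, which maps remainder 0 to 9 as the definition requires:

```python
def digital_root(n):
    # Digital root of a positive integer; remainder 0 becomes 9.
    return (n - 1) % 9 + 1

def vedic_square():
    # 9x9 table of digital roots of row * column products.
    return [[digital_root(r * c) for c in range(1, 10)]
            for r in range(1, 10)]

square = vedic_square()
assert square[2] == [3, 6, 9, 3, 6, 9, 3, 6, 9]  # row for heading 3
assert digital_root(6 * 3) == 9                  # the example 6 ∘ 3 = 9
```

The repeating pattern in the row for 3 is one of the symmetries the article refers to; rows for units such as 2 instead contain all of 1–9.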
== From two dimensions to three dimensions == A Vedic cube is defined as the layout of each digital root in a three-dimensional multiplication table. == Vedic squares in a higher radix == Vedic squares with a higher radix (or number base) can be calculated to analyse the symmetric patterns that arise. Using the calculation above, the entries are given by ( a × b ) mod ( base − 1 ) {\displaystyle (a\times b)\mod {({\textrm {base}}-1)}} . The images in this section are color-coded so that the digital root of 1 is dark and the digital root of (base-1) is light. == See also == Latin square Modular arithmetic Monoid == References == Deskins, W.E. (1996), Abstract Algebra, New York: Dover, pp. 162–167, ISBN 0-486-68888-7 Pritchard, Chris (2003), The Changing Shape of Geometry: Celebrating a Century of Geometry and Geometry Teaching, Great Britain: Cambridge University Press, pp. 119–122, ISBN 0-521-53162-4 Ghannam, Talal (2012), The Mystery of Numbers: Revealed Through Their Digital Root, CreateSpace Publications, pp. 68–73, ISBN 978-1-4776-7841-1 Teknomo, Kadi (2005), Digital Root: Vedic Square Chia-Yu, Lin (2016), Digital Root Patterns of Three-Dimensional Space, Recreational Mathematics Magazine, pp. 9–31, ISSN 2182-1976
Wikipedia:Venansius Baryamureeba#0
Venansius Baryamureeba (born 18 May 1969) is a Ugandan mathematician, computer scientist, academic, and academic administrator. He was the acting vice chancellor of the Uganda Technology and Management University, a private university in Uganda, from September 2013 until 28 September 2015. He left the position to join Uganda's 2016 presidential race. Before that, he served as the vice chancellor of Makerere University from November 2009 until August 2012. == Background and education == He was born in Kasharara Village, Kagongo Parish, Ibanda District, in the Western Region of Uganda. He holds a Bachelor of Science in mathematics, obtained in 1994 from Makerere University. He also holds a Master of Science and a Doctor of Philosophy, both in computer science and both from Bergen University in Norway, awarded in 1996 and in 2000, respectively. In 1997, he was awarded the postgraduate Diploma in the Analysis of Linear Programming Models by the University of Trondheim, also in Norway. == Career == His career in academia began soon after his first degree, when he worked as a teaching assistant in the Institute of Statistics and Applied Economics at Makerere University, from 1994 until 1998. He then worked as an assistant lecturer at the Institute of Teacher Education Kyambogo, which now is part of Kyambogo University, from 1995 until 1996. While pursuing graduate study in Norway, he worked as a teaching assistant in the Department of Informatics at Bergen University from 1997 until 2000. He also worked as a research fellow, in the same department and institution, from 1995 until 2000. From 1998 until 2000, he worked as a lecturer in the Department of Mathematics at Makerere University. He was a senior lecturer in the Institute of Computer Science at Makerere University, from 2001 until 2006 (which was transformed into the Department of Computer Science, Faculty of Computing and IT (FCI)).
He then became an associate professor, and, in November 2006, he was made a professor, continuing to teach until August 2012 at FCI. From October 2005 until June 2010, he served as the dean of FCI. From November 2009 until August 2012, he was vice chancellor of Makerere University. At Uganda Technology and Management University, he has served since September 2012 as the vice chancellor and as a professor of computer science in the School of Computing and Engineering. == Works == The enhanced digital investigation process model Extraction of interesting association rules using genetic algorithms Cyber crime in Uganda: Myth or reality? The role of ICTs and their sustainability in developing countries Mining High Quality Association Rules Using Genetic Algorithms. Optimized association rule mining with genetic algorithms ICT as an engine for Uganda's economic growth: The role of and opportunities for Makerere University ICT-enabled services: a critical analysis of the opportunities and challenges in Uganda Towards domain independent named entity recognition Baryamureeba, Venansius; Steihaug, Trond; Zhang, Yin (April 1999). Properties of A Class of Preconditioners for Weighted Least Squares Problems (Report). hdl:1911/101921. Kitoogo, Fredrick Edward; Baryamureeba, Venansius (2007). A methodology for feature selection in named entity recognition. Fountain Publishers Kampala. hdl:10570/702. ISBN 978-9970-02-730-9. Baryamureeba, Venansius; Steihaug, Trond (2006). "On the Convergence of an Inexact Primal-Dual Interior Point Method for Linear Programming". Large-Scale Scientific Computing. Lecture Notes in Computer Science. Vol. 3743. pp. 629–637. doi:10.1007/11666806_72. ISBN 978-3-540-31994-8. Lowu, Francis; Baryamureeba, Venansius (2006). "On Efficient Distribution of Data in Multicast Networks: QoS in Scalable Networks". Large-Scale Scientific Computing. Lecture Notes in Computer Science. Vol. 3743. pp. 518–525. doi:10.1007/11666806_59. ISBN 978-3-540-31994-8. 
Baryamureeba, Venansius (March 2002). "Solution of large-scale weighted least-squares problems". Numerical Linear Algebra with Applications. 9 (2): 93–106. CiteSeerX 10.1.1.33.6217. doi:10.1002/nla.232. S2CID 18869972. Computational issues for a new class of preconditioners Mwebesa, Theodora Mondo T.; Baryamureeba, Venansius; Williams, Ddembe (2007). "Collaborative framework for supporting indigenous knowledge management". Proceedings of the ACM first Ph.D. Workshop in CIKM on - PIKM '07. p. 163. doi:10.1145/1316874.1316900. ISBN 978-1-59593-832-9. S2CID 14753623. The role of TVET in building regional economies Computer forensics for cyberspace crimes On the properties of preconditioners for robust linear regression On a class of preconditioners for interior point methods Wakabi-Waiswa, Peter P.; Baryamureeba, Venansius; Sarukesi, K. (2008). "Generalized association rule mining using genetic algorithms" (Document). Fountain Publisher Kampala. hdl:10570/1901. Logit analysis of socioeconomic factors influencing famine in Uganda. Solution of robust linear regression problems by preconditioned conjugate gradient type methods Baryamureeba, Venansius; Steihaug, Trond (1999). Properties and computational issues of a preconditioner for interior point methods. Dept. of Informatics, University of Bergen. OCLC 692263578. A new function for robust linear regression: An iterative approach Approaches towards effective knowledge management for small and medium enterprises in developing countries-Uganda Tushabe, Florence; Baryamureeba, Venansius; Bagyenda, Paul; Ogwang, Cyprian; Jehopio, Peter (2008). "The Status of Software Usability in Uganda". In Aisbett, Janet; Gibbon, Greg; Rodrigues, Anthony J.; Migga, Joseph Kizza; Nath, Ravi; Renardel, Gerald R (eds.). Strengthening the Role of ICT in Development. pp. 1–11.
CiteSeerX 10.1.1.940.489. Baryamureeba, Venansius (1 December 2001). "The Impact of Equal Weighting of Low- and High-Confidence Observations on Robust Linear Regression Computations". BIT Numerical Mathematics. 41 (5): 847–855. doi:10.1023/A:1021912522498. ISSN 1572-9125. S2CID 118054436. Williams, Ddembe; Baryamureeba, Venansius, eds. (2006). Measuring Computing Research Excellence and Vitality. CiteSeerX 10.1.1.615.8916. ISBN 978-9970-02-592-3. Baryamureeba, Venansius; Steihaug, Trond; Zhang, Yin (1999). "Application of a Class of Preconditioners to Large Scale Linear Programming Problems". Euro-Par'99 Parallel Processing. Lecture Notes in Computer Science. Vol. 1685. pp. 1044–1048. doi:10.1007/3-540-48311-X_146. ISBN 978-3-540-66443-7. Kitoogo, Fredrick Edward; Baryamureeba, Venansius. "Meta-Knowledge as an engine in Classifier Combination". International Journal of Computing and ICT Research. 1 (2): 74–86. CiteSeerX 10.1.1.133.1815. Baryamureeba, Venansius (2004). "Solution of Robust Linear Regression Problems by Krylov Subspace Methods". Large-Scale Scientific Computing. Lecture Notes in Computer Science. Vol. 2907. pp. 67–75. doi:10.1007/978-3-540-24588-9_6. ISBN 978-3-540-21090-0. Tushabe, Florence; Venansius, Baryamureeba; Katushemererwe, Frida. "Translation of the Google Interface into Runyakitara" (PDF). Makerere University. S2CID 210706862. The role of academia in fostering private sector competitiveness in ICT development Angyeyo, Jennifer S.; Baryamureeba, Venansius; Jehopio, Peter (2006). "A Dynamic Framework for the Protection of Intellectual Property Rights in the Cyberspace". In Williams, Ddembe; Baryamureeba, Venansius (eds.). Measuring Computing Research Excellence and Vitality. pp. 228–245. CiteSeerX 10.1.1.615.8916. ISBN 978-9970-02-592-3. 
== See also == Education in Uganda Ugandan university leaders List of universities in Uganda == References == == External links == Website of Uganda Technology and Management University Baryamureeba Speaks Out On Life After Makerere - 15 October 2012 Professor Venansius Baryamureeba – Five Plus Interview: 24 August 2013 Baryamureeba Denies Defaming Makerere Don - 16 February 2013
Wikipedia:Venvaroha#0
Veṇvāroha is a work in Sanskrit composed by Mādhava of Sangamagrāma (c. 1350 – c. 1425), the founder of the Kerala school of astronomy and mathematics. It is a work in 74 verses describing methods for the computation of the true positions of the Moon at intervals of about half an hour for various days in an anomalistic cycle. This work is an elaboration of an earlier and shorter work of Mādhava himself titled Sphutacandrāpti. Veṇvāroha is the most popular astronomical work of Mādhava. == Etymology == The title Veṇvāroha literally means 'Bamboo Climbing' (Veṇu 'bamboo' + āroha 'climbing') and it is indicative of the computational procedure expounded in the text. The computational scheme is like climbing a bamboo tree, going up and up step by step at measured equal heights. == Overview == It is dated 1403 CE. Acyuta Piṣārati (1550–1621), another prominent mathematician/astronomer of the Kerala school, has composed a Malayalam commentary on Veṇvāroha. This astronomical treatise is of a type generally described as Karaṇa texts in India. Such works are characterized by the fact that they are compilations of computational methods of practical astronomy. The novelty and ingenuity of the method attracted the attention of several of the followers of Mādhava and they composed similar texts thereby creating a genre of works in Indian mathematical tradition collectively referred to as ‘veṇvāroha texts’. These include Drik-veṇvārohakriya of unknown authorship of epoch 1695 and Veṇvārohastaka of Putuman Somāyaji. In the technical terminology of astronomy, the ingenuity introduced by Mādhava in Veṇvāroha can be explained thus: Mādhava has endeavored to compute the true longitude of the Moon by making use of the true motions rather than the epicyclic astronomy of the Aryabhata tradition. 
He made use of the anomalistic revolutions for computing the true positions of the Moon using the successive true daily velocity specified in Candravākyas (Table of Moon-mnemonics) for easy memorization and use. Veṇvāroha has been studied from a modern perspective and the process is explained using the properties of periodic functions. == See also == Indian mathematics Indian mathematicians Kerala school of astronomy and mathematics Madhava of Sangamagrama == References == == Further reading == For a fuller technical account of the contents of Veṇvāroha see : K. Chandra Hari. "Computation of the true moon by Madhava of Sangamagrama" (PDF). Indian Journal of History of Science. 38 (3): 231–253. Retrieved 21 March 2010. Veṇvāroha with the Malayalam commentary of Achyuta Pisharati has been edited by K.V. Sarma and published by Sanskrit College, Thrippunithura, Kerala, India in 1956.
Wikipedia:Vera Huckel#0
Vera Huckel (1908–1999) was an American mathematician and aerospace engineer and one of the first female "computers" at NACA, now NASA, where she mainly worked in the Dynamic Loads Division. == Life and work == Huckel was born in 1908 and studied math at the University of Pennsylvania, graduating in 1929. After living in California for ten years, she visited a friend in Newport News and was hired as a "junior computer," doing mathematical calculations for other researchers for $1,440 a year (a man with her background typically earned about $2,000 a year). Before the invention of electronic computers, these so-called "computers," who were mostly women, would do the time-consuming calculations necessary for successful flights. Huckel became one of the first female engineers at NASA and wrote the program for its first electronic computer. She also worked as a supervisory mathematician and aerospace engineer during her time at NACA/NASA. By 1945 she had been promoted to section head in charge of up to 17 other women. She was involved in helping researchers make the switch from using slide rules to do their complex calculations to supercomputers. She also worked on theories of aerodynamics. As a mathematician, she was involved in the testing of sonic booms in supersonic flight. Huckel retired from NASA in 1972 after working there for more than 33 years. She was active in the Soroptimist Organization, the AAUW, and volunteered with the Hampton United Way. Huckel died at 90 years of age on March 24, 1999, in Newport News, Virginia, where she had lived for more than 60 years. She was buried in West Laurel Hill Cemetery in Bala Cynwyd, Pennsylvania. == Selected publications == Morgan, Homer G., Harry L. Runyan, and Vera Huckel. "Theoretical considerations of flutter at high Mach numbers." Journal of the Aerospace Sciences 25, no. 6 (1958): 371-381. Morgan, Homer G., Vera Huckel, and Harry L. Runyan.
Procedure for calculating flutter at high supersonic speed including camber deflections, and comparison with experimental results. No. NACA-TN-4335. 1958. Hilton, David Artland, Vera Huckel, Domenic J. Maglieri, and R. Steiner. Sonic-boom exposures during FAA community response studies over a 6-month period in the Oklahoma City area. No. NASA-TN-D-2539. 1964. Hilton, David A., Vera Huckel, and Domenic J. Maglieri. Sonic-boom measurements during bomber training operations in the Chicago area. Vol. 3655. National Aeronautics and Space Administration, 1966. == References == == See also == Harvard Computers
Wikipedia:Vera Kublanovskaya#0
Vera Nikolaevna Kublanovskaya (née Totubalina; November 21, 1920 – February 21, 2012) was a Russian mathematician noted for her work on developing computational methods for solving spectral problems of algebra. She proposed the QR algorithm for computing eigenvalues and eigenvectors in 1961, which has been named as one of the ten most important algorithms of the twentieth century. This algorithm was proposed independently by the English computer scientist John G. F. Francis in 1961. == Early life == Kublanovskaya was born in November 1920 in Krokhona, a village near Belozersk in Vologda Oblast, Russian Soviet Federative Socialist Republic. She was born in a farming and fishing family as one of nine siblings. She died in February 2012, at the age of 91. == Education == Kublanovskaya started her tertiary education in 1939 at the Gertzen Pedagogical Institute in Leningrad. There, she was encouraged to pursue a career in mathematics. She moved on to study mathematics at Leningrad State University in 1945 and graduated in 1948. Following her graduation, she joined the Leningrad Branch of the Steklov Mathematical Institute of the USSR Academy of Sciences. She remained there for 64 years of her life. In 1955, she received her first doctorate, on the application of analytic continuation to numeric methods. In 1972 she obtained a second doctorate, on the use of orthogonal transformations to solve algebraic problems. In October 1985, she was awarded an honorary doctorate at Umeå University, Sweden, with which she has collaborated. == Scientific works == During her first PhD, she joined Leonid Kantorovich's group that was working on developing a universal computer language in the USSR. Her task was to select and classify matrix operations that are useful in numerical linear algebra. Her subsequent works have been foundational in furthering mathematical research and software development. She is mentioned in the Book of Proofs.
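For illustration, a minimal sketch of the unshifted QR iteration at the heart of the algorithm mentioned above (NumPy-based and simplified: practical implementations add a Hessenberg reduction and shifts; the symmetric test matrix is our choice). Each step A_{k+1} = R_k Q_k = Q_kᵀ A_k Q_k is a similarity transformation, so eigenvalues are preserved while the iterates approach triangular form:

```python
import numpy as np

def qr_iteration(A, steps=200):
    # Unshifted QR iteration: factor A_k = Q_k R_k,
    # then set A_{k+1} = R_k Q_k (a similarity transform).
    Ak = np.array(A, dtype=float)
    for _ in range(steps):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return Ak

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # eigenvalues 1 and 3
eigs = np.sort(np.diag(qr_iteration(A)))
assert np.allclose(eigs, [1.0, 3.0])
```

With eigenvalues of distinct moduli the off-diagonal entries decay geometrically; the shifted, Hessenberg-based variants used in production libraries converge far faster.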
== Publications == On some algorithms for the solution of the complete eigenvalue problem On a method of solving the complete eigenvalue problem for a degenerate matrix Methods and algorithms of solving spectral problems for polynomial and rational matrices To solving problems of algebra for two-parameter matrices. V To solving problems of algebra for two-parameter matrices. IX == Notes == == References == Dongarra, Jack J.; Sullivan, Francis (2000), "Guest editors' introduction: The top 10 algorithms", Computing in Science & Engineering, 2 (1): 22–23, Bibcode:2000CSE.....2a..22D, doi:10.1109/MCISE.2000.814652, ISSN 1521-9615. Golub, Gene H.; Uhlig, Frank (2009), "The QR algorithm: 50 years later – its genesis by John Francis and Vera Kublanovskaya, and subsequent developments", IMA Journal of Numerical Analysis, 29 (3): 467–485, doi:10.1093/imanum/drp012, ISSN 0272-4979. Kon'kova, Ya.; Simonova, V.N.; Khazanov, V.B. (2000), "Vera Nikolaevna Kublanovskaya. Short Biography", Journal of Mathematical Sciences, 114 (6): 1755–56, doi:10.1023/A:1022491200674, S2CID 118551402. == External links == MacTutor History of Mathematics biography
Wikipedia:Vera Serganova#0
Vera Vladimirovna Serganova (Russian: Вера Владимировна Серганова) is a professor of mathematics at the University of California, Berkeley who researches superalgebras and their representations. Serganova graduated from Moscow State School 57 and Moscow State University. She defended her Ph.D. in 1988 at Saint Petersburg State University under the joint supervision of Dimitry Leites and Arkady Onishchik. She was an invited speaker at the International Congress of Mathematicians in 1998 and a plenary speaker at the ICM in 2014. In 2017, she was elected a member of the American Academy of Arts and Sciences. The Gelfand–Serganova theorem gives a geometric characterization of Coxeter matroids; it was published by Serganova and Israel Gelfand in 1987 as part of their research originating the concept of a Coxeter matroid. == References == == External links == "Vera Serganova, Lecture I - 20 January 2015". YouTube. SNS Sciences. February 6, 2015. (Lie theory and representation theory) "Vera Serganova: Capelli eigenvalue problem for Lie superalgebras and supersymmetric polynomials". YouTube. Centre International de Recontres Mathématiques. June 29, 2018. "Generalized Kac-Moody Superalgebras and Root Groupoid". YouTube. Fields Institute. May 3, 2022; The Pursuit of Symmetry: A Conference in Honour of the 80th Birthday of Robert V. Moody; Speaker: Vera Serganova, University of California, Berkeley on April 29th, 2022 "Vera Serganova. Similarities between representation theories of finite groups in positive characteristic and Lie superalgebras". YouTube. Мех-Мат КНУ. July 5, 2023; Invited Talk by Professor Vera Serganova (University of California, Berkeley, USA) at the XIV Ukrainian Algebraic Conference (July 3–7, 2023, Sumy, Ukraine)
Wikipedia:Vera Traub#0
Vera Traub is a German applied mathematician and theoretical computer scientist known for her research on approximation algorithms for combinatorial optimization problems including the travelling salesperson problem and the Steiner tree problem. She is a junior professor in the Institute for Discrete Mathematics at the University of Bonn. == Education and career == Traub earned a bachelor's degree at the University of Bonn in 2015. She completed her doctorate (Dr. rer. nat.) there in 2020, with the dissertation Approximation Algorithms for Traveling Salesman Problems supervised by Jens Vygen. She was a postdoctoral researcher for Rico Zenklusen at ETH Zurich before taking her present position at the University of Bonn. == Recognition == Traub was a recipient of the 2020 European Association for Theoretical Computer Science Distinguished Dissertation Award, and the Hausdorff Memorial Prize for best dissertation of the University of Bonn Mathematics Department. In 2022 she received the Richard Rado Prize of the Discrete Mathematics group of the German Mathematical Society, a biennial prize for outstanding dissertations. She was one of three recipients of the 2023 Maryam Mirzakhani New Frontiers Prize, given to her "for advances in approximation results in classical combinatorial optimization problems, including the traveling salesman problem and network design". She also received the 2023 Heinz Maier-Leibnitz Prize of the German Research Foundation, the foundation's "most important award for researchers in early career stages". == References == == External links == Home page Vera Traub publications indexed by Google Scholar
Wikipedia:Vera Šnajder#0
Vera Šnajder (née Popović, 1904–1976) was a Bosnian Serb mathematician known for being the first Bosnian to publish a mathematical research paper and the first female dean in Yugoslavia. Šnajder was born on 2 February 1904, in Reljevo, one of the neighborhoods of Sarajevo; her father directed an Orthodox seminary. She began her university studies at the University of Belgrade in 1922, and graduated in 1928. She took a position as a schoolteacher at a girls' gymnasium in Sarajevo, and married Marcel Šnajder, a Jewish philosopher who at that time was working at the same school. From 1929 to 1932 she traveled to Paris for advanced work in mathematics. It was during this time that she published her paper, the first mathematics paper written by a Bosnian. Entitled Sur l'extension de la méthode de Hele Shaw aux mouvements cycliques (On the extension of the Hele-Shaw method to cyclic motions), the publication appeared in the journal Comptes rendus de l'Académie des Sciences in 1931, under the name V. Popovitch-Schneider, and concerned fluid dynamics. After returning from Paris, Šnajder worked as a schoolteacher again. Her husband was killed by the Nazis in 1941. When the University of Sarajevo was founded in 1949, Šnajder became a faculty member. There, she first served as dean in 1951. She died on February 14, 1976, in Sarajevo. The Vera Šnajder Award was launched in her honor by Bosnia & Herzegovina Futures Foundation. The award is given every year in the form of a travel grant to an outstanding student of natural/technical sciences in Bosnia and Herzegovina (undergraduate, master's or doctoral studies). == References ==
Wikipedia:Verdiana Masanja#0
Verdiana Grace Masanja (née Kashaga, born October 12, 1954) is a Tanzanian mathematician specializing in fluid dynamics. She is the first Tanzanian woman to earn a doctorate in mathematics. == Education == Masanja was born in Bukoba, at the time part of the United Nations trust territory of Tanganyika. She was a student at the Jangwani Girls Secondary School in Dar es Salaam and then at the University of Dar es Salaam, completing a degree in mathematics and physics in 1976 and a master's degree in 1981. Her master's thesis was Effect of Injection on Developing Laminar Flow of Reiner–Philippoff Fluids in a Circular Pipe. She earned a second master's degree in physics and completed her doctorate in fluid dynamics at Technische Universität Berlin. Her dissertation, A Numerical Study of a Reiner–Rivlin Fluid in an Axi-Symmetrical Circular Pipe, was jointly supervised by Wolfgang Muschik and Gerd Brunk. == Career == While still a master's student, Masanja became a lecturer at the University of Dar es Salaam; on her return from Germany she became a professor there, remaining on the university's faculty until 2010. In 2006 she also began teaching at the National University of Rwanda, where she became a professor in 2007 and was appointed the university's director of research; she also served as deputy vice chancellor and senior advisor at the University of Kibungo in Rwanda. In 2018 she returned to Tanzania as a professor of applied and computational mathematics at the Nelson Mandela African Institute of Science and Technology in Arusha. Masanja has served as vice president for Eastern Africa of the African Mathematical Union, chaired the African Mathematical Union Commission on Women in Mathematics in Africa and the Tanzania Education Network, and has served as National Coordinator for Female Education in Mathematics in Africa. == Research == Beyond her work in fluid dynamics, she has published on the education and participation of women in science. 
Masanja is editor-in-chief of the Rwanda Journal. == References == == External links == Verdiana Masanja publications indexed by Google Scholar
Wikipedia:Verena Huber-Dyson#0
Verena Esther Huber-Dyson (May 6, 1923 – March 12, 2016) was a Swiss-American mathematician, known for her work on group theory and formal logic. She has been described as a "brilliant mathematician", who did research on the interface between algebra and logic, focusing on undecidability in group theory. At the time of her death, she was emeritus faculty in the philosophy department of the University of Calgary, Alberta. == Biography == === Early life and education === Huber-Dyson was born Verena Esther Huber in Naples, Italy, on May 6, 1923. Her parents, Karl (Charles) Huber (1893–1946) and Berthy Ryffel (1899–1945), were Swiss nationals who raised Verena and her sister Adelheid ("Heidi", 1925–1987) in Athens, Greece, where the girls attended the German-speaking Deutsche Schule, or German School of Athens, until forced to return to Switzerland in 1940 by the war. Charles Huber, who had managed the Middle Eastern operations of Bühler AG, a Swiss food-process engineering firm, began working for the International Committee of the Red Cross (ICRC), monitoring the treatment of prisoners of war in internment camps. As the ICRC delegate to India and Ceylon, he was responsible for Italian prisoners held in British camps, but also visited German and Allied camps in Europe. In 1945–46 he served as an ICRC delegate to the United States, which he described to Verena as a place she "definitely ought to experience at length and in depth but just as definitely ought not to settle in." She studied mathematics, with minors in physics and philosophy, at the University of Zurich, where she obtained her Ph.D. in mathematics in 1947 with a thesis in finite group theory under the supervision of Andreas Speiser. === Career === Huber-Dyson accepted a postdoctoral fellowship at the Institute for Advanced Study in Princeton in 1948, where she worked on group theory and formal logic. She also began teaching at Goucher College near Baltimore during this time. 
She moved to California with her daughter Katarina, began teaching at San Jose State University in 1959, and then joined Alfred Tarski's Group in Logic and the Methodology of Science at the University of California, Berkeley. Huber-Dyson taught at San Jose State University, the University of Zürich, Monash University, as well as at the University of California, Berkeley, Adelphi University, the University of California, Los Angeles, and the University of Illinois at Chicago, in mathematics and in philosophy departments. She accepted a position in the philosophy department of the University of Calgary in 1973, becoming emerita in 1988.
==== Academic affiliations prior to June 1968 ====
Cornell University
Goucher College
San Jose State University (September 1959)
Adelphi University
UCLA
University of London
ETH Zürich
Warwick University
University of Melbourne
Monash University
Australian National University in Canberra
University of Zürich
Mills College
UC Berkeley
==== Academic affiliations after September 1968 ====
Department of Mathematics, University of Illinois at Chicago (September 1968 – June 1971) tenure-track Assistant Professor
Department of Philosophy, University of Calgary (September 1971 – June 1972) nontenure-track
Department of Mathematics, University of Illinois at Chicago (September 1972 – June 1973) tenured Associate Professor
Department of Philosophy, University of Calgary (September 1973 – June 1975) tenure-track Assistant Professor
Department of Philosophy, University of Calgary (September 1977 – June 1981) tenured Associate Professor
Department of Philosophy, University of Calgary (September 1981 – June 1988) Full Professor
Department of Philosophy, University of Calgary (September 1988 – March 2016) Emerita Professor
=== Activities while at Calgary ===
Taught graduate courses on foundations of mathematics and the philosophy and methodology of the sciences
Began work on the monograph Gödel's theorems: a workbook on formalization
=== Non-academic employment ===
Consultant for Remington Rand (Univac) in Philadelphia
Consultant for Hughes Aircraft in Los Angeles
=== Later life ===
After retiring from Calgary, Verena Huber-Dyson moved back to South Pender Island in British Columbia, where she lived for 14 years. She died on March 12, 2016, in Bellingham, Washington, at the age of 92.
=== Personal life ===
Verena married Hans-Georg Haefeli, a fellow mathematician, in 1942, and was divorced in 1948. Her first daughter, Katarina Halm, was born in 1945. She subsequently married Freeman Dyson in Ann Arbor, Michigan, on August 11, 1950. They had two children together, Esther Dyson (born July 14, 1951, in Zurich) and George Dyson (born 1953, Ithaca, New York), and divorced in 1958.
== Selected publications ==
"There is more to truth than can be caught by proof".
=== Monographs ===
=== Articles ===
== References ==
=== Notes ===
=== Citations ===
=== Sources ===
Wikipedia:Vertex-transitive graph#0
In the mathematical field of graph theory, an automorphism is a permutation of the vertices such that edges are mapped to edges and non-edges are mapped to non-edges. A graph is a vertex-transitive graph if, given any two vertices v1 and v2 of G, there is an automorphism f such that f ( v 1 ) = v 2 . {\displaystyle f(v_{1})=v_{2}.\ } In other words, a graph is vertex-transitive if its automorphism group acts transitively on its vertices. A graph is vertex-transitive if and only if its graph complement is, since the group actions are identical. Every symmetric graph without isolated vertices is vertex-transitive, and every vertex-transitive graph is regular. However, not all vertex-transitive graphs are symmetric (for example, the edges of the truncated tetrahedron), and not all regular graphs are vertex-transitive (for example, the Frucht graph and Tietze's graph). == Finite examples == Finite vertex-transitive graphs include the symmetric graphs (such as the Petersen graph, the Heawood graph and the vertices and edges of the Platonic solids). The finite Cayley graphs (such as cube-connected cycles) are also vertex-transitive, as are the vertices and edges of the Archimedean solids (though only two of these are symmetric). Potočnik, Spiga and Verret have constructed a census of all connected cubic vertex-transitive graphs on at most 1280 vertices. Although every Cayley graph is vertex-transitive, there exist other vertex-transitive graphs that are not Cayley graphs. The most famous example is the Petersen graph, but others can be constructed including the line graphs of edge-transitive non-bipartite graphs with odd vertex degrees. == Properties == The edge-connectivity of a connected vertex-transitive graph is equal to the degree d, while the vertex-connectivity will be at least 2(d + 1)/3. If the degree is 4 or less, or the graph is also edge-transitive, or the graph is a minimal Cayley graph, then the vertex-connectivity will also be equal to d. 
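For small graphs, vertex-transitivity can be checked directly from the definition by enumerating all vertex permutations and collecting the orbit of a single vertex under the automorphism group. A minimal Python sketch (function names are illustrative, not from any standard library; only practical for a handful of vertices, since it enumerates all n! permutations):

```python
from itertools import permutations

def automorphisms(vertices, edges):
    """Yield every permutation of `vertices` that maps the edge set onto
    itself (and hence non-edges to non-edges): a graph automorphism."""
    edge_set = {frozenset(e) for e in edges}
    for perm in permutations(vertices):
        f = dict(zip(vertices, perm))
        if {frozenset((f[u], f[v])) for u, v in edge_set} == edge_set:
            yield f

def is_vertex_transitive(vertices, edges):
    """A graph is vertex-transitive iff the orbit of any one vertex
    under the automorphism group is the whole vertex set."""
    v0 = vertices[0]
    orbit = {f[v0] for f in automorphisms(vertices, edges)}
    return orbit == set(vertices)

# The 5-cycle C5 is vertex-transitive: rotations carry any vertex to any other.
c5 = [(i, (i + 1) % 5) for i in range(5)]
print(is_vertex_transitive(list(range(5)), c5))   # True

# A path on 3 vertices is not: no automorphism maps an endpoint to the middle.
print(is_vertex_transitive([0, 1, 2], [(0, 1), (1, 2)]))  # False
```

The path example also illustrates why vertex-transitive graphs must be regular: an automorphism preserves vertex degrees, so a degree-1 endpoint can never be mapped to the degree-2 middle vertex.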
== Infinite examples ==
Infinite vertex-transitive graphs include:
infinite paths (infinite in both directions)
infinite regular trees, e.g. the Cayley graph of the free group
graphs of uniform tessellations (see a complete list of planar tessellations), including all tilings by regular polygons
infinite Cayley graphs
the Rado graph
Two countable vertex-transitive graphs are called quasi-isometric if the ratio of their distance functions is bounded from below and from above. A well-known conjecture stated that every infinite vertex-transitive graph is quasi-isometric to a Cayley graph. A counterexample was proposed by Diestel and Leader in 2001. In 2005, Eskin, Fisher, and Whyte confirmed the counterexample.
== See also ==
Edge-transitive graph
Lovász conjecture
Semi-symmetric graph
Zero-symmetric graph
== References ==
== External links ==
Weisstein, Eric W. "Vertex-transitive graph". MathWorld.
A census of small connected cubic vertex-transitive graphs. Primož Potočnik, Pablo Spiga, Gabriel Verret, 2012.
Vertex-transitive Graphs On Fewer Than 48 Vertices. Gordon Royle and Derek Holt, 2020.
Wikipedia:Vertical line test#0
In mathematics, the vertical line test is a visual way to determine whether a curve is the graph of a function. A function can have only one output, y, for each unique input, x. If a vertical line intersects a curve in the xy-plane more than once, then for some value of x the curve has more than one value of y, and so the curve does not represent a function. If every vertical line intersects the curve at most once, then the curve represents a function. == See also == Horizontal line test == Notes ==
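For a finite sample of points from a curve, the test amounts to checking whether any x value is paired with two different y values. A minimal Python sketch of this discrete version (the function name is illustrative):

```python
def passes_vertical_line_test(points):
    """For a finite set of (x, y) points, the vertical line test fails
    exactly when some x value is paired with two different y values."""
    seen = {}
    for x, y in points:
        if x in seen and seen[x] != y:
            return False  # a vertical line at this x meets the curve twice
        seen[x] = y
    return True

# y = x**2 is a function of x: every sampled x has one y ...
parabola = [(x, x * x) for x in range(-3, 4)]
print(passes_vertical_line_test(parabola))  # True

# ... but x = y**2 (a sideways parabola) is not: x = 1 pairs with y = 1 and y = -1.
sideways = [(y * y, y) for y in range(-3, 4)]
print(passes_vertical_line_test(sideways))  # False
```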
Wikipedia:Vertical tangent#0
In mathematics, particularly calculus, a vertical tangent is a tangent line that is vertical. Because a vertical line has infinite slope, a function whose graph has a vertical tangent is not differentiable at the point of tangency. == Limit definition == A function ƒ has a vertical tangent at x = a if the difference quotient used to define the derivative has infinite limit: lim h → 0 f ( a + h ) − f ( a ) h = + ∞ or lim h → 0 f ( a + h ) − f ( a ) h = − ∞ . {\displaystyle \lim _{h\to 0}{\frac {f(a+h)-f(a)}{h}}={+\infty }\quad {\text{or}}\quad \lim _{h\to 0}{\frac {f(a+h)-f(a)}{h}}={-\infty }.} The graph of ƒ has a vertical tangent at x = a if the derivative of ƒ at a is either positive or negative infinity. For a continuous function, it is often possible to detect a vertical tangent by taking the limit of the derivative. If lim x → a f ′ ( x ) = + ∞ , {\displaystyle \lim _{x\to a}f'(x)={+\infty }{\text{,}}} then ƒ must have an upward-sloping vertical tangent at x = a. Similarly, if lim x → a f ′ ( x ) = − ∞ , {\displaystyle \lim _{x\to a}f'(x)={-\infty }{\text{,}}} then ƒ must have a downward-sloping vertical tangent at x = a. In these situations, the vertical tangent to ƒ appears as a vertical asymptote on the graph of the derivative. == Vertical cusps == Closely related to vertical tangents are vertical cusps. This occurs when the one-sided derivatives are both infinite, but one is positive and the other is negative. For example, if lim h → 0 − f ( a + h ) − f ( a ) h = + ∞ and lim h → 0 + f ( a + h ) − f ( a ) h = − ∞ , {\displaystyle \lim _{h\to 0^{-}}{\frac {f(a+h)-f(a)}{h}}={+\infty }\quad {\text{and}}\quad \lim _{h\to 0^{+}}{\frac {f(a+h)-f(a)}{h}}={-\infty }{\text{,}}} then the graph of ƒ will have a vertical cusp that slopes up on the left side and down on the right side. As with vertical tangents, vertical cusps can sometimes be detected for a continuous function by examining the limit of the derivative. 
For example, if lim x → a − f ′ ( x ) = − ∞ and lim x → a + f ′ ( x ) = + ∞ , {\displaystyle \lim _{x\to a^{-}}f'(x)={-\infty }\quad {\text{and}}\quad \lim _{x\to a^{+}}f'(x)={+\infty }{\text{,}}} then the graph of ƒ will have a vertical cusp at x = a that slopes down on the left side and up on the right side. == Example == The function f ( x ) = x 3 {\displaystyle f(x)={\sqrt[{3}]{x}}} has a vertical tangent at x = 0, since it is continuous and lim x → 0 f ′ ( x ) = lim x → 0 1 3 x 2 3 = ∞ . {\displaystyle \lim _{x\to 0}f'(x)\;=\;\lim _{x\to 0}{\frac {1}{3{\sqrt[{3}]{x^{2}}}}}\;=\;\infty .} Similarly, the function g ( x ) = x 2 3 {\displaystyle g(x)={\sqrt[{3}]{x^{2}}}} has a vertical cusp at x = 0, since it is continuous, lim x → 0 − g ′ ( x ) = lim x → 0 − 2 3 x 3 = − ∞ , {\displaystyle \lim _{x\to 0^{-}}g'(x)\;=\;\lim _{x\to 0^{-}}{\frac {2}{3{\sqrt[{3}]{x}}}}\;=\;{-\infty }{\text{,}}} and lim x → 0 + g ′ ( x ) = lim x → 0 + 2 3 x 3 = + ∞ . {\displaystyle \lim _{x\to 0^{+}}g'(x)\;=\;\lim _{x\to 0^{+}}{\frac {2}{3{\sqrt[{3}]{x}}}}\;=\;{+\infty }{\text{.}}} == References == Vertical Tangents and Cusps. Retrieved May 12, 2006.
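The divergence of the difference quotient can also be observed numerically. The sketch below (illustrative, not from any library) evaluates (f(a+h) − f(a))/h for the two examples above: for f(x) = x^(1/3) at a = 0 the quotient grows like h^(−2/3) with the same sign on both sides (a vertical tangent), while for g(x) = x^(2/3) the one-sided quotients have opposite signs (a vertical cusp):

```python
def diff_quotient(f, a, h):
    """One-sided difference quotient (f(a+h) - f(a)) / h."""
    return (f(a + h) - f(a)) / h

# Real cube root, valid for negative x too (x ** (1/3) would go complex).
cbrt = lambda x: abs(x) ** (1 / 3) * (1 if x >= 0 else -1)

# Vertical tangent: the quotient diverges to +inf from both sides.
for h in (1e-2, 1e-4, 1e-6):
    print(h, diff_quotient(cbrt, 0.0, h), diff_quotient(cbrt, 0.0, -h))

# Vertical cusp: g(x) = x**(2/3) has quotients of opposite sign at 0.
g = lambda x: abs(x) ** (2 / 3)
print(diff_quotient(g, 0.0, 1e-6))   # large positive: slope -> +inf from the right
print(diff_quotient(g, 0.0, -1e-6))  # large negative: slope -> -inf from the left
```

This matches the limits computed symbolically above: at h = 10^(-6) the cube-root quotient is already 10^4, and halving h further only makes it larger.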